
How to read large files in Python

2016-10-11 16:07
I. Introduction

When working with small text files we usually just call .read(), .readline(), or .readlines(). But when the file is 10 GB or larger, .read() and .readlines() try to load the entire contents into memory at once and the process simply runs out of memory.


II. Solutions

1. Since the file is so large, the first instinct is simply to read it in small chunks:

def read_in_chunks(filePath, chunk_size=1024 * 1024):
    """
    Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1M.
    You can set your own chunk size.
    """
    with open(filePath) as file_object:  # 'with' guarantees the file gets closed
        while True:
            chunk_data = file_object.read(chunk_size)
            if not chunk_data:  # an empty string means end of file
                break
            yield chunk_data

if __name__ == "__main__":
    filePath = './path/filename'
    for chunk in read_in_chunks(filePath):
        process(chunk)  # <do something with chunk>
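An equivalent, slightly more compact formulation (a sketch of the same idea, not from the original post) uses iter() with a sentinel value: iter(callable, sentinel) keeps calling the callable until it returns the sentinel, and read() returns an empty string at end of file.

from functools import partial

def read_in_chunks_iter(filePath, chunk_size=1024 * 1024):
    with open(filePath) as f:
        # iter() calls f.read(chunk_size) repeatedly and stops
        # as soon as it returns the sentinel '' (end of file)
        for chunk_data in iter(partial(f.read, chunk_size), ''):
            yield chunk_data

Note that if the file is opened in binary mode ('rb'), the sentinel must be b'' instead of ''.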


2. Use with open() and iterate over the file object line by line


# If the file is line-based
with open(...) as f:
    for line in f:
        process(line)  # <do something with line>
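This works because the file object is a lazy iterator: Python reads from the file through an internal buffer and hands you one line at a time, so memory use stays flat no matter how big the file is. As a concrete illustration (a hypothetical example, assuming a file numbers.log that holds one integer per line):

total = 0
with open('numbers.log') as f:  # hypothetical file, one integer per line
    for line in f:
        total += int(line)  # only the current line is held in memory
print(total)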


3. Use the fileinput module

import fileinput

for line in fileinput.input(['sum.log']):
    print(line, end='')  # each line already ends with a newline
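fileinput also chains several files into one continuous stream, and with no arguments it reads from standard input, which makes it handy for command-line filters. A small sketch (the file names here are placeholders):

import fileinput

# Process several logs as one continuous stream; fileinput.input()
# with no arguments would read from sys.stdin instead.
for line in fileinput.input(['a.log', 'b.log']):
    # filename() and filelineno() report the current file and line number
    print(fileinput.filename(), fileinput.filelineno(), line, end='')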


References:
http://chenqx.github.io/2014/10/29/Python-fastest-way-to-read-a-large-file/
http://www.zhidaow.com/post/python-read-big-file