Analyzing operating system logs with Python (MapReduce-style)
2017-11-22 11:23
I. Code
1. Splitting the large file
2. Mapper code
3. Reducer code
II. Results
1. Splitting the large file
import os
import os.path

def FileSplit(sourceFile, targetFolder):
    # split sourceFile into chunks of `number` lines under targetFolder
    if not os.path.isfile(sourceFile):
        print(sourceFile, ' does not exist.')
        return
    if not os.path.isdir(targetFolder):
        os.mkdir(targetFolder)
    tempData = []
    number = 1000   # lines per split file
    fileNum = 1
    with open(sourceFile, 'r') as srcFile:
        # do not strip() here: the first line must keep its newline,
        # otherwise it merges with the second line in the output file
        dataLine = srcFile.readline()
        while dataLine:
            for i in range(number):
                tempData.append(dataLine)
                dataLine = srcFile.readline()
                if not dataLine:
                    break
            desFile = os.path.join(targetFolder,
                                   os.path.basename(sourceFile)[0:-4] + str(fileNum) + '.txt')
            with open(desFile, 'w') as f:   # 'w', not 'a+': reruns must not append duplicates
                f.writelines(tempData)
            tempData = []
            fileNum = fileNum + 1

if __name__ == '__main__':
    #sourceFile = input('Input the source file to split:')
    #targetFolder = input('Input the target folder you want to place the split files:')
    sourceFile = 'test.txt'
    targetFolder = 'test'
    FileSplit(sourceFile, targetFolder)
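For data that fits the same chunking scheme, the splitter can also be written compactly with `itertools.islice`; the sketch below (file names `demo.txt`/`demo_parts` are illustrative, not from the article) generates a 2500-line file and splits it into 1000/1000/500-line parts:

```python
import itertools
import os

def file_split(source_file, target_folder, chunk_size=1000):
    # equivalent of FileSplit above: write successive chunk_size-line
    # slices of source_file into numbered files under target_folder
    os.makedirs(target_folder, exist_ok=True)
    stem = os.path.splitext(os.path.basename(source_file))[0]
    with open(source_file) as src:
        for num in itertools.count(1):
            chunk = list(itertools.islice(src, chunk_size))
            if not chunk:
                break
            with open(os.path.join(target_folder, '%s%d.txt' % (stem, num)), 'w') as out:
                out.writelines(chunk)

# tiny demo: 2500 lines -> demo1.txt (1000), demo2.txt (1000), demo3.txt (500)
with open('demo.txt', 'w') as f:
    for i in range(2500):
        f.write('line %d\n' % i)
file_split('demo.txt', 'demo_parts')
```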
2. Mapper code
import os
import re
import threading

def Map(sourceFile):
    # count occurrences of each date (e.g. 07/10/2013) in one split file
    if not os.path.exists(sourceFile):
        print(sourceFile, ' does not exist.')
        return
    pattern = re.compile(r'[0-9]{1,2}/[0-9]{1,2}/[0-9]{4}')
    result = {}
    with open(sourceFile, 'r') as srcFile:
        for dataLine in srcFile:
            r = pattern.findall(dataLine)
            if r:
                result[r[0]] = result.get(r[0], 0) + 1
    desFile = sourceFile[0:-4] + '_map.txt'
    with open(desFile, 'w') as fp:   # 'w', not 'a+': reruns must not append duplicates
        for k, v in result.items():
            fp.write(k + ':' + str(v) + '\n')

if __name__ == '__main__':
    desFolder = 'test'
    # skip files produced by earlier map/reduce runs
    files = [f for f in os.listdir(desFolder)
             if not f.endswith('_map.txt') and f != 'result.txt']
    #Without threads this is simply:
    '''for f in files:
        Map(os.path.join(desFolder, f))'''
    #With one thread per split file:
    def Main(i):
        Map(os.path.join(desFolder, files[i]))
    threads = []
    for i in range(len(files)):
        t = threading.Thread(target=Main, args=(i,))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()   # wait, so the Reducer only ever sees complete map output
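The date pattern is deliberately loose: it matches any `d/d/dddd`-shaped group and counts only the first match per line. A quick check on hypothetical Apache-style lines (this log format is an assumption for illustration, not taken from the article):

```python
import re

pattern = re.compile(r'[0-9]{1,2}/[0-9]{1,2}/[0-9]{4}')

# hypothetical log lines; only the first date-like group per line is counted
lines = [
    '10.0.0.1 - - [07/10/2013:11:23:45] "GET / HTTP/1.1" 200 512',
    '10.0.0.2 - - [07/10/2013:11:23:46] "GET /a HTTP/1.1" 404 0',
    'no date here',
]
counts = {}
for line in lines:
    r = pattern.findall(line)
    if r:
        counts[r[0]] = counts.get(r[0], 0) + 1
print(counts)   # {'07/10/2013': 2}
```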
3. Reducer code
import os

def Reduce(sourceFolder, targetFile):
    # merge the per-file counts written by the Mapper into one total
    if not os.path.isdir(sourceFolder):
        print(sourceFolder, ' does not exist.')
        return
    result = {}
    #Deal only with the mapped files
    allFiles = [os.path.join(sourceFolder, f)
                for f in os.listdir(sourceFolder) if f.endswith('_map.txt')]
    for f in allFiles:
        with open(f, 'r') as fp:
            for line in fp:
                line = line.strip()
                if not line:
                    continue
                position = line.index(':')
                key = line[0:position]
                value = int(line[position + 1:])
                result[key] = result.get(key, 0) + value
    with open(targetFile, 'w') as fp:
        for k, v in result.items():
            fp.write(k + ':' + str(v) + '\n')

if __name__ == '__main__':
    Reduce('test', os.path.join('test', 'result.txt'))
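For logs that fit in memory, the whole split/map/reduce pipeline collapses into a single pass with `collections.Counter`, which is a handy cross-check for the three-step version's output (the sample lines below are illustrative):

```python
import re
from collections import Counter

pattern = re.compile(r'[0-9]{1,2}/[0-9]{1,2}/[0-9]{4}')

def count_dates(lines):
    # one-pass equivalent of Map + Reduce: count the first date per line
    counts = Counter()
    for line in lines:
        r = pattern.findall(line)
        if r:
            counts[r[0]] += 1
    return counts

sample = [
    'a [07/10/2013:00:01] ok',
    'b [07/10/2013:00:02] ok',
    'c [08/15/2013:00:03] ok',
]
print(dict(count_dates(sample)))   # {'07/10/2013': 2, '08/15/2013': 1}
```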
II. Results
Running the three programs above in order produces the final result:
07/10/2013:4634
07/16/2013:51
08/15/2013:3958
07/11/2013:1
10/09/2013:733
12/11/2013:564
02/12/2014:4102
05/14/2014:737
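The counts above come out in the dict's iteration order rather than date order. If a chronological listing is wanted, the result lines can be re-sorted by parsing the date prefix (the MM/DD/YYYY format is taken from the output above):

```python
from datetime import datetime

# the result lines as produced above
lines = [
    '07/10/2013:4634', '07/16/2013:51', '08/15/2013:3958',
    '07/11/2013:1', '10/09/2013:733', '12/11/2013:564',
    '02/12/2014:4102', '05/14/2014:737',
]

def sort_key(line):
    # the key is everything before the first ':', parsed as MM/DD/YYYY
    return datetime.strptime(line.split(':', 1)[0], '%m/%d/%Y')

for line in sorted(lines, key=sort_key):
    print(line)   # earliest date first
```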