News Classification System (Python): Crawler (BeautifulSoup + Requests) + Data Processing (jieba Segmentation) + Classifier (Naive Bayes)

2017-07-11 12:12


Introduction

The news classification system automatically classifies news articles into ten categories and reports the resulting accuracy. (Cross-validation accuracy is 65%–70% on a dataset of 3,183 documents; adding more data should improve accuracy.)

The system has three parts:

Crawler: uses Requests to handle HTTP requests and Beautiful Soup to parse the HTML pages and extract the relevant tags and text. (The crawler script further down actually uses urllib.request from the standard library.)
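
For orientation, here is a minimal fetch-and-parse sketch with Requests and Beautiful Soup. The URL and the "/a/" filter come from the crawler described below; the rest is illustrative, not the exact code of this project.

import requests
from bs4 import BeautifulSoup

# Fetch one category listing page and collect candidate article links.
url = 'http://www.yaoyanbaike.com/category/baby.html'
headers = {'User-Agent': 'Mozilla/5.0'}
response = requests.get(url, headers=headers, timeout=10)
response.encoding = 'utf-8'
soup = BeautifulSoup(response.text, 'lxml')

# Only hrefs containing "/a/" point at real articles (the same filter the full crawler uses).
links = [a.get('href') for a in soup.find_all('a')
         if a.get('href') and '/a/' in a.get('href')]
print(links[:5])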

The target is a rumor encyclopedia site. This project is actually one part of my rumor-detection system, but the problem I am currently stuck on there is improving its precision.

My first idea is to enlarge the dataset: many rumors on the web are similar, and health and history rumors in particular are often completely or largely duplicated, so this approach works from the characteristics of the data itself.

The second is to build a knowledge graph. Doing this in full would be far too large an undertaking, so one subdomain could be used as a testbed to get initial results.

The third is the approach I am studying now. Learning from "small data" can be viewed as a kind of clustering problem: in the future most data will be unlabeled, manual annotation is infeasible, and semi-supervised methods are currently the common answer, so learning from small labeled datasets is a problem we will keep running into. Compared with neural networks, clustering currently receives far less attention.

I recommend the doctoral thesis "Learning from Partially Labeled Data" by Marcin Olof Szummer, formerly of Google Brain.

Data processing: organizing the dataset and preprocessing the text. (The stopword list was simply taken from the web.)

Classifier: the dataset is classified with a naive Bayes classifier, sklearn's MultinomialNB.
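
As a minimal sketch of how sklearn's MultinomialNB is driven (the tiny 0/1 vectors below are toy stand-ins for the real features built later):

from sklearn.naive_bayes import MultinomialNB

# Toy binary bag-of-words features: each row is a document, each column a feature word.
train_features = [[1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 1, 0, 0]]
train_labels = ['baby', 'car', 'baby']
test_features = [[1, 0, 0, 0]]
test_labels = ['baby']

classifier = MultinomialNB().fit(train_features, train_labels)
print(classifier.predict(test_features))             # predicted category
print(classifier.score(test_features, test_labels))  # accuracy on the held-out sample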

All of the code is on GitHub: https://github.com/sileixinhua/News-classification

Development Environment

Beautiful Soup 4.4.0 documentation: http://beautifulsoup.readthedocs.io/zh_CN/latest/#id28

Requests : http://cn.python-requests.org/zh_CN/latest/

Python3

sklearn :http://scikit-learn.org/stable/

Windows 10 (CPU: 4G; the classification part takes 51 seconds in total)

Sublime Text

jieba (Chinese word segmentation)



Website crawled for the dataset:

http://www.yaoyanbaike.com/

Crawling Strategy for the Target Site

Figure 1: The target is the rumor encyclopedia site, a dynamic site with clearly categorized content that is worth crawling; it contains no sensitive or confidential information, so crawling it is safe.



Figure 2: The ten content categories.



Figure 3: An individual article page, with a title, source site or author, update time, and body text.



Figure 4: The corresponding HTML. The title and body have clearly named classes.



Figure 5: The article listing under one category.



Figure 6: The link to the next page of articles.



Figure 7: The HTML tag holding the next-page address.
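
The crawler below simply constructs the listing-page URLs ('category/baby.html', then 'category/baby_2.html', and so on). As a hedged alternative suggested by Figures 6 and 7, the next-page link could be followed directly; the selector below assumes the link text is '下一页', which is an assumption about the markup rather than something confirmed from the page source.

import requests
from bs4 import BeautifulSoup

def next_page_url(listing_url):
    # Sketch: follow the "next page" link instead of constructing page URLs.
    soup = BeautifulSoup(requests.get(listing_url, timeout=10).text, 'lxml')
    link = soup.find('a', string='下一页')  # hypothetical selector for the next-page anchor
    if link and link.get('href'):
        return 'http://www.yaoyanbaike.com' + link.get('href')
    return None

print(next_page_url('http://www.yaoyanbaike.com/category/baby.html'))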



Crawler

# 2017-07-04 00:11:42
# silei
# Crawl target site: http://www.yaoyanbaike.com/
# Fetch pages with urllib.request and parse them with BeautifulSoup

# -*- coding:UTF-8 -*-

from urllib import request
from bs4 import BeautifulSoup
import re
import sys
import codecs

if __name__ == "__main__":
    text_file_number = 0    # index of the article within the current category
    number = 1              # index of the listing page within the current category
    while (number <= 2):
        if number == 1:     # the first listing page is "baby.html", not "baby_<n>.html", so it needs a separate case
            get_url = 'http://www.yaoyanbaike.com/category/baby.html'
        else:
            get_url = 'http://www.yaoyanbaike.com/category/baby_' + str(number) + '.html'   # "baby_<n>", where number is the listing-page index
        head = {}   # request headers
        head['User-Agent'] = 'Mozilla/5.0 (Linux; Android 4.1.1; Nexus 7 Build/JRO03D) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.166  Safari/535.19'
        # Spoof a browser with a custom User-Agent
        download_req_get = request.Request(url=get_url, headers=head)
        # Build the Request
        download_response_get = request.urlopen(download_req_get)
        # Fetch the full listing page with urlopen
        download_html_get = download_response_get.read().decode('UTF-8', 'ignore')
        # Decode the fetched page as UTF-8
        soup_texts = BeautifulSoup(download_html_get, 'lxml')
        # Parse the page's HTML tags and content with BeautifulSoup
        for link in soup_texts.find_all(["a"]):
            s = link.get('href')
            print(str(text_file_number) + "   " + str(number) + "    " + str(s))
            # Print the link address for testing
            if s is None or s.find("/a/") == -1:
                print("invalid URL")   # only hrefs containing "/a/" are valid article addresses
            else:
                download_url = link.get('href')
                head = {}
                head['User-Agent'] = 'Mozilla/5.0 (Linux; Android 4.1.1; Nexus 7 Build/JRO03D) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.166  Safari/535.19'
                download_req = request.Request(url="http://www.yaoyanbaike.com" + download_url, headers=head)
                print("http://www.yaoyanbaike.com" + download_url)
                download_response = request.urlopen(download_req)
                download_html = download_response.read().decode('UTF-8', 'ignore')
                soup_texts = BeautifulSoup(download_html, 'lxml')
                texts = soup_texts.find_all('article')
                soup_text = BeautifulSoup(str(texts), 'lxml')
                p = re.compile("<[^>]+>")
                text = p.sub("", str(soup_text))
                # Strip the remaining HTML tags
                f1 = codecs.open('../data/baby/' + str(text_file_number) + '.txt', 'w', 'UTF-8')
                # Save the article text locally
                f1.write(text)
                f1.close()
                text_file_number = text_file_number + 1
        number = number + 1
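
The script above only walks two listing pages of the 'baby' category. A small sketch of how the same logic could be driven over all ten categories (the category names match the dataset; the page count per category is an assumed placeholder):

# Hypothetical driver loop over all ten categories.
categories = ['baby', 'car', 'food', 'health', 'legend',
              'life', 'love', 'news', 'science', 'sexual']
pages_per_category = 2  # assumed value; raise it to collect more data

for category in categories:
    for number in range(1, pages_per_category + 1):
        if number == 1:
            get_url = 'http://www.yaoyanbaike.com/category/' + category + '.html'
        else:
            get_url = 'http://www.yaoyanbaike.com/category/' + category + '_' + str(number) + '.html'
        print(get_url)  # the fetching and saving logic above would go here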


Data Processing

This part has two steps. The first is word segmentation plus stopword removal; the stopword list is one I found online, and stopwords are Chinese function words such as "的", "得", and "地".

The second step builds the word list and sorts the words that occur by frequency, as preparation for building word vectors.
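
As a quick illustration of what segmentation plus stopword removal produces (the sentence and the three stopwords are only examples; the real list is read from stopword.txt):

import jieba

stopwords = {'的', '得', '地'}            # tiny example list, not the full stopword.txt
sentence = '常吃这种食物真的能预防感冒吗'  # example sentence
words = [w for w in jieba.lcut(sentence, cut_all=False) if w not in stopwords]
print('/ '.join(words))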

# 2017-07-04 00:13:40
# silei
# jieba segmentation, stopword removal, data visualization, knowledge graph
# 3183 data files in total
# baby,car,food,health,legend,life,love,news,science,sexual
# 129,410,409,406,396,409,158,409,409,38

# -*- coding:UTF-8 -*-

import jieba

dir = {'baby': 129, 'car': 410, 'food': 409, 'health': 406, 'legend': 396, 'life': 409, 'love': 158, 'news': 409, 'science': 409, 'sexual': 38}
# Dictionary mapping each category name to the number of text files it contains
data_file_number = 0
# Index of the file currently being processed

stoplist = {}.fromkeys([line.strip() for line in open("../data/stopword.txt", encoding='UTF-8')])
# Load the stopword list once

for world_data_name, world_data_number in dir.items():
    # world_data_name / world_data_number hold the category name and its file count
    while (data_file_number < world_data_number):
        print(world_data_name)
        print(world_data_number)
        print(data_file_number)
        # Print the current file index for progress tracking
        file = open('../data/raw_data/' + world_data_name + '/' + str(data_file_number) + '.txt', 'r', encoding='UTF-8')
        file_w = open('../data/train_data/' + world_data_name + '/' + str(data_file_number) + '.txt', 'w', encoding='UTF-8')
        for line in file:
            seg_list = jieba.lcut(line, cut_all=False)
            # jieba precise-mode segmentation
            seg_list = [word for word in list(seg_list) if word not in stoplist]
            # Remove stopwords
            print("Default Mode:", "/ ".join(seg_list))
            for i in range(len(seg_list)):
                file_w.write(str(seg_list[i]) + '\n')
            # Write the segmented words to the output file, one per line
        file_w.close()
        file.close()
        data_file_number = data_file_number + 1
    data_file_number = 0


# 2017-07-04 17:08:15
# silei
# Build the word list from the segmented training data and inspect the result
# 3183 data files in total
# baby,car,food,health,legend,life,love,news,science,sexual
# 129,410,409,406,396,409,158,409,409,38

# -*- coding:UTF-8 -*-

dir = {'baby': 129, 'car': 410, 'food': 409, 'health': 406, 'legend': 396, 'life': 409, 'love': 158, 'news': 409, 'science': 409, 'sexual': 38}
# Dictionary mapping each category name to the number of text files it contains
data_file_number = 0
# Index of the file currently being processed

def MakeAllWordsList(train_datasseg):
    # Count word frequencies over all segmented documents
    all_words = {}
    for train_dataseg in train_datasseg:
        for word in train_dataseg:
            if word in all_words:
                all_words[word] += 1
            else:
                all_words[word] = 1
    # Total number of distinct words seen in the training data
    print("all_words length in all the train datas: ", len(all_words.keys()))
    # Sort by frequency in descending order (the built-in sorted() needs a list of items)
    all_words_reverse = sorted(all_words.items(), key=lambda f: f[1], reverse=True)
    for all_word_reverse in all_words_reverse:
        print(all_word_reverse[0], "\t", all_word_reverse[1])
    # Keep only words longer than one character
    all_words_list = [all_word_reverse[0] for all_word_reverse in all_words_reverse if len(all_word_reverse[0]) > 1]
    return all_words_list

if __name__ == "__main__":
    all_datas = []
    for world_data_name, world_data_number in dir.items():
        while (data_file_number < world_data_number):
            print(world_data_name)
            print(world_data_number)
            print(data_file_number)
            # Each file produced by the previous step holds one segmented word per line
            with open('../data/train_data/' + world_data_name + '/' + str(data_file_number) + '.txt', 'r', encoding='UTF-8') as file:
                all_datas.append([line.strip() for line in file])
            data_file_number = data_file_number + 1
        data_file_number = 0
    MakeAllWordsList(all_datas)


Classification

For this part there are two naive Bayes classifiers to choose from, one from nltk and one from sklearn; I use the sklearn one.
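
Before the full script, here is a small sketch of the feature representation it uses: each document becomes a 0/1 vector marking which of the selected feature words it contains (the words and the document below are illustrative only):

# Binary bag-of-words features over a fixed list of feature words.
feature_words = ['宝宝', '汽车', '食物', '健康']  # example feature words
document = ['宝宝', '健康', '饮食']               # one segmented document

text_words = set(document)
features = [1 if word in text_words else 0 for word in feature_words]
print(features)  # [1, 0, 0, 1]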

#coding: utf-8
import os
import time
import random
import jieba
import nltk
import sklearn
from sklearn.naive_bayes import MultinomialNB
import numpy as np
import pylab as pl
import matplotlib.pyplot as plt

def MakeWordsSet(words_file):
    words_set = set()
    with open(words_file, 'r', encoding='UTF-8') as fp:
        for line in fp.readlines():
            word = line.strip()
            if len(word) > 0 and word not in words_set:  # deduplicate
                words_set.add(word)
    return words_set

def TextProcessing(folder_path, test_size=0.2):
    folder_list = os.listdir(folder_path)
    data_list = []
    class_list = []

    # Loop over classes (one sub-folder per class)
    for folder in folder_list:
        new_folder_path = os.path.join(folder_path, folder)
        files = os.listdir(new_folder_path)
        # Loop over files within a class
        j = 0
        for file in files:
            if j > 410:  # cap the number of samples per class at 410
                break
            with open(os.path.join(new_folder_path, file), 'r', encoding='UTF-8') as fp:
                raw = fp.read()
            # print(raw)
            ## --------------------------------------------------------------------------------
            ## jieba word segmentation
            # jieba.enable_parallel(4)  # parallel segmentation (number of processes); not supported on Windows
            word_cut = jieba.cut(raw, cut_all=False)  # precise mode; returns an iterable generator
            word_list = list(word_cut)  # convert the generator to a list of unicode words
            # jieba.disable_parallel()  # disable parallel segmentation
            # print(word_list)
            ## --------------------------------------------------------------------------------
            data_list.append(word_list)
            class_list.append(folder)
            j += 1

    ## Split into training and test sets
    # train_data_list, test_data_list, train_class_list, test_class_list = sklearn.cross_validation.train_test_split(data_list, class_list, test_size=test_size)
    data_class_list = list(zip(data_list, class_list))
    random.shuffle(data_class_list)
    index = int(len(data_class_list) * test_size) + 1
    train_list = data_class_list[index:]
    test_list = data_class_list[:index]
    train_data_list, train_class_list = zip(*train_list)
    test_data_list, test_class_list = zip(*test_list)

    # Count word frequencies into all_words_dict
    all_words_dict = {}
    for word_list in train_data_list:
        for word in word_list:
            if word in all_words_dict:
                all_words_dict[word] += 1
            else:
                all_words_dict[word] = 1
    # Sort by frequency in descending order (the built-in sorted() needs a list of items)
    all_words_tuple_list = sorted(all_words_dict.items(), key=lambda f: f[1], reverse=True)
    all_words_list = list(zip(*all_words_tuple_list))[0]

    return all_words_list, train_data_list, test_data_list, train_class_list, test_class_list

def words_dict(all_words_list, deleteN, stopwords_set=set()):
    # Select the feature words
    feature_words = []
    n = 1
    for t in range(deleteN, len(all_words_list), 1):
        if n > 1000:  # cap feature_words at 1000 dimensions
            break
        # print(all_words_list[t])
        if not all_words_list[t].isdigit() and all_words_list[t] not in stopwords_set and 1 < len(all_words_list[t]) < 5:
            feature_words.append(all_words_list[t])
            n += 1
    return feature_words

def TextFeatures(train_data_list, test_data_list, feature_words, flag='nltk'):
    def text_features(text, feature_words):
        text_words = set(text)
        ## -----------------------------------------------------------------------------------
        if flag == 'nltk':
            ## nltk features: a dict
            features = {word: 1 if word in text_words else 0 for word in feature_words}
        elif flag == 'sklearn':
            ## sklearn features: a list
            features = [1 if word in text_words else 0 for word in feature_words]
        else:
            features = []
        ## -----------------------------------------------------------------------------------
        return features
    train_feature_list = [text_features(text, feature_words) for text in train_data_list]
    test_feature_list = [text_features(text, feature_words) for text in test_data_list]
    return train_feature_list, test_feature_list

def TextClassifier(train_feature_list, test_feature_list, train_class_list, test_class_list, flag='nltk'):
    ## -----------------------------------------------------------------------------------
    if flag == 'nltk':
        ## nltk classifier
        train_flist = zip(train_feature_list, train_class_list)
        test_flist = zip(test_feature_list, test_class_list)
        classifier = nltk.classify.NaiveBayesClassifier.train(train_flist)
        test_accuracy = nltk.classify.accuracy(classifier, test_flist)
    elif flag == 'sklearn':
        ## sklearn classifier
        classifier = MultinomialNB().fit(train_feature_list, train_class_list)
        test_accuracy = classifier.score(test_feature_list, test_class_list)
    else:
        test_accuracy = []
    return test_accuracy

if __name__ == '__main__':

    print("start")

    ## Text preprocessing
    folder_path = '../data/demo'
    all_words_list, train_data_list, test_data_list, train_class_list, test_class_list = TextProcessing(folder_path, test_size=0.2)

    # Build stopwords_set
    stopwords_file = '../data/stopword.txt'
    stopwords_set = MakeWordsSet(stopwords_file)

    ## Feature extraction and classification
    # flag = 'nltk'
    flag = 'sklearn'
    deleteNs = range(0, 1000, 20)
    test_accuracy_list = []
    for deleteN in deleteNs:
        # feature_words = words_dict(all_words_list, deleteN)
        feature_words = words_dict(all_words_list, deleteN, stopwords_set)
        train_feature_list, test_feature_list = TextFeatures(train_data_list, test_data_list, feature_words, flag)
        test_accuracy = TextClassifier(train_feature_list, test_feature_list, train_class_list, test_class_list, flag)
        test_accuracy_list.append(test_accuracy)
    print(test_accuracy_list)

    # Evaluate: plot accuracy against deleteN
    plt.figure()
    plt.plot(deleteNs, test_accuracy_list)
    plt.title('Relationship of deleteNs and test_accuracy')
    plt.xlabel('deleteNs')
    plt.ylabel('test_accuracy')
    plt.savefig('result.png')

    print("finished")


Classification Results



Reflections

I originally wanted to build a rumor-detection system, but the results were not satisfactory, so I wrapped the work up as a news classifier instead.

For the rumor-detection system, I am now digging into knowledge graphs, clustering, and semi-supervised learning.

Starting from "small data" and working toward solving practical problems.

——————————————————————————————————-

If you are studying machine learning, you are welcome to join the group to chat and learn; the latest machine-learning PDFs, books, and other resources are posted there from time to time.

QQ group: 657119450
