
Python Crawler Study Notes 1 — Scraping Qiushibaike Jokes

2015-10-28 17:49
Series — Python Crawler in Practice

Task — scrape jokes from Qiushibaike (qiushibaike.com)

Language — Python

Goals — 1. Scrape Qiushibaike jokes

2. Filter out jokes that contain images

3. Display one joke per Enter keypress, showing its publish time, author, content, and upvote count

Steps

1. Determine the URL and fetch the page source

Initial code:

======================================

# -*- coding:utf8 -*-
import urllib
import urllib2

page = 1
url = 'http://www.qiushibaike.com/hot/page/' + str(page)
try:
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    print response.read()
except urllib2.URLError, e:
    if hasattr(e, "code"):
        print e.code
    if hasattr(e, "reason"):
        print e.reason

==========================================

It raises an error:

raise BadStatusLine(line)

httplib.BadStatusLine: ''

Cause:

The server most likely rejects requests that do not look like they come from a browser, i.e. requests missing a User-Agent header.

Fix:

Add a User-Agent header to the request.

The modified code:

============================================

# -*- coding:utf8 -*-
import urllib
import urllib2

page = 1
url = 'http://www.qiushibaike.com/hot/page/' + str(page)
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
headers = {'User-Agent': user_agent}
try:
    request = urllib2.Request(url, headers=headers)
    response = urllib2.urlopen(request)
    print response.read()
except urllib2.URLError, e:
    if hasattr(e, "code"):
        print e.code
    if hasattr(e, "reason"):
        print e.reason

==========================================

The page source is now fetched successfully.
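For reference, the same fetch looks like this in Python 3, where urllib2 was merged into urllib.request. This is only a sketch; the network call is commented out so it runs offline, and the URL from the post may no longer be reachable:

```python
import urllib.request

page = 1
url = 'http://www.qiushibaike.com/hot/page/' + str(page)
headers = {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}
request = urllib.request.Request(url, headers=headers)

# urllib.request.urlopen(request).read().decode('utf-8') would fetch the
# page; it is skipped here to keep the example runnable without a network.
print(request.get_header('User-agent'))  # Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)
```

Note that `Request` normalizes header names, so the stored key is `User-agent`.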

2. Extract the jokes on one page

Points to note:

a. Using regular expressions

Reference tutorials:
http://deerchao.net/tutorials/regex/regex.htm http://www.cnblogs.com/huxi/archive/2010/07/04/1771073.html

b. Filtering out jokes that contain images

Open the target page, e.g. http://www.qiushibaike.com/hot/page/2, open the developer tools, inspect the page source, and write the regular expression against it.
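To see what the regex captures, here it is run against a toy snippet imitating the 2015-era Qiushibaike markup it was written for. The class names and structure in `sample` are reconstructed for illustration only; the real page source may differ:

```python
import re

# Two fake jokes: Alice's has no image, Bob's has a thumbnail <img> between
# the content div and the stats div (captured by group 4).
sample = '''
<div class="author"><a href="/u/1"><img src="a.jpg" alt="x">Alice</a></div>
<div class="content">A funny joke<!--123--></div>
<div class="stats"><i class="number">99</i>
<div class="author"><a href="/u/2"><img src="b.jpg" alt="y">Bob</a></div>
<div class="content">Joke with a picture<!--456--></div>
<div class="thumb"><img src="pic.jpg"></div>
<div class="stats"><i class="number">5</i>
'''

pattern = re.compile('<div.*?author">.*?<a.*?<img.*?>(.*?)</a>.*?<div.*?' +
                     'content">(.*?)<!--(.*?)-->.*?</div>(.*?)<div class="stats.*?class="number">(.*?)</i>', re.S)

for item in re.findall(pattern, sample):
    if not re.search("img", item[3]):       # group 4 holds any thumbnail markup
        print(item[0], item[1], item[4])    # Alice A funny joke 99
```

Groups 1–5 are author, content, a comment holding the timestamp, the markup between content and stats (where a thumbnail would appear), and the upvote count; Bob's joke is dropped because group 4 contains `img`.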

The code:

==========================================

# -*- coding:utf8 -*-
import urllib
import urllib2
import re  # don't forget to import re when using regular expressions

page = 1
url = 'http://www.qiushibaike.com/hot/page/' + str(page)
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
headers = {'User-Agent': user_agent}
try:
    request = urllib2.Request(url, headers=headers)
    response = urllib2.urlopen(request)
    content = response.read().decode('utf-8')
    pattern = re.compile('<div.*?author">.*?<a.*?<img.*?>(.*?)</a>.*?<div.*?' +
                         'content">(.*?)<!--(.*?)-->.*?</div>(.*?)<div class="stats.*?class="number">(.*?)</i>', re.S)
    items = re.findall(pattern, content)
    for item in items:
        haveImg = re.search("img", item[3])  # note: a misspelled re.reaserch here would raise AttributeError
        if not haveImg:
            print item[0], item[1], item[2], item[4]
except urllib2.URLError, e:
    if hasattr(e, "code"):
        print e.code
    if hasattr(e, "reason"):
        print e.reason

=============================================

A problem at runtime:

unindent does not match any outer indentation level

Cause: the code is not indented consistently.

Python is strict about indentation (it can drive you crazy at times); never mix spaces and tabs.
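A minimal reproduction of that error: the second statement dedents to a level (4 spaces) that matches no enclosing block (0 or 8 spaces), so the compiler cannot tell which block it belongs to:

```python
# The body of the if is indented 8 spaces, then y = 2 dedents to 4 spaces,
# a level that was never opened.
bad_src = "if True:\n        x = 1\n    y = 2\n"

try:
    compile(bad_src, "<demo>", "exec")
except IndentationError as e:
    print(e.msg)  # unindent does not match any outer indentation level
```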

3. Polish the code

On each Enter keypress, read one joke and display its author, publish date, content, and upvote count.

=====================================================

# -*- coding:utf-8 -*-
import urllib
import urllib2
import re
import thread  # imported for possible threading; unused in this version
import time

class QSBK:

    def __init__(self):
        self.pageIndex = 1
        self.user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
        self.headers = {'User-Agent': self.user_agent}
        self.stories = []    # buffer of pages of jokes
        self.enable = False  # whether the program keeps running

    def getPage(self, pageIndex):
        try:
            url = 'http://www.qiushibaike.com/hot/page/' + str(pageIndex)
            request = urllib2.Request(url, headers=self.headers)
            response = urllib2.urlopen(request)
            pageCode = response.read().decode('utf-8')
            return pageCode
        except urllib2.URLError, e:
            if hasattr(e, "reason"):
                print u"Failed to connect to Qiushibaike, reason:", e.reason
            return None

    def getPageItems(self, pageIndex):
        pageCode = self.getPage(pageIndex)
        if not pageCode:
            print "Failed to load the page..."
            return None
        pattern = re.compile('<div.*?author">.*?<a.*?<img.*?>(.*?)</a>.*?<div.*?' +
                             'content">(.*?)<!--(.*?)-->.*?</div>(.*?)<div class="stats.*?class="number">(.*?)</i>', re.S)
        items = re.findall(pattern, pageCode)
        pageStories = []
        for item in items:
            haveImg = re.search("img", item[3])
            if not haveImg:
                replaceBR = re.compile('<br/>')
                text = re.sub(replaceBR, "\n", item[1])
                pageStories.append([item[0].strip(), text.strip(), item[2].strip(), item[4].strip()])
        return pageStories

    def loadPage(self):
        # keep at least two pages of jokes buffered
        if self.enable == True:
            if len(self.stories) < 2:
                pageStories = self.getPageItems(self.pageIndex)
                if pageStories:
                    self.stories.append(pageStories)
                    self.pageIndex += 1

    def getOneStory(self, pageStories, page):
        for story in pageStories:
            input = raw_input()
            self.loadPage()
            if input == "Q":
                self.enable = False
                return
            print u"Page %d\tAuthor: %s\tTime: %s\tUpvotes: %s\n%s" % (page, story[0], story[2], story[3], story[1])

    def start(self):
        print u"Reading Qiushibaike; press Enter to view a joke, Q then Enter to quit"
        self.enable = True
        self.loadPage()
        nowPage = 0
        while self.enable:
            if len(self.stories) > 0:
                pageStories = self.stories[0]
                nowPage += 1
                del self.stories[0]
                self.getOneStory(pageStories, nowPage)

spider = QSBK()
spider.start()

=============================================
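The heart of the class is the paging buffer: loadPage tops the buffer up to two pages whenever it drops below that, and start pops the oldest page off the front. Here is a minimal Python 3 sketch of just that logic, with the network call replaced by a stub so it runs offline; StoryBuffer and fetch_page are hypothetical stand-ins for the class and getPageItems above:

```python
class StoryBuffer:
    def __init__(self, fetch_page):
        self.fetch_page = fetch_page  # callable: page index -> list of jokes
        self.page_index = 1
        self.stories = []             # buffered pages, oldest first

    def load_page(self):
        # fetch one more page whenever fewer than two are buffered,
        # mirroring loadPage above
        if len(self.stories) < 2:
            page = self.fetch_page(self.page_index)
            if page:
                self.stories.append(page)
                self.page_index += 1

    def next_page(self):
        # top the buffer up, then pop the oldest buffered page
        self.load_page()
        return self.stories.pop(0) if self.stories else None

# a fake three-page site standing in for the real fetch
fake_site = {1: ["joke A", "joke B"], 2: ["joke C"], 3: ["joke D"]}
buf = StoryBuffer(lambda i: fake_site.get(i, []))
print(buf.next_page())  # ['joke A', 'joke B']
print(buf.next_page())  # ['joke C']
```

Pre-fetching a page ahead like this keeps the next joke ready before the user presses Enter, so reading feels instant even though each page needs a network round trip.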

PS:

Watch your indentation!
