
Trying out a Python scraper with requests + BeautifulSoup

2016-04-13 23:58
For work I need to scrape Sina Weibo data. I had been using Java, but the page encryption there was painful, so I switched to Python. As a warm-up, I'm trying out how scrapers are written in Python on Qiushibaike.

Tools

requests

BeautifulSoup

Tool references

Python Crawler Tools, Part 1: Using the Requests Library

Python Crawler Tools, Part 2: Using Beautiful Soup

There is also PyQuery, which is supposedly quite good. I gave it a try and found it painful to use: it gets confused as soon as a class attribute contains spaces. In Java I had always parsed pages with Jsoup, which felt natural, and BeautifulSoup feels like the closest equivalent. Enough talk, let's get started!
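To make that multi-class point concrete, here is a minimal sketch of how BeautifulSoup handles a tag that carries several classes (the HTML snippet is made up for illustration):

from bs4 import BeautifulSoup

# Made-up snippet: the tag carries several classes, as on Qiushibaike
html = '<div class="article block untagged mb15">hello</div>'
soup = BeautifulSoup(html, "lxml")

# class_ with a single class name matches any tag carrying that class
print(soup.find_all(class_="article"))

# A string with spaces is matched against the exact class attribute value
print(soup.find_all(class_="article block untagged mb15"))

# CSS selectors chain the classes with dots instead of spaces
print(soup.select("div.article.block.untagged.mb15"))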

Page structure

(The screenshots of Qiushibaike's HTML structure are omitted here. As the selectors in the code below show, each post is a div with class article block untagged mb15 sitting directly under div#content-left; inside it, div.author.clearfix holds the author links, div.content holds the text, and two i.number elements hold the laugh and comment counts.)
Code

import requests
from bs4 import BeautifulSoup

page = 1
rooturl = 'http://www.qiushibaike.com/hot/page/' + str(page)

# Query parameters could be passed like this:
# payload = {'key1': 'value1', 'key2': 'value2'}
# r = requests.get(rooturl, params=payload)
pageReq = requests.get(rooturl)
pageString = pageReq.text

doc = BeautifulSoup(pageString, "lxml")

# Every post sits directly under the #content-left container
parents = doc.find('div', id='content-left')

for elem in parents.find_all(class_="article block untagged mb15", recursive=False):
    # Anonymous posts have only one <a> in the author block, so the name stays empty
    authorName = ""
    if len(elem.find(class_="author clearfix").select('a')) == 2:
        authorName = elem.find(class_="author clearfix").select('a')[1]['title']
    content = elem.find(class_="content").get_text().strip()
    # The first i.number is the laugh count, the second the comment count
    num_laugh = elem.find_all("i", class_="number")[0].get_text()
    num_comments = elem.find_all("i", class_="number")[1].get_text()
    print("author: " + authorName + "\n" + "content: " + content + "\n" + num_laugh + " " + num_comments)
    print("***************************************************")

# An earlier attempt with a CSS selector; note that classes must be chained
# with dots: doc.select('#content-left > .article.block.untagged.mb15')
# target = soup.select('#content-left > .article block untagged mb15')
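Extending this to more pages is just a loop over the page number. Below is a minimal sketch, not part of the original script: the fetch_page helper and the browser-like User-Agent header are my assumptions (sites like this often reject the default requests user agent):

import requests
from bs4 import BeautifulSoup

# Assumed: a browser-like User-Agent, in case the default one is blocked
HEADERS = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}

def fetch_page(page):
    """Fetch one hot-list page and return its parsed post divs."""
    url = 'http://www.qiushibaike.com/hot/page/' + str(page)
    req = requests.get(url, headers=HEADERS, timeout=10)
    req.raise_for_status()
    doc = BeautifulSoup(req.text, "lxml")
    root = doc.find('div', id='content-left')
    if root is None:  # layout changed or the request was blocked
        return []
    return root.find_all(class_="article block untagged mb15", recursive=False)

# Crawl the first three pages
for page in range(1, 4):
    for elem in fetch_page(page):
        content = elem.find(class_="content").get_text().strip()
        print(content)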


Output

For each post the script prints the author, the content, and the laugh and comment counts, followed by a row of asterisks as a separator.
