
Python crawler practice: scraping and storing Lianjia listings with Scrapy (with pagination)

2019-05-13 18:48

This spider scrapes listing data from Lianjia's rental channel (zufang), including pagination and the contents of each listing's detail page.
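
The files below follow the standard layout that `scrapy startproject` generates. The project name is assumed to be scrapytest (inferred from the ScrapytestItem and ScrapytestPipeline class names), and the spider's filename is a guess:

[code]scrapytest/
    scrapy.cfg                  # project config entry point
    scrapytest/
        items.py                # field definitions (below)
        pipelines.py            # storage pipeline (below)
        settings.py             # project settings
        spiders/
            spider_city_58.py   # the spider (below; filename assumed)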

items.py

[code]import scrapy

class ScrapytestItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()           # listing title
    price = scrapy.Field()           # price
    url = scrapy.Field()             # detail-page URL
    introduce_item = scrapy.Field()  # listing description
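
For context, a populated Item behaves like a dict restricted to the declared fields, which is why the pipeline below can call dict(item). A small sketch (the field value is hypothetical):

[code]item = ScrapytestItem()
item['title'] = ['整租·某小区 2室1厅']  # hypothetical value; extract() returns a list
print(dict(item))  # {'title': ['整租·某小区 2室1厅']}
# item['area'] = '朝阳'  # would raise KeyError: 'area' is not a declared field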

pipelines.py

[code]import json

class ScrapytestPipeline(object):
    # open the output file when the spider starts
    def open_spider(self, spider):
        self.file = open('58_chuzu.txt', 'w', encoding='utf-8')
        print('File opened')

    # write each item to the file as one JSON line
    def process_item(self, item, spider):
        line = '{}\n'.format(json.dumps(dict(item), ensure_ascii=False))
        self.file.write(line)
        return item

    # close the file when the spider finishes
    def close_spider(self, spider):
        self.file.close()
        print('File closed')
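One thing the post doesn't show: Scrapy only runs a pipeline that is enabled in settings.py. A minimal sketch, assuming the project package is named scrapytest as above:

[code]# settings.py
ITEM_PIPELINES = {
    'scrapytest.pipelines.ScrapytestPipeline': 300,  # 0-1000; lower runs earlier
}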

spider

[code]import scrapy
from ..items import ScrapytestItem
from scrapy.http import Request

class SpiderCity58Spider(scrapy.Spider):
    name = 'spider_city_58'  # required: the spider's unique name
    allowed_domains = ['lianjia.com']
    start_urls = ['https://bj.lianjia.com/zufang/']

    def parse(self, response):
        # extract the listing blocks on the current page
        info_list = response.xpath('//*[@id="content"]/div[1]/div[1]/div')
        for i in info_list:
            item = ScrapytestItem()
            item['title'] = i.xpath('normalize-space(./div/p[1]/a/text())').extract()
            item['price'] = i.xpath('./div/span/em/text()').extract()
            url = i.xpath('./div/p[1]/a/@href').extract_first()
            # resolve the relative href to an absolute URL; guard against a missing
            # href first, since urljoin(None) would silently return the page URL itself
            item['url'] = response.urljoin(url) if url else None

            # follow the detail page for this listing
            if item['url']:  # skip listings without a link
                yield Request(
                    item['url'],
                    callback=self.detail_parse,
                    meta={'item': item},  # meta only takes a dict; hands the item to detail_parse()
                    priority=10,
                    dont_filter=True
                )

        # queue the pagination URLs
        for page in range(2, 5):
            url = 'https://bj.lianjia.com/zufang/pg{}/'.format(page)  # build each page's URL
            yield Request(url, callback=self.parse)

    # extract the description from the detail page
    def detail_parse(self, response):
        item = response.meta['item']
        item['introduce_item'] = response.xpath('//*[@id="desc"]/ul/li/p[1]/text()').extract()
        return item
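
With the pieces in place, the crawl can be started from the project root with `scrapy crawl spider_city_58`, or from a script. A minimal sketch using Scrapy's CrawlerProcess:

[code]# run.py -- assumed to sit next to scrapy.cfg in the project root
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())  # loads settings.py, so the pipeline is active
process.crawl('spider_city_58')                   # look up the spider by its `name`
process.start()                                   # blocks until the crawl finishes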

I originally planned to scrape 58.com, but I can't get past its anti-scraping measures yet, so I switched to Lianjia (which is also why the output file is still named 58_chuzu.txt).

So far it has crawled 4 pages successfully, which shows the approach works. The next step is to add request-timing controls and similar safeguards.
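
For that timing control, Scrapy's built-in settings cover most of it; a sketch of the relevant settings.py entries (values are illustrative):

[code]# settings.py -- illustrative values
DOWNLOAD_DELAY = 2               # base wait, in seconds, between requests to the same site
RANDOMIZE_DOWNLOAD_DELAY = True  # jitter the delay between 0.5x and 1.5x
AUTOTHROTTLE_ENABLED = True      # adapt the delay to observed server latency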
