
Scrapy-Redis Source: An Analysis of the Spider Classes

2015-09-19 10:40
Below is an analysis of spiders.py from the downloaded scrapy-redis source:

RedisSpider inherits from both Spider and RedisMixin; RedisMixin is the class responsible for reading URLs from redis.

When we write a spider that inherits from RedisSpider, setup_redis is called. This function connects to the redis database and then registers two signals. One fires when the spider goes idle (signals.spider_idle): the spider_idle handler calls schedule_next_request and then raises DontCloseSpider, which keeps the spider alive. The other fires whenever an item is scraped (signals.item_scraped): the item_scraped handler also calls schedule_next_request, so the spider does not have to wait until it is idle before scheduling the next request.

from scrapy import Spider, signals
from scrapy.exceptions import DontCloseSpider

from . import connection


class RedisMixin(object):
    """Mixin class to implement reading urls from a redis queue."""
    redis_key = None  # use default '<spider>:start_urls'

    def setup_redis(self):
        """Setup redis connection and idle signal.

        This should be called after the spider has set its crawler object.
        """
        if not self.redis_key:
            self.redis_key = '%s:start_urls' % self.name

        self.server = connection.from_settings(self.crawler.settings)
        # idle signal is called when the spider has no requests left,
        # that's when we will schedule new requests from redis queue
        self.crawler.signals.connect(self.spider_idle, signal=signals.spider_idle)
        self.crawler.signals.connect(self.item_scraped, signal=signals.item_scraped)
        self.log("Reading URLs from redis list '%s'" % self.redis_key)

    def next_request(self):
        """Returns a request to be scheduled or none."""
        use_set = self.settings.getbool('REDIS_SET')

        if use_set:
            url = self.server.spop(self.redis_key)
        else:
            url = self.server.lpop(self.redis_key)

        if url:
            return self.make_requests_from_url(url)

    def schedule_next_request(self):
        """Schedules a request if available"""
        req = self.next_request()
        if req:
            self.crawler.engine.crawl(req, spider=self)

    def spider_idle(self):
        """Schedules a request if available, otherwise waits."""
        self.schedule_next_request()
        raise DontCloseSpider

    def item_scraped(self, *args, **kwargs):
        """Avoids waiting for the spider to idle before scheduling the next request"""
        self.schedule_next_request()


class RedisSpider(RedisMixin, Spider):
    """Spider that reads urls from redis queue when idle."""

    def _set_crawler(self, crawler):
        super(RedisSpider, self)._set_crawler(crawler)
        self.setup_redis()

The scrapy-redis source we downloaded also ships with an example-project, which contains three spiders: dmoz, mycrawler_redis, and myspider_redis.

dmoz (class DmozSpider(CrawlSpider)) inherits from CrawlSpider. It exists to demonstrate redis persistence: run the dmoz spider once, stop it with Ctrl-C, then run it again, and the earlier crawl records are still there.
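
That persistence comes from the project settings rather than the spider itself: the example project points Scrapy at scrapy-redis's scheduler and dupefilter, which keep their state in redis. A sketch of the relevant settings (names as documented by scrapy-redis; the example project's settings.py contains more):

# settings.py -- the pieces that make the dmoz crawl resumable
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_PERSIST = True  # don't flush the redis queue when the spider closes
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"  # seen-request fingerprints live in redis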

myspider_redis (class MySpider(RedisSpider)) inherits from RedisSpider and supports distributed crawling. It is a basic spider, so it requires a parse callback.
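
A minimal sketch of what such a basic redis-fed spider looks like (the parse body and field names are illustrative, not the exact example-project code):

from scrapy_redis.spiders import RedisSpider

class MySpider(RedisSpider):
    """Basic spider: start URLs come from the 'myspider:start_urls' redis key."""
    name = 'myspider_redis'
    redis_key = 'myspider:start_urls'

    def parse(self, response):
        # a basic spider only crawls what it is fed; it does not follow links
        yield {
            'url': response.url,
            'title': response.css('title::text').extract_first(),
        }

Several copies of this spider on different machines can read from the same redis key, which is what makes the crawl distributed.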

mycrawler_redis (class MyCrawler(RedisMixin, CrawlSpider)) inherits from both RedisMixin and CrawlSpider. It also supports distributed crawling; being a CrawlSpider, it requires Rule definitions instead, as in the sketch below.
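
A sketch of that combination, reusing the _set_crawler hook that RedisSpider uses above to call setup_redis (the rule and callback here are illustrative):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy_redis.spiders import RedisMixin

class MyCrawler(RedisMixin, CrawlSpider):
    """Distributed CrawlSpider: seeds come from redis, links are followed via rules."""
    name = 'mycrawler_redis'
    redis_key = 'mycrawler:start_urls'

    rules = (
        # follow every extracted link and hand each page to parse_page
        Rule(LinkExtractor(), callback='parse_page', follow=True),
    )

    def _set_crawler(self, crawler):
        # connect to redis once the crawler object is set, as RedisSpider does
        super(MyCrawler, self)._set_crawler(crawler)
        self.setup_redis()

    def parse_page(self, response):
        yield {
            'url': response.url,
            'title': response.css('title::text').extract_first(),
        }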