
scrapy-redis: how to stop the spider from idling once every request has been crawled?



1. Background



In a scrapy-redis distributed crawl, several spider hosts share a single crawl queue. As long as the queue contains requests, a spider pops one and crawls it; when the queue is empty, the spider simply sits and waits, as shown in the log below:

E:\Miniconda\python.exe E:/PyCharmCode/redisClawerSlaver/redisClawerSlaver/spiders/main.py
2017-12-12 15:54:18 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: scrapybot)
2017-12-12 15:54:18 [scrapy.utils.log] INFO: Overridden settings: {'SPIDER_LOADER_WARN_ONLY': True}
2017-12-12 15:54:18 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2017-12-12 15:54:18 [myspider_redis] INFO: Reading start URLs from redis key 'myspider:start_urls' (batch size: 110, encoding: utf-8
2017-12-12 15:54:18 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'redisClawerSlaver.middlewares.ProxiesMiddleware',
'redisClawerSlaver.middlewares.HeadersMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-12-12 15:54:18 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-12-12 15:54:18 [scrapy.middleware] INFO: Enabled item pipelines:
['redisClawerSlaver.pipelines.ExamplePipeline',
'scrapy_redis.pipelines.RedisPipeline']
2017-12-12 15:54:18 [scrapy.core.engine] INFO: Spider opened
2017-12-12 15:54:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-12-12 15:55:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-12-12 15:56:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)


But what if every request has already been crawled? The spider has no way of knowing this: it cannot tell the difference between "finished" and "temporarily empty", so it stays in the waiting state above forever. That is what we mean by the spider running idle.

Is there a way to let the spider tell these two situations apart and shut itself down automatically?

2. Environment

OS: Windows 7

scrapy-redis

redis 3.0.5

python 3.6.1

3. Solutions

As the background section suggests, under the scrapy-redis distributed model "the crawl is finished" is an inherently fuzzy notion. While the spider runs, the crawl queue changes constantly: as requests are consumed, new requests are pushed in, so items keep flowing in and out. If the crawl rate is higher than the fill rate there will be empty windows (stretches of time during which the queue holds no requests at all); if the crawl rate is lower than the fill rate there will be none. The end of a crawl can therefore only be defined approximately; there is no exact criterion.

Both of the approaches below are therefore approximate heuristics.

3.1. Use Scrapy's CloseSpider extension

See the official documentation: http://scrapy-chs.readthedocs.io/zh_CN/0.24/topics/extensions.html

# CloseSpider extension
class scrapy.contrib.closespider.CloseSpider
Closes a spider automatically when certain conditions are met, using a specific close reason for each condition.

The conditions for closing the spider can be configured through the following settings:

CLOSESPIDER_TIMEOUT
CLOSESPIDER_ITEMCOUNT
CLOSESPIDER_PAGECOUNT
CLOSESPIDER_ERRORCOUNT


CLOSESPIDER_TIMEOUT

Default: 0

An integer number of seconds. If the spider is still running after that many seconds, it is closed automatically with the reason closespider_timeout. If set to 0 (or left unset), the spider will not be closed by timeout.


CLOSESPIDER_ITEMCOUNT

Default: 0

An integer number of items. If the spider scrapes more than that many items and they are passed through the item pipeline, the spider is closed with the reason closespider_itemcount.


CLOSESPIDER_PAGECOUNT

New in version 0.11.

Default: 0

An integer specifying the maximum number of responses to crawl. If the spider crawls more than that, it is closed with the reason closespider_pagecount. If set to 0 (or left unset), the spider will not be closed by the number of crawled responses.


CLOSESPIDER_ERRORCOUNT

New in version 0.11.

Default: 0

An integer specifying the maximum number of errors the spider may generate. If the spider raises more errors than that, it is closed with the reason closespider_errorcount. If set to 0 (or left unset), the spider will not be closed because of too many errors.


Example: open settings.py and add a setting such as the following.

# If the spider is still running after 23.5 hours, close it automatically
CLOSESPIDER_TIMEOUT = 84600


Important: if the spider is force-stopped by this time limit before it has crawled every request, the crawl queue will still hold leftover requests. Before the next run, be sure to clear the crawl queue on the master; a cleanup sketch follows.
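
For illustration only, here is a minimal cleanup sketch. It assumes the default scrapy-redis key names ('<spider>:requests' for the crawl queue and '<spider>:dupefilter' for the duplicate filter) and a hypothetical spider name myspider; adjust the host, port and names to your own setup:

import redis

# Hypothetical connection details and spider name -- adjust to your master.
r = redis.StrictRedis(host='127.0.0.1', port=6379)
spider_name = 'myspider'

# Default scrapy-redis keys; change these if you overrode SCHEDULER_QUEUE_KEY
# or SCHEDULER_DUPEFILTER_KEY in settings.py.
deleted = r.delete('%s:requests' % spider_name, '%s:dupefilter' % spider_name)
print('deleted %d redis keys' % deleted)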

3.2. Modify the scrapy-redis source

# ----------- When modifying the scrapy-redis source, pay special attention to the following: ---------
# First, keep a backup of the original code.
# Second, when the project is moved to another machine, the modified scrapy-redis source has to be moved along with it. The code normally lives under \Lib\site-packages\scrapy_redis\
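
A less invasive variant (my own sketch, not part of the original post) is to subclass the scheduler inside the project and point Scrapy's SCHEDULER setting at the subclass, so nothing under site-packages has to be edited or copied between machines. The module and class names below are made up, and the counting logic mirrors the patch shown in 3.2.1:

# redisClawerSlaver/scheduler.py -- hypothetical module inside the project
from scrapy_redis.scheduler import Scheduler

class IdleClosingScheduler(Scheduler):
    """Close the spider after a long streak of empty pops, without patching scrapy_redis."""

    lostGetRequest = 0

    def next_request(self):
        request = super(IdleClosingScheduler, self).next_request()
        if request:
            self.lostGetRequest = 0
        else:
            self.lostGetRequest += 1
            # Same hard-coded threshold as the patch in 3.2.1; tune to taste.
            if self.lostGetRequest > 200:
                self.spider.crawler.engine.close_spider(self.spider, 'queue is empty')
        return request

# settings.py -- point Scrapy at the subclass instead of the stock scheduler:
# SCHEDULER = 'redisClawerSlaver.scheduler.IdleClosingScheduler'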


Think about what characterises a finished crawl: the crawl queue is empty and no request can be fetched from it any more. So the place to intervene is where requests are taken from the queue and scheduled. Reading through the scrapy-redis source, we find two such places:

3.2.1. Details

# .\Lib\site-packages\scrapy_redis\scheduler.py
def next_request(self):
    block_pop_timeout = self.idle_before_close
    # Pop a request from the crawl queue.
    # I have not fully worked out what block_pop_timeout does here, but it is certainly not a request timeout......
    request = self.queue.pop(block_pop_timeout)
    if request and self.stats:
        self.stats.inc_value('scheduler/dequeued/redis', spider=self.spider)
    return request


# .\Lib\site-packages\scrapy_redis\spiders.py
def next_requests(self):
    """Returns a request to be scheduled or none."""
    use_set = self.settings.getbool('REDIS_START_URLS_AS_SET', defaults.START_URLS_AS_SET)
    fetch_one = self.server.spop if use_set else self.server.lpop
    # XXX: Do we need to use a timeout here?
    found = 0
    # TODO: Use redis pipeline execution.
    while found < self.redis_batch_size:
        data = fetch_one(self.redis_key)
        if not data:
            # The crawl queue is empty -- but that may be permanent or only temporary.
            # Queue empty.
            break
        req = self.make_request_from_data(data)
        if req:
            yield req
            found += 1
        else:
            self.logger.debug("Request not made from data: %r", data)

    if found:
        self.logger.debug("Read %s requests from '%s'", found, self.redis_key)


As the comments above indicate, these are the only two spots worth touching. But during a crawl the queue can go temporarily empty at any moment, so the usual way to decide that it is truly empty is to set a time window: if the queue stays empty for a whole continuous stretch, we can reasonably treat the crawl as finished. Hence the following change:

# .\Lib\site-packages\scrapy_redis\scheduler.py

# Original code
def next_request(self):
    block_pop_timeout = self.idle_before_close
    request = self.queue.pop(block_pop_timeout)
    if request and self.stats:
        self.stats.inc_value('scheduler/dequeued/redis', spider=self.spider)
    return request

# Modified code (note: "import datetime" must be added at the top of the file)
def __init__(self, server,
             persist=False,
             flush_on_start=False,
             queue_key=defaults.SCHEDULER_QUEUE_KEY,
             queue_cls=defaults.SCHEDULER_QUEUE_CLASS,
             dupefilter_key=defaults.SCHEDULER_DUPEFILTER_KEY,
             dupefilter_cls=defaults.SCHEDULER_DUPEFILTER_CLASS,
             idle_before_close=0,
             serializer=None):
    # ......
    # Add a counter for consecutive empty pops
    self.lostGetRequest = 0

def next_request(self):
    block_pop_timeout = self.idle_before_close
    request = self.queue.pop(block_pop_timeout)
    if request and self.stats:
        # Reset the counter as soon as a request is actually obtained
        self.lostGetRequest = 0
        self.stats.inc_value('scheduler/dequeued/redis', spider=self.spider)
    if request is None:
        self.lostGetRequest += 1
        print(f"request is None, lostGetRequest = {self.lostGetRequest}, time = {datetime.datetime.now()}")
        # 100 consecutive empty pops take roughly 8 minutes
        if self.lostGetRequest > 200:
            print(f"request is None, close spider.")
            # Close the spider
            self.spider.crawler.engine.close_spider(self.spider, 'queue is empty')
    return request


The relevant log output:

2017-12-14 16:18:06 [scrapy.middleware] INFO: Enabled item pipelines:
['redisClawerSlaver.pipelines.beforeRedisPipeline',
'redisClawerSlaver.pipelines.amazonRedisPipeline',
'scrapy_redis.pipelines.RedisPipeline']
2017-12-14 16:18:06 [scrapy.core.engine] INFO: Spider opened
2017-12-14 16:18:06 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
request is None, lostGetRequest = 1, time = 2017-12-14 16:18:06.370400
request is None, lostGetRequest = 2, time = 2017-12-14 16:18:11.363400
request is None, lostGetRequest = 3, time = 2017-12-14 16:18:16.363400
request is None, lostGetRequest = 4, time = 2017-12-14 16:18:21.362400
request is None, lostGetRequest = 5, time = 2017-12-14 16:18:26.363400
request is None, lostGetRequest = 6, time = 2017-12-14 16:18:31.362400
request is None, lostGetRequest = 7, time = 2017-12-14 16:18:36.363400
request is None, lostGetRequest = 8, time = 2017-12-14 16:18:41.362400
request is None, lostGetRequest = 9, time = 2017-12-14 16:18:46.363400
request is None, lostGetRequest = 10, time = 2017-12-14 16:18:51.362400
2017-12-14 16:18:56 [scrapy.core.engine] INFO: Closing spider (queue is empty)
request is None, lostGetRequest = 11, time = 2017-12-14 16:18:56.363400
request is None, close spider.
Login result: loginRes = (235, b'Authentication successful')
Login succeeded, code = 235
mail has been send successfully. message:Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: base64
From: 548516910@qq.com
To: 548516910@qq.com
Subject: =?utf-8?b?54is6Jmr57uT5p2f54q25oCB5rGH5oql77yabmFtZSA9IHJlZGlzQ2xhd2VyU2xhdmVyLCByZWFzb24gPSBxdWV1ZSBpcyBlbXB0eSwgZmluaXNoZWRUaW1lID0gMjAxNy0xMi0xNCAxNjoxODo1Ni4zNjQ0MDA=?=

57uG6IqC77yacmVhc29uID0gcXVldWUgaXMgZW1wdHksIHN1Y2Nlc3NzISBhdDoyMDE3LTEyLTE0
IDE2OjE4OjU2LjM2NDQwMA==

2017-12-14 16:18:56 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'finish_reason': 'queue is empty',
'finish_time': datetime.datetime(2017, 12, 14, 8, 18, 56, 364400),
'log_count/INFO': 8,
'start_time': datetime.datetime(2017, 12, 14, 8, 18, 6, 362400)}
2017-12-14 16:18:56 [scrapy.core.engine] INFO: Spider closed (queue is empty)
Unhandled Error
Traceback (most recent call last):
File "E:\Miniconda\lib\site-packages\scrapy\commands\runspider.py", line 89, in run
self.crawler_process.start()
File "E:\Miniconda\lib\site-packages\scrapy\crawler.py", line 285, in start
reactor.run(installSignalHandlers=False)  # blocking call
File "E:\Miniconda\lib\site-packages\twisted\internet\base.py", line 1243, in run
self.mainLoop()
File "E:\Miniconda\lib\site-packages\twisted\internet\base.py", line 1252, in mainLoop
self.runUntilCurrent()
--- <exception caught here> ---
File "E:\Miniconda\lib\site-packages\twisted\internet\base.py", line 878, in runUntilCurrent
call.func(*call.args, **call.kw)
File "E:\Miniconda\lib\site-packages\scrapy\utils\reactor.py", line 41, in __call__
return self._func(*self._a, **self._kw)
File "E:\Miniconda\lib\site-packages\scrapy\core\engine.py", line 137, in _next_request
if self.spider_is_idle(spider) and slot.close_if_idle:
File "E:\Miniconda\lib\site-packages\scrapy\core\engine.py", line 189, in spider_is_idle
if self.slot.start_requests is not None:
builtins.AttributeError: 'NoneType' object has no attribute 'start_requests'

2017-12-14 16:18:56 [twisted] CRITICAL: Unhandled Error
Traceback (most recent call last):
File "E:\Miniconda\lib\site-packages\scrapy\commands\runspider.py", line 89, in run
self.crawler_process.start()
File "E:\Miniconda\lib\site-packages\scrapy\crawler.py", line 285, in start
reactor.run(installSignalHandlers=False)  # blocking call
File "E:\Miniconda\lib\site-packages\twisted\internet\base.py", line 1243, in run
self.mainLoop()
File "E:\Miniconda\lib\site-packages\twisted\internet\base.py", line 1252, in mainLoop
self.runUntilCurrent()
--- <exception caught here> ---
File "E:\Miniconda\lib\site-packages\twisted\internet\base.py", line 878, in runUntilCurrent
call.func(*call.args, **call.kw)
File "E:\Miniconda\lib\site-packages\scrapy\utils\reactor.py", line 41, in __call__
return self._func(*self._a, **self._kw)
File "E:\Miniconda\lib\site-packages\scrapy\core\engine.py", line 137, in _next_request
if self.spider_is_idle(spider) and slot.close_if_idle:
File "E:\Miniconda\lib\site-packages\scrapy\core\engine.py", line 189, in spider_is_idle
if self.slot.start_requests is not None:
builtins.AttributeError: 'NoneType' object has no attribute 'start_requests'

Process finished with exit code 0


One issue remains. As the log above shows, when the spider is closed via engine.close_spider(spider, 'reason'), a few errors are sometimes raised before it actually shuts down. Presumably Scrapy still has several concurrent crawl calls in flight; once one of them has closed the spider, the others can no longer find it and report the error.
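
A gentler alternative (my own sketch, not from the original post) avoids calling engine.close_spider() directly. scrapy-redis keeps the spider alive by raising DontCloseSpider from its spider_idle handler; if the spider subclass counts consecutive idle checks and simply stops raising DontCloseSpider after a threshold, Scrapy closes the spider through its normal idle path and the traceback above does not occur. The names MySpider and max_idle_checks are placeholders:

from scrapy import signals
from scrapy.exceptions import DontCloseSpider
from scrapy_redis.spiders import RedisSpider


class MySpider(RedisSpider):
    """Sketch: let Scrapy's own idle mechanism close the spider once redis stays empty."""

    name = 'myspider_redis'      # placeholder spider name
    max_idle_checks = 20         # placeholder threshold; idle checks fire roughly every 5 seconds

    def __init__(self, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        self.idle_checks = 0

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(MySpider, cls).from_crawler(crawler, *args, **kwargs)
        # Any received response means the queue was not empty, so reset the streak.
        crawler.signals.connect(spider.reset_idle_checks, signal=signals.response_received)
        return spider

    def reset_idle_checks(self, response, request, spider):
        self.idle_checks = 0

    def spider_idle(self):
        # RedisMixin.spider_idle() schedules more requests from redis and then raises
        # DontCloseSpider unconditionally; keep that behaviour only while the idle
        # streak is below the threshold, then let Scrapy close the spider normally.
        self.schedule_next_requests()
        self.idle_checks += 1
        if self.idle_checks <= self.max_idle_checks:
            raise DontCloseSpider
        self.logger.info("redis queue stayed empty for %d idle checks, closing spider", self.idle_checks)

    def parse(self, response):
        # ... normal parsing logic goes here ...
        pass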

3.2.2. Caveats

The overall scheduling flow is as follows:

(Screenshots of the relevant code in scheduler.py and queue.py from the original post are omitted here.)

The point to watch is that PriorityQueue behaves differently from the other two queue classes, FifoQueue and LifoQueue: if you rely on the timeout parameter, the crawl queue configured in the settings must be FifoQueue or LifoQueue (see the abridged pop() comparison after the settings block below).

# Queue class used to order the addresses to crawl.
# Default: priority ordering (Scrapy's default), implemented with a sorted set -- neither FIFO nor LIFO.
# 'SCHEDULER_QUEUE_CLASS': 'scrapy_redis.queue.SpiderPriorityQueue',
# Optional: first in, first out (FIFO)
'SCHEDULER_QUEUE_CLASS': 'scrapy_redis.queue.SpiderQueue',
# Optional: last in, first out (LIFO)
# SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderStack'
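
The reason is visible in the pop() implementations in scrapy_redis/queue.py. The sketch below is abridged and may differ slightly between scrapy-redis versions: the FIFO (and LIFO) queues turn a positive timeout into a blocking redis pop, while the priority queue ignores the timeout entirely and returns immediately.

# scrapy_redis/queue.py (abridged; details vary by version)

class FifoQueue(Base):
    def pop(self, timeout=0):
        if timeout > 0:
            # Blocking pop: waits up to `timeout` seconds for a request to appear.
            data = self.server.brpop(self.key, timeout)
            if isinstance(data, tuple):
                data = data[1]
        else:
            data = self.server.rpop(self.key)
        if data:
            return self._decode_request(data)


class PriorityQueue(Base):
    def pop(self, timeout=0):
        # timeout is not supported here: a sorted set has no blocking pop,
        # so this returns immediately whether or not the queue is empty.
        pipe = self.server.pipeline()
        pipe.multi()
        pipe.zrange(self.key, 0, 0).zremrangebyrank(self.key, 0, 0)
        results, count = pipe.execute()
        if results:
            return self._decode_request(results[0])

In other words, with the priority queue the idle_before_close / block_pop_timeout value has no effect at all.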