
Scrapy-redis crawl hangs: fix by clearing the Redis cache

2019-03-22 21:11 · 1446 views

Environment: Windows 10, Python 3.6, PyCharm, Scrapy 1.6

Typical log output when the crawl is stuck (0 pages crawled, repeating every minute):

2017-05-19 20:08:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-19 20:08:53 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-05-19 20:09:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-19 20:10:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-19 20:11:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-19 20:12:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-19 20:13:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
...
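This idle loop usually means a scrapy-redis spider is waiting on an empty (or stale) request queue in Redis: with the scrapy-redis scheduler, requests and the dupefilter live in Redis rather than in memory, so leftover state from an earlier run can leave the spider polling forever. For context, a typical scrapy-redis settings.py fragment looks roughly like the sketch below (setting names are from the scrapy-redis package; the Redis URL is a placeholder for your own server):

```python
# settings.py fragment for a scrapy-redis project (assumes the
# scrapy-redis package is installed; adapt names to your project).
SCHEDULER = "scrapy_redis.scheduler.Scheduler"            # queue requests in Redis
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"  # dedupe via a Redis set
SCHEDULER_PERSIST = True   # keep queue/dupefilter between runs; stale state can stall a restart
REDIS_URL = "redis://127.0.0.1:6379"  # placeholder: point at your Redis server
```

With SCHEDULER_PERSIST enabled, a crawl that previously finished (or crashed) leaves its dupefilter set behind, which is why clearing the database and re-seeding the start URL, as shown below, gets things moving again.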

Solution: open redis-cli.exe and run the following commands. Note that flushdb deletes everything in the currently selected Redis database (pending requests, the dupefilter set, and any other keys), so only do this when you intend to restart the crawl from scratch.

flushdb
lpush Noverspider:start_urls http://www.daomubiji.com/     # replace http://www.daomubiji.com/ with your own start URL; the key must match your spider's redis_key
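The same reset can be done from Python instead of redis-cli. A minimal sketch, assuming the redis-py package for the real connection (the helper function and its name are mine, not from the original post); the key and URL mirror the redis-cli example above:

```python
def reset_crawl_queue(client, redis_key, start_url):
    """Clear the current Redis database and re-seed the spider's
    start-URL list so a stalled scrapy-redis crawl can restart.

    `client` is any object exposing redis-py style `flushdb` and
    `lpush` methods, e.g. redis.Redis() from the redis-py package.
    """
    client.flushdb()                    # drop stale requests and the dupefilter
    client.lpush(redis_key, start_url)  # seed the start_urls list the spider polls

# Example usage (requires a running Redis server and `pip install redis`):
#   import redis
#   reset_crawl_queue(redis.Redis(), "Noverspider:start_urls",
#                     "http://www.daomubiji.com/")
```

Taking the client as a parameter keeps the helper testable without a live Redis server and lets you point it at any database or connection pool you already use.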

See the blog post "Scrapy爬取盗墓笔记0.2版" for a practical application.
Reference: https://www.jianshu.com/p/219ccf8e4efb
