Several strategies Scrapy recommends to keep your crawler from being banned
2016-08-22 00:00
1. Rotate the User-Agent randomly
Create a downloader middleware, rotate_useragent.py, in the same directory as settings.py:

```python
# -*- coding: utf-8 -*-
import random

from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware


class RotateUserAgentMiddleware(UserAgentMiddleware):

    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        # Assign a randomly chosen User-Agent to every outgoing request
        request.headers.setdefault('User-Agent',
                                   random.choice(self.user_agent_list))

    # UA pool; more User-Agent strings are available at
    # http://www.useragentstring.com/pages/useragentstring.php
    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
        "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 "
        "(KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 "
        "(KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 "
        "(KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 "
        "(KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 "
        "(KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 "
        "(KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 "
        "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 "
        "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    ]
```
Then edit settings.py to enable the downloader middleware:

```python
DOWNLOADER_MIDDLEWARES = {
    'WebCrawler.spiders.rotate_useragent.RotateUserAgentMiddleware': 1,
}
```
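Note that the middleware uses headers.setdefault, so it only supplies a User-Agent when the request does not already carry one, and the low priority value 1 makes it run before Scrapy's default UserAgentMiddleware. The selection logic itself is just random.choice; a standalone sketch (a plain dict stands in for request.headers, and the trimmed-down pool is for illustration only):

```python
import random

# Trimmed-down pool for illustration; the real list lives in the middleware
USER_AGENT_LIST = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
    "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 "
    "(KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
]


def assign_user_agent(headers):
    """Mirror request.headers.setdefault: keep an existing UA, else pick one."""
    headers.setdefault("User-Agent", random.choice(USER_AGENT_LIST))
    return headers


fresh = assign_user_agent({})                        # gets a UA from the pool
fixed = assign_user_agent({"User-Agent": "custom"})  # existing UA is preserved
```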
2. Use an IP pool

Rotate requests across a pool of proxy IPs so that no single address hits the site frequently enough to get banned.

3. Add a download delay
Throttling the crawl rate both helps avoid anti-crawler bans and eases the load on the target server. Edit settings.py and add the following line:

```python
DOWNLOAD_DELAY = 3
```
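A perfectly regular delay is itself a fingerprint. Scrapy's RANDOMIZE_DOWNLOAD_DELAY setting, which is enabled by default, spreads the actual wait between 0.5x and 1.5x of DOWNLOAD_DELAY, making the request timing look less machine-like:

```python
DOWNLOAD_DELAY = 3
# Enabled by default: actual waits vary between 0.5x and 1.5x DOWNLOAD_DELAY
RANDOMIZE_DOWNLOAD_DELAY = True
```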
4. Disable cookies

Some sites use cookies to recognize and track crawler behavior; switching them off in settings.py (`COOKIES_ENABLED = False`) prevents this kind of tracking.
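The IP pool from strategy 2 can be wired in the same way as the User-Agent rotation: a small downloader middleware that stamps a proxy onto each request via request.meta['proxy'], which Scrapy's built-in HttpProxyMiddleware honours. A minimal sketch — the addresses in PROXY_LIST are placeholders, and in practice you would load validated proxies from a file or a provider:

```python
import random

# Placeholder addresses; substitute your own validated proxies
PROXY_LIST = [
    "http://10.0.0.1:8080",
    "http://10.0.0.2:8080",
    "http://10.0.0.3:8080",
]


class RandomProxyMiddleware(object):
    """Attach a randomly chosen proxy to every outgoing request."""

    def process_request(self, request, spider):
        # Scrapy's HttpProxyMiddleware reads request.meta['proxy']
        request.meta['proxy'] = random.choice(PROXY_LIST)
```

Enable it in DOWNLOADER_MIDDLEWARES the same way as the User-Agent middleware above.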