(Case 6) Three Scrapy Simulated-Login Strategies
2017-11-10 10:14
Note: when simulating a login, the COOKIES_ENABLED setting (the cookies middleware) in settings.py must be enabled. Either set it explicitly:

COOKIES_ENABLED = True

or simply leave the line `# COOKIES_ENABLED = False` commented out, since cookies are enabled by default.
Strategy 1: POST the login data directly (e.g., the account credentials)

This approach works whenever the site only needs POSTed form data. In the example below, the POSTed data is the account name and password:

```python
# -*- coding: utf-8 -*-
import scrapy


class Renren1Spider(scrapy.Spider):
    name = "renren1"
    allowed_domains = ["renren.com"]

    def start_requests(self):
        url = "http://www.renren.com/PLogin.do"
        # FormRequest is Scrapy's way of sending a POST request
        yield scrapy.FormRequest(
            url=url,
            formdata={"email": "mr_mao_hacker@163.com", "password": "axxxxxxxe"},
            callback=self.parse_page,
        )

    def parse_page(self, response):
        # response.body is bytes, so open the file in binary mode
        with open("mao2.html", "wb") as filename:
            filename.write(response.body)
```
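Under the hood, FormRequest url-encodes the formdata dict into the request body and sets the `Content-Type: application/x-www-form-urlencoded` header. As a minimal sketch of what that encoding produces (using the same made-up credentials as the spider above):

```python
from urllib.parse import urlencode

# Hypothetical credentials, mirroring the spider above
formdata = {"email": "mr_mao_hacker@163.com", "password": "axxxxxxxe"}

# This is essentially the POST body FormRequest sends;
# note the '@' gets percent-encoded as %40
body = urlencode(formdata)
print(body)  # email=mr_mao_hacker%40163.com&password=axxxxxxxe
```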
Strategy 2: the standard simulated-login sequence

The orthodox approach: first send a GET request for the login page and extract whatever parameters the login form requires (for example, the `_xsrf` token on zhihu's login page), then POST them to the server together with the account credentials to complete the login.

```python
# -*- coding: utf-8 -*-
import scrapy


class Renren2Spider(scrapy.Spider):
    name = "renren2"
    allowed_domains = ["renren.com"]
    start_urls = ("http://www.renren.com/PLogin.do",)

    # Parse the response for the login url in start_urls and extract
    # the parameters the login form needs (if any)
    def parse(self, response):
        # Extract any required login parameters, e.g.:
        # _xsrf = response.xpath("//input[@name='_xsrf']/@value").extract()[0]
        # Submit the form data and hand the response to the callback
        yield scrapy.FormRequest.from_response(
            response,
            formdata={"email": "mr_mao_hacker@163.com", "password": "axxxxxxxe"},
            # formdata could also include {"_xsrf": _xsrf} if the site needs it
            callback=self.parse_page,
        )

    # Once logged in, visit a page that requires authentication
    def parse_page(self, response):
        url = "http://www.renren.com/422167102/profile"
        yield scrapy.Request(url, callback=self.parse_newpage)

    # Handle the response
    def parse_newpage(self, response):
        with open("xiao.html", "wb") as filename:
            filename.write(response.body)
```
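A convenience of `FormRequest.from_response` is that it pre-fills the form's hidden `<input>` fields automatically, so tokens like `_xsrf` usually do not need to be extracted by hand. As an illustration of what that pre-filling amounts to, here is a stdlib-only sketch that pulls hidden inputs out of a made-up login-page fragment (the HTML and token value are invented for the example):

```python
from html.parser import HTMLParser

# A made-up fragment of a login page with a hidden CSRF-style token,
# similar in spirit to zhihu's _xsrf field
LOGIN_PAGE = """
<form action="/login" method="post">
  <input type="hidden" name="_xsrf" value="abc123token">
  <input type="text" name="email">
  <input type="password" name="password">
</form>
"""


class HiddenInputParser(HTMLParser):
    """Collect name/value pairs of hidden <input> elements."""

    def __init__(self):
        super().__init__()
        self.hidden = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input" and attrs.get("type") == "hidden":
            self.hidden[attrs.get("name")] = attrs.get("value")


parser = HiddenInputParser()
parser.feed(LOGIN_PAGE)
print(parser.hidden)  # {'_xsrf': 'abc123token'}
```

These extracted pairs are what `from_response` merges with your own `formdata` before submitting the form.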
Strategy 3: log in directly with a Cookie that already holds a logged-in session

If nothing else works, you can simulate the login this way. It is a bit more tedious, but the success rate is practically 100%:

```python
# -*- coding: utf-8 -*-
import scrapy


class RenrenSpider(scrapy.Spider):
    name = "renren"
    allowed_domains = ["renren.com"]
    start_urls = (
        "http://www.renren.com/111111",
        "http://www.renren.com/222222",
        "http://www.renren.com/333333",
    )

    # Cookies copied from a browser session that is already logged in
    cookies = {
        "anonymid": "ixrna3fysufnwv",
        "_r01_": "1",
        "ap": "327550029",
        "JSESSIONID": "abciwg61A_RvtaRS3GjOv",
        "depovince": "GW",
        "springskin": "set",
        "jebe_key": "f6fb270b-d06d-42e6-8b53-e67c3156aa7e%7Cc13c37f53bca9e1e7132d4b58ce00fa3%7C1484060607478%7C1%7C1486198628950",
        "t": "691808127750a83d33704a565d8340ae9",
        "societyguester": "691808127750a83d33704a565d8340ae9",
        "id": "327550029",
        "xnsid": "f42b25cf",
        "loginfrom": "syshome",
    }

    # Override Spider.start_requests so every request carries the cookies
    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, cookies=self.cookies, callback=self.parse_page)

    # Handle the response
    def parse_page(self, response):
        print("===========" + response.url)
        with open("deng.html", "wb") as filename:
            filename.write(response.body)
```
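In practice you usually copy the `Cookie` header as one raw string from the browser's developer tools rather than typing out each pair. A small helper (my own sketch, not part of Scrapy) can convert that raw string into the dict that `scrapy.Request(cookies=...)` expects; the header below is a shortened, made-up example in the style of the spider above:

```python
def cookie_header_to_dict(raw):
    """Turn a raw Cookie header (as copied from browser dev tools)
    into the dict that scrapy.Request(cookies=...) expects."""
    cookies = {}
    for pair in raw.split(";"):
        if "=" in pair:
            name, _, value = pair.strip().partition("=")
            cookies[name] = value
    return cookies


# A shortened, invented Cookie header
raw = "anonymid=ixrna3fysufnwv; _r01_=1; springskin=set"
print(cookie_header_to_dict(raw))
# {'anonymid': 'ixrna3fysufnwv', '_r01_': '1', 'springskin': 'set'}
```

Note that session cookies like these expire, so the dict has to be refreshed whenever the browser session they were copied from logs out.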