Python crawler framework Scrapy: distributed crawling + Selenium, scraping Mogujie (蘑菇街)
2019-03-25 19:11
Copyright: by stx. Interested readers can contact QQ 1152085232. Original post: https://blog.csdn.net/qq_43590972/article/details/88764405
First, create the project. Open a cmd window in the folder where the project should live and run:
[code]scrapy startproject Mogu
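If the command succeeds, Scrapy generates its standard project skeleton, shown here for orientation; in this post only settings.py, items.py and the new spider are edited:
[code]
Mogu/
    scrapy.cfg            # deploy configuration
    Mogu/
        __init__.py
        items.py          # item definitions (edited below)
        middlewares.py
        pipelines.py
        settings.py       # project settings (edited below)
        spiders/
            __init__.py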
Then, inside the project directory, generate the spider:
[code]scrapy genspider mogu mogujie.com
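genspider drops a plain Spider skeleton roughly like the one below (from Scrapy's basic template); it will be rewritten as a scrapy-redis spider in mogu.py later:
[code]
# -*- coding: utf-8 -*-
import scrapy


class MoguSpider(scrapy.Spider):
    name = 'mogu'
    allowed_domains = ['mogujie.com']
    start_urls = ['http://mogujie.com/']

    def parse(self, response):
        pass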
Modify settings.py: disable robots.txt obedience, add a small download delay, and switch the scheduler, dupefilter and item pipeline over to scrapy-redis, pointing them at the shared Redis server:
[code]
# -*- coding: utf-8 -*-

# Scrapy settings for Mogu project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'Mogu'

SPIDER_MODULES = ['Mogu.spiders']
NEWSPIDER_MODULE = 'Mogu.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 0.5
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'Mogu.middlewares.MoguSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#     'Mogu.middlewares.MoguDownloaderMiddleware': 543,
# }

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    # 'Mogu.pipelines.MoguPipeline': 300,
    'scrapy_redis.pipelines.RedisPipeline': 400,
}

# scrapy-redis: shared Redis server, scheduler and fingerprint dupefilter
REDIS_HOST = '101.201.237.4'
REDIS_PORT = 6379
SCHEDULER = 'scrapy_redis.scheduler.Scheduler'
SCHEDULER_PERSIST = True
DUPEFILTER_CLASS = 'scrapy_redis.dupefilter.RFPDupeFilter'

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
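For these settings to work, scrapy-redis (and selenium, used by the spider below) must be installed on every crawler node, and each node must be able to reach the Redis server configured above. A quick sanity check, assuming redis-cli is available locally:
[code]
pip install scrapy scrapy-redis selenium
redis-cli -h 101.201.237.4 -p 6379 ping    # should answer PONG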
Write items.py and define the fields to extract:
[code]
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class MoguItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    name = scrapy.Field()        # goods title
    yuanprice = scrapy.Field()   # original price
    price = scrapy.Field()       # current price
    pic = scrapy.Field()         # image URL
Write the spider, mogu.py. It reads the category links from the Mogujie home page, renders each category page with headless Chrome (scrolling down step by step to trigger lazy loading), and parses the rendered HTML with lxml:
[code]
# -*- coding: utf-8 -*-
from time import sleep

from lxml import etree
from scrapy_redis.spiders import RedisCrawlSpider
from selenium import webdriver

from Mogu.items import MoguItem


class MoguSpider(RedisCrawlSpider):
    name = 'mogu'
    allowed_domains = ['mogujie.com']
    # start_urls = ['https://www.mogujie.com/']
    # The start URL is read from this Redis list instead of start_urls
    redis_key = 'mogu:start_urls'

    def parse(self, response):
        # Category links on the home page
        mogu_urls = response.xpath(
            "//div[@class='item-wrap']/div[1]//a[@class='cate-item-link']//@href").extract()
        print('---------------------------')
        print(len(mogu_urls))
        for mogu_url in mogu_urls:
            # Headless Chrome renders the JS-driven goods list;
            # chromedriver is loaded from the local PATH
            opt = webdriver.ChromeOptions()
            opt.add_argument('--headless')
            driver = webdriver.Chrome(options=opt)
            driver.get(mogu_url)
            # Scroll down in steps so lazily loaded goods are rendered
            for step in range(30):
                distance = step * 1000
                js = 'document.documentElement.scrollTop=%d' % distance
                print('scrolling...')
                driver.execute_script(js)
                sleep(0.5)
            html = driver.page_source
            driver.quit()  # release the browser before parsing

            res = etree.HTML(html)
            goods_list = res.xpath(
                '//div[@class="goods_list_mod clearfix J_mod_hidebox J_mod_show"]'
                '//div[@class="iwf goods_item ratio3x4"]')
            for goods in goods_list:
                item = MoguItem()
                name = goods.xpath(".//p[@class='title yahei fl']//text()")
                print('-------------------')
                item['name'] = ''.join(name).strip()
                # yuanprice / price / pic can be filled the same way from each goods node
                yield item
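Because this is a scrapy-redis spider, it does not start on its own: every node blocks until a start URL appears in the Redis list named by redis_key. A minimal run, assuming the Mogujie home page as the seed URL:
[code]
# On any machine that can reach the Redis server, seed the start URL:
redis-cli -h 101.201.237.4 -p 6379 lpush mogu:start_urls https://www.mogujie.com/

# On each crawler node, inside the project directory:
scrapy crawl mogu

# With RedisPipeline enabled, yielded items are serialized into the list mogu:items:
redis-cli -h 101.201.237.4 -p 6379 lrange mogu:items 0 10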