
Python 3 [Web Scraping in Practice]: Crawling 5 Million Panduoduo Records with Scrapy and Storing Them in MongoDB

2017-07-20 20:59

Summary: even though this was my second time crawling this site, I still ran into a few snags. The overall result was good, though. Scrapy is far more robust than my earlier multi-threading/multi-processing approach; the crawl was never interrupted once along the way.

This is the Scrapy version of the Panduoduo crawler. The amount of data involved is fairly large; by now it is close to 5 million records.

1. What was scraped



The fields scraped are: file name, file link, file type, file size, view count, and date indexed.

Part 1: The items.py code

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html 
import scrapy

class PanduoduoItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # pass
    # file name
    docName = scrapy.Field()
    # file link
    docLink = scrapy.Field()
    # file category
    docType = scrapy.Field()
    # file size
    docSize = scrapy.Field()
    # cloud-drive type
    docPTpye = scrapy.Field()
    # view count
    docCount = scrapy.Field()
    # date indexed
    docTime = scrapy.Field()


Problems that came up while scraping in the spider: (1) without request headers, the Panduoduo site returns a 403 error and blocks the crawl, so the User-Agent has to be set (in settings.py):

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'

COOKIES_ENABLED = False

ROBOTSTXT_OBEY = False


(2) Printing response.body directly inside the parse(self, response) method returns content that is not UTF-8 encoded. (I ended up not handling it specially, and the data still crawled out fine.)
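
A minimal sketch of how this could be checked or handled (response.encoding and response.text are standard Scrapy Response attributes; the actual charset is whatever the site declares):

def parse(self, response):
    # response.body is raw bytes, so printing it shows undecoded/escaped characters
    print(response.encoding)    # the encoding Scrapy detected from the headers / meta charset
    print(response.text[:200])  # the body decoded to a unicode string
    # manual fallback if the detected encoding is wrong:
    # print(response.body.decode('gbk', errors='replace'))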

Part 2: The spider code

(1) There were still a few problems in the spider, for example fetching the tbody under the table: the rows look as if they are wrapped in a tbody, but after half an hour of testing it turned out we can simply select the tr elements directly under the table.

(2) Each tr contains several irregular td cells, so the corresponding data can be picked out directly by index with td[n].

Here is the code:

#encoding=utf8
import scrapy
from PanDuoDuo.items import PanduoduoItem

class Panduoduo(scrapy.Spider):
    name = 'panduoduo'
    allowed_domains = ['panduoduo.net']
    start_urls = ['http://www.panduoduo.net/c/4/{}'.format(n) for n in range(1, 86151)]  # 6151
    # start_urls = ['http://www.panduoduo.net/c/4/1']  # 6151

    def parse(self, response):
        base_url = 'http://www.panduoduo.net'
        # print(str(response.body).encode('utf-8'))
        # node_list = response.xpath("//div[@class='ca-page']/table[@class='list-resource']")
        node_list = response.xpath("//table[@class='list-resource']/tr")
        # print(node_list)
        for node in node_list:
            duoItem = PanduoduoItem()
            title = node.xpath("./td[@class='t1']/a/text()").extract()
            print(title)
            duoItem['docName'] = ''.join(title)
            link = node.xpath("./td[@class='t1']/a/@href").extract()
            linkUrl = base_url + ''.join(link)
            duoItem['docLink'] = linkUrl
            print(linkUrl)
            docType = node.xpath("./td[2]/a/text()").extract()
            duoItem['docType'] = ''.join(docType)
            print(docType)
            docSize = node.xpath("./td[@class='t2']/text()").extract()
            print(docSize)
            duoItem['docSize'] = ''.join(docSize)
            docCount = node.xpath("./td[5]/text()").extract()
            docTime = node.xpath("./td[6]/text()").extract()
            duoItem['docCount'] = ''.join(docCount)
            duoItem['docTime'] = ''.join(docTime)
            print(docCount)
            print(docTime)
            yield duoItem


(3) The pipelines.py code

The pipelines write the items to MongoDB and also to a JSON file. Looking back, writing the JSON file was redundant, since the data volume really is large. (I occasionally ran into insertion errors when writing to MongoDB, probably because MongoDB was tied up; deleting what was there and running again fixed it.)

The code:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json

import pymongo
from scrapy.conf import settings


class PanduoduoPipeline(object):
    def process_item(self, item, spider):
        return item


class DuoDuoMongo(object):
    def __init__(self):
        self.client = pymongo.MongoClient(host=settings['MONGO_HOST'], port=settings['MONGO_PORT'])
        self.db = self.client[settings['MONGO_DB']]
        self.post = self.db[settings['MONGO_COLL']]

    def process_item(self, item, spider):
        postItem = dict(item)
        self.post.insert(postItem)
        return item


# write items to a JSON file as well
class JsonWritePipline(object):
    def __init__(self):
        self.file = open('盘多多.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        line = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.file.write(line)
        return item

    def close_spider(self, spider):  # Scrapy calls close_spider(); a spider_closed method would never run
        self.file.close()
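
A side note: from scrapy.conf import settings still worked on the Scrapy version used here, but it has since been removed. A rough sketch of the now-recommended pattern, reading the same MONGO_* settings through from_crawler (setting names as in the settings.py below), could look like this:

import pymongo

class DuoDuoMongoFromCrawler(object):
    def __init__(self, host, port, db, coll):
        self.client = pymongo.MongoClient(host=host, port=port)
        self.post = self.client[db][coll]

    @classmethod
    def from_crawler(cls, crawler):
        # crawler.settings gives access to the project settings at runtime
        s = crawler.settings
        return cls(s.get('MONGO_HOST'), s.getint('MONGO_PORT'),
                   s.get('MONGO_DB'), s.get('MONGO_COLL'))

    def process_item(self, item, spider):
        # insert_one replaces the deprecated insert() used above
        self.post.insert_one(dict(item))
        return item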


Finally, the settings code. No proxies or fake browsers are used here, so for now there is nothing to configure in middlewares.py.

The settings.py code:

# -*- coding: utf-8 -*-

# Scrapy settings for PanDuoDuo project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'PanDuoDuo'

SPIDER_MODULES = ['PanDuoDuo.spiders']
NEWSPIDER_MODULE = 'PanDuoDuo.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# USER_AGENT = 'PanDuoDuo (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# MongoDB configuration
MONGO_HOST = "127.0.0.1"  # host IP
MONGO_PORT = 27017  # port
MONGO_DB = "PanDuo"  # database name
MONGO_COLL = "pan_duo"  # collection

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'PanDuoDuo.middlewares.PanduoduoSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'PanDuoDuo.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    # 'PanDuoDuo.pipelines.PanduoduoPipeline': 300,
    'PanDuoDuo.pipelines.DuoDuoMongo': 300,
    'PanDuoDuo.pipelines.JsonWritePipline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'


Finally, a look at the data stored in the database:

(screenshot of the MongoDB collection)

And a look at the current total. The crawl is still running; it started at around 1:00 pm:

(screenshot of the current record count)
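
The count can also be checked from a Python shell while the crawl is running. A minimal sketch using pymongo with the same host/port/database/collection configured in settings.py (older pymongo versions expose count() instead of count_documents()):

import pymongo

# connect with the same values as MONGO_HOST / MONGO_PORT / MONGO_DB / MONGO_COLL above
client = pymongo.MongoClient(host="127.0.0.1", port=27017)
coll = client["PanDuo"]["pan_duo"]

print(coll.count_documents({}))  # total number of records stored so far
print(coll.find_one())           # peek at a single document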



That's it for now. Next learning task: get a solid grasp of simulated login.