
Setting up the storage files for Scrapy crawls of 51job and Zhilian Zhaopin (智联招聘) job listings

2018-03-02 21:32
  These two files are what store the scraped data. (Note that the spider itself must also populate the item fields with matching code; see the 51job and 智联招聘 spider files for the details.)

1. First, set up the items file:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy

class JobspiderItem(scrapy.Item):
    # define the fields for your item here like:
    job_name = scrapy.Field()          # job title
    fan_kui_lv = scrapy.Field()        # employer feedback rate
    job_company_name = scrapy.Field()  # company name
    job_salary = scrapy.Field()        # salary
    job_place = scrapy.Field()         # work location
    job_type = scrapy.Field()          # job type
    job_time = scrapy.Field()          # posting date
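For reference, here is a minimal sketch of how a spider might fill in this item. The start URL and CSS selectors below are illustrative assumptions, not the original spider's code; see the actual 51job and 智联招聘 spider files for the real parsing logic.

# -*- coding: utf-8 -*-
# Illustrative spider sketch -- the URL and selectors are assumptions.
import scrapy
from jobspider.items import JobspiderItem  # assumes the project is named "jobspider"

class Job51Spider(scrapy.Spider):
    name = "job51"
    start_urls = ["https://search.51job.com/"]  # placeholder search page

    def parse(self, response):
        # each "div.el" row is a hypothetical listing container
        for row in response.css("div.el"):
            item = JobspiderItem()
            item["job_name"] = row.css("p.t1 a::attr(title)").get(default="")
            item["fan_kui_lv"] = ""  # feedback rate, if the page exposes it
            item["job_company_name"] = row.css("span.t2 a::text").get(default="")
            item["job_salary"] = row.css("span.t4::text").get(default="")
            item["job_place"] = row.css("span.t3::text").get(default="")
            item["job_type"] = ""    # full-time / part-time, if available
            item["job_time"] = row.css("span.t5::text").get(default="")
            yield item  # handed on to the item pipelines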

2. Next, set up the pipeline file:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html 
# pipeline: receives the item data returned by the spider

class JobspiderPipeline(object):
    def process_item(self, item, spider):
        return item

class TocsvPipeline(object):

    def process_item(self, item, spider):
        # append one CSV row per item; gb18030 keeps Chinese text readable in Excel
        with open("job.csv", "a", encoding="gb18030") as f:
            job_name = item['job_name']
            fan_kui_lv = item['fan_kui_lv']
            job_company_name = item['job_company_name']
            job_salary = item['job_salary']
            job_place = item['job_place']
            job_type = item['job_type']
            job_time = item['job_time']

            job_info = [job_name, fan_kui_lv,
                        job_company_name,
                        job_salary, job_place,
                        job_type, job_time]
            # append the newline outside the join so the row does not end in a stray comma
            f.write(",".join(job_info) + "\n")
        # pass the item on to the next pipeline
        return item
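As the boilerplate comment above says, a pipeline only runs once it is registered in the ITEM_PIPELINES setting. A minimal sketch of that setting, assuming the project module is named jobspider (both the module path and the priority value are illustrative):

# settings.py -- "jobspider.pipelines" is an assumed module path
ITEM_PIPELINES = {
    # lower numbers run earlier; 300 is the conventional default priority
    'jobspider.pipelines.TocsvPipeline': 300,
}

One design note: joining the fields by hand breaks if a value itself contains a comma. Python's built-in csv module (csv.writer) quotes such values automatically and is the safer choice for anything beyond a quick demo.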