scrapy CrawlSpider link extractor, scrapy-redis distributed crawler
Published: 2019-06-25


CrawlSpider commands

1. Create a Scrapy project: scrapy startproject projectName
2. Create the spider file: scrapy genspider -t crawl spiderName www.xxx.com
   The extra "-t crawl" flag means the generated spider is based on the CrawlSpider class rather than the base Spider class.
3. Run it: scrapy crawl name --nolog

 

spider.py

# -*- coding: utf-8 -*-
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class Spider2Spider(CrawlSpider):
    name = 'spider2'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://dig.chouti.com/r/scoff/hot/1']

    rules = (
        # Follow pagination links of the section, plus the section index page itself
        Rule(LinkExtractor(allow=r'/r/scoff/hot/\d+'), callback='parse_item', follow=True),
        Rule(LinkExtractor(allow=r'/scoff/$'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        print(response)

 

scrapy-redis commands

Run the spider with:

cd scrapy2
cd spiders
scrapy runspider spider2.py

 

Workflow

1. Create a Scrapy project: scrapy startproject projectName
2. Create the spider file: scrapy genspider -t crawl spiderName www.xxx.com
3. Modify the relevant attributes in the spider file:
    - Import: from scrapy_redis.spiders import RedisCrawlSpider
    - Change the spider's parent class to RedisCrawlSpider
    - Replace the start URL list with redis_key = 'xxx' (the name of the scheduler queue)
    - Comment out start_urls = []
4. Configure settings.py:
    - Use the shared item pipeline shipped with the component (the class itself is not visible in your project files):
        ITEM_PIPELINES = {
            'scrapy_redis.pipelines.RedisPipeline': 400
        }
    - Configure the scheduler (use the shared scheduler shipped with the component):
        # Dedupe container class: stores request fingerprints in a Redis set, making deduplication persistent
        DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
        # Use the scheduler that comes with scrapy-redis
        SCHEDULER = "scrapy_redis.scheduler.Scheduler"
        # Whether the scheduler state is persistent, i.e. whether the Redis request queue and fingerprint set are kept when the crawl ends. True keeps them; False clears them.
        SCHEDULER_PERSIST = True
    - Point to the Redis instance that stores the data:
        REDIS_HOST = 'IP address of the Redis server'
        REDIS_PORT = 6379
    - Adjust the Redis configuration file:
        - Disable protected mode: protected-mode no
        - Comment out the bind line: #bind 127.0.0.1
    - Start Redis
5. Run the distributed spider: scrapy runspider xxx.py
6. Push a start URL into the scheduler queue by running a command in redis-cli (see the example below).
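A minimal seeding example for step 6, assuming the redis_key 'chouti' and the chouti start page used in spider2.py below; run it in redis-cli on the machine that hosts Redis:

lpush chouti https://dig.chouti.com/r/scoff/hot/1

Every node running the spider blocks on this queue, so the crawl starts on all nodes as soon as a URL is pushed.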

 

Redis configuration file (here located under D:\program files\redis):

- Comment out the line bind 127.0.0.1, so that other IPs can access Redis
- Change protected-mode from yes to no, so that other IPs can issue commands to Redis
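For reference, a sketch of the two relevant lines in redis.conf after the change (the file path and surrounding settings depend on your installation):

# Commented out so machines other than localhost can reach Redis:
# bind 127.0.0.1
# Changed from yes to no so non-local clients may issue commands:
protected-mode no

Restart the Redis server after editing the file so the changes take effect.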

 

spider2.py

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy_redis.spiders import RedisCrawlSpider
from scrapy2.items import Scrapy2Item


class Spider2Spider(RedisCrawlSpider):
    name = 'spider2'
    # allowed_domains = ['www.xxx.com']
    # start_urls = ['https://dig.chouti.com/r/scoff/hot/1']
    redis_key = 'chouti'

    rules = (
        Rule(LinkExtractor(allow=r'/all/hot/recent/\d+'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        div_list = response.xpath('//div[@class="item"]')
        for div in div_list:
            title = div.xpath('./div[4]/div[1]/a/text()').extract_first()
            author = div.xpath('./div[4]/div[2]/a[4]/b/text()').extract_first()
            item = Scrapy2Item()
            item['title'] = title
            item['author'] = author
            yield item

 

settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for scrapy2 project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'scrapy2'

SPIDER_MODULES = ['scrapy2.spiders']
NEWSPIDER_MODULE = 'scrapy2.spiders'

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'scrapy2 (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 32

# Dedupe container class: stores request fingerprints in a Redis set, making deduplication persistent
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
# Use the scheduler that comes with scrapy-redis
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# Keep the Redis request queue and fingerprint set when the crawl ends (True = keep, False = clear)
SCHEDULER_PERSIST = True

REDIS_HOST = '127.0.0.1'
REDIS_PORT = 6379

ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 400
}
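With RedisPipeline enabled, scraped items are serialized and pushed onto a Redis list; in common scrapy-redis versions the key defaults to '<spider name>:items', so for this project it would be spider2:items (key name assumed). A quick check in redis-cli:

llen spider2:items
lrange spider2:items 0 4

The first command counts the items stored so far; the second shows the first few serialized items.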

 

items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class Scrapy2Item(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    author = scrapy.Field()

 


Reposted from: https://www.cnblogs.com/NachoLau/p/10478961.html
