16 - Incremental Crawlers
Date: 2019-10-10
Incremental crawlers
An incremental crawler is, as the name suggests, a crawler that stores only the new content when a website is updated, instead of storing the old data all over again. For example, a movie site adds new movies every so often, and a novel site publishes new chapters every day.
An incremental crawler therefore needs to decide, before sending a request, whether a URL has already been crawled, and after parsing, whether the data has already been stored. The building blocks are:
1. Store the URLs that have already been crawled.
2. Store the data that has already been scraped (store a hash of the data to save space).
3. Before persisting, check whether the record already exists in storage.
Deduplication methods (a minimal sketch follows this list):
1) Store every crawled URL in a Redis set. On the next run, before sending a request, check the URL against the set: if it is already there, skip the request; otherwise send it.
2) Derive a unique fingerprint from the scraped content and store it in a Redis set. On the next run, before persisting newly scraped data, check whether its fingerprint is already in the set: if it is, skip storage; otherwise store both the fingerprint and the data.
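A minimal sketch of method 1 in plain redis-py, assuming a local Redis server on 127.0.0.1:6379 (the URL is a hypothetical placeholder); method 2 follows the same sadd pattern, only with a content hash instead of the URL:

from redis import Redis

conn = Redis(host="127.0.0.1", port=6379)

url = "https://www.example.com/page/1"  # hypothetical URL
# sadd returns 1 if the value was newly added to the set, 0 if it was already there
if conn.sadd("urls", url) == 1:
    print("New URL, send the request")
else:
    print("Seen before, skip it")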
1. URL deduplication
spiders/Movie.py
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
# Import redis for storing data
from redis import Redis
from Addcrawl.items import AddcrawlItem


class MovieSpider(CrawlSpider):
    name = 'Movie'
    # Comment out the allowed-domains restriction
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://www.4567tv.tv/frim/index7.html']

    rules = (
        Rule(LinkExtractor(allow=r'/frim/index7-\d+\.html'), callback='parse_item', follow=True),
    )

    # Create the Redis connection as a class attribute. Placing it inside
    # parse_item would open a new connection on every call, so it belongs
    # somewhere that is instantiated only once.
    conn = Redis(host="127.0.0.1", port=6379)

    def parse_item(self, response):
        li_list = response.xpath('/html/body/div[1]/div/div/div/div[2]/ul/li')
        for li in li_list:
            # Get the detail-page URL
            detail_url = "http://www.4567tv.tv" + li.xpath('./div/a/@href').extract_first()
            # Add the detail-page URL to a Redis set;
            # verified: sadd returns 1 on a successful insert, 0 otherwise
            ret = self.conn.sadd("urls", detail_url)
            if ret == 1:
                print("This URL has not been crawled yet; crawling it now!")
                yield scrapy.Request(url=detail_url, callback=self.parse_detail)
            else:
                print("Already crawled; skipping.")

    def parse_detail(self, response):
        page = response.xpath('/html/body/div[1]/div/div/div/div[2]')
        item = AddcrawlItem()
        item["title"] = page.xpath('./h1/text()').extract_first()
        content = page.xpath('./p[5]/span[2]/text()').extract_first()
        item["content"] = "".join(content)
        yield item
items.py
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class AddcrawlItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    content = scrapy.Field()
pipelines.py
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

import json


class AddcrawlPipeline(object):
    def process_item(self, item, spider):
        dic = {
            "title": item["title"],
            "content": item["content"],
        }
        print(dic)
        # Store the data in Redis. redis-py 3.x cannot serialize a dict
        # directly, so encode it as JSON first.
        spider.conn.lpush("moviedatas", json.dumps(dic))
        return item
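After running the spider (scrapy crawl Movie), one quick way to verify the dedup state and the stored records is to read them back from Redis. A small sketch, assuming the same local Redis instance and the "urls"/"moviedatas" key names used above:

from redis import Redis

# decode_responses=True returns str instead of bytes
conn = Redis(host="127.0.0.1", port=6379, decode_responses=True)

print(conn.scard("urls"))               # how many distinct detail-page URLs were recorded
print(conn.lrange("moviedatas", 0, 2))  # the first few stored JSON records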
settings.py
# -*- coding: utf-8 -*-

# Scrapy settings for Addcrawl project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'Addcrawl'

SPIDER_MODULES = ['Addcrawl.spiders']
NEWSPIDER_MODULE = 'Addcrawl.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'Addcrawl (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

LOG_LEVEL = "ERROR"

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'Addcrawl.middlewares.AddcrawlSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'Addcrawl.middlewares.AddcrawlDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'Addcrawl.pipelines.AddcrawlPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
2. Data deduplication
spiders/Movie.py
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
# Import redis for storing data
from redis import Redis
from Addcrawl.items import AddcrawlItem
# Import hashlib to fingerprint the data
import hashlib


class MovieSpider(CrawlSpider):
    name = 'Movie'
    # Comment out the allowed-domains restriction
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://www.4567tv.tv/frim/index7.html']

    rules = (
        Rule(LinkExtractor(allow=r'/frim/index7-\d+\.html'), callback='parse_item', follow=True),
    )

    # Create the Redis connection as a class attribute so it is
    # instantiated only once, not on every call to parse_item.
    conn = Redis(host="127.0.0.1", port=6379)

    def parse_item(self, response):
        li_list = response.xpath('/html/body/div[1]/div/div/div/div[2]/ul/li')
        for li in li_list:
            # Get the detail-page URL
            detail_url = "http://www.4567tv.tv" + li.xpath('./div/a/@href').extract_first()
            yield scrapy.Request(url=detail_url, callback=self.parse_detail)

    def parse_detail(self, response):
        page = response.xpath('/html/body/div[1]/div/div/div/div[2]')
        item = AddcrawlItem()
        item["title"] = page.xpath('./h1/text()').extract_first()
        content = page.xpath('./p[5]/span[2]/text()').extract_first()
        item["content"] = "".join(content)
        # Build a unique fingerprint of the parsed data
        source = item["title"] + item["content"]
        source_id = hashlib.sha256(source.encode()).hexdigest()
        # Add the fingerprint to the Redis set data_id;
        # sadd returns 1 only if the fingerprint is new
        ret = self.conn.sadd("data_id", source_id)
        if ret == 1:
            print("This record has not been scraped before!")
            yield item
        else:
            print("Already scraped!")
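The fingerprint logic above is independent of Scrapy. A minimal standalone sketch, assuming only a local Redis server and hypothetical title/content values, shows why the sadd return value makes repeated runs idempotent:

import hashlib
from redis import Redis

conn = Redis(host="127.0.0.1", port=6379)

def is_new(title, content):
    # Fingerprint the scraped fields; sadd returns 1 only the
    # first time this exact fingerprint is seen
    source_id = hashlib.sha256((title + content).encode()).hexdigest()
    return conn.sadd("data_id", source_id) == 1

print(is_new("Movie A", "Plot summary"))  # True on the first run
print(is_new("Movie A", "Plot summary"))  # False on every later run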
items.py, pipelines.py, and settings.py are the same as in section 1, except that the pipeline stores the records in the Redis list moviedatas2 instead of moviedatas.
To sum up: an incremental crawler is really a deduplication problem, with two approaches: dedupe the URLs that have been crawled, or dedupe the data that has been stored. In both cases the URLs or fingerprints are kept in a separate store (a Redis set here), and the return value of the insert operation decides whether a record is new, so only fresh data gets crawled.
Original source: https://www.cnblogs.com/lishuntao/p/11647602.html