Technical note: pausing and resuming Scrapy crawls

Jobs: pausing and resuming crawls


Sometimes, for big sites, it’s desirable to pause crawls and be able to resume them later.


Scrapy supports this functionality out of the box by providing the following facilities:


a scheduler that persists scheduled requests on disk


a duplicates filter that persists visited requests on disk


an extension that keeps some spider state (key/value pairs) persistent between batches


Job directory


To enable persistence support you just need to define a job directory through the JOBDIR setting. This directory will be used for storing all required data to keep the state of a single job (i.e. a spider run). It’s important to note that this directory must not be shared by different spiders, or even different jobs/runs of the same spider, as it’s meant to be used for storing the state of a single job.
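For reference, here is roughly what the job directory looks like after a run (a sketch based on recent Scrapy versions; treat the exact file names as version-dependent):

crawls/somespider-1/
    requests.queue/    # the persisted scheduler queue
    requests.seen      # request fingerprints kept by the duplicates filter
    spider.state       # the pickled spider.state dict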

How to use it


To start a spider with persistence support enabled, run it like this:


scrapy crawl somespider -s JOBDIR=crawls/somespider-1


Then, you can stop the spider safely at any time (by pressing Ctrl-C or sending a signal), and resume it later by issuing the same command:


scrapy crawl somespider -s JOBDIR=crawls/somespider-1


Keeping persistent state between batches


Sometimes you’ll want to keep some persistent spider state between pause/resume batches. You can use the spider.state attribute for that, which should be a dict. There’s a built-in extension that takes care of serializing, storing and loading that attribute from the job directory, when the spider starts and stops.

Here’s an example of a callback that uses the spider state (other spider code is omitted for brevity):


def parse_item(self, response):
    # parse item here
    self.state['items_count'] = self.state.get('items_count', 0) + 1
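Because the built-in extension reloads spider.state from the job directory when the spider opens, the counter above survives a restart. As a minimal self-contained sketch (the spider name and URL are illustrative, and JOBDIR must be set for spider.state to be populated):

import scrapy

class SomeSpider(scrapy.Spider):
    name = 'somespider'
    start_urls = ['http://www.example.com']

    def parse(self, response):
        # on a resumed run, self.state already holds the values
        # persisted by the previous batch
        count = self.state.get('items_count', 0) + 1
        self.state['items_count'] = count
        self.logger.info('items processed across batches: %d', count)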


Persistence gotchas


There are a few things to keep in mind if you want to be able to use the Scrapy persistence support:


Cookies expiration


Cookies may expire. So, if you don’t resume your spider quickly, the scheduled requests may no longer work. This won’t be an issue if your spider doesn’t rely on cookies.
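If the crawl genuinely doesn’t depend on cookies, one way to sidestep this entirely (a sketch, not a requirement) is to disable the cookies middleware in settings.py:

COOKIES_ENABLED = False  # resumed requests won't depend on stale cookies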


Request serialization


Requests must be serializable by the pickle module, in order for persistence to work, so you should make sure that your requests are serializable.


The most common issue here is to use lambda functions on request callbacks that can’t be persisted.


So, for example, this won’t work:


def some_callback(self, response):
    somearg = 'test'
    # lambdas cannot be pickled, so this request cannot be persisted
    return scrapy.Request('http://www.example.com',
                          callback=lambda r: self.other_callback(r, somearg))

def other_callback(self, response, somearg):
    print("the argument passed is: %s" % somearg)


But this will:


def some_callback(self, response):
    somearg = 'test'
    # a bound method is serialized by name, and meta is pickled
    # along with the request
    return scrapy.Request('http://www.example.com',
                          callback=self.other_callback,
                          meta={'somearg': somearg})

def other_callback(self, response):
    somearg = response.meta['somearg']
    print("the argument passed is: %s" % somearg)
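For completeness, here is how that working pattern might sit inside a self-contained spider (names and URLs are illustrative):

import scrapy

class SomeSpider(scrapy.Spider):
    name = 'somespider'
    start_urls = ['http://www.example.com']

    def parse(self, response):
        somearg = 'test'
        # the callback is a bound method (serialized by name) and the
        # extra argument travels in meta, so this request survives
        # JOBDIR persistence
        yield scrapy.Request('http://www.example.com/page',
                             callback=self.other_callback,
                             meta={'somearg': somearg})

    def other_callback(self, response):
        print("the argument passed is: %s" % response.meta['somearg'])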


If you wish to log the requests that couldn’t be serialized, you can set the SCHEDULER_DEBUG setting to True in the project’s settings file. It is False by default.
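For example, in settings.py:

SCHEDULER_DEBUG = True  # log requests that could not be serialized to disk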


Notes:


To save intermediate crawl state while the spider runs:


Option 1: set JOBDIR = 'path' in the settings.py file.


Option 2: specify it in the individual spider file:


custom_settings = {
    "JOBDIR": "path"
}
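In context, that spider-level override might look like this (the spider name is illustrative; replace "path" with an actual job directory, one per job, reused only to resume that job):

import scrapy

class JobboleSpider(scrapy.Spider):
    name = 'jobbole'
    custom_settings = {
        "JOBDIR": "path"  # e.g. job_info/001
    }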


On Windows or Linux, a single Ctrl+C sends the process an interrupt signal, while pressing Ctrl+C twice force-kills the process.


On Linux, pkill -f main.py sends the process a termination signal (kill itself takes a PID, so pkill -f is used here to match the process by its command line). With that signal, Scrapy can carry out its shutdown processing. pkill -9 -f main.py instead sends SIGKILL, which cannot be caught: the operating system kills the process outright and no further processing takes place.

Example:


scrapy crawl jobbole -s JOBDIR=job_info/001


-s is short for --set, which overrides a setting for this run.


Different spiders need different job directories, and runs of the same spider started at different times also need different directories.


After Ctrl-C, the pause state is saved to job_info/001; to start again, rerun scrapy crawl jobbole -s JOBDIR=job_info/001 and the crawl will continue from where it left off.


References:


第六章 慕课网学习-scrapy的暂停与重启


python爬虫进阶之scrapy的暂停与重启


三十二 Python分布式爬虫打造搜索引擎Scrapy精讲—scrapy的暂停与重启


Scrapy official documentation, 2019-03-08
