Scrapy 1.4 Official Documentation Summary, Part 2: Tutorial


Scrapy 1.4 Official Documentation Summary, Part 1: Introduction and Installation
Scrapy 1.4 Official Documentation Summary, Part 2: Tutorial
Scrapy 1.4 Official Documentation Summary, Part 3: Command-Line Tool


This post covers the official Tutorial (https://docs.scrapy.org/en/latest/intro/tutorial.html).


Creating a Project

Use the command:

scrapy startproject tutorial

This generates the following files:

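Per the official tutorial, the generated layout looks roughly like this (exact files may vary slightly between Scrapy versions):

tutorial/
    scrapy.cfg            # deploy configuration file
    tutorial/             # the project's Python module; your code goes here
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider and downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # directory for your spiders
            __init__.py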

Create a file named quotes_spider.py in the tutorial/spiders folder with the following code:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)

The start_requests method yields scrapy.Request objects. Each time Scrapy receives a response to one of these requests, it instantiates a Response object and calls the callback bound to the request (here, parse), passing the response as the argument.
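
As a small illustration of what the callback receives (these are standard Response attributes; the logging line is only for demonstration):

def parse(self, response):
    # response.url    -- the URL that was actually fetched
    # response.status -- the HTTP status code, e.g. 200
    # response.body   -- the raw response bytes
    # response.text   -- the body decoded to a string
    self.log('%s -> %d (%d bytes)' % (response.url, response.status, len(response.body)))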

Switch to the project's root directory and run the spider:

scrapy crawl quotes
(screenshot: crawl log output)

Two files are created in the root directory: quotes-1.html and quotes-2.html.

Alternatively, define a start_urls class attribute containing the URLs. parse() is Scrapy's default callback, so it is called even when no callback is specified:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
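
This works because the base Spider class provides a default start_requests that turns start_urls into requests, roughly equivalent to the sketch below (simplified, not the exact library source):

def start_requests(self):
    # simplified: the real implementation also sets dont_filter=True
    # and leaves the callback unset, which falls back to self.parse
    for url in self.start_urls:
        yield scrapy.Request(url=url, callback=self.parse)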

Extracting Information

The best way to learn how Scrapy extracts information is with the Scrapy shell. In the Windows command prompt, run:

scrapy shell "http://quotes.toscrape.com/page/1/"

Or, in Git Bash, run (note the single quotes instead of double quotes):

scrapy shell 'http://quotes.toscrape.com/page/1/'

The output looks like this:


(screenshot: Scrapy shell startup output)

Extracting with CSS selectors:

>>> response.css('title')
[<Selector xpath='descendant-or-self::title' data='<title>Quotes to Scrape</title>'>]

Extract only the title's text:

>>> response.css('title::text').extract()
['Quotes to Scrape']

::text means extract only the text; without it, the result includes the tag:

>>> response.css('title').extract()
['<title>Quotes to Scrape</title>']

Because the result is a list, to take only the first element use:

>>> response.css('title::text').extract_first()
'Quotes to Scrape'

Or use an index:

>>> response.css('title::text')[0].extract()
'Quotes to Scrape'

The former is better: extract_first() avoids a potential IndexError when no element matches.
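
For example, when nothing matches, extract_first() returns None (or a default you pass in), while indexing raises an exception (a quick shell sketch; 'noelement' is just a selector that matches nothing):

>>> response.css('noelement').extract_first()          # no match: returns None
>>> response.css('noelement').extract_first(default='not-found')
'not-found'
>>> response.css('noelement')[0]
Traceback (most recent call last):
    ...
IndexError: list index out of range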

Besides extract() and extract_first(), you can also apply regular expressions with re():

>>> response.css('title::text').re(r'Quotes.*')
['Quotes to Scrape']
>>> response.css('title::text').re(r'Q\w+')
['Quotes']
>>> response.css('title::text').re(r'(\w+) to (\w+)')
['Quotes', 'Scrape']

A Brief Introduction to XPath

Scrapy also supports XPath:

>>> response.xpath('//title')
[<Selector xpath='//title' data='<title>Quotes to Scrape</title>'>]
>>> response.xpath('//title/text()').extract_first()
'Quotes to Scrape'

In fact, CSS selectors are converted to XPath under the hood, but XPath is more powerful; for example, it can select a link by its text, such as the link containing "Next Page". For more, see the official documentation on using XPath with Scrapy Selectors.
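
For instance, XPath can pick the pagination link by its text, which CSS selectors cannot do (a quick shell sketch; the output assumes we are on page 1):

>>> response.xpath('//a[contains(text(), "Next")]/@href').extract_first()
'/page/2/'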

Extracting the Quotes

Each quote on http://quotes.toscrape.com has the following HTML structure:

<div class="quote">
    <span class="text">“The world as we have created it is a process of our
    thinking. It cannot be changed without changing our thinking.”</span>
    <span>
        by <small class="author">Albert Einstein</small>
        <a href="/author/Albert-Einstein">(about)</a>
    </span>
    <div class="tags">
        Tags:
        <a class="tag" href="/tag/change/page/1/">change</a>
        <a class="tag" href="/tag/deep-thoughts/page/1/">deep-thoughts</a>
        <a class="tag" href="/tag/thinking/page/1/">thinking</a>
        <a class="tag" href="/tag/world/page/1/">world</a>
    </div>
</div>

Start the shell with:

$ scrapy shell "http://quotes.toscrape.com"

Extract the quote elements as a list of selectors:

response.css("div.quote")

Take only the first one:

quote = response.css("div.quote")[0]

Extract the quote text, the author, and the tags:

>>> title = quote.css("span.text::text").extract_first()
>>> title
'“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”'
>>> author = quote.css("small.author::text").extract_first()
>>> author
'Albert Einstein'

The tags are a list of strings:

>>> tags = quote.css("div.tags a.tag::text").extract()
>>> tags
['change', 'deep-thoughts', 'thinking', 'world']

Now that we know how to extract one quote, extract all of them:

>>> for quote in response.css("div.quote"):
...     text = quote.css("span.text::text").extract_first()
...     author = quote.css("small.author::text").extract_first()
...     tags = quote.css("div.tags a.tag::text").extract()
...     print(dict(text=text, author=author, tags=tags))
{'tags': ['change', 'deep-thoughts', 'thinking', 'world'], 'author': 'Albert Einstein', 'text': '“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”'}
{'tags': ['abilities', 'choices'], 'author': 'J.K. Rowling', 'text': '“It is our choices, Harry, that show what we truly are, far more than our abilities.”'}
    ... a few more of these, omitted for brevity
>>>

Extracting Data in the Spider

Use Python's yield in the spider:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
                'tags': quote.css('div.tags a.tag::text').extract(),
            }

Run the spider; the log output looks like this:

2016-09-19 18:57:19 [scrapy.core.scraper] DEBUG: Scraped from <200 http://quotes.toscrape.com/page/1/>
{'tags': ['life', 'love'], 'author': 'André Gide', 'text': '“It is better to be hated for what you are than to be loved for what you are not.”'}
2016-09-19 18:57:19 [scrapy.core.scraper] DEBUG: Scraped from <200 http://quotes.toscrape.com/page/1/>
{'tags': ['edison', 'failure', 'inspirational', 'paraphrased'], 'author': 'Thomas A. Edison', 'text': "“I have not failed. I've just found 10,000 ways that won't work.”"}

Storing the Data

The simplest way is to use a feed export. To save as JSON, run:

scrapy crawl quotes -o quotes.json

To save as JSON Lines:

scrapy crawl quotes -o quotes.jl

To save as CSV:

scrapy crawl quotes -o quotes.csv

Following the Next Page

First, look at the HTML of the next-page link:

<ul class="pager">
    <li class="next">
        <a href="/page/2/">Next <span aria-hidden="true">→</span></a>
    </li>
</ul>

Extract it:

>>> response.css('li.next a').extract_first()
'<a href="/page/2/">Next <span aria-hidden="true">→</span></a>'

To get only the href:

>>> response.css('li.next a::attr(href)').extract_first()
'/page/2/'
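
response.urljoin resolves that relative href against the URL of the current response (a quick shell check; the output assumes we are on page 1):

>>> response.urljoin('/page/2/')
'http://quotes.toscrape.com/page/2/'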

Use response.urljoin() to build the full URL and yield a request for the next page, so the spider crawls page after page in a loop:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
                'tags': quote.css('div.tags a.tag::text').extract(),
            }

        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)

A more concise way is to use response.follow, which accepts relative URLs directly:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('span small::text').extract_first(),
                'tags': quote.css('div.tags a.tag::text').extract(),
            }

        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)

You can also pass selectors to response.follow directly, instead of extracted strings:

for href in response.css('li.next a::attr(href)'):
    yield response.follow(href, callback=self.parse)

For <a> elements, response.follow can use their href attribute automatically, which is even more concise:

for a in response.css('li.next a'):
    yield response.follow(a, callback=self.parse)

The following spider extracts author information, using callbacks and automatic pagination:

import scrapy

class AuthorSpider(scrapy.Spider):
    name = 'author'

    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # follow links to author pages
        for href in response.css('.author + a::attr(href)'):
            yield response.follow(href, self.parse_author)

        # follow pagination links
        for href in response.css('li.next a::attr(href)'):
            yield response.follow(href, self.parse)

    def parse_author(self, response):
        def extract_with_css(query):
            return response.css(query).extract_first().strip()

        yield {
            'name': extract_with_css('h3.author-title::text'),
            'birthdate': extract_with_css('.author-born-date::text'),
            'bio': extract_with_css('.author-description::text'),
        }

Using Spider Arguments

To pass arguments on the command line, add -a:

scrapy crawl quotes -o quotes-humor.json -a tag=humor

This passes humor to the spider as its tag attribute:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        url = 'http://quotes.toscrape.com/'
        tag = getattr(self, 'tag', None)
        if tag is not None:
            url = url + 'tag/' + tag
        yield scrapy.Request(url, self.parse)

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
            }

        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:
            yield response.follow(next_page, self.parse)

More Examples

The quotesbot project at https://github.com/scrapy/quotesbot contains the same spider written both with CSS selectors and with XPath:

import scrapy

class ToScrapeCSSSpider(scrapy.Spider):
    name = "toscrape-css"
    start_urls = [
        'http://quotes.toscrape.com/',
    ]

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                'text': quote.css("span.text::text").extract_first(),
                'author': quote.css("small.author::text").extract_first(),
                'tags': quote.css("div.tags > a.tag::text").extract()
            }

        next_page_url = response.css("li.next > a::attr(href)").extract_first()
        if next_page_url is not None:
            yield scrapy.Request(response.urljoin(next_page_url))

The XPath version:

import scrapy

class ToScrapeSpiderXPath(scrapy.Spider):
    name = 'toscrape-xpath'
    start_urls = [
        'http://quotes.toscrape.com/',
    ]

    def parse(self, response):
        for quote in response.xpath('//div[@class="quote"]'):
            yield {
                'text': quote.xpath('./span[@class="text"]/text()').extract_first(),
                'author': quote.xpath('.//small[@class="author"]/text()').extract_first(),
                'tags': quote.xpath('.//div[@class="tags"]/a[@class="tag"]/text()').extract()
            }

        next_page_url = response.xpath('//li[@class="next"]/a/@href').extract_first()
        if next_page_url is not None:
            yield scrapy.Request(response.urljoin(next_page_url))
