Python Programming Asynchronous Crawlers: Basic Principles of Coroutines (2)

Summary: Python Programming Asynchronous Crawlers: Basic Principles of Coroutines (2)

Continued from Python Programming Asynchronous Crawlers: Basic Principles of Coroutines (1): https://developer.aliyun.com/article/1620696

Multi-task coroutines
What if you want to issue multiple requests? You can build a list of tasks and run them with the wait method from the asyncio package, as shown below:

import asyncio
import requests

async def request():
    url = 'https://www.baidu.com'
    # requests.get is a synchronous call that returns a Response object,
    # so nothing in this coroutine ever yields control to the event loop
    status = requests.get(url)
    return status

tasks = [asyncio.ensure_future(request()) for _ in range(5)]
print('Tasks:', tasks)

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait(tasks))

for task in tasks:
    print('Task Result:', task.result())

The output is as follows:
Tasks: [<Task pending name='Task-1' coro=<request() running at /Users/bruce_liu/PycharmProjects/崔庆才--爬虫/6章异步爬虫/多任务协程.py:5>>, <Task pending name='Task-2' coro=<request() running at /Users/bruce_liu/PycharmProjects/崔庆才--爬虫/6章异步爬虫/多任务协程.py:5>>, <Task pending name='Task-3' coro=<request() running at /Users/bruce_liu/PycharmProjects/崔庆才--爬虫/6章异步爬虫/多任务协程.py:5>>, <Task pending name='Task-4' coro=<request() running at /Users/bruce_liu/PycharmProjects/崔庆才--爬虫/6章异步爬虫/多任务协程.py:5>>, <Task pending name='Task-5' coro=<request() running at /Users/bruce_liu/PycharmProjects/崔庆才--爬虫/6章异步爬虫/多任务协程.py:5>>]
Task Result: <Response [200]>
Task Result: <Response [200]>
Task Result: <Response [200]>
Task Result: <Response [200]>
Task Result: <Response [200]>
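As a side note, asyncio.get_event_loop() plus run_until_complete is the older API; since Python 3.7 the same task-list pattern can be written with asyncio.run and asyncio.gather. A minimal sketch, using asyncio.sleep as a stand-in for a real request (the delay and the fake status string are made up for illustration):

```python
import asyncio

async def request(i):
    # Stand-in for a real request: sleep briefly, then return a fake status.
    await asyncio.sleep(0.01)
    return f'Task {i} -> 200'

async def main():
    # gather schedules all the coroutines concurrently and
    # returns their results in submission order.
    return await asyncio.gather(*(request(i) for i in range(5)))

results = asyncio.run(main())
for result in results:
    print('Task Result:', result)
```

asyncio.run creates and closes the event loop for you, which avoids the deprecation warnings that asyncio.get_event_loop() emits on newer Python versions.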

Coroutine implementation
Coroutines shine on I/O-bound tasks, where the time-consuming waits are generally I/O operations such as file reads or network requests. Coroutines have a big advantage when handling such operations: when a task needs to wait, the program can suspend it and switch to other work instead of idling, so no time is wasted.
Let's take https://www.httpbin.org/delay/5 as an example to get a feel for the effect of coroutines. The example code is as follows:

import asyncio
import requests
import time

start = time.time()

async def request():
    url = 'https://www.httpbin.org/delay/5'
    print('waiting for', url)
    # requests.get blocks the whole thread for about 5 seconds per request
    response = requests.get(url)
    print('Get response from', url, 'response', response)

tasks = [asyncio.ensure_future(request()) for _ in range(10)]
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait(tasks))

end = time.time()
print('Cost time:', end - start)

The output is as follows:
waiting for https://www.httpbin.org/delay/5
Get response from https://www.httpbin.org/delay/5 response <Response [200]>
waiting for https://www.httpbin.org/delay/5
Get response from https://www.httpbin.org/delay/5 response <Response [200]>
waiting for https://www.httpbin.org/delay/5
Get response from https://www.httpbin.org/delay/5 response <Response [200]>
...
waiting for https://www.httpbin.org/delay/5
Get response from https://www.httpbin.org/delay/5 response <Response [200]>
waiting for https://www.httpbin.org/delay/5
Get response from https://www.httpbin.org/delay/5 response <Response [200]>
waiting for https://www.httpbin.org/delay/5
Get response from https://www.httpbin.org/delay/5 response <Response [200]>
Cost time: 63.61974787712097

As you can see, this is no different from ordinary sequential requests: ten 5-second requests took over 60 seconds. So where is the advantage of asynchronous processing? To actually get asynchronous behavior, there must first be a suspend operation: when a task needs to wait for an I/O result, it can be suspended so that other tasks run in the meantime, making full use of the available resources. Here requests.get is a blocking call that never yields control back to the event loop, so the coroutines still run one after another.
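The suspend-and-switch behavior is easiest to see with asyncio's own awaitable sleep, which, unlike time.sleep or requests.get, yields control to the event loop. In this sketch ten 0.2-second "waits" complete in roughly 0.2 seconds total rather than 2 seconds (the delay and task count are arbitrary):

```python
import asyncio
import time

async def fake_request():
    # asyncio.sleep is awaitable: the task is suspended here and the
    # event loop is free to run the other tasks in the meantime.
    await asyncio.sleep(0.2)

async def main():
    await asyncio.gather(*(fake_request() for _ in range(10)))

start = time.time()
asyncio.run(main())
cost = time.time() - start
print('Cost time:', cost)
```

The total time is governed by the longest single wait, not the sum of all waits, which is exactly what the blocking requests version fails to achieve.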

Using aiohttp
aiohttp is a library that supports asynchronous requests. Used together with asyncio, it makes asynchronous request operations very convenient.
aiohttp has two parts: a Client and a Server.
Let's put aiohttp to work and change the code to the following:

import asyncio
import aiohttp
import time

start = time.time()

async def get(url):
    session = aiohttp.ClientSession()
    # await suspends this task while the request is in flight,
    # letting the event loop run the other tasks in the meantime
    response = await session.get(url)
    await response.text()
    await session.close()
    return response

async def request():
    url = 'https://www.httpbin.org/delay/5'
    print('Waiting for', url)
    response = await get(url)
    print('Get response from', url, 'response', response)

tasks = [asyncio.ensure_future(request()) for _ in range(10)]
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait(tasks))

end = time.time()
print('Cost time:', end - start)

The output is as follows:
Waiting for https://www.httpbin.org/delay/5
Waiting for https://www.httpbin.org/delay/5
Waiting for https://www.httpbin.org/delay/5
Waiting for https://www.httpbin.org/delay/5
Waiting for https://www.httpbin.org/delay/5
Waiting for https://www.httpbin.org/delay/5
...
Get response from https://www.httpbin.org/delay/5 response <ClientResponse(https://www.httpbin.org/delay/5) [200 OK]>
<CIMultiDictProxy('Date': 'Sat, 23 Mar 2024 13:42:05 GMT', 'Content-Type': 'application/json', 'Content-Length': '367', 'Connection': 'keep-alive', 'Server': 'gunicorn/19.9.0', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Credentials': 'true')>

Get response from https://www.httpbin.org/delay/5 response <ClientResponse(https://www.httpbin.org/delay/5) [200 OK]>
<CIMultiDictProxy('Date': 'Sat, 23 Mar 2024 13:42:05 GMT', 'Content-Type': 'application/json', 'Content-Length': '367', 'Connection': 'keep-alive', 'Server': 'gunicorn/19.9.0', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Credentials': 'true')>
...
Get response from https://www.httpbin.org/delay/5 response <ClientResponse(https://www.httpbin.org/delay/5) [200 OK]>
<CIMultiDictProxy('Date': 'Sat, 23 Mar 2024 13:42:05 GMT', 'Content-Type': 'application/json', 'Content-Length': '367', 'Connection': 'keep-alive', 'Server': 'gunicorn/19.9.0', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Credentials': 'true')>

Get response from https://www.httpbin.org/delay/5 response <ClientResponse(https://www.httpbin.org/delay/5) [200 OK]>
<CIMultiDictProxy('Date': 'Sat, 23 Mar 2024 13:42:05 GMT', 'Content-Type': 'application/json', 'Content-Length': '367', 'Connection': 'keep-alive', 'Server': 'gunicorn/19.9.0', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Credentials': 'true')>

Cost time: 6.868626832962036

Here the request library has been switched from requests to aiohttp, and requests are issued with the get method of aiohttp's ClientSession class. Because every awaited call yields control back to the event loop, the ten requests run concurrently, and the total cost drops from over 60 seconds to about 6 seconds.

Let's test the elapsed time at concurrency levels of 1, 3, 5, 10, ..., 500. The code is as follows:

import asyncio
import aiohttp
import time

def test(number):
    start = time.time()

    async def get(url):
        session = aiohttp.ClientSession()
        response = await session.get(url)
        await response.text()
        await session.close()
        return response

    async def request():
        url = 'https://www.baidu.com/'
        await get(url)

    tasks = [asyncio.ensure_future(request()) for _ in range(number)]
    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.wait(tasks))

    end = time.time()
    print('Number:', number, 'Cost time:', end - start)

for number in [1, 3, 5, 10, 15, 30, 50, 75, 100, 200, 500]:
    test(number)

The output is as follows:
Number: 1 Cost time: 0.23929095268249512
Number: 3 Cost time: 0.19086170196533203
Number: 5 Cost time: 0.20035600662231445
Number: 10 Cost time: 0.21305394172668457
Number: 15 Cost time: 0.25495195388793945
Number: 30 Cost time: 0.769071102142334
Number: 50 Cost time: 0.3470029830932617
Number: 75 Cost time: 0.4492309093475342
Number: 100 Cost time: 0.586918830871582
Number: 200 Cost time: 1.0910720825195312
Number: 500 Cost time: 2.4768006801605225
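At high concurrency levels such as 500, it is easy to exhaust sockets or trip server-side rate limits, so a common refinement is to cap the number of in-flight tasks with asyncio.Semaphore. A stdlib-only sketch, again using asyncio.sleep as a stand-in for the network request (the cap of 5 and the task count of 50 are arbitrary examples):

```python
import asyncio

async def limited_request(i, semaphore, stats):
    async with semaphore:              # at most `concurrency` tasks pass this line at once
        stats['running'] += 1
        stats['peak'] = max(stats['peak'], stats['running'])
        await asyncio.sleep(0.01)      # stand-in for the real network request
        stats['running'] -= 1

async def main():
    concurrency = 5                    # arbitrary cap for illustration
    semaphore = asyncio.Semaphore(concurrency)
    stats = {'running': 0, 'peak': 0}
    await asyncio.gather(*(limited_request(i, semaphore, stats) for i in range(50)))
    return stats['peak']

peak = asyncio.run(main())
print('Peak concurrency:', peak)
```

All 50 tasks are scheduled at once, but the semaphore ensures that no more than 5 of them are ever inside the `async with` block at the same time, smoothing the load on the target server.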