Utility functions that assist with scrapy:
get_response: build a scrapy.HtmlResponse object, so that some of scrapy's functions can be tested without creating a new scrapy project
extract_links: extract all links that match the given constraints

Code example

Using the Lagou homepage as an example: collect every job-posting link on the homepage. Each link can then be parsed individually to obtain the job's detail information (a sketch of that follow-up step appears after the example output).
```python
import requests
from scrapy.http import HtmlResponse
from scrapy.linkextractors import LinkExtractor


def get_response(url):
    """
    Build a scrapy.HtmlResponse object, so that some of scrapy's
    functions can be tested without creating a new scrapy project.
    :param url: {str} URL to fetch
    :return: {HtmlResponse} scrapy response object
    """
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 6.3; rv:36.0) Gecko/20100101 Firefox/36.0"
    }
    response = requests.get(url, headers=headers)
    return HtmlResponse(url=url, body=response.content)


def extract_links(response, allow, allow_domains):
    """
    Extract all links that match the given constraints. The text
    attribute never parsed out correctly, so the call is wrapped here,
    which also leaves room for custom extensions.
    :param response: {scrapy.http.HtmlResponse} scrapy response
    :param allow: {tuple} regexes that link URLs must match
    :param allow_domains: {tuple} domains that links must belong to
    :return: {iterator({str})} generator of matching link URLs
    """
    link_extractor = LinkExtractor(allow=allow, allow_domains=allow_domains)
    links = link_extractor.extract_links(response)
    return (link.url for link in links)


if __name__ == '__main__':
    url = "https://www.lagou.com/"
    response = get_response(url)
    # Note the trailing comma: (r"jobs/\d+\.html",) is a one-element tuple;
    # without it the parentheses are just grouping and a bare string is passed.
    links = extract_links(response, (r"jobs/\d+\.html",), ("lagou.com",))
    for link in links:
        print(link)

"""
https://www.lagou.com/jobs/5185130.html
https://www.lagou.com/jobs/4200613.html
https://www.lagou.com/jobs/5039140.html
https://www.lagou.com/jobs/5174337.html
https://www.lagou.com/jobs/5185128.html
https://www.lagou.com/jobs/5185127.html
...
"""
```
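Because get_response returns a real scrapy HtmlResponse, scrapy's selector API can be exercised on it directly, which is the whole point of the helper. A minimal sketch of that kind of ad-hoc testing, reusing get_response from above; the XPath/CSS expressions are generic illustrations, not selectors taken from Lagou's markup, and with scrapy versions older than 1.8 you would call .extract_first()/.extract() instead of .get()/.getall():

```python
# Reuses get_response as defined in the example above.
response = get_response("https://www.lagou.com/")

# The constructed HtmlResponse exposes scrapy's selector API,
# so XPath/CSS expressions can be tried without a scrapy project.
print(response.xpath("//title/text()").get())       # page <title> text
print(response.css("a::attr(href)").getall()[:5])   # first few href values
```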
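And the follow-up step mentioned above: fetch one of the extracted job links and pull detail fields out of it. This is a hypothetical sketch; the CSS selectors and field names are placeholders, since Lagou's actual job-page markup is not shown here:

```python
# Reuses get_response as defined in the example above.
# One URL taken from the sample output; selectors below are placeholders,
# not Lagou's real markup.
job_url = "https://www.lagou.com/jobs/5185130.html"
job_response = get_response(job_url)

detail = {
    "title": job_response.css("span.name::text").get(),     # hypothetical selector
    "salary": job_response.css("span.salary::text").get(),  # hypothetical selector
}
print(detail)
```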