1. Introduction
Recently, a young lady I know needed to scrape all of the products from a certain Tmall store for work. Having studied Python for a few months, I agreed to give it a try, and promptly got stuck on the very first step: logging in to Tmall requires an account, a password, and a captcha! I knew this could be handled by simulating the login and carrying a Session, but as a beginner I hadn't learned anything that advanced yet and could see myself taking a lot of wrong turns. So I wondered: is there a simpler way?
Sure enough, there is! The mobile version of Tmall serves all of its data without requiring a login. And with that,
let's get started!
2. Pitfalls encountered
This article builds on 利用Python爬虫爬取指定天猫店铺全店商品信息 - 晴空行 - 博客园 (see the references below): I stepped into that author's pits, filled them in, and then added a few data requirements of my own.
- Pitfall 1
File "/Users/HTC/Documents/Programing/Python/WebCrawlerExample/WebCrawler/Tmall_demo.py", line 63, in get_products
writer.writerows(products)
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/csv.py", line 158, in writerows
return self.writer.writerows(map(self._dict_to_list, rowdicts))
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/csv.py", line 151, in _dict_to_list
+ ", ".join([repr(x) for x in wrong_fields]))
ValueError: dict contains fields not in fieldnames: 'titleUnderIconList'
writer.writerows failed because the 'titleUnderIconList' field is not among the CSV fieldnames; it appears to be a field the Tmall API started returning later. The quick fix is to delete it from each product dict before writing (a more tolerant alternative is sketched below):
del product['titleUnderIconList']
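Rather than deleting each unexpected field by hand, csv.DictWriter can simply be told to ignore keys that are not in fieldnames via its extrasaction parameter. A minimal sketch; the products list here stands in for the items returned by the Tmall API:

import csv

title = ['item_id', 'product_id', 'price', 'quantity', 'sold', 'title',
         'totalSoldQuantity', 'url', 'img']

# `products` stands in for the list of dicts returned by the Tmall API;
# 'titleUnderIconList' is an extra field the header doesn't know about.
products = [{'item_id': 14292263734, 'title': 'XXXXXX', 'price': '188.00',
             'titleUnderIconList': []}]

with open('products.csv', 'a', encoding='utf-8', newline='') as f:
    # extrasaction='ignore' drops unknown keys instead of raising ValueError,
    # so new fields added by the API later won't break the crawler.
    writer = csv.DictWriter(f, fieldnames=title, extrasaction='ignore')
    writer.writerows(products)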
- Pitfall 2
File "/Users/HTC/Documents/Programing/Python/WebCrawlerExample/WebCrawler/Tmall_demo.py", line 65, in get_products
writer.writerows(products)
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/csv.py", line 158, in writerows
return self.writer.writerows(map(self._dict_to_list, rowdicts))
UnicodeEncodeError: 'ascii' codec can't encode characters in position 26-27: ordinal not in range(128)
Anyone familiar with the differences between Python 2 and Python 3 will recognize 'ascii' codec can't encode as an encoding problem. The failure again surfaces in writer.writerows: in Python 3, whenever you parse, convert, or save text, it is best to specify utf-8 explicitly, especially when Chinese characters are involved.
So the fix is to pass the encoding when opening the file:
with open(self.filename, 'a', encoding="utf-8", newline='') as f:
writer = csv.DictWriter(f, fieldnames=title)
writer.writerows(products)
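By way of explanation: open() without an explicit encoding= falls back to the platform's preferred encoding, which is why the same script can work on one machine and crash on another. A minimal sketch of the behavior, assuming a C/POSIX locale where that default is ascii:

import locale

# What open() uses when no encoding= is given:
print(locale.getpreferredencoding(False))  # e.g. 'ANSI_X3.4-1968' (ascii) under LANG=C

# Raises UnicodeEncodeError when the default encoding is ascii:
with open('demo.csv', 'w', newline='') as f:
    f.write('天猫')

# Always safe: state the encoding explicitly.
with open('demo.csv', 'w', encoding='utf-8', newline='') as f:
    f.write('天猫')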
- Pitfall 3
035009803B0
Image download error : http//img.alicdn.com/bao/uploaded/i4/821705368/TB1Sht8cfQs8KJjSZFEXXc9RpXa_!!0-item_pic.jpg Invalid URL '035009803B0': No schema supplied. Perhaps you meant http://035009803B0?
02100713003
Image download error : http//img.alicdn.com/bao/uploaded/i1/821705368/TB1_OIkXQfb_uJkSmRyXXbWxVXa_!!0-item_pic.jpg Invalid URL '02100713003': No schema supplied. Perhaps you meant http://02100713003?
02800614023
Image download error : http//img.alicdn.com/bao/uploaded/i3/821705368/TB1kKK6cInI8KJjSsziXXb8QpXa_!!0-item_pic.jpg Invalid URL '02800614023': No schema supplied. Perhaps you meant http://02800614023?
These image-download failures occur because the product data returned by the Tmall API looks like this:
{
item_id: 14292263734,
title: "XXXXXX",
img: "//img.alicdn.com/bao/uploaded/i2/821705368/TB1Us3Qcr_I8KJjy1XaXXbsxpXa_!!0-item_pic.jpg",
sold: "3",
quantity: 0,
totalSoldQuantity: 2937,
url: "//detail.m.tmall.com/item.htm?id=xxxxx",
price: "188.00",
titleUnderIconList: [ ]
},
Notice the img and url fields: they carry no protocol name at all! Presumably a pit left over from history; even the big companies have them! The fix is simply to prepend the scheme before downloading, as in the helper sketched below.
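The original script handles this inline with imgUrl = 'https:' + product['img']; a slightly more defensive helper (the name normalize_url is mine) would only prepend the scheme when it is actually missing:

def normalize_url(url, scheme='https'):
    '''Prepend a scheme to protocol-relative URLs like //img.alicdn.com/...'''
    if url.startswith('//'):
        return '{}:{}'.format(scheme, url)
    return url

print(normalize_url('//img.alicdn.com/bao/uploaded/i2/example_pic.jpg'))
# -> https://img.alicdn.com/bao/uploaded/i2/example_pic.jpg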
3. Summary
The full code is in my GitHub repo, iHTCboy/WebCrawlerExample (see the references below).
For a detailed walkthrough of the code, refer to the original post 利用Python爬虫爬取指定天猫店铺全店商品信息 - 晴空行 - 博客园; it is explained very thoroughly.
Overall, because Tmall exposes its product data through a JavaScript (JSONP) endpoint, the data is easy to collect without crawling page after page of products, which is great. Crawling is a craft with many possible approaches, and finding the good one is the highest art of it. Keep at it!
Code
Python really is something: about a hundred lines of code and the job is done!
import os
import csv
import json
import random
import re
import time
from datetime import datetime

import requests


class TM_producs(object):
    def __init__(self, storename):
        self.storename = storename
        self.url = 'https://{}.m.tmall.com'.format(storename)
        # A mobile User-Agent, so the m.tmall.com endpoints answer without a login
        self.headers = {
            "user-agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 9_1 like Mac OS X) AppleWebKit/601.1.46 "
                          "(KHTML, like Gecko) Version/9.0 Mobile/13B143 Safari/601.1"
        }
        datenum = datetime.now().strftime('%Y%m%d_%H%M%S')
        self.filename = '{}_{}.csv'.format(self.storename, datenum)
        self.get_file()

    def get_file(self):
        '''Create the CSV file with a header row'''
        title = ['item_id', 'product_id', 'price', 'quantity', 'sold', 'title',
                 'totalSoldQuantity', 'url', 'img']
        with open(self.filename, 'w', encoding="utf-8", newline='') as f:
            writer = csv.DictWriter(f, fieldnames=title)
            writer.writeheader()

    def get_totalpage(self):
        '''Extract the total number of pages'''
        num = random.randint(83739921, 87739530)
        endurl = '/shop/shop_auction_search.do?sort=s&p=1&page_size=12&from=h5&ajson=1&_tm_source=tmallsearch&callback=jsonp_{}'
        url = self.url + endurl.format(num)
        html = requests.get(url, headers=self.headers).text
        # Strip the JSONP wrapper jsonp_xxx({...}) and parse the JSON payload
        infos = re.findall(r'\(({.*})\)', html)[0]
        infos = json.loads(infos)
        totalpage = infos.get('total_page')
        return int(totalpage)

    def get_products(self, page):
        '''Extract one page of products and append them to the CSV'''
        num = random.randint(83739921, 87739530)
        endurl = '/shop/shop_auction_search.do?sort=s&p={}&page_size=12&from=h5&ajson=1&_tm_source=tmallsearch&callback=jsonp_{}'
        url = self.url + endurl.format(page, num)
        html = requests.get(url, headers=self.headers).text
        infos = re.findall(r'\(({.*})\)', html)[0]
        infos = json.loads(infos)
        products = infos.get('items')
        for product in products:
            # Pitfall 1: drop the field that is not in the CSV header
            del product['titleUnderIconList']
            item_id = product['item_id']
            product_id = self.get_product_spm(item_id)
            product['product_id'] = product_id
            # Pitfall 3: the API returns protocol-relative image URLs
            imgUrl = 'https:' + product['img']
            self.save_img(imgUrl, product_id)
        title = ['item_id', 'product_id', 'price', 'quantity', 'sold', 'title',
                 'totalSoldQuantity', 'url', 'img']
        # Pitfall 2: always write the CSV as utf-8
        with open(self.filename, 'a', encoding="utf-8", newline='') as f:
            writer = csv.DictWriter(f, fieldnames=title)
            writer.writerows(products)

    def get_product_spm(self, item_id):
        '''Fetch the product detail page and extract the article number (货号)'''
        url = 'https://detail.m.tmall.com/item.htm?id={}'.format(item_id)
        html = requests.get(url, headers=self.headers).text
        # The page embeds e.g. {"货号":"07300318000 "}
        product_id = re.findall(r'"货号":"(.+?)"}', html)[0].strip()
        print(product_id)
        return product_id

    def save_img(self, img_url, file_name):
        try:
            # Derive the image's file extension from the URL
            file_suffix = os.path.splitext(img_url)[1]
            cwd = os.getcwd()
            save_path = os.path.join(cwd, 'images/')
            if not os.path.exists(save_path):
                os.makedirs(save_path)
            image_path = os.path.join(save_path, file_name + file_suffix)
            # Download and save the image
            image = requests.get(img_url, headers=self.headers)
            with open(image_path, 'wb') as f:
                f.write(image.content)
        except Exception as e:
            print('Image download error :', file_name, e)

    def main(self):
        '''Loop over every page of products'''
        total_page = self.get_totalpage()
        for i in range(1, total_page + 1):
            self.get_products(i)
            print('{} pages of products in total; page {} extracted'.format(total_page, i))
            time.sleep(1 + random.random())


if __name__ == '__main__':
    storename = 'mgssp'
    tm = TM_producs(storename)
    tm.main()
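One extra hardening step, not part of the original script: if the shop endpoints start rate-limiting or dropping connections, a requests.Session with automatic retries and backoff helps. A minimal sketch using the urllib3 Retry class that ships alongside requests:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session():
    '''A requests.Session that retries transient failures with backoff.'''
    session = requests.Session()
    retry = Retry(total=3, backoff_factor=1,
                  status_forcelist=[429, 500, 502, 503, 504])
    adapter = HTTPAdapter(max_retries=retry)
    session.mount('https://', adapter)
    session.mount('http://', adapter)
    return session

# Usage: replace the bare requests.get(...) calls with session.get(...):
# session = make_session()
# html = session.get(url, headers=headers).text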
References
- iHTCboy/WebCrawlerExample: web crawler practice examples
- 利用Python爬虫爬取指定天猫店铺全店商品信息 - 晴空行 - 博客园
- Hopetree/E-commerce-crawlers: a collection of crawlers for e-commerce sites such as Taobao, JD, and Amazon
- Questions? Feel free to discuss them in the comments!
- If anything here is incorrect, corrections are welcome!
Note: This article was first published on iHTCboy's blog. If you repost it, please credit the source.