May 18, 2017
Today a classmate asked me why he couldn't extract any information from a Tieba page.
Here is his original code:
import requests
from bs4 import BeautifulSoup
start_url = "http://tieba.baidu.com/p/4957100148"
headers = {"User-Agent":"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.75 Safari/537.36 LBBROWSER"}
response = requests.get(start_url,headers = headers).text
soup = BeautifulSoup(response,"html.parser")
infos = soup.select('div.d_post_content j_d_post_content clearfix')  # returns an empty list: the spaces turn this into a descendant selector
He tried to match the div by its full class attribute, but `select()` treats the spaces as descendant combinators, so this selector looks for non-existent `j_d_post_content` and `clearfix` tags and never matches anything. For a problem like this we can change our angle: if one locator fails, move up the tree and select from the parent element instead. Here is my code:
import requests
from bs4 import BeautifulSoup
start_url = "http://tieba.baidu.com/p/4957100148"
headers = {"User-Agent":"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.75 Safari/537.36 LBBROWSER"}
response = requests.get(start_url,headers = headers).text
soup = BeautifulSoup(response,"html.parser")
infos = soup.select('cc > div')  # each post body is a div that sits directly inside a <cc> tag
for info in infos:
    print(info.get_text().strip())
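As an aside, the classmate's class-based approach can also be made to work by chaining the classes with dots instead of spaces, or by passing a single class to find_all. Below is a minimal sketch under the assumption that the page still serves the same markup as above (URL, headers, and class names are taken from the code in this post):

import requests
from bs4 import BeautifulSoup

start_url = "http://tieba.baidu.com/p/4957100148"
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.75 Safari/537.36 LBBROWSER"}
response = requests.get(start_url, headers=headers).text
soup = BeautifulSoup(response, "html.parser")

# Chain the classes with dots so they all apply to the same div
infos = soup.select('div.d_post_content.j_d_post_content.clearfix')
# Equivalent idea with find_all: class_ matches any element whose class list contains this value
# infos = soup.find_all('div', class_='d_post_content')

for info in infos:
    print(info.get_text().strip())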
To sum up: all of these scraping techniques need to be used flexibly.