[Based on Python 3]
Downloading with urllib:
If you don't know about the urlretrieve method, you would write it like this:
from urllib import request

url = "http://inews.gtimg.com/newsapp_match/0/2711870562/0"
req = request.Request(url)
res = request.urlopen(req)
text = res.read()
with open("2.jpg", "wb") as f:
    f.write(text)
Once you know about urlretrieve, it becomes:
from urllib import request

url = "http://inews.gtimg.com/newsapp_match/0/2711870562/0"
request.urlretrieve(url, "1.jpg")
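urlretrieve also accepts an optional reporthook callback, which is useful for showing progress on large downloads. A minimal sketch; the progress function here is our own illustration, not part of urllib:

from urllib import request

def progress(block_num, block_size, total_size):
    # Called once per retrieved block; total_size is -1 if the server
    # sends no Content-Length header.
    print("retrieved about", block_num * block_size, "of", total_size, "bytes")

url = "http://inews.gtimg.com/newsapp_match/0/2711870562/0"
request.urlretrieve(url, "1.jpg", reporthook=progress)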
Proxies in urllib (compared with the proxy approach in Requests):
from urllib import request, parse

data = {
    'first': 'true',
    'pn': 1,
    'kd': 'Python'
}
url = 'http://2017.ip138.com/ic.asp'
# Set up the proxy
proxy = request.ProxyHandler({'http': '223.241.78.186:8010'})
# Build an opener that uses the proxy handler
opener = request.build_opener(proxy)
# Install the opener as the global default
request.install_opener(opener)
data = parse.urlencode(data).encode('utf-8')
page = opener.open(url, data).read()
print(type(page))
print(page.decode("gbk"))
Result:
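For comparison, Requests does not use a globally installed opener at all: the proxy is passed per request as a proxies dict. A minimal sketch, assuming the same proxy address as above and that the requests package is installed:

import requests

url = 'http://2017.ip138.com/ic.asp'
proxies = {'http': 'http://223.241.78.186:8010'}
# The proxy applies only to this call; no global state is modified
res = requests.get(url, proxies=proxies)
print(res.content.decode('gbk'))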
Using cookies with urllib:
If you already know the cookie, e.g. you captured it with a packet sniffer, you can simply put it into the request headers and you are logged in.
The cookie for a logged-in jd.com session differs from the anonymous one, so after logging in to jd.com you can capture the cookie and then use it when visiting the site as that user.
import urllib.request

url = "http://www.jd.com"
header = {
    "user-agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36",
    "cookie": "xxxxxxxxx (cookie of a logged-in user)"
}
req = urllib.request.Request(url=url, headers=header)
res = urllib.request.urlopen(req)
text = res.read().decode("utf-8")
print(text)
Execution result:
Notes:
Cookie-related classes in urllib
In Python 2 the cookie module is imported with: import cookielib
In Python 3 it is imported with: import http.cookiejar
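Code that has to run on both versions usually wraps the import in a try/except; a minimal sketch:

try:
    import http.cookiejar as cookielib  # Python 3
except ImportError:
    import cookielib  # Python 2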
The concept of an opener
When you fetch a URL you use an opener (an instance of urllib.request.OpenerDirector; urllib2.OpenerDirector in Python 2). So far we have always used the default opener, i.e. urlopen.
urlopen is a special opener, which you can think of as one particular instance of an opener; the only parameters it takes are url, data and timeout.
If we need cookies, this opener alone is not enough, so we have to build a more general opener that supports setting cookies.
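A minimal sketch of the relationship between build_opener, install_opener and urlopen (no extra handlers yet, so this opener behaves like the default one):

import urllib.request

# build_opener returns an OpenerDirector; handlers passed in customise it
opener = urllib.request.build_opener()
# install_opener makes it the process-wide default, so plain urlopen() uses it
urllib.request.install_opener(opener)
res = urllib.request.urlopen("http://www.hao123.com")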
Printing the cookie object in the terminal
import urllib.request
import http.cookiejar

url = "http://www.hao123.com"
req = urllib.request.Request(url)
cookieh = http.cookiejar.CookieJar()  # holds the cookie objects
handler = urllib.request.HTTPCookieProcessor(cookieh)
# Bind the handler to an opener: every request made through this opener
# automatically stores its cookies in the jar
opener = urllib.request.build_opener(handler)
r = opener.open(req)
print(cookieh)
The printed CookieJar object:
<CookieJar[<Cookie BAIDUID=E9770FE732D04AB585E90684F0E307ED:FG=1 for .hao123.com/>, <Cookie hz=0 for .www.hao123.com/>, <Cookie ft=1 for www.hao123.com/>, <Cookie v_pg=normal for www.hao123.com/>]>
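A CookieJar is also iterable, each element being an http.cookiejar.Cookie object, so individual fields can be read directly (continuing with cookieh from the block above):

for c in cookieh:
    print(c.name, "=", c.value, "for domain", c.domain)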
Saving the cookies to a file:
import urllib.request
import http.cookiejar

url = "http://www.hao123.com"
req = urllib.request.Request(url)
cookieFileName = "cookie.txt"
# A file-backed cookie jar in the Mozilla cookies.txt format
cookieh = http.cookiejar.MozillaCookieJar(cookieFileName)
handler = urllib.request.HTTPCookieProcessor(cookieh)
opener = urllib.request.build_opener(handler)
r = opener.open(req)
print(cookieh)
cookieh.save()
Run it:
The cookies are saved to the file cookie.txt.
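One caveat: by default save() skips session cookies (those flagged discard) as well as expired ones, so the file can come out emptier than the printed jar. To persist everything, pass the same flags that the load example below uses:

cookieh.save(ignore_discard=True, ignore_expires=True)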
Reading the cookie information back from the file and using it for a request:
import urllib.request
import http.cookiejar

cookie_filename = 'cookie.txt'
cookie = http.cookiejar.MozillaCookieJar(cookie_filename)
cookie.load(cookie_filename, ignore_discard=True, ignore_expires=True)
print(cookie)
url = "http://www.hao123.com"
req = urllib.request.Request(url)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)  # build an opener with urllib.request's build_opener
response = opener.open(req)
print(response.read().decode("utf-8"))  # decode to avoid garbled output