Python Crawler: User Authentication

Some websites pop up a dialog box as soon as you open them, asking you to enter a username and password; only after the credentials are verified can you view the page, as shown in the figure:

If you try to crawl such a page directly, the request fails with an error, as shown below:

from urllib import request

url = "http://localhost:8081/manager/html"
response = request.urlopen(url)   # fails: the server demands authentication
html = response.read().decode("utf-8")
print(html)

# Result:
# urllib.error.HTTPError: HTTP Error 401: Unauthorized
# Status code 401 (Unauthorized): the request requires authentication.
# A server may return this response for pages that require a login.
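Under the hood, HTTP Basic authentication just sends `base64(username:password)` in an `Authorization` request header; the 401 above means that header was missing or wrong. A minimal sketch (using the same example credentials `admin`/`1234`) of what that header looks like:

```python
import base64

# Basic auth encodes "username:password" as base64 and sends it in the
# Authorization header; a 401 means this header was missing or invalid.
credentials = base64.b64encode(b"admin:1234").decode("ascii")
auth_header = "Basic " + credentials
print(auth_header)  # → Basic YWRtaW46MTIzNA==

# Note: base64 is an encoding, not encryption -- the raw credentials are
# trivially recoverable, so Basic auth should only be used over HTTPS.
```

The handlers shown below build exactly this header for you, and also handle the initial 401 challenge/response round trip.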

So how do we get around this problem?

Method 1: use the HTTPBasicAuthHandler class from urllib.request. The code is as follows:

from urllib import request, error

username = "admin"
password = "1234"
url = "http://localhost:8081/manager/html"

# Step 1: instantiate an HTTPPasswordMgrWithDefaultRealm object
p = request.HTTPPasswordMgrWithDefaultRealm()
# Step 2: add the username and password to the password manager
p.add_password(None, url, username, password)
# Step 3: build an HTTPBasicAuthHandler from the password manager
handler = request.HTTPBasicAuthHandler(p)
# Step 4: build an opener from the handler
opener = request.build_opener(handler)
try:
    # Step 5: open the URL with the opener's open() method
    response = opener.open(url)
    html = response.read().decode("utf-8")
    print(html)
except error.URLError as e:
    print(e.reason)
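If many parts of your crawler need the same credentials, you can also install the opener globally with `request.install_opener()`, after which plain `request.urlopen()` calls authenticate automatically. A sketch, reusing the same hypothetical localhost URL and credentials as above:

```python
from urllib import request

username = "admin"
password = "1234"
url = "http://localhost:8081/manager/html"

# Register the credentials for any realm on this URL.
p = request.HTTPPasswordMgrWithDefaultRealm()
p.add_password(None, url, username, password)

# find_user_password() lets you verify the credentials were stored.
print(p.find_user_password(None, url))  # → ('admin', '1234')

handler = request.HTTPBasicAuthHandler(p)
opener = request.build_opener(handler)

# After install_opener(), request.urlopen() itself performs Basic auth,
# so you no longer need to pass the opener object around.
request.install_opener(opener)
# response = request.urlopen(url)  # would now authenticate automatically
```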

Method 2: use the requests module

import requests
from requests.auth import HTTPBasicAuth

url = "http://localhost:8081/manager/html"
# Option 1: pass an HTTPBasicAuth object
# response = requests.get(url, auth=HTTPBasicAuth('admin', '1234'))
# Option 2: pass a (username, password) tuple -- shorthand for the same thing
response = requests.get(url, auth=('admin', '1234'))
print(response.status_code)


Reposted from blog.csdn.net/qq_40176258/article/details/84929552