Python Web Scraping (Part 2): Another Look at the Douban Top 250

Picking up from the previous post: when I went on to scrape the popular comments on Douban, the requests started failing with a message that my IP was abnormal. I realized Douban's anti-scraping measures had caught me, so I went hunting online for ways to disguise the crawler and, after a fair amount of searching, summarized the following approaches:

1. Build a UA pool: keep a pool of "User-Agent" strings and rotate through them so the crawler passes itself off as an ordinary browser. This is the most common trick, but once your IP has been banned it will not do much on its own (a combined sketch of all three approaches follows this list).

2. Use proxies (build an IP pool): plenty of experienced answers recommend proxies, and they are a good solution. Being short on cash, I first scraped IPs from a free proxy site (the free-proxy code is attached at the end of this post). Free proxies, however, are unstable and have to be swapped out constantly, so after some experimenting I settled on a paid proxy after all; it is not expensive anyway. With your local IP wrapped behind a proxy, scraping the site becomes much more convenient.

3. Set an interval between requests: I once read a comment that "running a crawler with no delay is like XX without protection." Quite so; adding a delay is the bare minimum of scraping etiquette. While amusing yourself, don't go casually adding to someone else's pain (that is, the load on their server).
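To make the three approaches above concrete, here is a minimal combined sketch. The extra User-Agent strings and the polite_get helper are illustrative additions of my own; the proxy addresses are the same example ones used in the full script below and are unlikely to still work:

import random
import time
import requests

# 1. A small User-Agent pool; collect more strings in practice.
UA_POOL = [
	'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36',
	'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36',
	'Mozilla/5.0 (X11; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0',
]

# 2. One proxy per scheme; replace with proxies you scraped or bought.
PROXIES = {
	'http': 'http://122.114.31.177:808',
	'https': 'https://171.13.37.85:808',
}

def polite_get(url):
	# Rotate the User-Agent, route the request through the proxy,
	# and pause afterwards so the server is not hammered.
	headers = {'User-Agent': random.choice(UA_POOL)}
	resp = requests.get(url, headers=headers, proxies=PROXIES, timeout=10)
	time.sleep(random.randint(1, 10))  # 3. random 1-10 s delay, same range as the script below
	return resp.text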

For simple sites, the measures above are already enough. This post scrapes the hottest comment for each film in the Douban Top 250, that is, the first comment on each film's comments page. The result looks like this:


The source code is as follows:

# -*- coding: utf-8 -*-
# @Time    : 2018/1/8 14:05
# @Author  : RKGG
# @File    : reptile_douban_top250_comments.py
# @Software: PyCharm

import requests
from bs4 import BeautifulSoup
import codecs
import re
import time
import UA  # local module holding a list of User-Agent strings, as sketched above
import random

'''Scrape the most popular comment:
	1. Fetch each film's page from the Top 250 list
	2. Grab the first comment in its comment section
	3. Record it
'''
host_url = 'http://movie.douban.com/top250'
headers = {
		'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
		              'Chrome/63.0.3239.132 Safari/537.36'}
proxies = {'http': 'http://122.114.31.177:808', 'https': 'https://171.13.37.85:808'}  # example proxies; replace with working ones
def getProxies():
	proxies = {}
	for page in range(1,5):
		url = 'http://www.xicidaili.com/nn/'+str(page)
		html = requests.get(url,headers=headers).text
		soup = BeautifulSoup(html,"html.parser")
		ip_list = soup.find('table',attrs={'id':'ip_list'})
		for ip_tr in ip_list.find_all('tr',attrs={'class':'odd'}):
			ip = ip_tr.find_all('td')[1].getText()
			port = ip_tr.find_all('td')[2].getText()
			protocol = ip_tr.find_all('td')[5].getText()
			proxies[protocol.lower()] = protocol.lower() + '://' + ip + ':' + port
			# print('page--->'+str(page))
			# print(proxies)
	return proxies


def gethtml(url):
	# headers = {'User-Agent':UA.UA[random.randint(0,len(UA.UA)-1)]}
	data = requests.get(url, headers=headers, proxies=proxies)  # pass the proxies keyword, not params
	return data.text

def parse_html(html):
	soup = BeautifulSoup(html,"html.parser")
	response = soup.find('ol',attrs={'class':'grid_view'})
	movies_list = []
	for movies_li in response.find_all('li'):
		detail = movies_li.find('div', attrs={'class': 'hd'})
		movie_name = detail.find('span', attrs={'class': 'title'}).getText()
		# Get the individual film's page
		movies_url = detail.find('a')['href']
		# Get its comments page
		hot_list = []
		movies_comments_url = movies_url + 'comments?sort=new_score&status=P'
		comment_data = gethtml(movies_comments_url)
		comment_soup = BeautifulSoup(comment_data, "html.parser")
		head_info = comment_soup.find('head').find('title').getText()
		if head_info != '页面不存在':  # '页面不存在' is Douban's "page not found" title
			comments_list = comment_soup.find('div', attrs={'class': 'mod-bd', 'id': 'comments'})
			hot_comments = comments_list.find('div', attrs={'class': 'comment-item'}).find('p').getText()
			hot_list.append(hot_comments)
			movie_detail = movie_name + ':' + hot_list[0]
			movies_list.append(movie_detail)
			time.sleep(random.randint(1,10))
			print(hot_list[0])
			continue
		else:
			print('Page no longer exists!')

	# Move on to the next page of the Top 250 list
	next_page = soup.find('span', attrs={'class': 'next'}).find('a')
	# print('--->'+next_page)
	if next_page:
		return movies_list, host_url + next_page['href']
	return movies_list,None


if __name__ == '__main__':
	url = host_url
	with codecs.open('movies_hot_comments.txt','w',encoding='utf-8') as fp:
		while url:
			html = gethtml(url)
			# print(html)
			movies, url = parse_html(html)
			fp.write(u'\n'.join(movies) + u'\n')  # trailing newline so pages don't run together

While scraping the comments I found that some comment pages have expired, which is why the code above includes the check for whether the page still exists.

Below is the source code for scraping free proxies:

import requests
from bs4 import BeautifulSoup
import time
import random
import re
headers = {
		'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
		              'Chrome/63.0.3239.132 Safari/537.36'}
def getProxies():
	proxies = {}
	url = 'http://www.xicidaili.com/nn/'+str(random.randint(1,5))
	html = requests.get(url,headers=headers).text
	soup = BeautifulSoup(html,"html.parser")
	ip_list = soup.find('table',attrs={'id':'ip_list'})
	for ip_tr in ip_list.find_all('tr',attrs={'class':'odd'}):
		ip = ip_tr.find_all('td')[1].getText()
		port = ip_tr.find_all('td')[2].getText()
		protocol = ip_tr.find_all('td')[5].getText()
		proxies[protocol.lower()] = protocol.lower() +'://'+ ip+':'+port
		print(proxies)
		time.sleep(random.randint(0,3))
	return proxies
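
Since free proxies die quickly, it helps to verify the scraped entries before relying on them. A minimal sketch, reusing the headers and getProxies() defined above (the test URL and the 5-second timeout are arbitrary choices of mine, not part of the original script):

def check_proxy(scheme, address, test_url='http://movie.douban.com/top250'):
	# Return True if the proxy can fetch the test page within the timeout.
	try:
		resp = requests.get(test_url, headers=headers, proxies={scheme: address}, timeout=5)
		return resp.status_code == 200
	except requests.RequestException:
		return False

if __name__ == '__main__':
	scraped = getProxies()
	working = {s: addr for s, addr in scraped.items() if check_proxy(s, addr)}
	print(working)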




Reposted from blog.csdn.net/xiaoxun2802/article/details/79041648