In the previous post on scraping Douban movies, I briefly introduced three common Python web-scraping methods and walked through a small example of scraping Douban movie information. Today I am writing up another common need on the video side of the business: for a given film or TV series, we care about what people think of it. The comments can be rendered as a word cloud to show the overall direction of opinion at a glance; they can also feed later text analysis that digs into the actors, plot, special effects, and the corresponding audience demographics for deeper, statistically grounded study, mining the value hidden behind the comment data.
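The word-cloud step ultimately reduces to counting term frequencies over the scraped comments. A minimal standard-library sketch of that counting step (in practice Chinese text would first be segmented with a tool such as jieba and rendered with the wordcloud package; both are assumptions, not part of this post's code, and the sample comments below are made up):

```python
from collections import Counter

# hypothetical sample comments standing in for the scraped data
comments = [
    "special effects great story weak",
    "story weak acting great",
    "great visuals",
]

# tally word frequencies across all comments
freq = Counter()
for text in comments:
    freq.update(text.split())

print(freq.most_common(2))  # the two most frequent terms
```

The resulting `freq` mapping is exactly what a word-cloud renderer consumes: a term-to-weight dictionary.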
Case Study
This example uses the film 《爵迹》; to target a different film or series, just swap the Douban subject ID in the code's url. The scraped fields are the user name, posting date, and comment text. If you need more fields, extend the code yourself, or leave a request in the comments and I will update the post periodically.
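Since swapping titles only changes the subject ID in the URL, it can help to pull the URL construction into a small helper. This is a hypothetical refactor, not part of the original script, with the page count and page size as parameters:

```python
def build_comment_urls(subject_id, pages=11, page_size=20):
    """Build the paginated short-comment URLs for one Douban subject ID."""
    base = ('https://movie.douban.com/subject/{}/comments'
            '?start={}&limit={}&sort=new_score&status=P')
    return [base.format(subject_id, n * page_size, page_size)
            for n in range(pages)]

# 26354336 is the subject ID used in this post
urls = build_comment_urls('26354336')
print(urls[0])
```

With 11 pages of 20 comments each, this covers the same 220 comments as the list comprehension in the code below.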
Code
This example also uses the BeautifulSoup approach.
import random
from bs4 import BeautifulSoup  # HTML parsing
import requests  # HTTP requests
import pandas as pd
import time  # sleep between requests to throttle the crawler
User_Agents =[
'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)',
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
'Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11',
'Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11',
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11',
]
allinfo = []
# swap the Douban subject ID in the url for your target film or series
# (the comments_only=1 parameter is dropped: with it, Douban returns JSON rather than HTML)
urls = ['https://movie.douban.com/subject/26354336/comments?start={}&limit=20&sort=new_score&status=P'.format(number) for number in range(0, 220, 20)]
def getinfo(url):
    # build the CSS selectors for the 20 comment blocks on one page
    selnumber_data = []
    seltime_data = []
    selname_data = []
    selcomment_data = []
    for i in range(1, 21):
        base = '#comments > div:nth-child(' + str(i) + ') > div.comment > '
        selnumber_data.append(base + 'h3 > span.comment-vote')
        seltime_data.append(base + 'h3 > span.comment-info > span.comment-time')
        selname_data.append(base + 'h3 > span.comment-info > a')
        selcomment_data.append(base + 'p > span')
    # fetch the page once and reuse the parsed soup for every selector,
    # instead of re-requesting the same URL for each comment
    wdata = requests.get(url, headers={
        'User-Agent': random.choice(User_Agents)})
    wsoup = BeautifulSoup(wdata.text, 'lxml')
    for i in range(0, 20):
        numbers = wsoup.select(selnumber_data[i])
        times = wsoup.select(seltime_data[i])
        names = wsoup.select(selname_data[i])
        comments = wsoup.select(selcomment_data[i])
        info = {
            'name': names[0].get_text() if names else "",
            'number': numbers[0].get_text() if numbers else "",
            'time': times[0].get_text() if times else "",
            'comment': comments[0].get_text() if comments else ""
        }
        allinfo.append(info)

for url in urls:
    print(url)
    getinfo(url)
    seconds = random.uniform(3, 4)  # pause 3–4 seconds between pages
    time.sleep(seconds)

# write everything out once, after all pages have been scraped
df = pd.DataFrame(allinfo)
df.to_excel('jueji_douban.xlsx', sheet_name='Sheet1', index=False)
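To check what those selectors actually extract without hitting the network, here is an offline sketch. The HTML snippet below is hand-written to imitate one Douban comment item (an assumption about the markup, not a real response), and `html.parser` is used so no lxml install is needed:

```python
from bs4 import BeautifulSoup

# hand-written snippet imitating one Douban comment item (an assumption)
html = '''
<div id="comments">
  <div class="comment-item">
    <div class="comment">
      <h3>
        <span class="comment-vote">12</span>
        <span class="comment-info">
          <a href="#">alice</a>
          <span class="comment-time">2016-10-01</span>
        </span>
      </h3>
      <p><span>Great visuals, weak story.</span></p>
    </div>
  </div>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')
# same selector shape as in the script: the i-th child div of #comments
name = soup.select('#comments > div:nth-child(1) > div.comment > h3 > span.comment-info > a')
comment = soup.select('#comments > div:nth-child(1) > div.comment > p > span')
print(name[0].get_text(), '->', comment[0].get_text())
```

If Douban changes its markup, this is also the quickest place to debug: paste a saved page into `html` and adjust the selectors until they match.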