How do you scrape Weibo comments with a Python crawler? Many people haven't picked up this skill yet, so this post walks through the whole process. Without further ado, let's dive in.
Data format: {"name": commenter's screen name, "comment_time": time of the comment, "comment_info": comment text, "comment_url": URL of the commenter's profile page}
That is all the information we need.
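For reference, each record written to the output file will look roughly like this (the values here are made up purely for illustration):

    {"name": "some_user", "comment_time": "2019-03-01 12:30", "comment_info": "sample comment text", "comment_url": "https://weibo.com/u/1234567890"}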
The workflow, step by step:
First we fetch the profile page. You will quickly notice it carries some anti-scraping measures: the source we get back is littered with escaping backslashes ("\"), and the "<" and ">" characters are written out as HTML entities, both of which break normal page parsing. The simplest fix is to strip all of those escape characters with replace(), after which the page can be processed with regular expressions. I also tried other parsing approaches, but each ran into its own problems, so I won't go into them here.
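As a minimal sketch of that cleanup step (assuming, as described above, that the raw response text contains backslash-escaped HTML; headers must include your cookie, as in the full code below):

    import re
    import requests

    def get_post_ids(url, headers):
        """Fetch a profile page, strip the escaping backslashes,
        then pull out the post ids with a regular expression."""
        res = requests.get(url, headers=headers)
        text = res.content.decode().replace("\\", "")  # drop the escaping backslashes
        return re.findall(r'name=(\d+)', text)         # each post id appears as name=<digits>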
Once we have the link to each post, we still need one key value: the post's id. What is it for? It is the piece we splice into the URL of the ajax endpoint that serves the comment pages. The next step is to work out what pattern the two ajax URLs we captured follow:
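For the comment side, the id slots into a fixed endpoint, and later pages are fetched by appending a page parameter. Here is a sketch based on the URLs used in the full code below:

    def build_comment_urls(post_id, pages):
        """Build the ajax comment URLs for one post: page 1 has no
        page parameter, later pages append &page=N."""
        base = 'https://weibo.com/aj/v6/comment/big?ajwvr=6&id={}&from=singleWeiBo'.format(post_id)
        return [base] + [base + '&page={}'.format(n) for n in range(2, pages + 1)]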
Once the pattern behind these ajax requests is clear, the crawler is basically done.
Here is my code:
Note: add your own cookie to the headers.
import requests
import json
import time
from lxml import etree
import re
class Weibospider:
    def __init__(self):
        self.start_url = 'https://weibo.com/u/5644764907?page=1&is_all=1'
        self.headers = {
            "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
            "accept-encoding": "gzip, deflate, br",
            "accept-language": "zh-CN,zh;q=0.9,en;q=0.8",
            "cache-control": "max-age=0",
            "cookie": "paste your own cookie here",  # use the cookie from your own logged-in browser session
            "referer": "https://www.weibo.com/u/5644764907?topnav=1&wvr=6&topsug=1",
            "upgrade-insecure-requests": "1",
            "user-agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.96 Safari/537.36",
        }
        # Optional proxy pool (a dict cannot hold duplicate keys, so keep these in a list);
        # not wired into the requests below -- pass one as proxies= if you need it.
        self.proxies = [
            {'HTTP': 'HTTP://180.125.70.78:9999'},
            {'HTTP': 'HTTP://117.90.4.230:9999'},
            {'HTTP': 'HTTP://111.77.196.229:9999'},
            {'HTTP': 'HTTP://111.177.183.57:9999'},
            {'HTTP': 'HTTP://123.55.98.146:9999'},
        ]
    def parse_home_url(self, url):  # parse one chunk of the profile page (works for the plain page and the two ajax chunks)
        res = requests.get(url, headers=self.headers)
        response = res.content.decode().replace("\\", "")
        every_id = re.compile(r'name=(\d+)', re.S).findall(response)  # ids needed for the comment pages
        home_url = []
        for id in every_id:
            base_url = 'https://weibo.com/aj/v6/comment/big?ajwvr=6&id={}&from=singleWeiBo'
            url = base_url.format(id)
            home_url.append(url)
        return home_url
    def parse_comment_info(self, url):  # scrape the people commenting directly on a post (name, info, time, info_url)
        res = requests.get(url, headers=self.headers)
        response = res.json()
        count = response['data']['count']
        html = etree.HTML(response['data']['html'])
        name = html.xpath("//div[@class='list_li S_line1 clearfix']/div[@class='WB_face W_fl']/a/img/@alt")  # commenter names
        info = html.xpath("//div[@node-type='replywrap']/div[@class='WB_text']/text()")  # comment text
        info = "".join(info).replace(" ", "").split("\n")
        info.pop(0)
        comment_time = html.xpath("//div[@class='WB_from S_txt2']/text()")  # comment timestamps
        name_url = html.xpath("//div[@class='WB_face W_fl']/a/@href")  # commenter profile links
        name_url = ["https:" + i for i in name_url]
        comment_info_list = []
        for i in range(len(name)):
            item = {}
            item["name"] = name[i]  # the commenter's screen name
            item["comment_info"] = info[i]  # the comment text
            item["comment_time"] = comment_time[i]  # when the comment was posted
            item["comment_url"] = name_url[i]  # the commenter's profile page
            comment_info_list.append(item)
        return count, comment_info_list
    def write_file(self, path_name, content_list):
        for content in content_list:
            with open(path_name, "a", encoding="UTF-8") as f:
                f.write(json.dumps(content, ensure_ascii=False))
                f.write("\n")
    def run(self):
        start_url = 'https://weibo.com/u/5644764907?page={}&is_all=1'
        start_ajax_url1 = 'https://weibo.com/p/aj/v6/mblog/mbloglist?ajwvr=6&domain=100406&is_all=1&page={0}&pagebar=0&pl_name=Pl_Official_MyProfileFeed__20&id=1004065644764907&script_uri=/u/5644764907&pre_page={0}'
        start_ajax_url2 = 'https://weibo.com/p/aj/v6/mblog/mbloglist?ajwvr=6&domain=100406&is_all=1&page={0}&pagebar=1&pl_name=Pl_Official_MyProfileFeed__20&id=1004065644764907&script_uri=/u/5644764907&pre_page={0}'
        for i in range(12):  # this profile has 12 pages of posts
            home_url = self.parse_home_url(start_url.format(i + 1))  # posts on the page itself
            ajax_url1 = self.parse_home_url(start_ajax_url1.format(i + 1))  # posts in the first ajax chunk
            ajax_url2 = self.parse_home_url(start_ajax_url2.format(i + 1))  # posts in the second ajax chunk
            all_url = home_url + ajax_url1 + ajax_url2
            for j in range(len(all_url)):
                print(all_url[j])
                path_name = "comments_of_post_{}.txt".format(i * 45 + j + 1)
                all_count, comment_info_list = self.parse_comment_info(all_url[j])
                self.write_file(path_name, comment_info_list)
                for num in range(1, 10000):
                    if num * 15 < int(all_count) + 15:
                        comment_url = all_url[j] + "&page={}".format(num + 1)
                        print(comment_url)
                        try:
                            count, comment_info_list = self.parse_comment_info(comment_url)
                            self.write_file(path_name, comment_info_list)
                        except Exception as e:
                            print("Error:", e)
                            time.sleep(60)
                            count, comment_info_list = self.parse_comment_info(comment_url)
                            self.write_file(path_name, comment_info_list)
                        del count
                        time.sleep(0.2)
                    else:
                        break  # no more comment pages for this post
                print("Finished collecting comments for post {}!".format(i * 45 + j + 1))
if __name__ == '__main__':
    weibo = Weibospider()
    weibo.run()
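One detail worth spelling out: the loop over num in run() is just page counting. Each ajax response carries a count field with the total number of comments, and Weibo serves roughly 15 comments per page, so the number of extra pages to request follows directly from it. A small helper equivalent to the `num * 15 < int(all_count) + 15` check above might look like this:

    import math

    def pages_needed(count, per_page=15):
        """How many ajax pages cover `count` comments,
        assuming ~15 comments per page as the loop in run() does."""
        return math.ceil(count / per_page)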
That covers how to scrape Weibo comments with a Python crawler; the finer points will only become clear once you try it yourself.