
How to scrape Weibo data with Python and Selenium

Published 2020-07-22 15:37:53 · Source: Yisu Cloud (億速云) · Author: 小豬

This article explains how to scrape Weibo data with Python and Selenium. The walkthrough is short and clear; if you are interested, read on and try it yourself.

The goal: scrape one user's Weibo posts, covering every time period of their timeline.

The approach:

create the driver → get the page → locate and extract the data → save to CSV → turn the page → get the next page (the loop begins) → … → stop when there is no "next page" link.

The loop is a plain while True rather than a self-calling (recursive) function; the sketch below condenses it.
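
Condensed, the control flow the full listing below implements is just this sketch (the function names are the ones defined in the code that follows):

driver = start_chrome()             # create the browser once
while True:
  get_web(weibo_url)                # load the page and scroll to the bottom
  save_csv(get_data(), csv_name)    # extract the posts and append them to the CSV
  next_url = next_page_url()        # look for a "next page" link
  if not next_url:                  # no link left: the oldest posts are reached
    break
  weibo_url = next_url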

嘟大海's Weibo: https://weibo.com/u/1623915527

辦公室小野's Weibo: https://weibo.com/bgsxy

The full code is as follows:

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import csv
import os
import time
 
# The only two settings: change the profile URL and the CSV name here to scrape a different account
weibo_url = 'https://weibo.com/bgsxy?profile_ftype=1&is_all=1#_0'
csv_name = 'bgsxy_allweibo.csv'
 
def start_chrome():
  print('Creating the browser')
  driver = webdriver.Chrome(executable_path='C:/Users/lori/Desktop/python52project/chromedriver_win32/chromedriver.exe')  # path to your local chromedriver
  driver.start_client()
  return driver
 
def get_web(url):   # load the page and scroll to the bottom
  print('Opening the target page')
  driver.get(url)
  time.sleep(7)
  scroll_down()
  time.sleep(5)
 
def scroll_down():  # press END repeatedly so the page renders all posts
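  # Note: an equivalent (hedged) alternative is to scroll via JavaScript:
  #   driver.execute_script('window.scrollTo(0, document.body.scrollHeight)')
  # Sending the END key to the <html> element below has the same effect.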
  html_page = driver.find_element_by_tag_name('html')
  for i in range(7):
    print(i)
    html_page.send_keys(Keys.END)
    time.sleep(1)
 
def get_data():
  print('開始查找并提取數(shù)據(jù)')
  card_sel = 'div.WB_cardwrap.WB_feed_type'
  time_sel = 'a.S_txt2[node-type="feed_list_item_date"]'
  source_sel = 'a.S_txt2[suda-uatrack="key=profile_feed&value=pubfrom_guest"]'
  content_sel = 'div.WB_text.W_f14'
  interact_sel = 'span.line.S_line1>span>em:nth-child(2)'
 
  cards = driver.find_elements_by_css_selector(card_sel)
  info_list = []
 
  for card in cards:
    pub_time = card.find_elements_by_css_selector(time_sel)[0].text  # a card may hold two date elements; the first is the post date ('pub_time' avoids shadowing the time module)
    if card.find_elements_by_css_selector(source_sel):
      source = card.find_elements_by_css_selector(source_sel)[0].text
    else:
      source = ''
    content = card.find_elements_by_css_selector(content_sel)[0].text
    link = card.find_elements_by_css_selector(time_sel)[0].get_attribute('href')
    trans = card.find_elements_by_css_selector(interact_sel)[1].text    # indices 1-3 are reposts, comments, likes
    comment = card.find_elements_by_css_selector(interact_sel)[2].text  # (index 0 appears to be another counter on this layout)
    like = card.find_elements_by_css_selector(interact_sel)[3].text
    info_list.append([pub_time, source, content, link, trans, comment, like])
 
  return info_list
 
def save_csv(info_list,csv_name):
  csv_path = './' + csv_name
  print('Writing the CSV file')
  if os.path.exists(csv_path):
    with open(csv_path,'a',newline='',encoding='utf-8-sig') as f: # newline='' avoids blank rows; utf-8-sig adds a BOM so the Chinese text opens cleanly in Excel
      writer = csv.writer(f)
      writer.writerows(info_list)
  else:
    with open(csv_path,'w+',newline='',encoding='utf-8-sig') as f:
      writer = csv.writer(f)
      writer.writerow(['publish_time','source','content','link','reposts','comments','likes'])
      writer.writerows(info_list)
  time.sleep(5)
 
def next_page_url():  # return the href of the "next page" link, or None on the last page
  next_page_sel = 'a.page.next'
  next_page_ele = driver.find_elements_by_css_selector(next_page_sel)
  if next_page_ele:
    return next_page_ele[0].get_attribute('href')
  else:
    return None
 
 
driver = start_chrome()
input('Log in to weibo.com in the Chrome window, then press Enter')   # pause the script for a manual login
 
while True:
  get_web(weibo_url)
  info_list = get_data()
  save_csv(info_list,csv_name)
  next_url = next_page_url()   # query the link once and reuse the result
  if next_url:
    weibo_url = next_url
  else:
    print('Scraping finished')
    break
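
A note for anyone running this listing today: it targets Selenium 3. Selenium 4 removed the find_element(s)_by_* helpers and the executable_path argument, so a minimal sketch of the equivalent setup (assuming Selenium 4.6+, which locates a matching chromedriver automatically via Selenium Manager) looks like this:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # Selenium Manager resolves the driver; no executable_path needed
cards = driver.find_elements(By.CSS_SELECTOR, 'div.WB_cardwrap.WB_feed_type')
html_page = driver.find_element(By.TAG_NAME, 'html')

The selectors and the overall flow stay exactly as above; only the locator calls change.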

Having read the above, you should have a clearer picture of how to scrape Weibo data with Python and Selenium. To learn more, follow the Yisu Cloud industry news channel.
