This article walks through how to use Python to crawl the video data of the Bilibili (B站) Top 100 ranking. It is quite practical, so it is shared here for reference; hopefully you will get something out of it.
from bs4 import BeautifulSoup        # parse the page
import re                            # regular expressions for text matching
import urllib.request, urllib.error  # request the page data through a browser-like client
import sqlite3                       # lightweight database
import time                          # get the current date
The crawl consists of three steps: declare the target page -> fetch and parse the page -> save the data.
def main():
    # declare the target page
    baseurl = "https://www.bilibili.com/v/popular/rank/all"
    # crawl and parse the page
    datalist = getData(baseurl)
    # print(datalist)
    # save the data, naming the database after the current date
    dbname = time.strftime("%Y-%m-%d", time.localtime())
    dbpath = "BiliBiliTop100 " + dbname
    saveData(datalist, dbpath)
(1) Fetching: the request masquerades as a browser so the server returns the page normally;
(2) Parsing: BeautifulSoup extracts the relevant HTML fragments, and re regular expressions match out the individual fields;
(3) Saving: the ranking refreshes daily, so the current date is used to name the database file.
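Point (1) can be sketched in isolation. This is a minimal version of the browser-masquerade request; the User-Agent string is the Chrome one used in the full listing further down, and any modern browser UA string would work the same way:

```python
import urllib.request

def build_request(url):
    # urllib sends these headers verbatim, so the server sees a
    # browser-like client instead of the default "Python-urllib" agent
    head = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                      "AppleWebKit/537.36 (KHTML, like Gecko) "
                      "Chrome/80.0.3987.163 Safari/537.36"
    }
    return urllib.request.Request(url, headers=head)

req = build_request("https://www.bilibili.com/v/popular/rank/all")
print(req.get_header("User-agent"))  # urllib normalizes header capitalization
```

Passing this `Request` object to `urllib.request.urlopen` instead of the bare URL is all the "disguise" amounts to.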
The database stores seven fields per video: rank, video link, title, play count, comment count, uploader, and composite score.
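Point (3) and the schema can be sketched together. The date-stamped filename means each day's crawl lands in its own database; the table below mirrors the seven fields just listed (the demo connects to an in-memory database instead of `dbpath` so it leaves no file behind):

```python
import sqlite3
import time

# the ranking refreshes daily, so name each database after the crawl date
dbpath = "BiliBiliTop100 " + time.strftime("%Y-%m-%d", time.localtime())

conn = sqlite3.connect(":memory:")  # use dbpath in the real script
conn.execute('''
    create table Top100 (
        id integer primary key autoincrement,
        info_link text,
        title text,
        play numeric,
        view numeric,
        name text,
        score numeric
    )
''')
conn.commit()

# list the column names to confirm the seven fields
cols = [row[1] for row in conn.execute("pragma table_info(Top100)")]
print(cols)
conn.close()
```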
from bs4 import BeautifulSoup        # parse the page
import re                            # regular expressions for text matching
import urllib.request, urllib.error
import sqlite3
import time


def main():
    # declare the target page
    baseurl = "https://www.bilibili.com/v/popular/rank/all"
    # crawl and parse the page
    datalist = getData(baseurl)
    # print(datalist)
    # save the data, naming the database after the current date
    dbname = time.strftime("%Y-%m-%d", time.localtime())
    dbpath = "BiliBiliTop100 " + dbname
    saveData(datalist, dbpath)


# regular expressions for each field
findLink = re.compile(r'<a class="title" href="(.*?)"')  # video link
findOrder = re.compile(r'<div class="num">(.*?)</div>')  # rank on the board
findTitle = re.compile(r'<a class="title" href=".*?" target="_blank">(.*?)</a>')  # video title
findPlay = re.compile(r'<span class="data-box"><i class="b-icon play"></i>([\s\S]*)(.*?)</span> <span class="data-box">')  # play count
findView = re.compile(r'<span class="data-box"><i class="b-icon view"></i>([\s\S]*)(.*?)</span> <a href=".*?" target="_blank"><span class="data-box up-name">')  # comment count
findName = re.compile(r'<i class="b-icon author"></i>(.*?)</span></a>', re.S)  # uploader
findScore = re.compile(r'<div class="pts"><div>(.*?)</div>综合得分', re.S)  # composite score


def getData(baseurl):
    datalist = []
    html = askURL(baseurl)
    # print(html)
    soup = BeautifulSoup(html, 'html.parser')
    for item in soup.find_all('li', class_="rank-item"):
        # print(item)
        data = []
        item = str(item)

        Order = re.findall(findOrder, item)[0]
        data.append(Order)

        Link = 'https:' + re.findall(findLink, item)[0]  # the href is protocol-relative
        data.append(Link)

        Title = re.findall(findTitle, item)[0]
        data.append(Title)

        Play = re.findall(findPlay, item)[0][0]  # e.g. "123.4万"
        Play = Play.replace(" ", "").replace("\n", "")
        if "万" in Play:
            # "123.4万" means 1,234,000: multiply by 10,000
            Play = str(int(float(Play.replace("万", "")) * 10000))
        data.append(Play)

        View = re.findall(findView, item)[0][0]
        View = View.replace(" ", "").replace("\n", "")
        if "万" in View:
            View = str(int(float(View.replace("万", "")) * 10000))
        data.append(View)

        Name = re.findall(findName, item)[0]
        Name = Name.replace(" ", "").replace("\n", "")
        data.append(Name)

        Score = re.findall(findScore, item)[0]
        data.append(Score)

        datalist.append(data)
    return datalist


def askURL(url):
    # set the request headers to masquerade as a browser
    head = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                      "AppleWebKit/537.36 (KHTML, like Gecko) "
                      "Chrome/80.0.3987.163 Safari/537.36"
    }
    request = urllib.request.Request(url, headers=head)
    html = ""
    try:
        response = urllib.request.urlopen(request)
        html = response.read().decode("utf-8")
        # print(html)
    except urllib.error.URLError as e:
        if hasattr(e, "code"):
            print(e.code)
        if hasattr(e, "reason"):
            print(e.reason)
    return html


def saveData(datalist, dbpath):
    init_db(dbpath)
    conn = sqlite3.connect(dbpath)
    cur = conn.cursor()
    for data in datalist:
        # parameterized query: safer than building the SQL by string formatting
        cur.execute('''insert into Top100
            (id, info_link, title, play, view, name, score)
            values (?, ?, ?, ?, ?, ?, ?)''', data)
    conn.commit()
    cur.close()
    conn.close()


def init_db(dbpath):
    sql = '''
        create table if not exists Top100
        (
            id integer primary key autoincrement,
            info_link text,
            title text,
            play numeric,
            view numeric,
            name text,
            score numeric
        )
    '''
    conn = sqlite3.connect(dbpath)
    cursor = conn.cursor()
    cursor.execute(sql)
    conn.commit()
    conn.close()


if __name__ == "__main__":
    main()
That is all for "how to use Python to crawl the video data of the Bilibili Top 100 ranking". Hopefully the content above was helpful and taught you something new; if you found the article useful, please share it so more people can see it.