This article covers how to use Python's Requests library to grab short videos from ibaotu.com (包圖網). Working through a real case like this is where many people get stuck, so the walkthrough below goes step by step: first the page analysis, then the full script. Read carefully and follow along!
Analyzing the page structure
After inspecting the site, we find the whole site's video data can be reached through these category options.
Analyzing the page data format
The page data is static: the video URLs and titles are rendered directly in the HTML, so they can be parsed with XPath rather than reverse-engineering an API.
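Because the data is static, a plain GET plus XPath is enough. A minimal sketch of the extraction, using a made-up HTML fragment that mimics the markup the article's XPath expressions target (the fragment itself is illustrative, not copied from ibaotu.com):

```python
from lxml import etree

# Illustrative fragment shaped like the page the XPath expects
html_text = '''
<div class="video-play">
  <video src="//video.ibaotu.com/sample.mp4"></video>
</div>
<span class="video-title">Sample clip</span>
'''

html = etree.HTML(html_text)
# Same XPath expressions the full script uses below
srcs = html.xpath('//div[@class="video-play"]/video/@src')
titles = html.xpath('//span[@class="video-title"]/text()')
print(srcs, titles)
```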
Grabbing the next-page link
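The "next" link's href is protocol-relative (it starts with "//"), so the scraper prepends "http:" before following it. A small sketch, again with a made-up href value:

```python
from lxml import etree

# Hypothetical pagination markup; the class name matches the
# XPath used in the full script, the href value is invented
fragment = '<a class="next" href="//ibaotu.com/shipin/7-0-0-0-2-1.html">next</a>'
html = etree.HTML(fragment)
hrefs = html.xpath('//a[@class="next"]/@href')
# Guard against a missing or empty link on the last page
next_page = ("http:" + hrefs[0]) if hrefs and hrefs[0] else None
print(next_page)
```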
OK, on to the code!
import requests
from lxml import etree
import threading


class Spider(object):
    def __init__(self):
        self.headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36"}
        self.offset = 1

    def start_work(self, url):
        print("Crawling page %d..." % self.offset)
        self.offset += 1
        response = requests.get(url=url, headers=self.headers)
        html = etree.HTML(response.content.decode())
        video_src = html.xpath('//div[@class="video-play"]/video/@src')
        video_title = html.xpath('//span[@class="video-title"]/text()')
        next_href = html.xpath('//a[@class="next"]/@href')
        self.write_file(video_src, video_title)
        # A missing or empty next-page link means crawling is done
        if not next_href or not next_href[0]:
            return
        self.start_work("http:" + next_href[0])

    def write_file(self, video_src, video_title):
        for src, title in zip(video_src, video_title):
            # The src attribute is protocol-relative, so prepend the scheme
            response = requests.get("http:" + src, headers=self.headers)
            # Strip "/" from the title so it is a valid file name
            file_name = "".join(title.split("/")) + ".mp4"
            print("Downloading %s" % file_name)
            with open(file_name, "wb") as f:
                f.write(response.content)


if __name__ == "__main__":
    spider = Spider()
    for i in range(3):
        # Single-threaded alternative:
        # spider.start_work(url="https://ibaotu.com/shipin/7-0-0-0-" + str(i) + "-1.html")
        t = threading.Thread(
            target=spider.start_work,
            args=("https://ibaotu.com/shipin/7-0-0-0-" + str(i) + "-1.html",),
        )
        t.start()
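One caveat worth noting: the script shares a single Spider instance across threads, so the page counter self.offset is incremented concurrently without synchronization and the printed page numbers can race. A minimal sketch of guarding a shared counter with threading.Lock (the Counter class is a hypothetical helper, not part of the script above):

```python
import threading

class Counter(object):
    """Thread-safe counter; a stand-in for the spider's shared offset."""

    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def next(self):
        # The lock makes the read-increment-write a single atomic step
        with self._lock:
            self.value += 1
            return self.value

counter = Counter()
threads = [threading.Thread(target=counter.next) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 10
```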
運行結(jié)果
是不是很簡單呢!