This article explains how a Python crawler can collect data from Douyin. The explanation is simple and clear and easy to follow, so read along with the editor step by step and study how a Python Douyin crawler collects data.
A crawler is a program we use in place of manual work to read and collect information from websites in bulk. Anti-crawling is its opposite: a site does everything it can to block non-human collection of its data. The two sides grow in response to each other, yet so far the information on most websites can still be scraped fairly easily.
The way a crawler gets around anti-crawling measures is to make the server believe, as far as possible, that it is not an automated program. That means disguising the program as a browser when it visits the site, which greatly lowers the chance of being blocked. So how do we disguise ourselves as a browser? By sending the request headers a real browser would send.
For example:
Accept: the content types the client can handle, comma-separated and in order of preference; each entry is a main type and subtype (e.g. text/html), optionally followed by a quality value after a semicolon;
Accept-Encoding: the compression/encoding schemes the browser can accept in the web server's response;
Accept-Language: the natural languages the browser can accept;
Connection: controls whether the HTTP connection is kept alive, usually Keep-Alive;
Host: the server's domain name or IP address, plus the port number if a non-default port is used;
Referer: the address from which the current request's URL was reached;
# Pools of User-Agent strings and Referer values to rotate between requests
user_agent_list = [
    "Opera/9.80 (X11; Linux i686; U; hu) Presto/2.9.168 Version/11.50",
    "Opera/9.80 (X11; Linux i686; U; ru) Presto/2.8.131 Version/11.11",
    "Opera/9.80 (X11; Linux i686; U; es-ES) Presto/2.8.131 Version/11.11",
    "Mozilla/5.0 (Windows NT 5.1; U; en; rv:1.8.1) Gecko/20061208 Firefox/5.0 Opera 11.11",
    "Opera/9.80 (X11; Linux x86_64; U; bg) Presto/2.8.131 Version/11.10",
    "Opera/9.80 (Windows NT 6.0; U; en) Presto/2.8.99 Version/11.10",
    "Opera/9.80 (Windows NT 5.1; U; zh-tw) Presto/2.8.131 Version/11.10",
    "Opera/9.80 (Windows NT 6.1; Opera Tablet/15165; U; en) Presto/2.8.149 Version/11.1",
    "Opera/9.80 (X11; Linux x86_64; U; Ubuntu/10.10 (maverick); pl) Presto/2.7.62 Version/11.01",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0",
    "Opera/9.80 (X11; Linux i686; Ubuntu/14.10) Presto/2.12.388 Version/12.16",
    "Opera/9.80 (Windows NT 6.0) Presto/2.12.388 Version/12.14",
    "Mozilla/5.0 (Windows NT 6.0; rv:2.0) Gecko/20100101 Firefox/4.0 Opera 12.14",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0) Opera 12.14",
    "Opera/12.80 (Windows NT 5.1; U; en) Presto/2.10.289 Version/12.02",
    "Opera/9.80 (Windows NT 6.1; U; es-ES) Presto/2.9.181 Version/12.00",
    "Opera/9.80 (Windows NT 5.1; U; zh-sg) Presto/2.9.181 Version/12.00",
    "Opera/12.0(Windows NT 5.2;U;en)Presto/22.9.168 Version/12.00",
    "Opera/12.0(Windows NT 5.1;U;en)Presto/22.9.168 Version/12.00",
    "Mozilla/5.0 (Windows NT 5.1) Gecko/20100101 Firefox/14.0 Opera/12.0",
    "Opera/9.80 (Windows NT 6.1; WOW64; U; pt) Presto/2.10.229 Version/11.62",
    "Opera/9.80 (Windows NT 6.0; U; pl) Presto/2.10.229 Version/11.62",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; de) Presto/2.9.168 Version/11.52",
    "Opera/9.80 (Windows NT 5.1; U; en) Presto/2.9.168 Version/11.51",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; de) Opera 11.51",
    "Opera/9.80 (X11; Linux x86_64; U; fr) Presto/2.9.168 Version/11.50",
]
referer_list = ["https://www.test.com/", "https://www.baidu.com/"]
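As a quick sanity check (my addition, not part of the original article), you can confirm that a randomly picked User-Agent is really what the server sees by sending a request to an echo service such as httpbin.org, which returns the request headers it received. A minimal sketch, using the two lists above:

import random
import requests

# Pick a random User-Agent and Referer from the pools defined above.
headers = {
    "User-Agent": random.choice(user_agent_list),
    "Referer": random.choice(referer_list),
}
# httpbin.org/headers echoes back the headers it received, so the
# printed value should match the User-Agent we just picked.
resp = requests.get("https://httpbin.org/headers", headers=headers, timeout=30)
print(resp.json()["headers"]["User-Agent"])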
Getting a random index: each time we collect, a random index is used to pick a random User-Agent and Referer. (Note: if you loop over several pages, it is best to wait a few seconds after each page before collecting the next one, to reduce the load on the server.)
import random
import time

import lxml.html
import requests


def get_randam(data):
    # Return a random valid index into the list `data`
    return random.randint(0, len(data) - 1)


def crawl():
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'host': 'test.com',
        'Referer': 'https://test.com/',
    }
    # Pick a random User-Agent and Referer for this request
    random_index = get_randam(user_agent_list)
    random_agent = user_agent_list[random_index]
    headers['User-Agent'] = random_agent
    random_index_01 = get_randam(referer_list)
    random_agent_01 = referer_list[random_index_01]
    headers['Referer'] = random_agent_01
    session = requests.session()
    url = "https://www.test.com/"
    html_data = session.get(url, headers=headers, timeout=180)
    html_data.raise_for_status()
    html_data.encoding = 'utf-8-sig'
    data = html_data.text
    data_doc = lxml.html.document_fromstring(data)
    # ... (parse, extract and store the page data here)
    time.sleep(random.randint(3, 5))
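For reference, a rough sketch of how crawl() might be driven over several runs (the run count and the error handling here are my own illustrative assumptions, not part of the original article):

# Call crawl() a few times; it already pauses 3-5 seconds before returning.
for _ in range(5):  # hypothetical number of runs
    try:
        crawl()
    except requests.RequestException as exc:
        # Skip runs that fail (timeouts, HTTP errors raised by raise_for_status, ...)
        print("request failed:", exc)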
Based on how well they conceal you, proxy IPs can be divided into the following four categories:
Transparent proxy (Transparent Proxy): a transparent proxy may nominally "hide" your IP address, but the other side can still find out who you are.
Anonymous proxy (Anonymous Proxy): an anonymous proxy is one step better than a transparent one: the other side can only tell that you are using a proxy; it cannot tell who you are.
Distorting proxy (Distorting Proxy): as with an anonymous proxy, the other side can still tell that you are using a proxy, but it sees a fake IP address, so the disguise is more convincing.
Elite proxy (Elite Proxy, or High Anonymity Proxy): as the name suggests, a high-anonymity proxy keeps the other side from detecting that you are using a proxy at all, so it is the best choice.
When actually using proxies, a high-anonymity proxy unquestionably gives the best results.
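One practical way to check which category a given proxy falls into (this check is my own addition, not from the original article) is to send a request through it to an echo service such as httpbin.org and inspect what the target actually receives: a transparent proxy typically forwards your real IP in X-Forwarded-For, a lower-grade proxy often reveals itself through headers such as Via, and an elite proxy shows neither. A rough sketch, with a placeholder proxy address:

import requests

proxies = {"http": "http://117.30.113.248:9999"}  # placeholder proxy address

# httpbin.org/get echoes the caller's apparent IP and the headers it received.
resp = requests.get("http://httpbin.org/get", proxies=proxies, timeout=30)
info = resp.json()

print("IP seen by the target:", info["origin"])
# Headers that give the game away if the target still sees them.
for leak in ("X-Forwarded-For", "Via", "Proxy-Connection"):
    if leak in info["headers"]:
        print("leaked header:", leak, "=", info["headers"][leak])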
Below I use a free high-anonymity proxy IP to do the collection:
# Free proxy IPs from: https://www.xicidaili.com/nn
import requests

proxies = {
    "http": "http://117.30.113.248:9999",
    "https": "https://120.83.120.157:9999",
}

r = requests.get("https://www.baidu.com", proxies=proxies)
r.raise_for_status()
r.encoding = 'utf-8-sig'
print(r.text)
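Free proxies die frequently, so in practice you would usually keep a small pool and fall back to the next entry when one fails. A minimal sketch of that idea (the pool contents, timeout and retry policy are illustrative assumptions, not from the original article):

import random
import requests

# Hypothetical pool of proxies; in real use these would be refreshed regularly.
proxy_pool = [
    {"http": "http://117.30.113.248:9999", "https": "https://120.83.120.157:9999"},
    # ... more entries ...
]

def fetch_with_proxy(url):
    # Try the proxies in random order until one of them works.
    for proxies in random.sample(proxy_pool, len(proxy_pool)):
        try:
            r = requests.get(url, proxies=proxies, timeout=30)
            r.raise_for_status()
            return r.text
        except requests.RequestException:
            continue  # dead or blocked proxy, move on to the next one
    raise RuntimeError("all proxies in the pool failed")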
Thank you for reading. That wraps up "how a Python Douyin crawler collects data". Having worked through this article, you should now have a better feel for the question; the specifics still need to be verified in your own practice. This is 億速云 (Yisu Cloud), and the editor will keep publishing more articles on related topics, so feel free to follow!