The approach for scraping the Maoyan movies TOP100 with the Python requests library and parallel workers:
Open the Maoyan movie site in a browser, click "榜單" (Rankings), then "TOP100榜" (TOP100 list). Pressing F12 to inspect the page source shows that each movie's information sits inside a "<dd></dd>" tag; expanding one of these tags reveals the fields it contains.
The following code fetches the page source:
#-*-coding:utf-8-*-
import requests
from requests.exceptions import RequestException

# Maoyan has anti-scraping measures; setting these headers lets the pages be fetched
headers = {
    'Content-Type': 'text/plain; charset=UTF-8',
    'Origin': 'https://maoyan.com',
    'Referer': 'https://maoyan.com/board/4',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
}

# Fetch the page source
def get_one_page(url, headers):
    try:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        return None

def main():
    url = "https://maoyan.com/board/4"
    html = get_one_page(url, headers)
    print(html)

if __name__ == '__main__':
    main()
執(zhí)行結果如下:
The fields marked above are the ones to extract; the code below implements the extraction:
#-*-coding:utf-8-*-
import requests
import re
from requests.exceptions import RequestException

# Maoyan has anti-scraping measures; setting these headers lets the pages be fetched
headers = {
    'Content-Type': 'text/plain; charset=UTF-8',
    'Origin': 'https://maoyan.com',
    'Referer': 'https://maoyan.com/board/4',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
}

# Fetch the page source
def get_one_page(url, headers):
    try:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        return None

# Extract the fields with a regular expression
def parse_one_page(html):
    pattern = re.compile(
        r'<dd>.*?board-index.*?>(\d+)</i>.*?data-src="(.*?)".*?name"><a'
        r'.*?>(.*?)</a>.*?star">(.*?)</p>.*?releasetime">(.*?)</p>'
        r'.*?integer">(.*?)</i>.*?fraction">(.*?)</i>.*?</dd>', re.S)
    items = re.findall(pattern, html)
    for item in items:
        yield {
            'index': item[0],
            'image': item[1],
            'title': item[2],
            'actor': item[3].strip()[3:],   # drop the "主演:" prefix
            'time': item[4].strip()[5:],    # drop the "上映时间:" prefix
            'score': item[5] + item[6]
        }

def main():
    url = "https://maoyan.com/board/4"
    html = get_one_page(url, headers)
    for item in parse_one_page(html):
        print(item)

if __name__ == '__main__':
    main()
執(zhí)行結果如下:
The code above scrapes a single page. To collect all 100 movies, observe how the URL changes from page to page: each page appends '?offset=N' to the base URL, with offset stepping through 0, 10, 20, and so on.
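That offset pattern can be sanity-checked with a quick list comprehension that builds all ten page URLs:

```python
# Build the URL of each of the 10 TOP100 pages; offset steps 0, 10, ..., 90
base = "https://maoyan.com/board/4?offset="
urls = [base + str(i * 10) for i in range(10)]
print(urls[0])   # https://maoyan.com/board/4?offset=0
print(urls[-1])  # https://maoyan.com/board/4?offset=90
```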
The implementation:
#-*-coding:utf-8-*-
import requests
import re
import json
import os
from requests.exceptions import RequestException

# Maoyan has anti-scraping measures; setting these headers lets the pages be fetched
headers = {
    'Content-Type': 'text/plain; charset=UTF-8',
    'Origin': 'https://maoyan.com',
    'Referer': 'https://maoyan.com/board/4',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
}

# Fetch the page source
def get_one_page(url, headers):
    try:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        return None

# Extract the fields with a regular expression
def parse_one_page(html):
    pattern = re.compile(
        r'<dd>.*?board-index.*?>(\d+)</i>.*?data-src="(.*?)".*?name"><a'
        r'.*?>(.*?)</a>.*?star">(.*?)</p>.*?releasetime">(.*?)</p>'
        r'.*?integer">(.*?)</i>.*?fraction">(.*?)</i>.*?</dd>', re.S)
    items = re.findall(pattern, html)
    for item in items:
        yield {
            'index': item[0],
            'image': item[1],
            'title': item[2],
            'actor': item[3].strip()[3:],
            'time': item[4].strip()[5:],
            'score': item[5] + item[6]
        }

# Append every TOP100 record to a file, one JSON object per line
def write_to_file(content):
    # ensure_ascii=False keeps the Chinese text human-readable in the file
    with open('result.txt', 'a', encoding='utf-8') as f:
        f.write(json.dumps(content, ensure_ascii=False) + '\n')

# Download a movie poster
def save_image_file(url, path):
    jd = requests.get(url)
    if jd.status_code == 200:
        with open(path, 'wb') as f:
            f.write(jd.content)

def main(offset):
    url = "https://maoyan.com/board/4?offset=" + str(offset)
    html = get_one_page(url, headers)
    if not os.path.exists('covers'):
        os.mkdir('covers')
    for item in parse_one_page(html):
        print(item)
        write_to_file(item)
        save_image_file(item['image'], 'covers/' + item['title'] + '.jpg')

if __name__ == '__main__':
    # Scrape each page in turn
    for i in range(10):
        main(i * 10)
The scraped results are written to result.txt and the posters to the covers directory.
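A side note on write_to_file: ensure_ascii=False is what keeps the Chinese titles readable in result.txt, and because each record is one JSON object per line, the file can be read back with one json.loads per line. A minimal self-contained sketch (using a temporary directory instead of the real result.txt):

```python
import json
import os
import tempfile

# One record per line; ensure_ascii=False so Chinese stays human-readable
record = {'index': '1', 'title': '霸王别姬', 'score': '9.5'}

path = os.path.join(tempfile.mkdtemp(), 'result.txt')
with open(path, 'a', encoding='utf-8') as f:
    f.write(json.dumps(record, ensure_ascii=False) + '\n')

# The raw file contains the characters themselves, not \uXXXX escapes
raw = open(path, encoding='utf-8').read()
print(raw)

# Reading back: one json.loads per line recovers the dicts
with open(path, encoding='utf-8') as f:
    rows = [json.loads(line) for line in f]
print(rows[0]['title'])
```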
Comparing the timings shows the parallel version is clearly faster. (Strictly speaking, the code uses multiprocessing.Pool, so the pages are fetched by a pool of worker processes rather than threads.)

The complete code:
#-*-coding:utf-8-*-
import requests
import re
import json
import os
from requests.exceptions import RequestException
from multiprocessing import Pool

# Maoyan has anti-scraping measures; setting these headers lets the pages be fetched
headers = {
    'Content-Type': 'text/plain; charset=UTF-8',
    'Origin': 'https://maoyan.com',
    'Referer': 'https://maoyan.com/board/4',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
}

# Fetch the page source
def get_one_page(url, headers):
    try:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        return None

# Extract the fields with a regular expression
def parse_one_page(html):
    pattern = re.compile(
        r'<dd>.*?board-index.*?>(\d+)</i>.*?data-src="(.*?)".*?name"><a'
        r'.*?>(.*?)</a>.*?star">(.*?)</p>.*?releasetime">(.*?)</p>'
        r'.*?integer">(.*?)</i>.*?fraction">(.*?)</i>.*?</dd>', re.S)
    items = re.findall(pattern, html)
    for item in items:
        yield {
            'index': item[0],
            'image': item[1],
            'title': item[2],
            'actor': item[3].strip()[3:],
            'time': item[4].strip()[5:],
            'score': item[5] + item[6]
        }

# Append every TOP100 record to a file, one JSON object per line
def write_to_file(content):
    # ensure_ascii=False keeps the Chinese text human-readable in the file
    with open('result.txt', 'a', encoding='utf-8') as f:
        f.write(json.dumps(content, ensure_ascii=False) + '\n')

# Download a movie poster
def save_image_file(url, path):
    jd = requests.get(url)
    if jd.status_code == 200:
        with open(path, 'wb') as f:
            f.write(jd.content)

def main(offset):
    url = "https://maoyan.com/board/4?offset=" + str(offset)
    html = get_one_page(url, headers)
    if not os.path.exists('covers'):
        os.mkdir('covers')
    for item in parse_one_page(html):
        print(item)
        write_to_file(item)
        save_image_file(item['image'], 'covers/' + item['title'] + '.jpg')

if __name__ == '__main__':
    # Scrape all pages with a pool of worker processes
    pool = Pool()
    pool.map(main, [i * 10 for i in range(10)])
    pool.close()
    pool.join()
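Because the work is I/O-bound, the same map pattern also works with a thread pool (multiprocessing.pool.ThreadPool), which matches the "multithreaded" label more literally. The sketch below reproduces the speed-up without touching the network, substituting a hypothetical simulated_fetch that just sleeps in place of main(offset); exact numbers will vary by machine:

```python
import time
from multiprocessing.pool import ThreadPool

def simulated_fetch(offset):
    # Stand-in for main(offset): pretend each page takes ~0.2 s of I/O
    time.sleep(0.2)
    return offset

offsets = [i * 10 for i in range(10)]

# Sequential baseline, like the earlier for-loop version
start = time.time()
for off in offsets:
    simulated_fetch(off)
sequential = time.time() - start

# Parallel version: same map call shape as Pool.map above
start = time.time()
with ThreadPool(4) as pool:
    results = pool.map(simulated_fetch, offsets)
parallel = time.time() - start

print(f"sequential: {sequential:.2f}s, thread pool: {parallel:.2f}s")
```

Threads are usually the lighter choice here since each worker spends its time waiting on the network rather than computing.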
This article walked through an example of scraping the Maoyan movies TOP100 data with the Python requests library and a process pool; see the related links below for more on Python scraping libraries.