This article walks through how to scrape the top 100 films from Maoyan's board with Python. Many people run into questions when trying this, so the steps below lay out a simple, working approach. Follow along and try it yourself.
import requests
import re
import traceback
import csv

# Fetch one page of the Maoyan top-100 board
def get_one_page(url, code='utf-8'):
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.90 Safari/537.36'}
    try:
        r = requests.get(url, headers=headers)
        if r.status_code == 200:
            r.encoding = code
            return r.text
        else:
            print("Request failed")
            return None
    except:
        traceback.print_exc()
# Strip the "@..." resize suffix that Maoyan appends to poster image URLs
def process(raw):
    right = raw.split("@")
    return right[0]

# Pull the release region out of the releasetime string
# (e.g. "上映时间:1993-01-01(中国香港)"); return None when no region is given
def area(a):
    if a[-1] == ")":
        return a[16:]
    else:
        return None
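The two helpers can be exercised on sample strings shaped like Maoyan's markup (the sample values below are assumptions, not taken from a live page); the helpers are restated so the snippet runs on its own:

```python
# Mirror of process() above: poster URLs carry an "@..." resize suffix
def process(raw):
    return raw.split("@")[0]

# Mirror of area() above: releasetime strings ending in ")" carry a region
def area(a):
    return a[16:] if a[-1] == ")" else None

img = process("https://p0.meituan.net/movie/abc123.jpg@160w_220h_1e_1c")
print(img)  # base URL without the size suffix
print(area("上映时间:1993-01-01(中国香港)"))
print(area("上映时间:1993-01-01"))  # no region -> None
```

Note that `a[16:]` keeps the closing parenthesis; the slice start of 16 relies on the fixed length of the "上映时间:YYYY-MM-DD(" prefix.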
def parse_one_page(slst, html):
    # The individual regular expressions, kept for reference:
    # rank = re.findall(r'<dd>.*?<i class="board-index.*?>(\d+)</i>', html, re.S)
    # img = re.findall(r'data-src="(.*?)".*?</a>', html, re.S)
    # name = re.findall(r'<p class="name".*?><a.*?>(.*?)</a>', html, re.S)
    # star = re.findall(r'<p class="star">(.*?)</p>', html, re.S)
    # time = re.findall(r'<p class="releasetime">(.*?)</p>', html, re.S)
    # Don't forget the r prefix, so backslashes are not treated as escapes.
    # All of the patterns above combined into one:
    pattern = re.compile(r'<dd>.*?<i class="board-index.*?>(\d+)</i>.*?data-src="(.*?)".*?</a>.*?<p class="name".*?><a.*?>(.*?)</a>.*?<p class="star">(.*?)</p>.*?<p class="releasetime">(.*?)</p>.*?<p class="score"><.*?>(.*?)</i><i class="fraction">(.*?)</i></p>', re.S)
    items = re.findall(pattern, html)
    for item in items:
        # yield acts like return, but turns the function into a generator
        yield {
            'rank': item[0],
            'img': process(item[1]),
            'MovieName': item[2],
            'star': item[3].strip()[3:],    # drop the "主演:" prefix
            'time': item[4].strip()[5:15],  # keep only the date portion
            'area': area(item[4].strip()),
            'score': str(item[5]) + str(item[6])
        }
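To see what the combined pattern captures, it can be run against a hand-written `<dd>` snippet shaped like Maoyan's board markup (the snippet is an assumed, simplified sample, not the live page):

```python
import re

# Simplified sample of one <dd> entry from the board page (assumed markup)
html = ('<dd><i class="board-index board-index-1">1</i>'
        '<a href="/films/1203"><img data-src="https://p1.meituan.net/movie/x.jpg@160w_220h"/></a>'
        '<p class="name"><a href="/films/1203">霸王别姬</a></p>'
        '<p class="star">主演:张国荣</p>'
        '<p class="releasetime">上映时间:1993-01-01(中国香港)</p>'
        '<p class="score"><i class="integer">9.</i><i class="fraction">5</i></p></dd>')

pattern = re.compile(r'<dd>.*?<i class="board-index.*?>(\d+)</i>.*?data-src="(.*?)".*?</a>'
                     r'.*?<p class="name".*?><a.*?>(.*?)</a>.*?<p class="star">(.*?)</p>'
                     r'.*?<p class="releasetime">(.*?)</p>'
                     r'.*?<p class="score"><.*?>(.*?)</i><i class="fraction">(.*?)</i></p>', re.S)

item = re.findall(pattern, html)[0]
print(item[0])            # rank: '1'
print(item[2])            # movie name
print(item[5] + item[6])  # integer + fraction parts joined: '9.5'
```

The `re.S` flag lets `.` match newlines, which matters on the real page where each `<dd>` block spans many lines.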
def write_to_file(item):
    # 'a' appends to the file; newline="" avoids blank lines between csv rows
    with open("貓眼top100.csv", 'a', encoding="utf_8_sig", newline="") as f:
        fieldnames = ['rank', 'img', 'MovieName', 'star', 'time', 'area', 'score']
        w = csv.DictWriter(f, fieldnames=fieldnames)  # write dicts to csv
        # w.writeheader()
        w.writerow(item)
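The `DictWriter` usage can be sketched against an in-memory buffer instead of 貓眼top100.csv, so the resulting rows are easy to inspect (the row values are assumed sample data):

```python
import csv
import io

# Same fieldnames as write_to_file(), but writing to a StringIO buffer
fieldnames = ['rank', 'img', 'MovieName', 'star', 'time', 'area', 'score']
buf = io.StringIO()
w = csv.DictWriter(buf, fieldnames=fieldnames)
w.writeheader()  # emit the column names once, before the first data row
w.writerow({'rank': '1', 'img': 'x.jpg', 'MovieName': '霸王别姬',
            'star': '张国荣', 'time': '1993-01-01', 'area': '中国香港)', 'score': '9.5'})
lines = buf.getvalue().splitlines()
print(lines[0])  # header row
print(lines[1])  # data row
```

Since the scraper opens the file in append mode once per row, `writeheader()` is left commented out above; calling it on every append would repeat the header line before each row.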
def down_img(name, url, num):
    try:
        response = requests.get(url)
        with open('C:/Users/HUAWEI/Desktop/py/爬蟲/douban/' + name + '.jpg', 'wb') as f:
            f.write(response.content)
        print("Image %s downloaded" % str(num))
        print("=" * 20)
    except Exception as e:
        print(e.__class__.__name__)  # print the name of the exception type
def main(i):
    num = 0
    url = 'https://maoyan.com/board/4?offset=' + str(i)
    html = get_one_page(url)
    lst = []  # unused here, but handy later if you want to collect one field separately
    iterator = parse_one_page(lst, html)
    for a in iterator:
        num += 1
        write_to_file(a)
        down_img(a['MovieName'], a['img'], num)

# Single-process version:
# if __name__ == '__main__':
#     for i in range(10):
#         main(i)
# Multi-process crawling: the pool runs main() for each page offset in parallel
from multiprocessing import Pool

if __name__ == '__main__':
    pool = Pool()
    pool.map(main, [i * 10 for i in range(10)])
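The pool mechanics can be seen without hitting the site by swapping `main()` for a stand-in that just builds each page's URL (`page_url` is a hypothetical helper for this sketch, not part of the scraper):

```python
from multiprocessing import Pool

# Stand-in for main(): returns the URL it would scrape for a given offset
def page_url(offset):
    return 'https://maoyan.com/board/4?offset=' + str(offset)

offsets = [i * 10 for i in range(10)]  # 0, 10, ..., 90 -> 10 pages of 10 films each

if __name__ == '__main__':
    with Pool() as pool:
        urls = pool.map(page_url, offsets)
    print(urls[0])
    print(urls[-1])
```

The `if __name__ == '__main__'` guard is required on platforms where worker processes re-import the script; without it, each worker would try to spawn its own pool.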
最終運(yùn)行結(jié)果如下:
保存封面圖片
把爬到的信息儲(chǔ)存到csv文件中
到此,關(guān)于“怎么用python爬取貓眼電影的前100部影片”的學(xué)習(xí)就結(jié)束了,希望能夠解決大家的疑惑。理論與實(shí)踐的搭配能更好的幫助大家學(xué)習(xí),快去試試吧!若想繼續(xù)學(xué)習(xí)更多相關(guān)知識(shí),請(qǐng)繼續(xù)關(guān)注億速云網(wǎng)站,小編會(huì)繼續(xù)努力為大家?guī)砀鄬?shí)用的文章!
免責(zé)聲明:本站發(fā)布的內(nèi)容(圖片、視頻和文字)以原創(chuàng)、轉(zhuǎn)載和分享為主,文章觀點(diǎn)不代表本網(wǎng)站立場,如果涉及侵權(quán)請(qǐng)聯(lián)系站長郵箱:is@yisu.com進(jìn)行舉報(bào),并提供相關(guān)證據(jù),一經(jīng)查實(shí),將立刻刪除涉嫌侵權(quán)內(nèi)容。