This article walks through scraping the Douban movie-review API with multiple threads in Python. The explanation is fairly detailed and should make a useful reference, so read it through to the end!
A multithreaded Python scraper for the Douban movie-review API
Scraping libraries
1. Requests are made with the simple requests library; it is blocking, so a single thread is relatively slow.
2. Responses are parsed with XPath expressions.
3. The scraper is organised as a class.
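Before the full class further down, the request-plus-XPath pattern looks roughly like this. The URL, class names, and HTML fragment here are illustrative stand-ins, not Douban's real markup:

```python
import requests
from lxml import etree

def fetch_html(url, headers=None):
    # Blocking GET: requests waits for the full response before returning,
    # which is why a thread pool is needed later for speed.
    resp = requests.get(url, headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.text

def extract_comments(html):
    # Parse with lxml and pull out node text via an XPath expression.
    dom = etree.HTML(html)
    return dom.xpath('//span[@class="short"]/text()')

# Offline demonstration on a hand-written fragment:
sample = '<div><span class="short">great film</span><span class="short">so-so</span></div>'
print(extract_comments(sample))  # ['great film', 'so-so']
```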
Multithreading
The concurrent.futures module provides a thread pool: submitting callables to it returns Future objects, and the pool runs them concurrently, which is all that is needed for concurrent fetching.
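The pool pattern, stripped to its essentials (the worker here is a stand-in for the real fetch method used later):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_page(start):
    # Stand-in for a real page fetch; returns the offset it handled.
    return 'finished page at offset {}'.format(start)

with ThreadPoolExecutor(max_workers=4) as executor:
    # submit() returns a Future immediately; the pool runs the calls concurrently.
    futures = [executor.submit(fetch_page, n * 20) for n in range(5)]
    # as_completed yields each Future as soon as its result is ready,
    # not in submission order.
    for future in as_completed(futures):
        print(future.result())
```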
Data storage
Results are saved to a database through the SQLAlchemy ORM; alternatively, the standard-library csv module can write them to a CSV file.
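The models module imported by the scraper below is not shown in the original article. A plausible minimal version, assuming a Comments table whose columns match the fields the scraper fills in, might look like this (the table name, column types, and the SQLite URL are guesses; the article itself targets MySQL via pymysql):

```python
from sqlalchemy import create_engine, Column, Integer, String, DateTime, Text
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Comments(Base):
    __tablename__ = 'comments'
    id = Column(Integer, primary_key=True)   # Douban's data-cid, unique per comment
    username = Column(String(64))
    user_center = Column(String(128))        # link to the user's profile page
    vote = Column(Integer)                   # upvote count
    star = Column(String(16))                # rating title text
    time = Column(DateTime)
    content = Column(Text)

# In-memory SQLite keeps the sketch self-contained; swap in a real
# MySQL URL such as 'mysql+pymysql://user:pass@host/db' in practice.
engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

def create_session():
    return Session()
```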
The API endpoint
Because the API limits data exposure, each rating category of a film yields only its first 25 pages. Across all four categories (all comments, positive, neutral, negative) that is 100 pages at 20 records per page, i.e. at most 2,000 records.
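The cap works out as 4 categories × 25 pages × 20 records = 2,000, and the (category, offset) grid the scraper walks can be enumerated up front:

```python
# Each category allows offsets 0, 20, ..., 480: 25 pages of 20 records.
CATEGORIES = ['', 'h', 'm', 'l']   # all / positive / neutral / negative
PAGE_SIZE = 20
PAGES_PER_CATEGORY = 25

jobs = [(cat, page * PAGE_SIZE)
        for cat in CATEGORIES
        for page in range(PAGES_PER_CATEGORY)]

print(len(jobs))                        # 100 page requests in total
print(len(jobs) * PAGE_SIZE)            # 2000 records at most
print(max(start for _, start in jobs))  # 480, the last allowed offset
```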
Because the site changes over time, there is no guarantee this code will still return data; treat it as a reference for the approach. The code:
from datetime import datetime
import random
import csv
import threading
from concurrent.futures import ThreadPoolExecutor, as_completed

from lxml import etree
from sqlalchemy.exc import IntegrityError
import requests

from models import create_session, Comments

# Pool of User-Agent strings to rotate between requests
USERAGENT = [
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/14.0.835.163 Safari/535.1',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0) Gecko/20100101 Firefox/6.0',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Opera/9.80 (Windows NT 6.1; U; zh-cn) Presto/2.9.168 Version/11.50',
    'Mozilla/5.0 (Windows; U; Windows NT 6.1; ) AppleWebKit/534.12 (KHTML, like Gecko) Maxthon/3.0 Safari/534.12'
]

# Serialises CSV writes coming from different worker threads
CSV_LOCK = threading.Lock()


class CommentFetcher:
    headers = {'User-Agent': ''}
    # Copy the cookie from a logged-in browser session
    cookie = ''
    cookies = {'cookie': cookie}
    base_node = '//div[@class="comment-item"]'

    def __init__(self, movie_id, start, type=''):
        '''
        :movie_id: the film's ID
        :start: offset of the first record, 0-480
        :type: all comments: '', positive: 'h', neutral: 'm', negative: 'l'
        '''
        self.movie_id = movie_id
        self.start = start
        self.type = type
        self.url = ('https://movie.douban.com/subject/{id}/comments?start={start}'
                    '&limit=20&sort=new_score&status=P&percent_type={type}'
                    '&comments_only=1').format(
                        id=str(self.movie_id), start=str(self.start), type=self.type)
        # Open a database session
        self.session = create_session()

    def _random_UA(self):
        # Pick a fresh User-Agent for each request
        self.headers['User-Agent'] = random.choice(USERAGENT)

    def _get(self):
        # The endpoint returns JSON; the comment markup sits in its 'html' field
        self._random_UA()
        res = ''
        try:
            res = requests.get(self.url, cookies=self.cookies, headers=self.headers)
            res = res.json()['html']
        except Exception:
            print('IP blocked, switch to a proxy IP')
        print('Fetching records starting at {}'.format(self.start))
        return res

    def _parse(self):
        res = self._get()
        dom = etree.HTML(res)
        # Comment IDs
        self.id = dom.xpath(self.base_node + '/@data-cid')
        # Usernames
        self.username = dom.xpath(self.base_node + '/div[@class="avatar"]/a/@title')
        # Links to user profiles
        self.user_center = dom.xpath(self.base_node + '/div[@class="avatar"]/a/@href')
        # Upvote counts
        self.vote = dom.xpath(self.base_node + '//span[@class="votes"]/text()')
        # Star ratings
        self.star = dom.xpath(self.base_node + '//span[contains(@class,"rating")]/@title')
        # Publication times
        self.time = dom.xpath(self.base_node + '//span[@class="comment-time "]/@title')
        # Comment bodies: text of every span whose class is "short"
        self.content = dom.xpath(self.base_node + '//span[@class="short"]/text()')

    def save_to_database(self):
        self._parse()
        for i in range(len(self.id)):
            try:
                comment = Comments(
                    id=int(self.id[i]),
                    username=self.username[i],
                    user_center=self.user_center[i],
                    vote=int(self.vote[i]),
                    star=self.star[i],
                    time=datetime.strptime(self.time[i], '%Y-%m-%d %H:%M:%S'),
                    content=self.content[i]
                )
                self.session.add(comment)
                self.session.commit()
            except IntegrityError:
                # SQLAlchemy wraps the driver's duplicate-key error;
                # skip duplicates and keep going
                print('Duplicate record, skipping')
                self.session.rollback()
            except Exception:
                # Roll back on any other insertion error
                self.session.rollback()
        self.session.close()
        return 'finish'

    def save_to_csv(self):
        self._parse()
        # Append mode plus a lock, so concurrent workers neither overwrite
        # the file nor interleave rows
        with CSV_LOCK, open('comment.csv', 'a', encoding='utf-8', newline='') as f:
            writer = csv.writer(f, dialect='excel')
            for i in range(len(self.id)):
                writer.writerow([
                    int(self.id[i]),
                    self.username[i],
                    self.user_center[i],
                    int(self.vote[i]),
                    self.time[i],
                    self.content[i]
                ])
        return 'finish'


if __name__ == '__main__':
    with ThreadPoolExecutor(max_workers=4) as executor:
        futures = []
        for i in ['', 'h', 'm', 'l']:
            for j in range(25):
                fetcher = CommentFetcher(movie_id=26266893, start=j * 20, type=i)
                futures.append(executor.submit(fetcher.save_to_csv))
        for f in as_completed(futures):
            try:
                if f.result() == 'finish':
                    print('{} saved its batch'.format(f))
            except Exception as e:
                print('Worker failed: {}'.format(e))
That is everything on scraping the Douban movie-review API with multiple threads in Python. Thanks for reading, and I hope it proves helpful; for more on related topics, follow the Yisu Cloud industry-news channel!