
How to improve crawler efficiency in Python

Published: 2020-10-02 01:59:36 · Source: 腳本之家 · Author: straightup · Category: Development

Single thread + multi-task asynchronous coroutines

  • Coroutines

When a function (a "special function") is defined with the async keyword, calling it does not run the body immediately; the call returns a coroutine object instead (see the snippet after this list).

  • Task objects

A task object is a higher-level coroutine object (a further wrapper around the special function).
Task objects must be registered with the event loop object.
Callbacks can be bound to task objects; in a crawler, this is where data parsing goes.

  • Event loop

Think of it as a container that holds task objects.
When the event loop object is started, the task objects stored in it execute asynchronously.

  • Inside a special function, do not call modules that lack async support (time, requests, ...); nothing raises an error, but the asynchrony is silently lost. Swap in the async equivalents:

time.sleep -- asyncio.sleep
requests -- aiohttp
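
A minimal sketch of the claim above (the function name fetch is illustrative, not from the article): calling an async-defined function runs nothing and just hands back a coroutine object.

import asyncio

async def fetch():
    print('the body runs only when the coroutine is driven')

c = fetch()        # nothing is printed here -- c is a coroutine object
print(type(c))     # <class 'coroutine'>
asyncio.run(c)     # only now does the body execute (Python 3.7+)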

import asyncio
import time

start_time = time.time()

async def get_request(url):
    await asyncio.sleep(2)             # async-friendly sleep (time.sleep would block the loop)
    print(url, 'download complete!')

urls = [
    'www.1.com',
    'www.2.com',
]

task_lst = []                          # list of task objects
for url in urls:
    c = get_request(url)               # coroutine object
    task = asyncio.ensure_future(c)    # task object
    # task.add_done_callback(...)      # bind a callback here if needed
    task_lst.append(task)

loop = asyncio.get_event_loop()        # event loop object
loop.run_until_complete(asyncio.wait(task_lst))  # register the tasks and run until done
print('total time:', time.time() - start_time)   # ~2s, not 4s: the sleeps overlap
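
The commented-out add_done_callback line above marks where the callback binding from the bullet list would go; a minimal sketch, assuming a parse function of your own:

def parse(task):
    # task.result() is whatever value the coroutine returned
    print('callback received:', task.result())

# inside the for loop, in place of the commented line:
# task.add_done_callback(parse)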

Thread pool + the requests module

# thread pool
import time
from multiprocessing.dummy import Pool  # thread-based Pool with the multiprocessing API

start_time = time.time()
url_list = [
    'www.1.com',
    'www.2.com',
    'www.3.com',
]

def get_request(url):
    print('downloading...', url)
    time.sleep(2)
    print('download complete!', url)

pool = Pool(3)                   # 3 worker threads, one per URL
pool.map(get_request, url_list)
print('total time:', time.time() - start_time)
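
multiprocessing.dummy.Pool is a thread pool that mimics the multiprocessing API; if you prefer the standard library's own interface, concurrent.futures.ThreadPoolExecutor does the same fan-out (a sketch reusing get_request and url_list from above):

from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=3) as executor:
    executor.map(get_request, url_list)   # the with-block waits for all three calls to finish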

Two methods to improve crawler efficiency

Spin up a Flask test server

from flask import Flask
import time

app = Flask(__name__)

@app.route('/bobo')
def index_bobo():
    time.sleep(2)
    return 'hello bobo!'

@app.route('/jay')
def index_jay():
    time.sleep(2)
    return 'hello jay!'

@app.route('/tom')
def index_tom():
    time.sleep(2)
    return 'hello tom!'

if __name__ == '__main__':
    app.run(threaded=True)
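
threaded=True lets the Flask development server handle the three routes concurrently. Each route sleeps for 2 seconds to simulate a slow page, so a client that issues its requests concurrently should finish in roughly 2 seconds rather than 6.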

The aiohttp module + single-threaded multi-task async coroutines

import asyncio
import aiohttp
import time

start = time.time()

async def get_page(url):
    # blocking version for comparison (would serialize the three requests):
    # page_text = requests.get(url=url).text
    async with aiohttp.ClientSession() as s:          # create a session object
        async with await s.get(url=url) as response:  # the extra await is redundant but harmless
            page_text = await response.text()
            print(page_text)
    return page_text

urls = [
    'http://127.0.0.1:5000/bobo',
    'http://127.0.0.1:5000/jay',
    'http://127.0.0.1:5000/tom',
]

tasks = []
for url in urls:
    c = get_page(url)
    task = asyncio.ensure_future(c)
    tasks.append(task)

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait(tasks))

end = time.time()
print(end - start)

# executed asynchronously!
# hello tom!
# hello bobo!
# hello jay!
# 2.0311079025268555
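
On Python 3.7+, the manual loop bookkeeping at the bottom of the script can be replaced with asyncio.run; a minimal sketch reusing get_page and urls from above:

async def main():
    await asyncio.gather(*(get_page(url) for url in urls))

asyncio.run(main())   # same ~2s total as the run_until_complete version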
'''
aiohttp module: single thread + multi-task async coroutines,
parsing the data with xpath
'''
import aiohttp
import asyncio
from lxml import etree
import time

start = time.time()

# special function: sends the request and captures the response
# note the async with await keywords
async def get_request(url):
    async with aiohttp.ClientSession() as s:
        async with await s.get(url=url) as response:
            page_text = await response.text()
            return page_text               # return the page source

# callback function: parses the data
def parse(task):
    page_text = task.result()              # the value returned by get_request
    tree = etree.HTML(page_text)
    msg = tree.xpath('/html/body/ul//text()')
    print(msg)

urls = [
    'http://127.0.0.1:5000/bobo',
    'http://127.0.0.1:5000/jay',
    'http://127.0.0.1:5000/tom',
]

tasks = []
for url in urls:
    c = get_request(url)
    task = asyncio.ensure_future(c)
    task.add_done_callback(parse)          # bind the callback!
    tasks.append(task)

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait(tasks))

end = time.time()
print(end - start)
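
One caveat: the xpath in parse looks for a <ul>, but the demo Flask routes above return bare strings, so msg would print as an empty list. For the callback to find anything, a route would need to return HTML along these lines (a hypothetical variant, not from the article):

@app.route('/bobo')
def index_bobo():
    time.sleep(2)
    return '<html><body><ul><li>hello bobo!</li></ul></body></html>'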

The requests module + a thread pool

import time
import requests
from multiprocessing.dummy import Pool

start = time.time()
urls = [
    'http://127.0.0.1:5000/bobo',
    'http://127.0.0.1:5000/jay',
    'http://127.0.0.1:5000/tom',
]

def get_request(url):
    page_text = requests.get(url=url).text
    print(page_text)
    return page_text

pool = Pool(3)
pool.map(get_request, urls)
end = time.time()
print('total time:', end - start)

# the requests run concurrently
# hello jay!
# hello bobo!
# hello tom!
# total time: 2.0467123985290527
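
Both approaches land at roughly 2 seconds because the three 2-second waits overlap. Note that the pool size matters here: with Pool(2) and three URLs, the third request would have to wait for a free worker and the total would climb to about 4 seconds.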

Summary

  • Two ways to speed up a crawler covered so far:

aiohttp module + single-threaded multi-task async coroutines
requests module + thread pool

  • Three request-related modules have come up:

requests
urllib
aiohttp
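
urllib appears only in this list; for completeness, a minimal blocking fetch with the standard library's urllib looks like this (requests and aiohttp cover the same ground with friendlier and, in aiohttp's case, async APIs):

from urllib import request

with request.urlopen('http://127.0.0.1:5000/bobo') as resp:
    page_text = resp.read().decode('utf-8')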

That covers the details of how to improve crawler efficiency in Python.
