
Using Selenium in Scrapy to Crawl Web Pages

Published: 2020-11-19 | Author: Leah | Source: yisu.com

This article explains in detail how to use Selenium inside Scrapy to crawl web pages. It is shared here as a reference; after reading it you should have a working understanding of the topic.

1. Background

  • When crawling web pages we generally reach for three libraries: requests, scrapy, and selenium. requests suits small crawlers, scrapy is for building large crawler projects, and selenium mainly handles complex pages (pages with heavy JS rendering, where the requests are very hard to construct or their construction keeps changing).
  • Facing a large crawler project we would certainly prefer the scrapy framework, but parsing pages with complex JS rendering in it is painful. Fetching such pages by rendering them in a selenium-driven browser is convenient: we need not care what requests the page makes in the background, nor analyze the whole rendering process; we only care about the final result, so whatever we can see we can crawl. But selenium's efficiency is low.
  • So if we can integrate selenium into scrapy and let selenium take on the complex pages, the resulting crawler is hard to beat and can crawl practically any site.

2. Environment

  • Python 3.6.1
  • OS: Windows 7
  • IDE: PyCharm
  • Chrome browser installed
  • chromedriver configured (added to the PATH environment variable)
  • selenium 3.7.0
  • scrapy 1.4.0

3. How It Works

3.1. The Flow of a Request

First, a look at the latest Scrapy architecture diagram:

[Figure: Scrapy architecture diagram]

Part of the flow:

First: the engine generates Requests and sends them to the Scheduler module, where they join a queue and wait to be scheduled.

Second: the Scheduler dequeues these Requests and sends them back to the engine.

Third: the engine passes the Requests through the downloader middlewares (there may be several, e.g. ones that add headers or proxies, custom ones, and so on).

Fourth: after this processing, the Requests are sent to the Downloader module to be fetched. Looking at this flow, the opening is the downloader middleware stage: handle the request there directly with Selenium.
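Before reading the source, the principle can be shown in miniature. Below is a hedged sketch of a downloader middleware whose process_request returns a Response, which makes Scrapy skip its own Downloader for that request. The class name and the usedSelenium flag are illustrative (the flag mirrors the one used later in this article), and the actual browser rendering is elided:

from scrapy.http import HtmlResponse

class ShortCircuitMiddleware:
  def process_request(self, request, spider):
    # Only short-circuit requests explicitly marked for browser rendering
    if request.meta.get('usedSelenium'):
      # ... here a real browser would fetch and render request.url ...
      html = "<html><body>rendered by a browser instead</body></html>"
      # Returning a Response ends the process_request chain for this
      # request; Scrapy's Downloader never fetches the URL itself.
      return HtmlResponse(url=request.url, body=html,
                encoding='utf-8', request=request)
    # Returning None lets the request continue to the Downloader as usual.
    return None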

3.2. Source Analysis of the Request and Response Middleware Processing

The relevant code location:

[Figure: location of middleware.py in the Scrapy source tree]

Source walkthrough:

# File: E:\Miniconda\Lib\site-packages\scrapy\core\downloader\middleware.py
"""
Downloader Middleware manager

See documentation in docs/topics/downloader-middleware.rst
"""
import six

from twisted.internet import defer

from scrapy.http import Request, Response
from scrapy.middleware import MiddlewareManager
from scrapy.utils.defer import mustbe_deferred
from scrapy.utils.conf import build_component_list


class DownloaderMiddlewareManager(MiddlewareManager):

  component_name = 'downloader middleware'

  @classmethod
  def _get_mwlist_from_settings(cls, settings):
    # Collect the custom middlewares configured in settings.py or in custom_settings
    '''
    'DOWNLOADER_MIDDLEWARES': {
      'mySpider.middlewares.ProxiesMiddleware': 400,
      # SeleniumMiddleware
      'mySpider.middlewares.SeleniumMiddleware': 543,
      'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    },
    '''
    return build_component_list(
      settings.getwithbase('DOWNLOADER_MIDDLEWARES'))

  # Register every middleware's handler functions in the corresponding methods lists
  def _add_middleware(self, mw):
    if hasattr(mw, 'process_request'):
      self.methods['process_request'].append(mw.process_request)
    if hasattr(mw, 'process_response'):
      self.methods['process_response'].insert(0, mw.process_response)
    if hasattr(mw, 'process_exception'):
      self.methods['process_exception'].insert(0, mw.process_exception)

  # The whole download flow
  def download(self, download_func, request, spider):
    @defer.inlineCallbacks
    def process_request(request):
      # Run the request through each middleware's process_request method,
      # in the order they were registered above
      for method in self.methods['process_request']:
        response = yield method(request=request, spider=spider)
        assert response is None or isinstance(response, (Response, Request)), \
            'Middleware %s.process_request must return None, Response or Request, got %s' % \
            (six.get_method_self(method).__class__.__name__, response.__class__.__name__)
        # This is the key point:
        # if any middleware's process_request produces a Response object,
        # that Response is returned right away, the loop exits, and the
        # remaining process_request methods never run.
        # Our earlier header and proxy middlewares only set a user-agent
        # or a proxy and return nothing.
        # Also note that the returned value must be a Response object;
        # the HtmlResponse we construct later is exactly a subclass of Response.
        if response:
          defer.returnValue(response)
      # If none of the process_request methods returned a Response,
      # the processed Request is finally handed to download_func for the
      # actual download, which yields a Response object.
      # That Response then passes through each middleware's process_response
      # method, as follows.
      defer.returnValue((yield download_func(request=request, spider=spider)))

    @defer.inlineCallbacks
    def process_response(response):
      assert response is not None, 'Received None in process_response'
      if isinstance(response, Request):
        defer.returnValue(response)

      for method in self.methods['process_response']:
        response = yield method(request=request, response=response,
                    spider=spider)
        assert isinstance(response, (Response, Request)), \
          'Middleware %s.process_response must return Response or Request, got %s' % \
          (six.get_method_self(method).__class__.__name__, type(response))
        if isinstance(response, Request):
          defer.returnValue(response)
      defer.returnValue(response)

    @defer.inlineCallbacks
    def process_exception(_failure):
      exception = _failure.value
      for method in self.methods['process_exception']:
        response = yield method(request=request, exception=exception,
                    spider=spider)
        assert response is None or isinstance(response, (Response, Request)), \
          'Middleware %s.process_exception must return None, Response or Request, got %s' % \
          (six.get_method_self(method).__class__.__name__, type(response))
        if response:
          defer.returnValue(response)
      defer.returnValue(_failure)

    deferred = mustbe_deferred(process_request, request)
    deferred.addErrback(process_exception)
    deferred.addCallback(process_response)
    return deferred

4. Code

Configure the Selenium parameters in settings.py:

# File: settings.py

# ----------- Selenium settings -------------
SELENIUM_TIMEOUT = 25      # Selenium page-load timeout, in seconds
LOAD_IMAGE = True        # whether to download images
WINDOW_HEIGHT = 900       # browser window size
WINDOW_WIDTH = 900

In the spider, when generating Requests, mark which ones should be downloaded through Selenium:

# File: mySpider.py
class mySpider(CrawlSpider):
  name = "mySpiderAmazon"
  allowed_domains = ['amazon.com']

  custom_settings = {
    'LOG_LEVEL': 'INFO',
    'DOWNLOAD_DELAY': 0,
    'COOKIES_ENABLED': False, # enabled by default
    'DOWNLOADER_MIDDLEWARES': {
      # proxy middleware
      'mySpider.middlewares.ProxiesMiddleware': 400,
      # SeleniumMiddleware
      'mySpider.middlewares.SeleniumMiddleware': 543,
      # disable scrapy's default user-agent middleware
      'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    },
  }

#..................... the gorgeous divider .......................
# When generating a Request, put the flag that says whether to download it
# via Selenium into meta
yield Request(
  url = "https://www.amazon.com/",
  meta = {'usedSelenium': True, 'dont_redirect': True},
  callback = self.parseIndexPage,
  errback = self.error
)

In the downloader middleware middlewares.py, use Selenium to fetch the page (the core part):

# -*- coding: utf-8 -*-
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from scrapy.http import HtmlResponse
from logging import getLogger
import time

class SeleniumMiddleware():
  # Settings are often needed inside a pipeline or middleware; they are
  # available through the scrapy.crawler.Crawler.settings attribute
  @classmethod
  def from_crawler(cls, crawler):
    # Pull the Selenium parameters out of settings.py and initialize the class
    return cls(timeout=crawler.settings.get('SELENIUM_TIMEOUT'),
          isLoadImage=crawler.settings.get('LOAD_IMAGE'),
          windowHeight=crawler.settings.get('WINDOW_HEIGHT'),
          windowWidth=crawler.settings.get('WINDOW_WIDTH')
          )

  def __init__(self, timeout=30, isLoadImage=True, windowHeight=None, windowWidth=None):
    self.logger = getLogger(__name__)
    self.timeout = timeout
    self.isLoadImage = isLoadImage
    # Keep one browser as an attribute of this class, so a new Chrome window
    # is not opened for every requested page; every Request this class
    # handles can reuse this single browser
    self.browser = webdriver.Chrome()
    if windowHeight and windowWidth:
      self.browser.set_window_size(windowWidth, windowHeight)
    self.browser.set_page_load_timeout(self.timeout)    # page-load timeout
    self.wait = WebDriverWait(self.browser, self.timeout) # element-wait timeout

  def process_request(self, request, spider):
    '''
    Fetch the page with Chrome
    :param request: the Request object
    :param spider: the Spider object
    :return: an HtmlResponse
    '''
    # self.logger.debug('chrome is getting page')
    print(f"chrome is getting page")
    # The meta flag decides whether this request goes through Selenium
    usedSelenium = request.meta.get('usedSelenium', False)
    if usedSelenium:
      try:
        self.browser.get(request.url)
        # wait for the search box to appear
        searchInput = self.wait.until(
          EC.presence_of_element_located((By.XPATH, "//div[@class='nav-search-field ']/input"))
        )
        time.sleep(2)
        searchInput.clear()
        searchInput.send_keys("iphone 7s")
        # press Enter to search
        searchInput.send_keys(Keys.RETURN)
        # wait for the search results to appear
        searchRes = self.wait.until(
          EC.presence_of_element_located((By.XPATH, "//div[@id='resultsCol']"))
        )
      except Exception as e:
        # self.logger.debug(f'chrome getting page error, Exception = {e}')
        print(f"chrome getting page error, Exception = {e}")
        return HtmlResponse(url=request.url, status=500, request=request)
      else:
        time.sleep(3)
        return HtmlResponse(url=request.url,
                  body=self.browser.page_source,
                  request=request,
                  # ideally this should match the page's actual encoding
                  encoding='utf-8',
                  status=200)

5. Execution Results

[Screenshots: execution results of the crawler run]

6. Remaining Problems

6.1. The spider closes, but Chrome does not exit

2018-04-04 09:26:18 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/response_bytes': 2092766,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 4, 4, 1, 26, 16, 763602),
'log_count/INFO': 7,
'request_depth_max': 1,
'response_received_count': 2,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2018, 4, 4, 1, 25, 48, 301602)}
2018-04-04 09:26:18 [scrapy.core.engine] INFO: Spider closed (finished)

Above, we put the browser object inside the middleware, where we only implemented process_request and process_response; we never showed a place in the middleware to hook Scrapy's shutdown and call a close method.

Solution: use Scrapy's signals. When the spider_closed signal is received, call browser.quit().
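For completeness, here is a minimal hedged sketch of that signal-based fix kept inside the middleware itself, via crawler.signals.connect. The class name is illustrative; this is an alternative to section 7 below, which instead moves the browser into the spider:

from scrapy import signals
from selenium import webdriver

class SeleniumQuitMiddleware:
  def __init__(self):
    # one shared browser, as before
    self.browser = webdriver.Chrome()

  @classmethod
  def from_crawler(cls, crawler):
    mw = cls()
    # When the spider_closed signal fires, Scrapy calls mw.spider_closed,
    # giving the middleware a clean place to shut Chrome down.
    crawler.signals.connect(mw.spider_closed, signal=signals.spider_closed)
    return mw

  def spider_closed(self, spider):
    self.browser.quit()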

6.2. When one project starts several spiders at once, they share the single Selenium instance in the middleware, which hurts concurrency

With the scrapy + selenium approach, only some pages, perhaps only a small fraction, actually need Chrome. Since keeping Chrome in the middleware carries all these limitations, why not put Chrome inside the spider instead? The benefit: each spider gets its own Chrome, so starting several spiders yields several Chromes rather than all spiders sharing one, which is good for concurrency.

Solution: move the Chrome initialization into the spider, so each spider owns its own Chrome instance.

7. Improved Code

As before, configure the Selenium parameters in settings.py:

# File: settings.py

# ----------- Selenium settings -------------
SELENIUM_TIMEOUT = 25      # Selenium page-load timeout, in seconds
LOAD_IMAGE = True        # whether to download images
WINDOW_HEIGHT = 900       # browser window size
WINDOW_WIDTH = 900

In the spider, mark at Request-generation time which requests should go through Selenium; the spider now also owns the Chrome instance:

# File: mySpider.py
# selenium imports
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait

# scrapy signal-related imports
from scrapy.utils.project import get_project_settings
# the import below is about to be deprecated, so we avoid it
# from scrapy.xlib.pydispatch import dispatcher
from scrapy import signals
# the approach scrapy currently recommends
from pydispatch import dispatcher

class mySpider(CrawlSpider):
  name = "mySpiderAmazon"
  allowed_domains = ['amazon.com']

  custom_settings = {
    'LOG_LEVEL': 'INFO',
    'DOWNLOAD_DELAY': 0,
    'COOKIES_ENABLED': False, # enabled by default
    'DOWNLOADER_MIDDLEWARES': {
      # proxy middleware
      'mySpider.middlewares.ProxiesMiddleware': 400,
      # SeleniumMiddleware
      'mySpider.middlewares.SeleniumMiddleware': 543,
      # disable scrapy's default user-agent middleware
      'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    },
  }

  # Move the Chrome initialization into the spider, making it a spider attribute
  def __init__(self, timeout=30, isLoadImage=True, windowHeight=None, windowWidth=None):
    # read the parameters from settings.py
    self.mySetting = get_project_settings()
    self.timeout = self.mySetting['SELENIUM_TIMEOUT']
    self.isLoadImage = self.mySetting['LOAD_IMAGE']
    self.windowHeight = self.mySetting['WINDOW_HEIGHT']
    self.windowWidth = self.mySetting['WINDOW_WIDTH']
    # initialize the Chrome browser
    self.browser = webdriver.Chrome()
    if self.windowHeight and self.windowWidth:
      self.browser.set_window_size(self.windowWidth, self.windowHeight)
    self.browser.set_page_load_timeout(self.timeout)    # page-load timeout
    self.wait = WebDriverWait(self.browser, self.timeout) # element-wait timeout
    super(mySpider, self).__init__()
    # Connect the signal: when spider_closed fires, call mySpiderCloseHandle
    # to close Chrome
    dispatcher.connect(receiver = self.mySpiderCloseHandle,
              signal = signals.spider_closed
              )

  # Signal handler: close the Chrome browser
  def mySpiderCloseHandle(self, spider):
    print(f"mySpiderCloseHandle: enter ")
    self.browser.quit()

#..................... the gorgeous divider .......................
# When generating a Request, put the flag that says whether to download it
# via Selenium into meta
yield Request(
  url = "https://www.amazon.com/",
  meta = {'usedSelenium': True, 'dont_redirect': True},
  callback = self.parseIndexPage,
  errback = self.error
)

In the downloader middleware middlewares.py, use Selenium to fetch the page:

# -*- coding: utf-8 -*-
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from scrapy.http import HtmlResponse
from logging import getLogger
import time

class SeleniumMiddleware():
  # The middleware is handed the spider object, through which we can reach
  # the Chrome-related attributes created in the spider's __init__
  def process_request(self, request, spider):
    '''
    Fetch the page with Chrome
    :param request: the Request object
    :param spider: the Spider object
    :return: an HtmlResponse
    '''
    print(f"chrome is getting page")
    # The meta flag decides whether this request goes through Selenium
    usedSelenium = request.meta.get('usedSelenium', False)
    if usedSelenium:
      try:
        spider.browser.get(request.url)
        # wait for the search box to appear
        searchInput = spider.wait.until(
          EC.presence_of_element_located((By.XPATH, "//div[@class='nav-search-field ']/input"))
        )
        time.sleep(2)
        searchInput.clear()
        searchInput.send_keys("iphone 7s")
        # press Enter to search
        searchInput.send_keys(Keys.RETURN)
        # wait for the search results to appear
        searchRes = spider.wait.until(
          EC.presence_of_element_located((By.XPATH, "//div[@id='resultsCol']"))
        )
      except Exception as e:
        print(f"chrome getting page error, Exception = {e}")
        return HtmlResponse(url=request.url, status=500, request=request)
      else:
        time.sleep(3)
        # The page was fetched successfully; build a successful Response
        # (HtmlResponse is a subclass of Response)
        return HtmlResponse(url=request.url,
                  body=spider.browser.page_source,
                  request=request,
                  # ideally this should match the page's actual encoding
                  encoding='utf-8',
                  status=200)

Run result (when the spider finishes, mySpiderCloseHandle runs and closes the Chrome browser):

['categorySelectorAmazon1.pipelines.MongoPipeline']
2018-04-04 11:56:21 [scrapy.core.engine] INFO: Spider opened
2018-04-04 11:56:21 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
chrome is getting page
parseProductDetail url = https://www.amazon.com/, status = 200, meta = {'usedSelenium': True, 'dont_redirect': True, 'download_timeout': 25.0, 'proxy': 'http://H37XPSB6V57VU96D:CAB31DAEB9313CE5@proxy.abuyun.com:9020', 'depth': 0}
chrome is getting page
2018-04-04 11:56:54 [scrapy.core.engine] INFO: Closing spider (finished)
mySpiderCloseHandle: enter
2018-04-04 11:56:59 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/response_bytes': 1938619,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 4, 4, 3, 56, 54, 301602),
'log_count/INFO': 7,
'request_depth_max': 1,
'response_received_count': 2,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2018, 4, 4, 3, 56, 21, 642602)}
2018-04-04 11:56:59 [scrapy.core.engine] INFO: Spider closed (finished)

That covers using Selenium inside Scrapy to crawl web pages. I hope the material above is helpful; if you found the article worthwhile, share it so more people can see it.
