This article walks through "How to fetch URLs from a database for crawling with Scrapy in Python" in detail. The steps are laid out clearly and the edge cases are handled carefully; hopefully it resolves any questions you have, so follow along and learn something new.
The code is as follows:
import logging

import pymysql
import scrapy
from scrapy import Request, signals

from myproject.items import MyItem  # adjust to your own project's items module


class MySpider(scrapy.Spider):
    MAX_RETRY = 10
    logger = logging.getLogger(__name__)
    name = 'myspider'
    start_urls = []

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(MySpider, cls).from_crawler(crawler, *args, **kwargs)
        # Close the database connection when the spider shuts down
        crawler.signals.connect(spider.spider_closed, signals.spider_closed)
        return spider

    def __init__(self, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        # Connect to the database
        self.conn = pymysql.connect(host="database host",
                                    user="username",
                                    password="database password",
                                    database="database name",
                                    charset="utf8")
        self.logger.info('Connection to database opened')

    def spider_closed(self, spider):
        # The connection was stored as self.conn, so that is what gets closed
        self.conn.close()
        self.logger.info('Connection to database closed')

    def parse(self, response):
        item = MyItem()
        # Scraping logic goes here
        yield item

    def errback_httpbin(self, failure):
        # Scrapy passes the Failure object describing the error to the errback
        self.logger.info('http error')

    def start_requests(self):
        cursor = self.conn.cursor()
        # Query the database for URLs that have not exceeded the retry limit
        cursor.execute('SELECT * FROM mytable WHERE nbErrors < %s',
                       (self.MAX_RETRY,))
        rows = cursor.fetchall()
        for row in rows:
            # row[0] is the URL column; the 'splash' meta key is consumed
            # by the scrapy-splash plugin to render the page before parsing
            yield Request(row[0], self.parse,
                          meta={'splash': {'args': {'html': 1, 'wait': 2}}},
                          errback=self.errback_httpbin)
        cursor.close()
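The spider assumes a table whose first column is the URL and which carries a retry counter named nbErrors, but the original does not show the schema. Here is a minimal sketch of what that table might look like, created through pymysql; the names mytable and nbErrors match the query in start_requests, while the url column name, its length, and the connection placeholders are assumptions:

import pymysql

# Assumed schema: mytable/nbErrors match the SELECT in start_requests;
# replace the connection placeholders with your real credentials.
conn = pymysql.connect(host="database host", user="username",
                       password="database password",
                       database="database name", charset="utf8")
with conn.cursor() as cursor:
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS mytable (
            url      VARCHAR(512) NOT NULL,  -- the URL to crawl (row[0] in the spider)
            nbErrors INT NOT NULL DEFAULT 0  -- retry counter compared against MAX_RETRY
        )
    """)
    # Seed one URL so the spider has something to crawl
    cursor.execute("INSERT INTO mytable (url) VALUES (%s)",
                   ("https://example.com",))
conn.commit()
conn.close()

Because the query only selects rows with nbErrors < MAX_RETRY, the intent is presumably that failed URLs have their counter incremented so they are eventually skipped; the original does not show that update, but it would be a single UPDATE mytable SET nbErrors = nbErrors + 1 WHERE url = %s executed from the errback against the same connection. Note also that the 'splash' key in the request meta only takes effect with the scrapy-splash plugin and a running Splash instance; with plain Scrapy you can simply drop the meta argument.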
That concludes this article on "How to fetch URLs from a database for crawling with Scrapy in Python". To really master these techniques you will need to run and adapt the code yourself. For more articles on related topics, follow the 億速云 industry news channel.