CrawlSpider
LinkExtractor (link extractor): extracts links from a page according to a specified rule (a regular expression passed as allow)
Rule (rule parser): sends a request for every link the extractor produced, then parses each resulting page with the specified rule [callback]
Each link extractor is paired with exactly one rule parser
Example: deep (full-site) crawling with CrawlSpider [the sunLineCrawl example below]
Distributed crawling (usually unnecessary; use it when the volume of data is huge and time is short)
Concept: run one group of crawler programs across multiple machines (a distributed cluster) so that they crawl the data cooperatively
Can the native Scrapy framework implement distributed crawling by itself?
No. Each Scrapy process keeps its own scheduler and duplicate filter, so several machines cannot share one request queue or deduplicate against each other. The usual workaround is the scrapy-redis extension, sketched below.
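A minimal sketch of that workaround, assuming the scrapy-redis package is installed and a shared Redis server is reachable (the spider name, redis_key, and Redis host values here are illustrative, not from the examples below):

# spider file -- a RedisCrawlSpider pulls its start URLs from a Redis list,
# so every machine in the cluster feeds off the same shared queue
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule
from scrapy_redis.spiders import RedisCrawlSpider

class FbsSpider(RedisCrawlSpider):
    name = 'fbs'
    redis_key = 'fbs:start_urls'   # replaces start_urls; key of the shared queue
    rules = (
        Rule(LinkExtractor(allow=r'page=\d+'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        pass

# settings file -- route scheduling, deduplication and item storage through Redis
SCHEDULER = 'scrapy_redis.scheduler.Scheduler'
DUPEFILTER_CLASS = 'scrapy_redis.dupefilter.RFPDupeFilter'
SCHEDULER_PERSIST = True   # keep the queue and fingerprints across restarts
ITEM_PIPELINES = {'scrapy_redis.pipelines.RedisPipeline': 400}
REDIS_HOST = '127.0.0.1'
REDIS_PORT = 6379

Once every node is running, seeding the shared queue starts the crawl, e.g. lpush fbs:start_urls <first url> in redis-cli.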
Chouti example (dig.chouti.com)
# spider file
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class ChoutiSpider(CrawlSpider):
    name = 'chouti'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://dig.chouti.com/1']

    # link extractor: pulls every link matching the rule out of the pages
    # fetched from start_urls; allow= takes a regular expression
    # (an empty regex would extract every link on the page)
    link = LinkExtractor(allow=r'\d+')
    rules = (
        # rule parser: parses the page source behind each extracted link
        # according to the specified rule (callback);
        # Rule sends the request for each link automatically
        Rule(link, callback='parse_item', follow=True),
        # follow=True: keep applying the link extractor to the pages
        # reached through the links it has already extracted
    )

    def parse_item(self, response):
        item = {}
        #item['domain_id'] = response.xpath('//input[@id="sid"]/@value').get()
        #item['name'] = response.xpath('//div[@id="name"]').get()
        #item['description'] = response.xpath('//div[@id="description"]').get()
        return item
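To reproduce this layout, the standard Scrapy commands apply (the spider name matches the example above):

scrapy genspider -t crawl chouti dig.chouti.com   # scaffold a CrawlSpider
scrapy crawl chouti                               # start the crawl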
Sunshine Hotline example (wz.sun0769.com)
# 1. spider file
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from sunLineCrawl.items import SunlinecrawlItem,ContentItem

class SunSpider(CrawlSpider):
    name = 'sun'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['http://wz.sun0769.com/index.php/question/questionType?type=4&page=']

    link = LinkExtractor(allow=r'type=4&page=\d+')               # extracts the pagination links
    link1 = LinkExtractor(allow=r'question/2019\d+/\d+\.shtml')  # extracts the detail-page links
    rules = (
        Rule(link, callback='parse_item', follow=False),
        Rule(link1, callback='parse_detail'),
    )

    # parse the title and the poster's name from each list page
    def parse_item(self, response):
        tr_list = response.xpath('//*[@id="morelist"]/div/table[2]//tr/td/table//tr')
        for tr in tr_list:
            title = tr.xpath('./td[2]/a[2]/text()').extract_first()
            net_friend = tr.xpath('./td[4]/text()').extract_first()
            item = SunlinecrawlItem()
            item['title'] = title
            item['net_friend'] = net_friend
            yield item

    # parse the body text from each detail page
    def parse_detail(self,response):
        content = response.xpath('/html/body/div[9]/table[2]//tr[1]/td/div[2]//text()').extract()
        content = ''.join(content)
        item = ContentItem()
        item['content'] = content
        yield item
--------------------------------------------------------------------------------
# 2. items file
import scrapy

class SunlinecrawlItem(scrapy.Item):
    title = scrapy.Field()
    net_friend = scrapy.Field()

class ContentItem(scrapy.Item):
    content = scrapy.Field()
--------------------------------------------------------------------------------
# 3. pipelines file
class SunlinecrawlPipeline(object):
    def process_item(self, item, spider):
        # work out which item type arrived (ContentItem / SunlinecrawlItem)
        if item.__class__.__name__ == 'SunlinecrawlItem':
            print(item['title'],item['net_friend'])
        else:
            print(item['content'])
        return item
--------------------------------------------------------------------------------
# 4. settings file
BOT_NAME = 'sunLineCrawl'
SPIDER_MODULES = ['sunLineCrawl.spiders']
NEWSPIDER_MODULE = 'sunLineCrawl.spiders'
LOG_LEVEL = 'ERROR'
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'
ROBOTSTXT_OBEY = False

ITEM_PIPELINES = {
    'sunLineCrawl.pipelines.SunlinecrawlPipeline': 300,
}
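One design note: the pipeline above tells the two item types apart by comparing class names as strings; isinstance does the same job and survives renaming. A minimal sketch of that variant, using the same classes as above:

# alternative pipelines file -- dispatch on the item class itself
from sunLineCrawl.items import SunlinecrawlItem

class SunlinecrawlPipeline(object):
    def process_item(self, item, spider):
        if isinstance(item, SunlinecrawlItem):
            print(item['title'], item['net_friend'])
        else:
            print(item['content'])
        return item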