
Scraping 大樂透 (Super Lotto) data with the Scrapy framework

發(fā)布時(shí)間:2020-06-06 16:33:54 來(lái)源:網(wǎng)絡(luò) 閱讀:480 作者:星火燎愿 欄目:編程語(yǔ)言

github項(xiàng)目地址:?https://github.com/v587xpt/lottery_spider
#


Last time I scraped 雙色球 (double-color ball) data. Scraping 大樂透 is actually just as simple and could be done with requests alone, but for the sake of learning something new, this time I used the Scrapy framework.

I won't cover how the Scrapy framework works internally here; if you're not familiar with it yet, go look it up on Google first.



一、創(chuàng)建項(xiàng)目

I develop on Windows, so Scrapy has to be installed on Windows first; the steps below assume the framework is already installed.

1、打開(kāi)cmd,運(yùn)行

scrapy startproject?lottery_spider

命令,會(huì)在命令運(yùn)行的文件下生成一個(gè)lottery_spider的項(xiàng)目
2、再執(zhí)行 cd?lottery_spider 進(jìn)入lottery_spider項(xiàng)目,執(zhí)行??

scrapy gensiper?lottery "www.lottery.gov.cn"

lottery is the name of the spider;

www.lottery.gov.cn is the target site;

Once this completes, the spider file lottery.py is generated under the project's spiders folder.

二、項(xiàng)目?jī)?nèi)的各個(gè)文件代碼

1. items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy

class LotterySpiderItem(scrapy.Item):
    qihao = scrapy.Field()
    bule_ball = scrapy.Field()
    red_ball = scrapy.Field()

This file defines the data model, i.e., the fields each scraped item carries: qihao (the draw number), bule_ball, and red_ball.
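A scrapy.Item behaves much like a dict, which is what the pipeline later relies on. A quick illustration (the values here are placeholders, not real draw data):

item = LotterySpiderItem(qihao='19001', bule_ball=['01', '02', '03', '04', '05'], red_ball=['06', '07'])
print(item['qihao'])   # fields are read like dict keys
print(dict(item))      # and the whole item converts to a plain dict, as pipelines.py does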
2. lottery.py

# -*- coding: utf-8 -*-
import scrapy
from lottery_spider.items import LotterySpiderItem

class LotterySpider(scrapy.Spider):
    name = 'lottery'
    allowed_domains = ['gov.cn']        # domains the spider is allowed to crawl; anything outside them is skipped
    start_urls = ['http://www.lottery.gov.cn/historykj/history_1.jspx?_ltype=dlt']  # start URL; the crawl begins from this page

    def parse(self, response):
        # XPath down to the table rows that hold the data; returns a list of selectors
        results = response.xpath("//div[@class='yylMain']//div[@class='result']//tbody//tr")
        for result in results:  # iterate over each row
            qihao = result.xpath(".//td[1]//text()").get()
            bule_ball_1 = result.xpath(".//td[2]//text()").get()
            bule_ball_2 = result.xpath(".//td[3]//text()").get()
            bule_ball_3 = result.xpath(".//td[4]//text()").get()
            bule_ball_4 = result.xpath(".//td[5]//text()").get()
            bule_ball_5 = result.xpath(".//td[6]//text()").get()
            red_ball_1 = result.xpath(".//td[7]//text()").get()
            red_ball_2 = result.xpath(".//td[8]//text()").get()

            bule_ball_list = []     # list to hold the five blue balls
            bule_ball_list.append(bule_ball_1)
            bule_ball_list.append(bule_ball_2)
            bule_ball_list.append(bule_ball_3)
            bule_ball_list.append(bule_ball_4)
            bule_ball_list.append(bule_ball_5)

            red_ball_list = []      # list to hold the two red balls
            red_ball_list.append(red_ball_1)
            red_ball_list.append(red_ball_2)

            print("===================================================")
            print("?期號(hào):"+ str(qihao) + " ?" + "藍(lán)球:"+ str(bule_ball_list) + " ?" + "紅球" + str(red_ball_list))

            item = LotterySpiderItem(qihao=qihao, bule_ball=bule_ball_list, red_ball=red_ball_list)
            yield item

        next_url = response.xpath("//div[@class='page']/div/a[3]/@href").get()
        if not next_url:
            return
        else:
            last_url = "http://www.lottery.gov.cn/historykj/" + next_url
            yield scrapy.Request(last_url, callback=self.parse)  # pass the parse method itself as the callback, without ()

此文件是運(yùn)行的爬蟲(chóng)文件;

3. pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
import json

class LotterySpiderPipeline(object):
    def __init__(self):
        print("Spider starting......")
        self.fp = open("daletou.json", 'w', encoding='utf-8')  # open a JSON file for writing

    def process_item(self, item, spider):
        item_json = json.dumps(dict(item), ensure_ascii=False)      # note: the item must be converted to a dict before it can be serialized
        self.fp.write(item_json + '\n')
        return item

    def close_spider(self, spider):
        self.fp.close()
        print("Spider finished......")

此文件負(fù)責(zé)數(shù)據(jù)的保存,代碼中將數(shù)據(jù)保存為了json數(shù)據(jù);
4. settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for lottery_spider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'lottery_spider'

SPIDER_MODULES = ['lottery_spider.spiders']
NEWSPIDER_MODULE = 'lottery_spider.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'lottery_spider (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False    # False: do not fetch or obey the site's robots.txt

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 1      # throttle the crawl: one request per second
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {        # request headers that mimic a normal browser request
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36'
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'lottery_spider.middlewares.LotterySpiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'lottery_spider.middlewares.LotterySpiderDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {    # uncommented so the pipeline in pipelines.py actually runs
   'lottery_spider.pipelines.LotterySpiderPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

This file is the runtime configuration for the entire crawler project.
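As a side note, if you only want to change a setting for this one spider rather than the whole project, Scrapy also supports a custom_settings dict on the spider class; a minimal sketch:

import scrapy

class LotterySpider(scrapy.Spider):
    name = 'lottery'
    custom_settings = {
        'DOWNLOAD_DELAY': 2,  # overrides the project-wide value for this spider only
    }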
5. start.py

from scrapy import cmdline

cmdline.execute("scrapy crawl lottery".split())
#等價(jià)于 ↓
# cmdline.execute(["scrapy","crawl","xiaoshuo"])

This file is created by hand; with it in place, you can run the project directly (e.g., from an IDE) instead of typing the crawl command in cmd.
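An equivalent approach, if you prefer Scrapy's Python API to shelling out through cmdline, is CrawlerProcess; a minimal sketch:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())  # loads settings.py
process.crawl('lottery')   # the spider's name attribute
process.start()            # blocks until the crawl finishes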
