
How to Save Scraped Douban Movie Data to MongoDB with Python (Scrapy)

Published: 2020-08-30 21:00:45  Source: 腳本之家  Views: 151  Author: silence-cc  Category: Development

Create the crawler project douban

scrapy startproject douban
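The command above generates the standard Scrapy project skeleton (the layout below is what Scrapy typically creates; only the files edited in the following steps matter for this walkthrough, and the spider file is usually produced with `scrapy genspider doubanmovies movie.douban.com`):

```
douban/
├── scrapy.cfg                # deploy configuration
└── douban/
    ├── __init__.py
    ├── items.py              # data fields (edited below)
    ├── middlewares.py
    ├── pipelines.py          # MongoDB pipeline (edited below)
    ├── settings.py           # project settings (edited below)
    └── spiders/
        ├── __init__.py
        └── doubanmovies.py   # the spider (edited below)
```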

Set up items.py, defining the fields to store for each movie

# -*- coding: utf-8 -*-
import scrapy


class DoubanItem(scrapy.Item):
    # movie title
    title = scrapy.Field()
    # description (director, cast, year, genre)
    content = scrapy.Field()
    # rating
    rating_num = scrapy.Field()
    # one-line quote
    quote = scrapy.Field()

Set up the spider file doubanmovies.py

# -*- coding: utf-8 -*-
import scrapy
from douban.items import DoubanItem


class DoubanmoviesSpider(scrapy.Spider):
    name = 'doubanmovies'
    allowed_domains = ['movie.douban.com']
    offset = 0
    url = 'https://movie.douban.com/top250?start='
    start_urls = [url + str(offset)]

    def parse(self, response):
        info = response.xpath("//div[@class='info']")
        for each in info:
            item = DoubanItem()  # a fresh item per movie
            # .extract() returns a list of strings; use .extract_first()
            # if you prefer a plain string per field
            item['title'] = each.xpath(".//span[@class='title'][1]/text()").extract()
            item['content'] = each.xpath(".//div[@class='bd']/p[1]/text()").extract()
            item['rating_num'] = each.xpath(".//span[@class='rating_num']/text()").extract()
            item['quote'] = each.xpath(".//span[@class='inq']/text()").extract()
            yield item
        # page through the Top 250: start=0, 25, ..., 225 (10 pages of 25);
        # `< 250` stops after start=225, avoiding a request for an empty page
        self.offset += 25
        if self.offset < 250:
            yield scrapy.Request(self.url + str(self.offset), callback=self.parse)
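The pagination in parse() can be traced with plain Python (no Scrapy needed): the offset advances by 25 per callback, so the spider walks ten pages, start=0 through start=225, which covers all 250 entries. Note that a stopping condition of `offset <= 250` would request one extra, empty page at start=250; the sketch below stops before that.

```python
# Plain-Python trace of the spider's pagination (no network access).
# Douban's Top 250 is paged 25 entries per screen at ?start=0, 25, ..., 225.
BASE = 'https://movie.douban.com/top250?start='

def page_urls(step=25, total=250):
    """Return the page URLs in the order the spider requests them."""
    offset = 0
    urls = [BASE + str(offset)]   # the initial request from start_urls
    while True:
        offset += step            # what parse() does after each page
        if offset >= total:       # stop once every entry is covered
            break
        urls.append(BASE + str(offset))
    return urls

urls = page_urls()
# 10 URLs covering ranks 1-250; the last one ends in start=225
```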

Set up the pipeline file, using a MongoDB database to save the scraped data. This is the key part.

# -*- coding: utf-8 -*-
import pymongo
# scrapy.conf was removed in newer Scrapy versions; read the project
# settings through scrapy.utils.project instead
from scrapy.utils.project import get_project_settings

settings = get_project_settings()


class DoubanPipeline(object):
    def __init__(self):
        # host and port come from settings.py; they could also be
        # written here directly
        self.host = settings['MONGODB_HOST']
        self.port = settings['MONGODB_PORT']
        # connect once when the pipeline is created, not once per item
        self.client = pymongo.MongoClient(self.host, self.port)
        # database 'douban' (created lazily on first write)
        self.mydb = self.client['douban']
        # collection 'doubanmovies' inside database 'douban'
        self.mysheet = self.mydb['doubanmovies']

    def process_item(self, item, spider):
        # convert the dict-like Item into a plain Python dict
        content = dict(item)
        # insert the document into the collection
        self.mysheet.insert_one(content)
        return item
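The dict(item) step exists because pymongo stores plain mappings, while a Scrapy Item is only dict-like. A minimal stand-in class (hypothetical, for illustration only; neither Scrapy nor pymongo is imported here) shows the conversion:

```python
# FakeItem stands in for a Scrapy Item: both behave as dict-like mappings.
class FakeItem(dict):
    pass

item = FakeItem(title=['肖申克的救贖'], rating_num=['9.7'])
content = dict(item)   # what process_item() does before inserting
# content is now a plain dict, the shape pymongo's insert_one() accepts;
# each value is a list because the spider used .extract()
```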

Set up the settings.py file

# -*- coding: utf-8 -*-
BOT_NAME = 'douban'

SPIDER_MODULES = ['douban.spiders']
NEWSPIDER_MODULE = 'douban.spiders'

USER_AGENT = 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)'

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'douban.pipelines.DoubanPipeline': 300,
}

# MongoDB connection settings read by the pipeline
MONGODB_HOST = '127.0.0.1'
MONGODB_PORT = 27017

Test from the terminal

scrapy crawl douban
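Once the crawl finishes, the stored documents can be checked from the mongo shell (this assumes a local mongod on the default port, matching the host and port configured above):

```
$ mongo
> use douban
> db.doubanmovies.find().limit(2).pretty()
```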


That is all for this article. I hope it helps with your learning, and thank you for supporting 億速云.
