1. Preface:
It's 1024 Programmer's Day worldwide, no overtime, and I had time on my hands... so I dashed off a little blog visit-count crawler for fun; pushing visits past ten thousand is no trouble at all! Every step is clearly commented, and the code is for learning and reference only!
---- Nick.Peng
2. Environment:
Python 3.x
Required modules: requests, json, lxml, urllib, bs4, fake_useragent (json and urllib ship with the standard library)
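If the third-party modules are missing, they can usually be installed in one go; the package names below are the usual PyPI ones (note that bs4 is published as beautifulsoup4):

pip install requests lxml beautifulsoup4 fake-useragent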
3. Code to increase blog visits:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Author: Nick
# @Date: 2019-10-24 15:40:58
# @Last Modified by: Nick
# @Last Modified time: 2019-10-24 16:54:31

import random
import re
import time
import urllib.request

import requests
from bs4 import BeautifulSoup
from fake_useragent import UserAgent

try:
    from lxml import etree
except Exception:
    import lxml.html
    # Instantiate an etree object (fallback when `from lxml import etree` fails)
    etree = lxml.html.etree

# Instantiate a UserAgent object to generate random User-Agent headers
ua = UserAgent()


class BlogSpider(object):
    """
    Increase the number of CSDN blog visits.
    """

    def __init__(self):
        self.url = "https://blog.csdn.net/PY0312/article/list/{}"
        self.headers = {
            "Referer": "https://blog.csdn.net/PY0312/",
            "User-Agent": ua.random
        }
        self.firefoxHead = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:61.0) "
                          "Gecko/20100101 Firefox/61.0"}
        # Regex matching a dotted-quad IPv4 address
        self.IPRegular = r"(([1-9]?\d|1\d{2}|2[0-4]\d|25[0-5])\.){3}([1-9]?\d|1\d{2}|2[0-4]\d|25[0-5])"

    def send_request(self, num):
        """
        Request one list page, pretending to be a browser.
        :param num: page number
        :return: html_str
        """
        html_str = requests.get(self.url.format(num),
                                headers=self.headers).content.decode()
        return html_str

    def parse_data(self, html_str):
        """
        Parse the response of a list-page request.
        :param html_str: HTML of one list page
        :return: each_page_urls
        """
        # Turn the HTML string into an element object so we can run XPath on it
        element_obj = etree.HTML(html_str)
        # Collect the URLs of every blog post on this page
        each_page_urls = element_obj.xpath(
            '//*[@id="mainBox"]/main/div[2]/div/h5/a/@href')
        return each_page_urls

    def parseIPList(self, url="http://www.xicidaili.com/"):
        """
        Scrape fresh proxy IPs (source: xicidaili).
        Note: xicidaili bans aggressively. If your IP gets blocked, either of these works:
        Method 1: see my previous post 《Python 實(shí)現(xiàn)快代理IP爬蟲》 and wire in that
                  interface instead (for readers who like to dig in)
        Method 2: simply skip this interface; the spider also works without proxies
        :param url: "http://www.xicidaili.com/"
        :return: list of proxy IPs
        """
        ips = []
        request = urllib.request.Request(url, headers=self.firefoxHead)
        response = urllib.request.urlopen(request)
        soup = BeautifulSoup(response, "lxml")
        tds = soup.find_all("td")
        for td in tds:
            string = str(td.string)
            if re.search(self.IPRegular, string):
                ips.append(string)
        return ips

    def main(self, total_page, loop_times, each_num):
        """
        Scheduler.
        :param total_page: total number of list pages on the blog
        :param loop_times: how many passes to make over all pages
        :param each_num: how many posts to pick at random on each page
        :return:
        """
        i = 0
        # Loop as many times as configured
        while i < loop_times:
            # Walk through every list page
            for j in range(total_page):
                # Build the page URL, send the request, and get the response
                html_str = self.send_request(j + 1)
                # Parse the response into the URLs of all posts on that page
                each_page_urls = self.parse_data(html_str)
                # Optionally call parseIPList for a random proxy IP (anti-crawl evasion)
                # ips = self.parseIPList()
                # proxies = {"http": "{}:8080".format(
                #     ips[random.randint(0, 40)])}
                # Visit each_num randomly chosen posts on this page
                for x in range(each_num):
                    # Pick one post at random (helps dodge anti-crawl checks)
                    current_url = random.choice(each_page_urls)
                    status = True if requests.get(
                        current_url, headers=self.headers).content.decode() else False
                    print("Now visiting: {}, status: {}".format(current_url, status))
                    time.sleep(1)  # wait 1 second between posts (anti-crawl)
                time.sleep(1)  # wait 1 second between pages (anti-crawl)
            i += 1


if __name__ == '__main__':
    bs = BlogSpider()
    bs.main(7, 200, 3)  # see the main() docstring for the parameters; adjust as needed
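The proxy lines in main() are left commented out. For readers who do want to route requests through the scraped IPs, a minimal sketch along the lines of those comments might look like this; the 8080 port and the single proxy source are assumptions carried over from the commented-out code, and free proxy lists fail often, hence the try/except:

import random

import requests

# Minimal sketch (assumption): visit one page through a scraped proxy.
# Assumes the BlogSpider class from the listing above is in scope.
bs = BlogSpider()
ips = bs.parseIPList()  # may be empty if the proxy site blocks us
proxy_ip = random.choice(ips)
proxies = {
    "http": "http://{}:8080".format(proxy_ip),   # port 8080 as in the commented-out code
    "https": "http://{}:8080".format(proxy_ip),
}

try:
    resp = requests.get("https://blog.csdn.net/PY0312/",
                        headers=bs.headers, proxies=proxies, timeout=10)
    print("status:", resp.status_code)
except requests.RequestException as exc:
    print("proxy failed:", exc)

Note that requests only applies a proxy whose scheme key matches the target URL, which is why both "http" and "https" keys are set here, whereas the commented-out code set only "http".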
That's all for this article. I hope it helps with your studies, and I hope you'll keep supporting 億速云.