I. Requirement scenario:
1. The business frequently buys or swaps out HTTPS proxies.
2. Purchased proxies often turn out to be in the wrong region or simply unusable, so every batch has to be verified before it can be delivered.
3. At first this was all done by hand: fill the proxy into Proxifier --> clear the cache --> visit an IP-lookup site. On busy days nearly a thousand proxies had to be tested this way.
II. The idea:
Use a Python crawler, routed through each proxy in turn, to fetch an IP-lookup page and print the result for every working proxy (with multithreading to speed up the checks).
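The core mechanism is the per-scheme proxy mapping that the requests library accepts: each `host:port` line from the proxy file is turned into a dict that routes HTTPS traffic through that proxy. A minimal sketch of that conversion (the helper name `build_proxies` is mine, not from the original script):

```python
def build_proxies(line):
    """Turn one 'host:port' line from the proxy file into the
    scheme -> proxy-URL mapping that requests.get(proxies=...) expects."""
    proxy = line.strip()  # drop the trailing newline from the file
    return proxy, {"https": "https://" + proxy}


proxy, proxies = build_proxies("203.0.113.7:8080\n")
print(proxies)  # {'https': 'https://203.0.113.7:8080'}
```

The returned dict would then be passed as the `proxies=` argument of `requests.get`.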
III. Implementation:
# -*- coding: utf-8 -*-
import time
import requests
from bs4 import BeautifulSoup
from threading import Thread, Lock

lock = Lock()
ipaddress = 0
country = 0
bad = 0
url = "https://whatismyipaddress.com/"
kv = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 '
                    '(KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36'}


class GetProxiesThread(Thread):
    """One worker thread per proxy to be tested."""

    def __init__(self, url, kv, proxy, proxies):
        self.url = url
        self.kv = kv
        self.proxy = proxy
        self.proxies = proxies
        super(GetProxiesThread, self).__init__()

    def run(self):
        check_proxies(self.url, self.kv, self.proxy, self.proxies)


def check_proxies(url, kv, proxy, proxies):
    global ipaddress, country, bad
    try:
        # timeout added so a dead proxy cannot block its thread forever
        r = requests.get(url, headers=kv, proxies=proxies, timeout=10)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        soup = BeautifulSoup(r.text, 'lxml')
        ipaddress = soup.select('#section_left div a')[0].text
        country = soup.body.table.contents[-2].td.string
        # 'with lock' acquires and releases safely; the original
        # acquire/finally-release pattern could release an unheld lock
        # when the request itself raised before acquire()
        with lock:
            print("{} ==> {}".format(ipaddress, country))
    except Exception as reason:
        with lock:
            bad += 1
            print(reason)
            print("{} is unavailable!".format(proxy))


def main():
    num = 0
    start = time.time()
    with open('proxieslist.txt', 'r') as f:
        threads = []
        for line in f:
            num += 1
            proxy = line.strip()  # drop the trailing newline
            https_proxy = 'https://' + proxy
            proxies = {"https": https_proxy}
            t = GetProxiesThread(url, kv, proxy, proxies)
            threads.append(t)
            t.start()
            time.sleep(0.6)  # stagger thread starts to throttle requests
        for t in threads:
            t.join()
    print("The total amount: %s" % num)
    print("Not available quantity: %s" % bad)
    print("Elapsed time: %s" % (time.time() - start))


if __name__ == '__main__':
    main()
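As a variant, the hand-rolled Thread subclass plus the `time.sleep(0.6)` start delay can be replaced with a bounded pool from `concurrent.futures`, which caps the number of in-flight checks without per-thread throttling. A sketch with a stubbed-out check function (the stub stands in for the real `requests.get` call; names here are illustrative, not from the original):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def check(proxy):
    # stub: the real version would call
    # requests.get(url, proxies={"https": "https://" + proxy}, timeout=10)
    return proxy, "OK"


def check_all(proxy_lines, workers=20):
    """Run check() over all proxies with at most `workers` concurrent checks."""
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(check, p.strip()) for p in proxy_lines]
        for fut in as_completed(futures):  # collect in completion order
            results.append(fut.result())
    return results


print(check_all(["198.51.100.1:3128", "198.51.100.2:3128"]))
```

The pool also removes the need for manual `join()` bookkeeping, since the `with` block waits for all submitted work to finish.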
IV. Remaining problems:
1. A feature to write the results to a text file could be added.
2. The run occasionally stalled partway through. A `requests.get` call with no timeout will block indefinitely on a dead proxy, which is the most likely cause.
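The first point can reuse the same lock that already guards printing: have each thread append its result line to a file while holding the lock, so concurrent writes do not interleave. A minimal sketch (the file name `results.txt` and helper name `record` are assumptions, not from the original):

```python
import threading

lock = threading.Lock()


def record(path, proxy, status):
    # serialize writes so lines from concurrent threads never interleave
    with lock:
        with open(path, "a", encoding="utf-8") as f:
            f.write("{}\t{}\n".format(proxy, status))


record("results.txt", "203.0.113.7:8080", "United States ==> OK")
```

Opening in append mode per call is simple and crash-safe; for very large runs, a single shared file handle flushed under the lock would avoid repeated opens.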