This article shows how to implement a simple web crawler in Python that scrapes novels from a listing site. Many readers may not be familiar with the technique yet, so the full script is shared here for reference; hopefully you will come away with a solid understanding after reading it. Let's take a look!
The complete script is as follows:
# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
from urllib import request
import re
import os, time

# Fetch a URL and return the raw HTML page
def get_html(url):
    req = request.Request(url)
    req.add_header('User-Agent', 'Mozilla/5.0')
    response = request.urlopen(req)  # pass the Request object so the User-Agent header is actually sent
    html = response.read()
    return html

# From a listing page, return a {book title: link} dict
def get_books(url):
    html = get_html(url)
    soup = BeautifulSoup(html, 'lxml')
    books = soup.find_all('div', attrs={'class': 'bbox'})
    book_dict = {}
    for book in books:
        book_name = book.h4.a.string
        book_url = book.h4.a.get('href')
        book_dict[book_name] = book_url
    return book_dict

# Given a book's link, return a {chapter name: link} dict
def get_parts(url):
    html = get_html(url)
    soup = BeautifulSoup(html, 'lxml')
    part_urls = soup.find_all('a')
    host = "http://www.xiaoshuotxt.org"
    part_dict = {}
    for p in part_urls:
        p_url = str(p.get('href'))
        # chapter pages look like /12345.html; skip absolute links back to the site itself
        if re.search(r'\d{5}\.html', p_url) and ("xiaoshuotxt" not in p_url):
            part_dict[p.string] = host + p_url
    return part_dict

# Given a chapter URL, return the chapter's body text
def get_txt(url):
    html = get_html(url)
    soup = BeautifulSoup(html, 'lxml')
    title = soup.h2.string  # chapter title
    content = soup.find('div', attrs={'class': 'zw'})
    txt = content.get_text()  # body text
    return txt

if __name__ == "__main__":
    root_dir = r'e:\books'
    #url = 'http://www.xiaoshuotxt.org/mingzhu/index_2.html'  # page 2 of the novel listing
    url = "http://www.xiaoshuotxt.org/writer/58"  # Jin Yong's novels
    books = get_books(url)
    for book_name, book_url in books.items():
        # create one folder per book (and root_dir itself if needed)
        os.makedirs(os.path.join(root_dir, book_name), exist_ok=True)
        part_dict = get_parts(book_url)
        print(book_name, "total:", len(part_dict), "chapters")
        for part_name, part_url in part_dict.items():
            print("Saving:", part_name)
            # create the chapter file with utf-8 encoding
            f1 = open(os.path.join(root_dir, book_name, '%s.txt' % part_name), 'w', encoding='utf-8')
            part_txt = get_txt(part_url)
            f1.write(str(part_txt))
            f1.close()
            time.sleep(2)  # throttle requests to be polite to the server
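One caveat worth noting: chapter titles scraped from the page can contain characters that Windows forbids in filenames (such as ?, *, or :), which would make the open() call above raise an OSError. A minimal sanitizing helper is sketched below; the name safe_filename and the underscore replacement are assumptions for illustration, not part of the original script.

import re

# Hypothetical helper (not in the original script): replace characters that
# Windows forbids in filenames with an underscore before using a chapter
# title as a file name.
def safe_filename(name):
    return re.sub(r'[\\/:*?"<>|]', '_', name)

With such a helper in place, the save loop could build the path from safe_filename(part_name) instead of the raw title.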
Running result: the script creates one folder per book under e:\books, prints each book's chapter count, and prints each chapter name as it is saved.
That is the whole of "How to implement a novel-scraping crawler in Python". Thanks for reading! Hopefully you now have a working understanding; if you want to learn more, follow the 億速云 industry news channel!