The resources on jb51 are fairly complete, so I decided to use Python to collect the listing information automatically and download the files.
Python ships with rich and powerful libraries; with just urllib, re, and a few others you can put together a working web scraper.
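As a minimal illustration of the idea (the HTML fragment below is a made-up stand-in, not a real jb51 page), `re.search` with a named group can pull a single field out of fetched HTML:

```python
import re

# Hypothetical HTML fragment standing in for a downloaded detail page
html = '<h2 itemprop="name">Learning Python</h2><span itemprop="fileSize">27.2MB</span>'

def extract(pattern, text):
    """Return the regex's named group 'value' if it matches, else None."""
    m = re.search(pattern, text)
    return m.group('value') if m else None

title = extract(r'<h2\s+itemprop="name">(?P<value>.+?)</h2>', html)
size = extract(r'<span\s+itemprop="fileSize">(?P<value>.+?)</span>', html)
print(title, size)  # -> Learning Python 27.2MB
```

The non-greedy `.+?` stops at the first closing tag, which is what makes this kind of quick-and-dirty HTML matching workable.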
Below is an example script I wrote that collects every ebook resource in a particular column of a technical site and downloads them to local storage.
A screenshot of the script running:
While the script runs it not only prints information to the shell window but also writes a log to a txt file, recording each collected page URL plus the book's title, size, the server's local download URL, and the Baidu Netdisk download URL.
實(shí)例采集并下載億速云的python欄目電子書(shū)資源:
# -*- coding: utf-8 -*-
# Python 2 script: collect ebook detail pages from jb51 and download the archives.
import re
import urllib2
import urllib
import sys
import os

reload(sys)
sys.setdefaultencoding('utf-8')


def getHtml(url):
    request = urllib2.Request(url)
    page = urllib2.urlopen(request)
    htmlcontent = page.read()
    # Re-encode from GBK to UTF-8 to avoid garbled Chinese text
    htmlcontent = htmlcontent.decode('gbk', 'ignore').encode('utf8', 'ignore')
    return htmlcontent


def report(count, blockSize, totalSize):
    # Progress hook for urlretrieve: rewrite the current line with percent done
    percent = int(count * blockSize * 100 / totalSize)
    sys.stdout.write("\r%d%%" % percent + ' complete')
    sys.stdout.flush()


def getBookInfo(url):
    htmlcontent = getHtml(url)
    #print "htmlcontent=", htmlcontent  # you should see the output html

    # e.g. <h2 itemprop="name">...</h2>
    regex_title = '<h2\s+?itemprop="name">(?P<title>.+?)</h2>'
    title = re.search(regex_title, htmlcontent)
    if title:
        title = title.group("title")
        print "Book title:", title
        file_object.write('Book title: ' + title + '\r')

    # <li>Size: <span itemprop="fileSize">27.2MB</span></li>
    filesize = re.search('<span\s+?itemprop="fileSize">(?P<filesize>.+?)</span>', htmlcontent)
    if filesize:
        filesize = filesize.group("filesize")
        print "File size:", filesize
        file_object.write('File size: ' + filesize + '\r')

    # <div class="picthumb"><a href="..." target="_blank">
    bookimg = re.search('<div\s+?class="picthumb"><a href="(?P<bookimg>.+?)" rel="external nofollow" target="_blank"', htmlcontent)
    if bookimg:
        bookimg = bookimg.group("bookimg")
        print "Cover image:", bookimg
        file_object.write('Cover image: ' + bookimg + '\r')

    # The Chinese anchor text must match the page's literal link text
    downurl1 = re.search('<li><a href="(?P<downurl1>.+?)" rel="external nofollow" target="_blank">酷云中国电信下载</a></li>', htmlcontent)
    if downurl1:
        downurl1 = downurl1.group("downurl1")
        print "Download URL 1:", downurl1
        file_object.write('Download URL 1: ' + downurl1 + '\r')
        sys.stdout.write('\rFetching ' + title + '...\n')
        # Strip characters that are unsafe in file names
        title = title.replace(' ', '')
        title = title.replace('/', '')
        saveFile = '/Users/superl/Desktop/pythonbook/' + title + '.rar'
        if os.path.exists(saveFile):
            print "File already downloaded!"
        else:
            urllib.urlretrieve(downurl1, saveFile, reporthook=report)
            sys.stdout.write("\rDownload complete, saved as %s" % saveFile + '\n\n')
            sys.stdout.flush()
            file_object.write('File downloaded successfully!\r')
    else:
        print "Download URL 1 not found"
        file_error.write(url + '\r')
        file_error.write(title + ": download URL 1 not found; file was not downloaded!\r")
        file_error.write('\r')

    # <li><a href="..." target="_blank">百度网盘下载2</a></li>
    downurl2 = re.search('</a></li><li><a href="(?P<downurl2>.+?)" rel="external nofollow" target="_blank">百度网盘下载2</a></li>', htmlcontent)
    if downurl2:
        downurl2 = downurl2.group("downurl2")
        print "Download URL 2:", downurl2
        file_object.write('Download URL 2: ' + downurl2 + '\r')
    else:
        #file_error.write(url + '\r')
        print "Download URL 2 not found"
        file_error.write(title + ": download URL 2 not found\r")
        file_error.write('\r')
    file_object.write('\r')
    print "\n"


def getBooksUrl(url):
    htmlcontent = getHtml(url)
    # <a href="/books/438381.html" rel="external nofollow" class="tit">...
    urls = re.findall('<a href="(?P<urls>.+?)" rel="external nofollow" class="tit"', htmlcontent)
    for url in urls:
        url = "http://www.jb51.net" + url
        print url + "\n"
        file_object.write(url + '\r')
        getBookInfo(url)


if __name__ == "__main__":
    file_object = open('/Users/superl/Desktop/python.txt', 'w+')
    file_error = open('/Users/superl/Desktop/pythonerror.txt', 'w+')
    pagenum = 3
    for pagevalue in range(1, pagenum + 1):
        listurl = "http://www.jb51.net/ books/list476_%d.html" % pagevalue
        print listurl
        file_object.write(listurl + '\r')
        getBooksUrl(listurl)
    file_object.close()
    file_error.close()
Note that I have altered parts of the URLs in the code above.
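The script targets Python 2 (urllib2, print statements, setdefaultencoding). Under Python 3 the progress-reporting download step might look like the following sketch; `urllib.request.urlretrieve` accepts the same three-argument reporthook, and the URL and file name here are placeholders:

```python
import sys
import urllib.request

def report(count, block_size, total_size):
    """Reporthook for urlretrieve: rewrite the current line with percent done."""
    percent = min(int(count * block_size * 100 / total_size), 100)
    sys.stdout.write('\r%d%% complete' % percent)
    sys.stdout.flush()

# Example call, commented out to avoid a real network request:
# urllib.request.urlretrieve('http://example.com/book.rar', 'book.rar', reporthook=report)
```

Capping the percentage at 100 guards against the final callback, where `count * block_size` can overshoot the true total size.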
Summary
The above is an example script for collecting jb51's ebook resources with Python and downloading them automatically. I hope it helps; if you have any questions, leave me a message and I will reply as soon as possible.
免責(zé)聲明:本站發(fā)布的內(nèi)容(圖片、視頻和文字)以原創(chuàng)、轉(zhuǎn)載和分享為主,文章觀點(diǎn)不代表本網(wǎng)站立場(chǎng),如果涉及侵權(quán)請(qǐng)聯(lián)系站長(zhǎng)郵箱:is@yisu.com進(jìn)行舉報(bào),并提供相關(guān)證據(jù),一經(jīng)查實(shí),將立刻刪除涉嫌侵權(quán)內(nèi)容。