Let me share a Python crawler case study: how to scrape the requirements out of job postings. Most readers are probably not very familiar with this yet, so I'm sharing this article as a reference; I hope you get a lot out of it. Let's take a look together!
The overall flow:
1. Take the pid of each job out of the records already stored in MongoDB
2. Build the detail-page URL from the pid => detail_url, fetched with requests.get. To survive crashes, any record whose detail has already been crawled is skipped, so when the crawler dies you can simply restart it
3. Fetch the page HTML from detail_url => requests -> html, parsed with BeautifulSoup
If you crawl too fast you get blocked, and all you can do is wait out the ban:
if html.status_code != 200:
    print('status_code is {}'.format(html.status_code))
4. Build a soup from the html => soup
5. Extract the target elements from the soup => the job details
6. Save the data to MongoDB
Code:
# @author: limingxuan
# @contact: limx2011@hotmail.com
# @blog: https://www.jianshu.com/p/a5907362ba72
# @time: 2018-07-21
import time

import requests
from bs4 import BeautifulSoup
from pymongo import MongoClient

headers = {
    'accept': "application/json, text/javascript, */*; q=0.01",
    'accept-encoding': "gzip, deflate, br",
    'accept-language': "zh-CN,zh;q=0.9,en;q=0.8",
    'content-type': "application/x-www-form-urlencoded; charset=UTF-8",
    'cookie': "JSESSIONID=; __c=1530137184; sid=sem_pz_bdpc_dasou_title; __g=sem_pz_bdpc_dasou_title; __l=r=https%3A%2F%2Fwww.zhipin.com%2Fgongsi%2F5189f3fadb73e42f1HN40t8~.html&l=%2Fwww.zhipin.com%2Fgongsir%2F5189f3fadb73e42f1HN40t8~.html%3Fka%3Dcompany-jobs&g=%2Fwww.zhipin.com%2F%3Fsid%3Dsem_pz_bdpc_dasou_title; Hm_lvt_194df3105ad7148dcf2b98a91b5e727a=1531150234,1531231870,1531573701,1531741316; lastCity=101010100; toUrl=https%3A%2F%2Fwww.zhipin.com%2Fjob_detail%2F%3Fquery%3Dpython%26scity%3D101010100; Hm_lpvt_194df3105ad7148dcf2b98a91b5e727a=1531743361; __a=26651524.1530136298.1530136298.1530137184.286.2.285.199",
    'origin': "https://www.zhipin.com",
    'referer': "https://www.zhipin.com/job_detail/?query=python&scity=101010100",
    'user-agent': "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36"
}

conn = MongoClient('127.0.0.1', 27017)
db = conn.zhipin_jobs

def init():
    items = db.Python_jobs.find().sort('pid')
    for item in items:
        # if the crawler died mid-run, skip records already crawled
        # (the original checked for a misspelled 'detial' key, which broke the skip logic)
        if 'detail' in item:
            continue
        # str.format() builds the detail-page URL from the stored pid
        detail_url = 'https://www.zhipin.com/job_detail/{}.html'.format(item['pid'])
        print(detail_url)
        # requests.get returns a Response object
        html = requests.get(detail_url, headers=headers)
        if html.status_code != 200:
            # most likely throttled or banned; stop and wait for the ban to lift
            print('status_code is {}'.format(html.status_code))
            break
        # soup holds the full parsed document ('html.parser' is the parser)
        soup = BeautifulSoup(html.text, 'html.parser')
        job = soup.select('.job-sec .text')
        print(job)  # debug: raw list of matched tags
        if len(job) < 1:
            continue
        item['detail'] = job[0].text.strip()  # job description
        location = soup.select(".job-sec .job-location .location-address")
        if location:  # guard against pages without an address block
            item['location'] = location[0].text.strip()  # work location
        item['updated_at'] = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())  # crawl time
        res = save(item)  # persist the updated document
        print(res)
        time.sleep(40)  # throttle: crawling too fast got the IP banned for 24 hours

# save the data back to MongoDB: $set merges the new detail/location/updated_at
# fields into the existing document matched by _id
def save(item):
    return db.Python_jobs.update_one({'_id': item['_id']}, {'$set': item})

if __name__ == '__main__':
    init()
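One gap worth flagging: the script above assumes db.Python_jobs is already populated with documents carrying a pid, presumably by an earlier listing-page crawl that this write-up does not show. Purely as an illustration, here is a minimal sketch of what that seeding stage could look like. The listing URL matches the referer header used above, but the '.job-list li' selector and the 'data-jobid' attribute are my assumptions about the 2018 page layout, not something taken from the original code.

# Hypothetical first-stage seeder (not from the original article): collect
# pids from a zhipin listing page and upsert them into MongoDB so that
# init() above has records to work through.
import requests
from bs4 import BeautifulSoup
from pymongo import MongoClient

db = MongoClient('127.0.0.1', 27017).zhipin_jobs

def seed_pids():
    # same listing page as the referer header in the main script
    list_url = 'https://www.zhipin.com/job_detail/?query=python&scity=101010100'
    html = requests.get(list_url, headers={'user-agent': 'Mozilla/5.0'})
    soup = BeautifulSoup(html.text, 'html.parser')
    # ASSUMPTION: each result <li> exposes its job id via a data-jobid attribute
    for li in soup.select('.job-list li'):
        pid = li.get('data-jobid')
        if pid:
            # upsert keyed on pid, so re-running the seeder never duplicates a record
            db.Python_jobs.update_one({'pid': pid}, {'$set': {'pid': pid}}, upsert=True)

if __name__ == '__main__':
    seed_pids()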
The end result: in MongoBooster you can see the newly added detail and location fields on each document.
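If you don't have MongoBooster handy, the same check can be done straight from pymongo; the snippet below is just a convenience sketch using standard pymongo calls.

# Verify the crawl from Python instead of MongoBooster: count the documents
# that now carry a detail field and print one sample.
from pymongo import MongoClient

db = MongoClient('127.0.0.1', 27017).zhipin_jobs
done = db.Python_jobs.count_documents({'detail': {'$exists': True}})
print('documents with detail:', done)
print(db.Python_jobs.find_one({'detail': {'$exists': True}},
                              {'detail': 1, 'location': 1, 'updated_at': 1}))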
That's everything on this Python crawler case study for scraping job requirements. Thanks for reading! Hopefully you now have a good grasp of it and the walkthrough proves useful. If you'd like to learn more, follow the Yisu Cloud industry news channel!