This article shares how to crawl classical Chinese poems with Python and store them in a MySQL database. The approach is quite practical, so it is shared here as a reference; follow along and take a look.
The scraper extracts the data with regular expressions and sends requests with the requests library; the full script is below. When writing the records to the database, the first run failed with ERROR 1054 (42S22): Unknown column 'title' in 'field list'. The problem was the SQL statement:

sql = "insert into poem(title,author,content,create_time) values({},{},{},{})".format(title, author, content, crate_time)

It should be written as:

sql = "insert into poem(title,author,content,create_time) values('{}','{}','{}','{}')".format(title, author, content, crate_time)

That is, the interpolated values must be wrapped in single quotes; without them MySQL parses the title value as a column name.
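A cleaner fix, sketched below as a minimal example rather than the article's own code, is a parameterized query: pymysql quotes and escapes each value itself, which avoids the quoting error and also keeps quotes inside a poem from breaking the SQL. The connection settings mirror the script below, and the sample row is made up for illustration.

import datetime

import pymysql

# Assumed connection settings, mirroring the script below.
conn = pymysql.Connect(host="localhost", port=3306, user="root",
                       password="mysql", database="poem_data", charset="utf8")
cursor = conn.cursor()

# %s placeholders: pymysql escapes and quotes each value, so no manual quoting is needed.
sql = "insert into poem(title, author, content, create_time) values(%s, %s, %s, %s)"
create_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
cursor.execute(sql, ("静夜思", "李白.唐代", "床前明月光,疑是地上霜。", create_time))  # hypothetical sample row
conn.commit()
cursor.close()
conn.close()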
import datetime
import re

import pymysql
import requests

url = "https://www.gushiwen.org/"
headers = {
    'User-Agent': "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50"}


class Spiderpoem(object):
    conn = pymysql.Connect(host="localhost", port=3306, user="root", password='mysql',
                           database='poem_data', charset="utf8")
    cs1 = conn.cursor()

    def get_requests(self, url, headers=None):
        """Send the request."""
        resp = requests.get(url, headers=headers)
        if resp.status_code == 200:
            # print(resp.request.headers)
            return resp.text
        return None

    def get_parse(self, response):
        """Parse the page with regular expressions."""
        re_data = {
            "title": r'<div\sclass="sons">.*?<b>(.*?)</b>.*?</div>',
            "author": r'<p>.*?class="source">.*?<a.*?>(.*?)</a>.*?<a.*?>(.*?)</a>.*?</p>',
            "content": r'<div\sclass="contson".*?>(.*?)</div>'
        }
        titles = self.reg_con(re_data["title"], response)
        authors = self.reg_con(re_data["author"], response)
        poems_list = self.reg_con(re_data["content"], response)
        contents = list()
        for item in poems_list:
            # strip leftover tags and whitespace inside the poem body
            ite = re.sub(r'<.*?>|\s', "", item)
            contents.append(ite.strip())
        for value in zip(titles, authors, contents):
            title, author, content = value
            author = "".join([author[0], '.', author[1]])  # "poet.dynasty"
            poem = {
                "title": title,
                "author": author,
                "content": content
            }
            yield poem

    def reg_con(self, params, response):
        """Run a regex over the response and return all matches."""
        if not response:
            return "request error"
        param = re.compile(params, re.DOTALL)  # re.DOTALL lets . match newlines, same as re.S
        result = re.findall(param, response)
        return result

    @classmethod
    def save_data(cls, poem):
        title = poem.get("title")
        author = poem.get("author")
        content = poem.get("content")
        crate_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        sql = "insert into poem(title,author,content,create_time) values('{}','{}','{}','{}')".format(
            title, author, content, crate_time)
        count = cls.cs1.execute(sql)
        print(count)
        cls.conn.commit()

    def main(self):
        resp = self.get_requests(url, headers)
        for it in self.get_parse(resp):
            self.save_data(it)
        self.cs1.close()
        self.conn.close()


if __name__ == '__main__':
    Spiderpoem().main()
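For the insert to work, the poem table must already exist in the poem_data database with the four columns used above. The original article does not show the schema, so the sketch below is an assumption: the column types and lengths are guesses that merely fit the data the spider produces.

import pymysql

conn = pymysql.Connect(host="localhost", port=3306, user="root",
                       password="mysql", database="poem_data", charset="utf8")
with conn.cursor() as cursor:
    # Hypothetical schema: an auto-increment id plus the four columns the INSERT uses.
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS poem (
            id INT PRIMARY KEY AUTO_INCREMENT,
            title VARCHAR(255),
            author VARCHAR(255),
            content TEXT,
            create_time DATETIME
        ) DEFAULT CHARSET = utf8
    """)
conn.commit()
conn.close()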
Thanks for reading! That is all for "how to crawl classical Chinese poems with Python and store them in MySQL". Hopefully the content above is helpful and teaches you something new; if you found the article useful, share it so more people can see it.