Many newcomers are unclear about how to recognize CAPTCHAs easily with Serverless. To help with that, this article walks through the topic in detail; anyone with this need can learn from it, and hopefully get something out of it.
Preface
The Serverless concept has attracted wide attention since it was first proposed, and in recent years it has shown unprecedented vitality: engineers in every field are trying to combine the Serverless architecture with their own work to capture the "technical dividend" it brings.

CAPTCHA stands for "Completely Automated Public Turing test to tell Computers and Humans Apart": a fully automated public program that distinguishes human users from machines. It helps prevent password brute-forcing, ticket scalping, and forum spam, and effectively stops an attacker from repeatedly attempting to log in to a particular account with an automated program. CAPTCHAs are now standard on many websites, and the mechanism is fairly simple to implement. The challenge is generated and graded by a computer, but only a human is supposed to be able to answer it, so a user who answers correctly can be assumed to be human. Put plainly, a CAPTCHA is a "code" used to verify whether the visitor is a person or a machine.

So what sparks fly when CAPTCHA recognition, a classic problem in artificial intelligence, meets the Serverless architecture? This article implements CAPTCHA recognition using a Serverless architecture and a convolutional neural network (CNN).
A Brief Look at CAPTCHAs
CAPTCHAs have evolved very quickly: from plain digit CAPTCHAs, to digits plus letters, to digits, letters, and Chinese characters, and on to image-based CAPTCHAs, the raw material keeps growing. Their interaction styles vary just as widely: typing, clicking, dragging, SMS codes, voice codes, and more.

Bilibili's login CAPTCHA alone includes several modes, for example sliding a puzzle piece into place:

Or clicking characters in a given order:

Baidu Tieba, Zhihu, Google, and other sites each use their own variants, such as selecting the characters that are written upright, selecting images containing a specified object, or clicking the characters in an image in order.

How a CAPTCHA is recognized depends on its type; the simplest case is probably the original text CAPTCHA:

Even text CAPTCHAs differ a lot: plain digits, digits plus letters, words, CAPTCHAs that embed an arithmetic problem, and simple CAPTCHAs hardened with extra noise into complex ones.
CAPTCHA Recognition
1. Simple CAPTCHA Recognition
CAPTCHA recognition is an old research area; put simply, it is the process of turning the characters in an image into text. In recent years, with the growth of big data, crawler engineers fighting anti-scraping measures have pushed the bar for CAPTCHA recognition ever higher. In the era of simple CAPTCHAs, recognition mostly targeted text CAPTCHAs: the image is segmented, each character region is cropped, each crop is matched against templates by similarity to pick the most likely character, and the results are concatenated. For example, take a CAPTCHA:

binarize it and apply similar preprocessing:

then segment it:

and finally recognize each slice and concatenate the results. The advantage of this approach is that recognizing one character at a time is comparatively easy.
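The segment-and-match pipeline described above can be sketched in plain NumPy (a minimal illustration, not the code used later in this article): binarize the grayscale image, then split it wherever a column contains no ink.

```python
import numpy as np

def binarize(gray, threshold=128):
    """Map a grayscale image to 0/1: dark (ink) pixels become 1, background 0."""
    return (gray < threshold).astype(np.uint8)

def segment_columns(binary):
    """Split a binarized CAPTCHA into per-character slices at blank columns."""
    ink = binary.sum(axis=0) > 0  # which columns contain any ink
    segments, start = [], None
    for x, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = x                      # a character region begins
        elif not has_ink and start is not None:
            segments.append(binary[:, start:x])  # region ends: cut it out
            start = None
    if start is not None:
        segments.append(binary[:, start:])
    return segments

# Toy 4x10 "image": two dark blobs (value 0) on a light background (255)
img = np.array([
    [255, 0, 0, 255, 255, 255, 0, 0, 255, 255],
    [255, 0, 0, 255, 255, 255, 0, 0, 255, 255],
    [255, 0, 0, 255, 255, 255, 0, 0, 255, 255],
    [255, 0, 0, 255, 255, 255, 0, 0, 255, 255],
], dtype=np.uint8)
chars = segment_columns(binarize(img))
print(len(chars))  # 2
```

Each slice would then be compared against a template library to pick the most similar character.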
Over time, as such simple CAPTCHAs became too weak to answer the "human or machine" question, CAPTCHAs got a small upgrade: interference lines were added, the glyphs were heavily distorted, and strong color-block noise was introduced, as on the Dynadot website:

With distorted, overlapping glyphs plus interference lines and color blocks, simple segment-and-match struggles to produce good results, while deep learning can still do well.
2. CNN-Based CAPTCHA Recognition
A convolutional neural network (CNN) is a feed-forward neural network whose artificial neurons respond to surrounding units, making it well suited to large-scale image processing. A CNN contains convolutional layers and pooling layers.

As shown in the figure, the left side is a traditional neural network with the basic structure of input layer, hidden layer, and output layer, while the right side is a convolutional neural network built from input, convolutional, pooling, fully connected, and output layers. A CNN is really an extension of the ordinary neural network; structurally, a plain CNN and a plain NN are not fundamentally different (though complex CNNs with special structures diverge considerably). Compared with a traditional network, a CNN greatly reduces the number of parameters, so we can train a better model with fewer parameters, getting twice the result for half the effort, while also effectively avoiding overfitting. Likewise, because filter parameters are shared, a feature can still be detected after the image is shifted; this "translation invariance" makes the model more robust.
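To make the parameter-savings claim concrete, here is a quick back-of-the-envelope comparison for this article's 60×160 grayscale input (the 1024-unit dense layer and 32-filter conv layer are illustrative sizes, chosen to match the layers used later):

```python
# Parameter counts for a 60x160 grayscale input, illustrating why
# weight sharing in a conv layer is so much cheaper than a dense layer.
h, w = 60, 160

# Dense layer: every input pixel connects to every one of 1024 hidden units.
dense_params = h * w * 1024 + 1024   # weights + biases

# Conv layer: 32 filters of size 3x3x1, each reused across the whole image.
conv_params = 3 * 3 * 1 * 32 + 32    # weights + biases

print(dense_params)  # 9831424
print(conv_params)   # 320
```

Roughly four orders of magnitude fewer parameters for the convolutional layer, which is exactly the weight-sharing effect described above.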
1) CAPTCHA Generation
Generating CAPTCHAs is a very important step: these images become our training and test sets, and they also determine what kind of CAPTCHA the final model can recognize.
```python
# coding:utf-8
import random
import numpy as np
from PIL import Image
from captcha.image import ImageCaptcha

# Full character set: digits plus lower- and upper-case letters
CAPTCHA_LIST = [eve for eve in "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"]
CAPTCHA_LEN = 4        # CAPTCHA length
CAPTCHA_HEIGHT = 60    # CAPTCHA height
CAPTCHA_WIDTH = 160    # CAPTCHA width

randomCaptchaText = lambda char=CAPTCHA_LIST, size=CAPTCHA_LEN: "".join(
    [random.choice(char) for _ in range(size)])

def genCaptchaTextImage(width=CAPTCHA_WIDTH, height=CAPTCHA_HEIGHT, save=None):
    image = ImageCaptcha(width=width, height=height)
    captchaText = randomCaptchaText()
    if save:
        image.write(captchaText, './img/%s.jpg' % captchaText)
    return captchaText, np.array(Image.open(image.generate(captchaText)))

print(genCaptchaTextImage(save=True))
```
The code above generates simple digit-and-letter CAPTCHAs:
2) Model Training
The model-training code is as follows (parts of it come from the Internet).
util.py, which collects the shared helper methods:
```python
# -*- coding:utf-8 -*-
import numpy as np
from captcha_gen import genCaptchaTextImage
from captcha_gen import CAPTCHA_LIST, CAPTCHA_LEN, CAPTCHA_HEIGHT, CAPTCHA_WIDTH

# Convert the image to grayscale: 3 channels down to 1
convert2Gray = lambda img: np.mean(img, -1) if len(img.shape) > 2 else img

# Convert a CAPTCHA vector (of character indices) back to text
vec2Text = lambda vec, captcha_list=CAPTCHA_LIST: ''.join([captcha_list[int(v)] for v in vec])

def text2Vec(text, captchaLen=CAPTCHA_LEN, captchaList=CAPTCHA_LIST):
    """Convert CAPTCHA text to a one-hot vector"""
    vector = np.zeros(captchaLen * len(captchaList))
    for i in range(len(text)):
        vector[captchaList.index(text[i]) + i * len(captchaList)] = 1
    return vector

def getNextBatch(batchCount=60, width=CAPTCHA_WIDTH, height=CAPTCHA_HEIGHT):
    """Build a batch of training images"""
    batchX = np.zeros([batchCount, width * height])
    batchY = np.zeros([batchCount, CAPTCHA_LEN * len(CAPTCHA_LIST)])
    for i in range(batchCount):
        text, image = genCaptchaTextImage()
        image = convert2Gray(image)
        # Flatten the image, keeping its label on the matching row
        batchX[i, :] = image.flatten() / 255
        batchY[i, :] = text2Vec(text)
    return batchX, batchY

# print(getNextBatch(batchCount=1))
```
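As a quick sanity check of the one-hot encoding produced by text2Vec, here is a self-contained round trip (the helpers are re-declared with the full character set so the snippet runs standalone; this vec2text variant takes the argmax inside each block rather than a pre-computed index vector):

```python
import numpy as np

CAPTCHA_LIST = list("0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
CAPTCHA_LEN = 4

def text2vec(text):
    # One one-hot block of len(CAPTCHA_LIST) entries per character position
    vec = np.zeros(CAPTCHA_LEN * len(CAPTCHA_LIST))
    for i, ch in enumerate(text):
        vec[i * len(CAPTCHA_LIST) + CAPTCHA_LIST.index(ch)] = 1
    return vec

def vec2text(vec):
    # argmax inside each block recovers the character index
    blocks = vec.reshape(CAPTCHA_LEN, len(CAPTCHA_LIST))
    return "".join(CAPTCHA_LIST[int(i)] for i in blocks.argmax(axis=1))

print(vec2text(text2vec("aB3z")))  # aB3z
```

The label vector therefore has 4 × 62 = 248 entries, which matches the output-layer width of the network below.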
model_train.py performs the training. It defines the model's basic shape: a three-layer convolutional network. The original image is 60×160; after the first convolution it stays 60×160 and the first pooling halves it to 30×80; the second convolution keeps 30×80 and the second pooling gives 15×40; the third convolution keeps 15×40 and the third pooling gives 8×20. After three rounds of convolution and pooling, the original image has become an 8×20 feature map. During training, the project also runs a test every 100 steps and computes the accuracy:
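The shape arithmetic can be verified in a few lines: with SAME padding the convolutions preserve spatial size, and each 2×2 stride-2 max-pool halves each dimension, rounding up:

```python
import math

def pool(n):
    # A 2x2 max-pool with stride 2 and SAME padding: halve, rounding up
    return math.ceil(n / 2)

shape = (60, 160)
for stage in range(3):
    shape = (pool(shape[0]), pool(shape[1]))
    print(stage + 1, shape)
# 1 (30, 80)
# 2 (15, 40)
# 3 (8, 20)
```

So the flattened input to the fully connected layer has 8 × 20 × 64 features.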
```python
# -*- coding:utf-8 -*-
import tensorflow.compat.v1 as tf
from datetime import datetime
from util import getNextBatch
from captcha_gen import CAPTCHA_HEIGHT, CAPTCHA_WIDTH, CAPTCHA_LEN, CAPTCHA_LIST

tf.compat.v1.disable_eager_execution()

variable = lambda shape, alpha=0.01: tf.Variable(alpha * tf.random_normal(shape))
conv2d = lambda x, w: tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
maxPool2x2 = lambda x: tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
optimizeGraph = lambda y, y_conv: tf.train.AdamOptimizer(1e-3).minimize(
    tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=y_conv)))
hDrop = lambda image, weight, bias, keepProb: tf.nn.dropout(
    maxPool2x2(tf.nn.relu(conv2d(image, variable(weight, 0.01)) + variable(bias, 0.1))), keepProb)

def cnnGraph(x, keepProb, size, captchaList=CAPTCHA_LIST, captchaLen=CAPTCHA_LEN):
    """Three-layer convolutional neural network"""
    imageHeight, imageWidth = size
    xImage = tf.reshape(x, shape=[-1, imageHeight, imageWidth, 1])
    hDrop1 = hDrop(xImage, [3, 3, 1, 32], [32], keepProb)
    hDrop2 = hDrop(hDrop1, [3, 3, 32, 64], [64], keepProb)
    hDrop3 = hDrop(hDrop2, [3, 3, 64, 64], [64], keepProb)
    # Fully connected layer
    imageHeight = int(hDrop3.shape[1])
    imageWidth = int(hDrop3.shape[2])
    wFc = variable([imageHeight * imageWidth * 64, 1024], 0.01)  # 64 channels in, 1024 units out
    bFc = variable([1024], 0.1)
    hDrop3Re = tf.reshape(hDrop3, [-1, imageHeight * imageWidth * 64])
    hFc = tf.nn.relu(tf.matmul(hDrop3Re, wFc) + bFc)
    hDropFc = tf.nn.dropout(hFc, keepProb)
    # Output layer
    wOut = variable([1024, len(captchaList) * captchaLen], 0.01)
    bOut = variable([len(captchaList) * captchaLen], 0.1)
    yConv = tf.matmul(hDropFc, wOut) + bOut
    return yConv

def accuracyGraph(y, yConv, width=len(CAPTCHA_LIST), height=CAPTCHA_LEN):
    """Accuracy graph: compare ground-truth labels against predictions"""
    maxPredictIdx = tf.argmax(tf.reshape(yConv, [-1, height, width]), 2)
    maxLabelIdx = tf.argmax(tf.reshape(y, [-1, height, width]), 2)
    correct = tf.equal(maxPredictIdx, maxLabelIdx)  # element-wise equality
    return tf.reduce_mean(tf.cast(correct, tf.float32))

def train(height=CAPTCHA_HEIGHT, width=CAPTCHA_WIDTH, ySize=len(CAPTCHA_LIST) * CAPTCHA_LEN):
    """CNN training loop"""
    accRate = 0.90
    x = tf.placeholder(tf.float32, [None, height * width])
    y = tf.placeholder(tf.float32, [None, ySize])
    keepProb = tf.placeholder(tf.float32)
    yConv = cnnGraph(x, keepProb, (height, width))
    optimizer = optimizeGraph(y, yConv)
    accuracy = accuracyGraph(y, yConv)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())  # initialize all variables
        step = 0  # step counter
        while True:
            batchX, batchY = getNextBatch(64)
            sess.run(optimizer, feed_dict={x: batchX, y: batchY, keepProb: 0.75})
            # Run a test every 100 training steps
            if step % 100 == 0:
                batchXTest, batchYTest = getNextBatch(100)
                acc = sess.run(accuracy, feed_dict={x: batchXTest, y: batchYTest, keepProb: 1.0})
                print(datetime.now().strftime('%c'), ' step:', step, ' accuracy:', acc)
                # Accuracy target reached: save the model
                if acc > accRate:
                    modelPath = "./model/captcha.model"
                    saver.save(sess, modelPath, global_step=step)
                    accRate += 0.01
                    if accRate > 0.90:
                        break
            step = step + 1

train()
```
Once this part is done, we can train the model on a local machine. To speed training up, I set the accRate part of the code to:

    if accRate > 0.90:
        break

That is, once accuracy exceeds 90%, training stops automatically and the model is saved.
Then we can start training:

Training may take quite a while. Once it finishes, we can plot the results to see how accuracy changes as the step count grows:

The horizontal axis is the training step; the vertical axis is accuracy.
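To plot that curve, the (step, accuracy) pairs first have to be pulled out of the training log. A small parser sketch gets them into a form matplotlib can consume (the log lines below mimic the format printed by the training loop, with made-up accuracy values):

```python
import re

# The training loop prints lines like:
#   "Sun Nov  1 12:00:00 2020  step: 100  accuracy: 0.31"
log = """
Sun Nov  1 12:00:00 2020  step: 0  accuracy: 0.02
Sun Nov  1 12:03:00 2020  step: 100  accuracy: 0.31
Sun Nov  1 12:06:00 2020  step: 200  accuracy: 0.58
"""

# Extract every (step, accuracy) pair from the log text
pairs = [(int(s), float(a))
         for s, a in re.findall(r"step:\s*(\d+)\s+accuracy:\s*([\d.]+)", log)]
print(pairs)  # [(0, 0.02), (100, 0.31), (200, 0.58)]
```

Feeding the two columns of `pairs` to `matplotlib.pyplot.plot` then produces the step-versus-accuracy curve shown above.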
3. CAPTCHA Recognition on a Serverless Architecture
Integrating the code above further and packaging it to the Function Compute conventions:
```python
# -*- coding:utf-8 -*-
# Core backend service
import base64
import json
import uuid
import tensorflow as tf
import random
import numpy as np
from PIL import Image
from captcha.image import ImageCaptcha

# Response
class Response:
    def __init__(self, start_response, response, errorCode=None):
        self.start = start_response
        responseBody = {
            'Error': {"Code": errorCode, "Message": response},
        } if errorCode else {
            'Response': response
        }
        # Attach a uuid by default to make later debugging easier
        responseBody['ResponseId'] = str(uuid.uuid1())
        print("Response: ", json.dumps(responseBody))
        self.response = json.dumps(responseBody)

    def __iter__(self):
        status = '200'
        response_headers = [('Content-type', 'application/json; charset=UTF-8')]
        self.start(status, response_headers)
        yield self.response.encode("utf-8")

CAPTCHA_LIST = [eve for eve in "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"]
CAPTCHA_LEN = 4        # CAPTCHA length
CAPTCHA_HEIGHT = 60    # CAPTCHA height
CAPTCHA_WIDTH = 160    # CAPTCHA width

# Random string
randomStr = lambda num=5: "".join(random.sample('abcdefghijklmnopqrstuvwxyz', num))
randomCaptchaText = lambda char=CAPTCHA_LIST, size=CAPTCHA_LEN: "".join(
    [random.choice(char) for _ in range(size)])
# Convert the image to grayscale: 3 channels down to 1
convert2Gray = lambda img: np.mean(img, -1) if len(img.shape) > 2 else img
# Convert a CAPTCHA vector (of character indices) back to text
vec2Text = lambda vec, captcha_list=CAPTCHA_LIST: ''.join([captcha_list[int(v)] for v in vec])

variable = lambda shape, alpha=0.01: tf.Variable(alpha * tf.random_normal(shape))
conv2d = lambda x, w: tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
maxPool2x2 = lambda x: tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
optimizeGraph = lambda y, y_conv: tf.train.AdamOptimizer(1e-3).minimize(
    tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=y_conv)))
hDrop = lambda image, weight, bias, keepProb: tf.nn.dropout(
    maxPool2x2(tf.nn.relu(conv2d(image, variable(weight, 0.01)) + variable(bias, 0.1))), keepProb)

def genCaptchaTextImage(width=CAPTCHA_WIDTH, height=CAPTCHA_HEIGHT, save=None):
    image = ImageCaptcha(width=width, height=height)
    captchaText = randomCaptchaText()
    if save:
        image.write(captchaText, save)
    return captchaText, np.array(Image.open(image.generate(captchaText)))

def text2Vec(text, captcha_len=CAPTCHA_LEN, captcha_list=CAPTCHA_LIST):
    """Convert CAPTCHA text to a one-hot vector"""
    vector = np.zeros(captcha_len * len(captcha_list))
    for i in range(len(text)):
        vector[captcha_list.index(text[i]) + i * len(captcha_list)] = 1
    return vector

def getNextBatch(batch_count=60, width=CAPTCHA_WIDTH, height=CAPTCHA_HEIGHT):
    """Build a batch of training images"""
    batch_x = np.zeros([batch_count, width * height])
    batch_y = np.zeros([batch_count, CAPTCHA_LEN * len(CAPTCHA_LIST)])
    for i in range(batch_count):
        text, image = genCaptchaTextImage()
        image = convert2Gray(image)
        # Flatten the image, keeping its label on the matching row
        batch_x[i, :] = image.flatten() / 255
        batch_y[i, :] = text2Vec(text)
    return batch_x, batch_y

def cnnGraph(x, keepProb, size, captchaList=CAPTCHA_LIST, captchaLen=CAPTCHA_LEN):
    """Three-layer convolutional neural network"""
    imageHeight, imageWidth = size
    xImage = tf.reshape(x, shape=[-1, imageHeight, imageWidth, 1])
    hDrop1 = hDrop(xImage, [3, 3, 1, 32], [32], keepProb)
    hDrop2 = hDrop(hDrop1, [3, 3, 32, 64], [64], keepProb)
    hDrop3 = hDrop(hDrop2, [3, 3, 64, 64], [64], keepProb)
    # Fully connected layer
    imageHeight = int(hDrop3.shape[1])
    imageWidth = int(hDrop3.shape[2])
    wFc = variable([imageHeight * imageWidth * 64, 1024], 0.01)  # 64 channels in, 1024 units out
    bFc = variable([1024], 0.1)
    hDrop3Re = tf.reshape(hDrop3, [-1, imageHeight * imageWidth * 64])
    hFc = tf.nn.relu(tf.matmul(hDrop3Re, wFc) + bFc)
    hDropFc = tf.nn.dropout(hFc, keepProb)
    # Output layer
    wOut = variable([1024, len(captchaList) * captchaLen], 0.01)
    bOut = variable([len(captchaList) * captchaLen], 0.1)
    yConv = tf.matmul(hDropFc, wOut) + bOut
    return yConv

def captcha2Text(image_list):
    """Convert CAPTCHA images to text"""
    with tf.Session() as sess:
        saver.restore(sess, tf.train.latest_checkpoint('model/'))
        predict = tf.argmax(tf.reshape(yConv, [-1, CAPTCHA_LEN, len(CAPTCHA_LIST)]), 2)
        vector_list = sess.run(predict, feed_dict={x: image_list, keepProb: 1})
        vector_list = vector_list.tolist()
        text_list = [vec2Text(vector) for vector in vector_list]
        return text_list

x = tf.placeholder(tf.float32, [None, CAPTCHA_HEIGHT * CAPTCHA_WIDTH])
keepProb = tf.placeholder(tf.float32)
yConv = cnnGraph(x, keepProb, (CAPTCHA_HEIGHT, CAPTCHA_WIDTH))
saver = tf.train.Saver()

def handler(environ, start_response):
    try:
        request_body_size = int(environ.get('CONTENT_LENGTH', 0))
    except (ValueError):
        request_body_size = 0
    requestBody = json.loads(environ['wsgi.input'].read(request_body_size).decode("utf-8"))

    imageName = randomStr(10)
    imagePath = "/tmp/" + imageName
    print("requestBody: ", requestBody)

    reqType = requestBody.get("type", None)
    if reqType == "get_captcha":
        genCaptchaTextImage(save=imagePath)
        with open(imagePath, 'rb') as f:
            data = base64.b64encode(f.read()).decode()
        return Response(start_response, {'image': data})

    if reqType == "get_text":
        # Fetch the picture
        print("Get picture")
        imageData = base64.b64decode(requestBody["image"])
        with open(imagePath, 'wb') as f:
            f.write(imageData)
        # Run the prediction
        img = Image.open(imagePath)
        img = img.resize((160, 60), Image.ANTIALIAS)
        img = img.convert("RGB")
        img = np.asarray(img)
        image = convert2Gray(img)
        image = image.flatten() / 255
        return Response(start_response, {'result': captcha2Text([image])})

    return Response(start_response, "Unknown type", errorCode="InvalidParameter")
```
This function exposes two interfaces:

Get a CAPTCHA: used for testing; generates a CAPTCHA

Get the recognition result: used for recognition; identifies the CAPTCHA
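A client for these two interfaces only needs to POST the right JSON bodies. The sketch below builds them (the endpoint URL is a placeholder; use the address your own deployment returns):

```python
import base64
import json

# Hypothetical endpoint: substitute the domain returned by `s deploy`
ENDPOINT = "https://example.com/serverless-captcha"

def build_get_captcha():
    """Request body for the CAPTCHA-generation interface."""
    return json.dumps({"type": "get_captcha"}).encode("utf-8")

def build_get_text(image_bytes):
    """Request body for the recognition interface: the image goes in as base64."""
    return json.dumps({
        "type": "get_text",
        "image": base64.b64encode(image_bytes).decode(),
    }).encode("utf-8")

body = build_get_text(b"\x89PNG...")
print(json.loads(body)["type"])  # get_text
```

Either body can then be sent with `urllib.request.Request(url=ENDPOINT, data=body)`, exactly as the Bottle test page below does.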
The dependencies this code requires are as follows:
```
tensorflow==1.13.1
numpy==1.19.4
scipy==1.5.4
pillow==8.0.1
captcha==0.3
```
In addition, to make the demo easier to try out, a test page is provided; its backend service uses the Python web framework Bottle:
```python
# -*- coding:utf-8 -*-
import os
import json
from bottle import route, run, static_file, request
import urllib.request

url = "http://" + os.environ.get("url")

@route('/')
def index():
    return static_file("index.html", root='html/')

@route('/get_captcha')
def getCaptcha():
    data = json.dumps({"type": "get_captcha"}).encode("utf-8")
    reqAttr = urllib.request.Request(data=data, url=url)
    return urllib.request.urlopen(reqAttr).read().decode("utf-8")

@route('/get_captcha_result', method='POST')
def getCaptchaResult():
    data = json.dumps({"type": "get_text",
                       "image": json.loads(request.body.read().decode("utf-8"))["image"]}).encode("utf-8")
    reqAttr = urllib.request.Request(data=data, url=url)
    return urllib.request.urlopen(reqAttr).read().decode("utf-8")

run(host='0.0.0.0', debug=False, port=9000)
```
This backend service needs the dependency:

```
bottle==0.12.19
```
The front-end page code:
```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>CAPTCHA Recognition Test System</title>
    <link href="https://www.bootcss.com/p/layoutit/css/bootstrap-combined.min.css" rel="stylesheet">
    <script>
        var image = undefined

        function getCaptcha() {
            const xmlhttp = window.XMLHttpRequest ? new XMLHttpRequest() : new ActiveXObject("Microsoft.XMLHTTP");
            xmlhttp.open("GET", '/get_captcha', false);
            xmlhttp.onreadystatechange = function () {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    image = JSON.parse(xmlhttp.responseText).Response.image
                    document.getElementById("captcha").src = "data:image/png;base64," + image
                    document.getElementById("getResult").style.visibility = 'visible'
                }
            }
            xmlhttp.setRequestHeader("Content-type", "application/json");
            xmlhttp.send();
        }

        function getCaptchaResult() {
            const xmlhttp = window.XMLHttpRequest ? new XMLHttpRequest() : new ActiveXObject("Microsoft.XMLHTTP");
            xmlhttp.open("POST", '/get_captcha_result', false);
            xmlhttp.onreadystatechange = function () {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    document.getElementById("result").innerText = "Result: " + JSON.parse(xmlhttp.responseText).Response.result
                }
            }
            xmlhttp.setRequestHeader("Content-type", "application/json");
            xmlhttp.send(JSON.stringify({"image": image}));
        }
    </script>
</head>
<body>
<div class="container-fluid" style="margin-top: 10px">
    <div class="row-fluid">
        <div class="span12">
            <center>
                <h4>CAPTCHA Recognition Test System</h4>
            </center>
        </div>
    </div>
    <div class="row-fluid">
        <div class="span2"></div>
        <div class="span8">
            <center>
                <img src="" id="captcha"/>
                <br><br>
                <p id="result"></p>
            </center>
            <fieldset>
                <legend>Actions:</legend>
                <button class="btn" onclick="getCaptcha()">Get CAPTCHA</button>
                <button class="btn" onclick="getCaptchaResult()" id="getResult" style="visibility: hidden">
                    Recognize CAPTCHA
                </button>
            </fieldset>
        </div>
        <div class="span2"></div>
    </div>
</div>
</body>
</html>
```
With the code ready, we write the deployment file:
```yaml
Global:
  Service:
    Name: ServerlessBook
    Description: Serverless book case study
    Log: Auto
    Nas: Auto

ServerlessBookCaptchaDemo:
  Component: fc
  Provider: alibaba
  Access: release
  Extends:
    deploy:
      - Hook: s install docker
        Path: ./
        Pre: true
  Properties:
    Region: cn-beijing
    Service: ${Global.Service}
    Function:
      Name: serverless_captcha
      Description: CAPTCHA recognition
      CodeUri:
        Src: ./src/backend
        Excludes:
          - src/backend/.fun
          - src/backend/model
      Handler: index.handler
      Environment:
        - Key: PYTHONUSERBASE
          Value: /mnt/auto/.fun/python
      MemorySize: 3072
      Runtime: python3
      Timeout: 60
      Triggers:
        - Name: ImageAI
          Type: HTTP
          Parameters:
            AuthType: ANONYMOUS
            Methods:
              - GET
              - POST
              - PUT
            Domains:
              - Domain: Auto

ServerlessBookCaptchaWebsiteDemo:
  Component: bottle
  Provider: alibaba
  Access: release
  Extends:
    deploy:
      - Hook: pip3 install -r requirements.txt -t ./
        Path: ./src/website
        Pre: true
  Properties:
    Region: cn-beijing
    CodeUri: ./src/website
    App: index.py
    Environment:
      - Key: url
        Value: ${ServerlessBookCaptchaDemo.Output.Triggers[0].Domains[0]}
    Detail:
      Service: ${Global.Service}
      Function:
        Name: serverless_captcha_website
```
The overall directory structure:
```
| - src                       # project directory
|   | - backend               # project backend, core interfaces
|   |   | - index.py          # core backend code
|   |   | - requirements.txt  # backend dependencies
|   | - website               # project front end, for easy testing
|   |   | - html              # front-end pages
|   |   |   | - index.html    # the front-end page
|   |   | - index.py          # backing service for the front end (Bottle framework)
|   |   | - requirements.txt  # its dependencies
```
Once everything is in place, we can deploy the project from the project directory:
s deploy
After deployment completes, open the returned page address:

Click "Get CAPTCHA" to generate a CAPTCHA online:

Then click "Recognize CAPTCHA" to run the recognition:
Since the target accuracy set during training was 90%, we can expect the overall accuracy over a large number of CAPTCHAs of the same type to also sit around 90%.