This article gives a detailed walkthrough of how to implement a Dlib-based face recognition system in Python. It is quite practical, so it is shared here for reference; hopefully you will take something away from it.
The overall flow of the face recognition system is the same as before; here we rely on the dlib and face_recognition libraries. face_recognition is a wrapper around dlib that makes dlib much easier to use, so the first step is to install both libraries:
pip3 install dlib
pip3 install face_recognition
Then install the imutils library as well:
pip3 install imutils
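To confirm that everything installed correctly, a quick sanity check is to import the three libraries and print some basic information (a minimal sketch; the script name sanity_check.py is just a suggestion):

# sanity_check.py -- a minimal check that the libraries import cleanly
import dlib
import face_recognition
import imutils

print("dlib version:", dlib.__version__)
print("face_recognition and imutils imported OK")
# the cnn detector runs much faster when dlib was built with CUDA support
print("dlib built with CUDA:", dlib.DLIB_USE_CUDA)

If any of these imports fail, fix the installation before moving on.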
Let's look at the project's directory structure:
.
├── dataset
│   ├── alan_grant [22 entries exceeds filelimit, not opening dir]
│   ├── claire_dearing [53 entries exceeds filelimit, not opening dir]
│   ├── ellie_sattler [31 entries exceeds filelimit, not opening dir]
│   ├── ian_malcolm [41 entries exceeds filelimit, not opening dir]
│   ├── john_hammond [36 entries exceeds filelimit, not opening dir]
│   └── owen_grady [35 entries exceeds filelimit, not opening dir]
├── examples
│   ├── example_01.png
│   ├── example_02.png
│   └── example_03.png
├── output
│   ├── lunch_scene_output.avi
│   └── webcam_face_recognition_output.avi
├── videos
│   └── lunch_scene.mp4
├── encode_faces.py
├── encodings.pickle
├── recognize_faces_image.py
├── recognize_faces_video_file.py
├── recognize_faces_video.py
└── search_bing_api.py

10 directories, 12 files
First, extract the 128-dimensional face embeddings.
The command is:
python3 encode_faces.py --dataset dataset --encodings encodings.pickle -d hog
Keep in mind: if your machine is short on memory, use the hog model for face detection; if you have enough memory, you can use the cnn (neural network) detector instead.
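If you are not sure which detector your machine can handle, one rough way to decide is to time both on a single image before processing the whole dataset (a minimal sketch; compare_detectors.py is a hypothetical helper name, and it simply grabs the first image found under dataset/):

# compare_detectors.py -- time the hog and cnn detectors on one sample image
import time
import cv2
import face_recognition
from imutils import paths

imagePath = list(paths.list_images("dataset"))[0]  # any image from the dataset
image = cv2.imread(imagePath)
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

for method in ("hog", "cnn"):
    start = time.time()
    boxes = face_recognition.face_locations(rgb, model=method)
    print("{}: {} face(s) in {:.2f}s".format(method, len(boxes), time.time() - start))

Expect cnn to be noticeably slower on a CPU-only machine.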
Here is the code:
# USAGE
# python encode_faces.py --dataset dataset --encodings encodings.pickle

# import the necessary packages
from imutils import paths
import face_recognition
import argparse
import pickle
import cv2
import os

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--dataset", required=True,
    help="path to input directory of faces + images")
ap.add_argument("-e", "--encodings", required=True,
    help="path to serialized db of facial encodings")
ap.add_argument("-d", "--detection-method", type=str, default="hog",
    help="face detection model to use: either `hog` or `cnn`")
args = vars(ap.parse_args())

# grab the paths to the input images in our dataset
print("[INFO] quantifying faces...")
imagePaths = list(paths.list_images(args["dataset"]))

# initialize the list of known encodings and known names
knownEncodings = []
knownNames = []

# loop over the image paths
for (i, imagePath) in enumerate(imagePaths):
    # extract the person name from the image path
    print("[INFO] processing image {}/{}".format(i + 1, len(imagePaths)))
    name = imagePath.split(os.path.sep)[-2]

    # load the input image and convert it from BGR (OpenCV ordering)
    # to dlib ordering (RGB)
    image = cv2.imread(imagePath)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # detect the (x, y)-coordinates of the bounding boxes
    # corresponding to each face in the input image
    boxes = face_recognition.face_locations(rgb,
        model=args["detection_method"])

    # compute the facial embedding for the face
    encodings = face_recognition.face_encodings(rgb, boxes)

    # loop over the encodings
    for encoding in encodings:
        # add each encoding + name to our set of known names and
        # encodings
        knownEncodings.append(encoding)
        knownNames.append(name)

# dump the facial encodings + names to disk
print("[INFO] serializing encodings...")
data = {"encodings": knownEncodings, "names": knownNames}
f = open(args["encodings"], "wb")
f.write(pickle.dumps(data))
f.close()
The output is a 128-dimensional vector for each face in each image, together with the corresponding name, all serialized to disk for the face recognition step that follows.
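To double-check what was written to disk, you can reload encodings.pickle and count how many encodings were stored per person (a minimal sketch, assuming the {"encodings": ..., "names": ...} layout produced by the script above; inspect_encodings.py is a hypothetical name):

# inspect_encodings.py -- reload the serialized encodings and summarize them
from collections import Counter
import pickle

data = pickle.loads(open("encodings.pickle", "rb").read())
print("total encodings:", len(data["encodings"]))
print("dimensions per encoding:", len(data["encodings"][0]))  # should print 128
for name, count in Counter(data["names"]).items():
    print("  {}: {}".format(name, count))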
Recognizing faces in an image.
Here the final face recognition is done with a k-NN voting approach rather than by training an SVM.
The command is:
python3 recognize_faces_image.py --encodings encodings.pickle --image examples/example_01.png
Here is the code:
# USAGE
# python recognize_faces_image.py --encodings encodings.pickle --image examples/example_01.png

# import the necessary packages
import face_recognition
import argparse
import pickle
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-e", "--encodings", required=True,
    help="path to serialized db of facial encodings")
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
ap.add_argument("-d", "--detection-method", type=str, default="cnn",
    help="face detection model to use: either `hog` or `cnn`")
args = vars(ap.parse_args())

# load the known faces and embeddings
print("[INFO] loading encodings...")
data = pickle.loads(open(args["encodings"], "rb").read())

# load the input image and convert it from BGR to RGB
image = cv2.imread(args["image"])
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# detect the (x, y)-coordinates of the bounding boxes corresponding
# to each face in the input image, then compute the facial embeddings
# for each face
print("[INFO] recognizing faces...")
boxes = face_recognition.face_locations(rgb,
    model=args["detection_method"])
encodings = face_recognition.face_encodings(rgb, boxes)

# initialize the list of names for each face detected
names = []

# loop over the facial embeddings
for encoding in encodings:
    # attempt to match each face in the input image to our known
    # encodings
    matches = face_recognition.compare_faces(data["encodings"],
        encoding)
    name = "Unknown"

    # check to see if we have found a match
    if True in matches:
        # find the indexes of all matched faces then initialize a
        # dictionary to count the total number of times each face
        # was matched
        matchedIdxs = [i for (i, b) in enumerate(matches) if b]
        counts = {}

        # loop over the matched indexes and maintain a count for
        # each recognized face
        for i in matchedIdxs:
            name = data["names"][i]
            counts[name] = counts.get(name, 0) + 1

        # determine the recognized face with the largest number of
        # votes (note: in the event of an unlikely tie Python will
        # select first entry in the dictionary)
        name = max(counts, key=counts.get)

    # update the list of names
    names.append(name)

# loop over the recognized faces
for ((top, right, bottom, left), name) in zip(boxes, names):
    # draw the predicted face name on the image
    cv2.rectangle(image, (left, top), (right, bottom), (0, 255, 0), 2)
    y = top - 15 if top - 15 > 15 else top + 15
    cv2.putText(image, name, (left, y), cv2.FONT_HERSHEY_SIMPLEX,
        0.75, (0, 255, 0), 2)

# show the output image
cv2.imshow("Image", image)
cv2.waitKey(0)
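One parameter worth knowing about: compare_faces accepts a tolerance argument (default 0.6), the largest face distance that still counts as a match. If you see false positives, you can tighten it by swapping the compare_faces call inside the loop above for something like the line below (0.5 is only an illustrative value to tune on your own data):

# stricter matching: a lower tolerance yields fewer, more confident matches
matches = face_recognition.compare_faces(data["encodings"], encoding, tolerance=0.5)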
The actual result looks like this:
That wraps up "How to implement a Dlib-based face recognition system in Python". Hopefully the content above is of some help and you have learned something new; if you found the article useful, please share it so more people can see it.