This article explains how to implement face detection with the Android API. The walkthrough is detailed and has practical reference value; if the topic interests you, read it through to the end.
Image source: Wikipedia
Face detection means locating the position and size of every face in an image or a video frame. It is a key stage in face recognition systems, and it can also be applied on its own, for example in video surveillance. As digital media becomes ubiquitous, face detection also helps us quickly filter the images that contain faces out of massive photo collections. In today's digital cameras, face detection drives autofocus, so-called "face-priority AF". Face-priority AF is arguably the most important innovation in camera technology in the twenty years since auto-exposure and autofocus were introduced: with consumer digital cameras, the vast majority of photos have people as their subject, which means the camera's auto-exposure and autofocus should be anchored on those people.
via cdstm.cn
Building a Face-Detection Android Activity
You can build this on a generic Android Activity. We extend the base class ImageView into MyImageView, and the bitmap containing the faces to be detected must be in RGB_565 format, or the API will not work correctly. Each detected face carries a confidence measure; the threshold is defined by android.media.FaceDetector.Face.CONFIDENCE_THRESHOLD.
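The confidence measure is a float between 0 and 1, and the documented value of FaceDetector.Face.CONFIDENCE_THRESHOLD is 0.4. As a minimal standalone sketch of filtering detections by that threshold (the helper class and the sample confidence values here are hypothetical, not part of the tutorial code):

```java
public class ConfidenceFilter {
    // Documented value of android.media.FaceDetector.Face.CONFIDENCE_THRESHOLD.
    static final float CONFIDENCE_THRESHOLD = 0.4f;

    // Count how many detections meet the threshold; detections below it
    // are generally too unreliable to display.
    static int countConfident(float[] confidences) {
        int kept = 0;
        for (float c : confidences) {
            if (c > CONFIDENCE_THRESHOLD) {
                kept++;
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        float[] confidences = {0.9f, 0.35f, 0.51f};
        System.out.println(countConfident(confidences)); // prints 2
    }
}
```

In the real Activity the per-face value would come from FaceDetector.Face.confidence(), as shown in the eye-center example later in this article.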
The most important logic lives in setFace(), which instantiates a FaceDetector object and calls findFaces; the results are stored in faces, and the mid-point of each face is passed on to MyImageView. The code is as follows:
import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.PointF;
import android.media.FaceDetector;
import android.os.Bundle;
import android.util.Log;
import android.view.ViewGroup.LayoutParams;

public class TutorialOnFaceDetect1 extends Activity {
    private MyImageView mIV;
    private Bitmap mFaceBitmap;
    private int mFaceWidth = 200;
    private int mFaceHeight = 200;
    private static final int MAX_FACES = 1;
    private static String TAG = "TutorialOnFaceDetect";

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mIV = new MyImageView(this);
        setContentView(mIV, new LayoutParams(LayoutParams.WRAP_CONTENT,
                LayoutParams.WRAP_CONTENT));

        // Load the photo and convert it to the RGB_565 format
        // required by FaceDetector.
        Bitmap b = BitmapFactory.decodeResource(getResources(), R.drawable.face3);
        mFaceBitmap = b.copy(Bitmap.Config.RGB_565, true);
        b.recycle();

        mFaceWidth = mFaceBitmap.getWidth();
        mFaceHeight = mFaceBitmap.getHeight();
        mIV.setImageBitmap(mFaceBitmap);

        // Perform face detection and set the feature points.
        setFace();
        mIV.invalidate();
    }

    public void setFace() {
        FaceDetector fd;
        FaceDetector.Face[] faces = new FaceDetector.Face[MAX_FACES];
        PointF midpoint = new PointF();
        int[] fpx = null;
        int[] fpy = null;
        int count = 0;

        try {
            fd = new FaceDetector(mFaceWidth, mFaceHeight, MAX_FACES);
            count = fd.findFaces(mFaceBitmap, faces);
        } catch (Exception e) {
            Log.e(TAG, "setFace(): " + e.toString());
            return;
        }

        // Check whether any faces were detected.
        if (count > 0) {
            fpx = new int[count];
            fpy = new int[count];
            for (int i = 0; i < count; i++) {
                try {
                    // Mid-point between the eyes of face i.
                    faces[i].getMidPoint(midpoint);
                    fpx[i] = (int) midpoint.x;
                    fpy[i] = (int) midpoint.y;
                } catch (Exception e) {
                    Log.e(TAG, "setFace(): face " + i + ": " + e.toString());
                }
            }
        }
        mIV.setDisplayPoints(fpx, fpy, count, 0);
    }
}
In the next piece of code we add setDisplayPoints() to MyImageView, which renders a marker on each detected face. Figure 1 shows such a marker centered on a detected face.
// Set up detected face features for display.
public void setDisplayPoints(int[] xx, int[] yy, int total, int style) {
    mDisplayStyle = style;
    mPX = null;
    mPY = null;
    if (xx != null && yy != null && total > 0) {
        mPX = new int[total];
        mPY = new int[total];
        for (int i = 0; i < total; i++) {
            mPX[i] = xx[i];
            mPY[i] = yy[i];
        }
    }
}
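Note that findFaces() returns coordinates in the bitmap's coordinate space. If MyImageView draws the bitmap at a different size than its intrinsic dimensions, the feature points must be scaled by the same factor before drawing. A minimal sketch of that scaling, assuming a hypothetical helper (not part of the tutorial code):

```java
public class PointScaler {
    // Scale feature-point coordinates from one space to another,
    // e.g. from bitmap width `from` to displayed width `to`.
    static int[] scale(int[] coords, int from, int to) {
        int[] out = new int[coords.length];
        for (int i = 0; i < coords.length; i++) {
            out[i] = Math.round(coords[i] * (float) to / from);
        }
        return out;
    }

    public static void main(String[] args) {
        // A 400-px-wide bitmap drawn in a 200-px-wide view:
        // every x coordinate is halved.
        int[] xs = scale(new int[]{100, 300}, 400, 200);
        System.out.println(xs[0] + "," + xs[1]); // prints 50,150
    }
}
```

In a real view you would apply this separately to the x coordinates (using widths) and the y coordinates (using heights) inside onDraw() or before calling setDisplayPoints().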
Figure 1: single-face detection
Multiple-Face Detection
FaceDetector lets you set an upper bound on the number of faces to detect. For example, to detect at most 10 faces:
private static final int MAX_FACES = 10;
Figure 2 shows the result of detecting multiple faces.
Figure 2: multiple-face detection
Locating the Eye Centers
Android face detection also returns other useful information, such as eyesDistance, pose, and confidence. We can use eyesDistance to locate the center of each eye.
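getMidPoint() returns the point midway between the eyes, and eyesDistance() the distance between them, so each eye center can be estimated by offsetting half that distance horizontally from the midpoint. This standalone helper mirrors the arithmetic used in setFace() below (the class itself is just for illustration; it assumes the eyes lie on a horizontal line through the midpoint):

```java
public class EyeLocator {
    // Estimate left and right eye centers from the midpoint between
    // the eyes and the inter-eye distance. Returns
    // {leftX, leftY, rightX, rightY}.
    static float[] eyeCenters(float midX, float midY, float eyesDist) {
        return new float[]{
                midX - eyesDist / 2, midY,  // left eye
                midX + eyesDist / 2, midY   // right eye
        };
    }

    public static void main(String[] args) {
        // Midpoint at (120, 80), eyes 40 px apart.
        float[] eyes = eyeCenters(120f, 80f, 40f);
        System.out.println(eyes[0] + "," + eyes[2]); // prints 100.0,140.0
    }
}
```

This is only an approximation: when the head is rotated (nonzero pose angles), the eyes are not on a horizontal line, but for roughly frontal faces it works well enough for display markers.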
In the code below, setFace() is invoked from doLengthyCalc(), which runs the detection on a background thread and then notifies the UI through a Handler. Figure 3 shows the result of locating the eye centers.
import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.PointF;
import android.media.FaceDetector;
import android.os.Bundle;
import android.os.Handler;
import android.os.Message;
import android.util.Log;
import android.view.ViewGroup.LayoutParams;

public class TutorialOnFaceDetect extends Activity {
    private MyImageView mIV;
    private Bitmap mFaceBitmap;
    private int mFaceWidth = 200;
    private int mFaceHeight = 200;
    private static final int MAX_FACES = 10;
    private static String TAG = "TutorialOnFaceDetect";
    private static boolean DEBUG = false;

    protected static final int GUIUPDATE_SETFACE = 999;
    protected Handler mHandler = new Handler() {
        @Override
        public void handleMessage(Message msg) {
            mIV.invalidate();
            super.handleMessage(msg);
        }
    };

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mIV = new MyImageView(this);
        setContentView(mIV, new LayoutParams(LayoutParams.WRAP_CONTENT,
                LayoutParams.WRAP_CONTENT));

        // Load the photo and convert it to RGB_565 as required by FaceDetector.
        Bitmap b = BitmapFactory.decodeResource(getResources(), R.drawable.face3);
        mFaceBitmap = b.copy(Bitmap.Config.RGB_565, true);
        b.recycle();

        mFaceWidth = mFaceBitmap.getWidth();
        mFaceHeight = mFaceBitmap.getHeight();
        mIV.setImageBitmap(mFaceBitmap);
        mIV.invalidate();

        // Perform face detection in setFace() on a background thread.
        doLengthyCalc();
    }

    public void setFace() {
        FaceDetector fd;
        FaceDetector.Face[] faces = new FaceDetector.Face[MAX_FACES];
        PointF eyescenter = new PointF();
        float eyesdist = 0.0f;
        int[] fpx = null;
        int[] fpy = null;
        int count = 0;

        try {
            fd = new FaceDetector(mFaceWidth, mFaceHeight, MAX_FACES);
            count = fd.findFaces(mFaceBitmap, faces);
        } catch (Exception e) {
            Log.e(TAG, "setFace(): " + e.toString());
            return;
        }

        // Check whether any faces were detected.
        if (count > 0) {
            // Two feature points (left and right eye) per face.
            fpx = new int[count * 2];
            fpy = new int[count * 2];
            for (int i = 0; i < count; i++) {
                try {
                    faces[i].getMidPoint(eyescenter);
                    eyesdist = faces[i].eyesDistance();

                    // Set up the left eye location.
                    fpx[2 * i] = (int) (eyescenter.x - eyesdist / 2);
                    fpy[2 * i] = (int) eyescenter.y;

                    // Set up the right eye location.
                    fpx[2 * i + 1] = (int) (eyescenter.x + eyesdist / 2);
                    fpy[2 * i + 1] = (int) eyescenter.y;

                    if (DEBUG) {
                        Log.e(TAG, "setFace(): face " + i
                                + ": confidence = " + faces[i].confidence()
                                + ", eyes distance = " + faces[i].eyesDistance()
                                + ", pose = (" + faces[i].pose(FaceDetector.Face.EULER_X)
                                + "," + faces[i].pose(FaceDetector.Face.EULER_Y)
                                + "," + faces[i].pose(FaceDetector.Face.EULER_Z) + ")"
                                + ", eyes midpoint = (" + eyescenter.x
                                + "," + eyescenter.y + ")");
                    }
                } catch (Exception e) {
                    Log.e(TAG, "setFace(): face " + i + ": " + e.toString());
                }
            }
        }
        mIV.setDisplayPoints(fpx, fpy, count * 2, 1);
    }

    private void doLengthyCalc() {
        Thread t = new Thread() {
            Message m = new Message();

            public void run() {
                try {
                    setFace();
                    m.what = TutorialOnFaceDetect.GUIUPDATE_SETFACE;
                    TutorialOnFaceDetect.this.mHandler.sendMessage(m);
                } catch (Exception e) {
                    Log.e(TAG, "doLengthyCalc(): " + e.toString());
                }
            }
        };
        t.start();
    }
}