How do you implement real-time face detection on Android with dlib and OpenCV? This article walks through the problem and its solution in detail, in the hope that it helps anyone looking for a straightforward way to do the same.

1 Overview

With the Android camera preview already working, I built a face detection demo on top of it using the dlib and OpenCV libraries. The demo detects faces in real time while the camera preview is running and draws a rectangle around each detected face. The implementation works as follows:

Two views are stacked on top of each other. The bottom TextureView handles the preview; the app grabs preview frames from the TextureView, hands them to dlib for processing, and finally draws the detection results on the SurfaceView layered on top.

2 Project configuration

Because the project uses the dlib and OpenCV libraries, both need to be configured. This mainly involves the following steps:

2.1 C++ support

When creating the project, select Include C++ Support, then C++11, Exceptions Support (-fexceptions), and Runtime Type Information Support (-frtti). The generated build.gradle file looks like this:
defaultConfig {
    applicationId "com.example.lightweh.facedetection"
    minSdkVersion 23
    targetSdkVersion 28
    versionCode 1
    versionName "1.0"
    testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
    externalNativeBuild {
        cmake {
            arguments "-DCMAKE_BUILD_TYPE=Release"
            cppFlags "-std=c++11 -frtti -fexceptions"
        }
    }
}
The arguments entry was added by hand; it tells CMake to build in Release mode, because the dlib algorithms run extremely slowly in a Debug build. If you need to debug the C++ code early on, comment out the arguments line temporarily.

2.2 Downloading dlib and opencv

- Download the latest dlib source from the dlib website, unpack it, and copy the dlib directory into the cpp directory of the Android Studio project.
- Download the latest opencv-android package from SourceForge, unpack it, copy the native directory into the same cpp directory, and rename it to opencv.

2.3 CMakeLists configuration

In the CMakeLists file we first include dlib's cmake file, then add OpenCV's include folder and import OpenCV's .so library, add the files from the jni_common directory plus the face detection sources to the native-lib target, and finally link everything together.
# Path to the native sources
set(NATIVE_DIR ${CMAKE_SOURCE_DIR}/src/main/cpp)

# Pull in dlib
include(${NATIVE_DIR}/dlib/cmake)

# OpenCV include folder
include_directories(${NATIVE_DIR}/opencv/jni/include)

# Import the prebuilt OpenCV .so
add_library(libopencv_java3 SHARED IMPORTED)
set_target_properties(libopencv_java3 PROPERTIES IMPORTED_LOCATION
        ${NATIVE_DIR}/opencv/libs/${ANDROID_ABI}/libopencv_java3.so)

# Collect all source files in jni_common into SRC_LIST
AUX_SOURCE_DIRECTORY(${NATIVE_DIR}/jni_common SRC_LIST)

add_library( # Sets the name of the library.
        native-lib
        # Sets the library as a shared library.
        SHARED
        # Provides a relative path to your source file(s).
        ${SRC_LIST}
        src/main/cpp/face_detector.h
        src/main/cpp/face_detector.cpp
        src/main/cpp/native-lib.cpp)

find_library( # Sets the name of the path variable.
        log-lib
        # Specifies the name of the NDK library that
        # you want CMake to locate.
        log)

target_link_libraries( # Specifies the target library.
        native-lib
        dlib
        libopencv_java3
        jnigraphics
        # Links the target library to the log library
        # included in the NDK.
        ${log-lib})

# Release build flags
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -s -O3 -Wall")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -s -O3 -Wall")
Because the C++ code uses the header "android/bitmap.h", the jnigraphics library must be added when linking.
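The jni_common directory mentioned above provides the bitmap conversion helper jniutils::ConvertBitmapToRGBAMat that native-lib.cpp calls later; its implementation is not shown in this article. Below is a minimal sketch of what such a helper can look like, using only the NDK bitmap API and OpenCV. The namespace, function name, and the final boolean flag simply mirror the call site used later, so treat the exact signature as an assumption rather than the original project's code:

#include <android/bitmap.h>
#include <jni.h>
#include <opencv2/core/core.hpp>

namespace jniutils {

// Copies the pixels of an RGBA_8888 Android Bitmap into a cv::Mat (CV_8UC4).
void ConvertBitmapToRGBAMat(JNIEnv *env, jobject bitmap, cv::Mat &dst, bool copyData) {
    AndroidBitmapInfo info;
    void *pixels = nullptr;

    if (AndroidBitmap_getInfo(env, bitmap, &info) < 0) return;
    if (info.format != ANDROID_BITMAP_FORMAT_RGBA_8888) return;
    if (AndroidBitmap_lockPixels(env, bitmap, &pixels) < 0) return;

    // Wrap the locked pixel buffer without copying.
    cv::Mat wrapped(info.height, info.width, CV_8UC4, pixels, info.stride);
    if (copyData) {
        dst = wrapped.clone();   // deep copy: stays valid after the bitmap is unlocked
    } else {
        dst = wrapped;           // shallow wrap: caller must keep the bitmap locked/alive
    }

    AndroidBitmap_unlockPixels(env, bitmap);
}

}  // namespace jniutils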
3 JNI-related Java classes

3.1 The VisionDetRet class

VisionDetRet objects carry the detection results between C++ and Java.
public final class VisionDetRet {
    private int mLeft;
    private int mTop;
    private int mRight;
    private int mBottom;

    VisionDetRet() {}

    public VisionDetRet(int l, int t, int r, int b) {
        mLeft = l;
        mTop = t;
        mRight = r;
        mBottom = b;
    }

    public int getLeft() { return mLeft; }

    public int getTop() { return mTop; }

    public int getRight() { return mRight; }

    public int getBottom() { return mBottom; }
}
3.2 The FaceDet class

FaceDet is the JNI wrapper class; it declares the native methods that are implemented in C++.
public class FaceDet {
    private static final String TAG = "FaceDet";

    // accessed by native methods
    @SuppressWarnings("unused")
    private long mNativeFaceDetContext;

    static {
        try {
            // Preload the native library
            System.loadLibrary("native-lib");
            jniNativeClassInit();
            Log.d(TAG, "jniNativeClassInit success");
        } catch (UnsatisfiedLinkError e) {
            Log.e(TAG, "library not found");
        }
    }

    public FaceDet() {
        jniInit();
    }

    @Nullable
    @WorkerThread
    public List<VisionDetRet> detect(@NonNull Bitmap bitmap) {
        VisionDetRet[] detRets = jniBitmapDet(bitmap);
        return Arrays.asList(detRets);
    }

    @Override
    protected void finalize() throws Throwable {
        super.finalize();
        release();
    }

    public void release() {
        jniDeInit();
    }

    @Keep
    private native static void jniNativeClassInit();

    @Keep
    private synchronized native int jniInit();

    @Keep
    private synchronized native int jniDeInit();

    @Keep
    private synchronized native VisionDetRet[] jniBitmapDet(Bitmap bitmap);
}
4 Native method implementation

4.1 The C++ counterpart of the VisionDetRet class
#include <jni.h>

#define CLASSNAME_VISION_DET_RET "com/lightweh/dlib/VisionDetRet"
#define CONSTSIG_VISION_DET_RET "()V"
#define CLASSNAME_FACE_DET "com/lightweh/dlib/FaceDet"

class JNI_VisionDetRet {
public:
    JNI_VisionDetRet(JNIEnv *env) {
        // Look up the VisionDetRet class
        jclass detRetClass = env->FindClass(CLASSNAME_VISION_DET_RET);
        // Cache the field IDs of its member variables
        jID_left = env->GetFieldID(detRetClass, "mLeft", "I");
        jID_top = env->GetFieldID(detRetClass, "mTop", "I");
        jID_right = env->GetFieldID(detRetClass, "mRight", "I");
        jID_bottom = env->GetFieldID(detRetClass, "mBottom", "I");
    }

    void setRect(JNIEnv *env, jobject &jDetRet, const int &left, const int &top,
                 const int &right, const int &bottom) {
        // Write the rectangle into the fields of the VisionDetRet object jDetRet
        env->SetIntField(jDetRet, jID_left, left);
        env->SetIntField(jDetRet, jID_top, top);
        env->SetIntField(jDetRet, jID_right, right);
        env->SetIntField(jDetRet, jID_bottom, bottom);
    }

    // Create a VisionDetRet instance
    static jobject createJObject(JNIEnv *env) {
        jclass detRetClass = env->FindClass(CLASSNAME_VISION_DET_RET);
        jmethodID mid = env->GetMethodID(detRetClass, "<init>", CONSTSIG_VISION_DET_RET);
        return env->NewObject(detRetClass, mid);
    }

    // Create an array of VisionDetRet objects
    static jobjectArray createJObjectArray(JNIEnv *env, const int &size) {
        jclass detRetClass = env->FindClass(CLASSNAME_VISION_DET_RET);
        return (jobjectArray) env->NewObjectArray(size, detRetClass, NULL);
    }

private:
    jfieldID jID_left;
    jfieldID jID_top;
    jfieldID jID_right;
    jfieldID jID_bottom;
};
4.2 The face detector class

Face detection slides windows of different sizes and positions across the image and decides whether each window contains a face. This demo uses dlib's HOG (histogram of oriented gradients) detector, whose results are better than OpenCV's. dlib also ships a CNN-based detector that is more accurate than HOG, but it needs GPU acceleration to run at a reasonable speed (a sketch of that variant follows the detector class below).
class FaceDetector {
private:
    dlib::frontal_face_detector face_detector;
    std::vector<dlib::rectangle> det_rects;

public:
    FaceDetector();

    // Run the face detection algorithm
    int Detect(const cv::Mat &image);

    // Return the detection results
    std::vector<dlib::rectangle> getDetResultRects();
};

FaceDetector::FaceDetector() {
    // Construct the HOG face detector
    face_detector = dlib::get_frontal_face_detector();
}

int FaceDetector::Detect(const cv::Mat &image) {
    if (image.empty())
        return 0;

    if (image.channels() == 1) {
        cv::cvtColor(image, image, CV_GRAY2BGR);
    }

    dlib::cv_image<dlib::bgr_pixel> dlib_image(image);

    det_rects.clear();
    // Detect faces and keep the bounding rectangles
    det_rects = face_detector(dlib_image);

    return det_rects.size();
}

std::vector<dlib::rectangle> FaceDetector::getDetResultRects() {
    return det_rects;
}
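For reference, here is a sketch of the CNN detector mentioned above, adapted from dlib's dnn_mmod_face_detection_ex example. It assumes the mmod_human_face_detector.dat model file has been copied to the device and that the path passed in points to it; on a phone without GPU support it will be far too slow for live preview, so treat it as background material rather than part of this demo:

#include <dlib/dnn.h>
#include <dlib/image_transforms.h>
#include <dlib/opencv.h>

// Network definition taken from dlib's dnn_mmod_face_detection_ex.cpp
template <long num_filters, typename SUBNET> using con5d = dlib::con<num_filters, 5, 5, 2, 2, SUBNET>;
template <long num_filters, typename SUBNET> using con5  = dlib::con<num_filters, 5, 5, 1, 1, SUBNET>;
template <typename SUBNET> using downsampler = dlib::relu<dlib::affine<con5d<32,
        dlib::relu<dlib::affine<con5d<32, dlib::relu<dlib::affine<con5d<16, SUBNET>>>>>>>>>;
template <typename SUBNET> using rcon5 = dlib::relu<dlib::affine<con5<45, SUBNET>>>;
using net_type = dlib::loss_mmod<dlib::con<1, 9, 9, 1, 1,
        rcon5<rcon5<rcon5<downsampler<dlib::input_rgb_image_pyramid<dlib::pyramid_down<6>>>>>>>>;

// Sketch: detect faces in a BGR cv::Mat with the CNN model.
std::vector<dlib::rectangle> DetectWithCnn(const cv::Mat &bgr, const std::string &modelPath) {
    static net_type net;
    static bool loaded = false;
    if (!loaded) {
        dlib::deserialize(modelPath) >> net;   // e.g. a path to mmod_human_face_detector.dat
        loaded = true;
    }

    dlib::matrix<dlib::rgb_pixel> img;
    dlib::assign_image(img, dlib::cv_image<dlib::bgr_pixel>(bgr));

    std::vector<dlib::rectangle> rects;
    for (auto &&det : net(img))        // each detection is a dlib::mmod_rect
        rects.push_back(det.rect);
    return rects;
}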
4.3 Implementing the native methods
JNI_VisionDetRet *g_pJNI_VisionDetRet;
JavaVM *g_javaVM = NULL;

// Called when the native library is loaded
JNIEXPORT jint JNI_OnLoad(JavaVM *vm, void *reserved) {
    g_javaVM = vm;
    JNIEnv *env;
    vm->GetEnv((void **) &env, JNI_VERSION_1_6);

    // Initialize g_pJNI_VisionDetRet
    g_pJNI_VisionDetRet = new JNI_VisionDetRet(env);

    return JNI_VERSION_1_6;
}

// Clean-up hook
void JNI_OnUnload(JavaVM *vm, void *reserved) {
    g_javaVM = NULL;
    delete g_pJNI_VisionDetRet;
}

namespace {

#define JAVA_NULL 0
    using DetPtr = FaceDetector *;

    // Holds the pointer to the native face detector, tying each Java object
    // to its C++ counterpart
    class JNI_FaceDet {
    public:
        JNI_FaceDet(JNIEnv *env) {
            jclass clazz = env->FindClass(CLASSNAME_FACE_DET);
            mNativeContext = env->GetFieldID(clazz, "mNativeFaceDetContext", "J");
            env->DeleteLocalRef(clazz);
        }

        DetPtr getDetectorPtrFromJava(JNIEnv *env, jobject thiz) {
            DetPtr const p = (DetPtr) env->GetLongField(thiz, mNativeContext);
            return p;
        }

        void setDetectorPtrToJava(JNIEnv *env, jobject thiz, jlong ptr) {
            env->SetLongField(thiz, mNativeContext, ptr);
        }

        jfieldID mNativeContext;
    };

    // Protect getting/setting and creating/deleting pointer between java/native
    std::mutex gLock;

    std::shared_ptr<JNI_FaceDet> getJNI_FaceDet(JNIEnv *env) {
        static std::once_flag sOnceInitflag;
        static std::shared_ptr<JNI_FaceDet> sJNI_FaceDet;
        std::call_once(sOnceInitflag, [env]() {
            sJNI_FaceDet = std::make_shared<JNI_FaceDet>(env);
        });
        return sJNI_FaceDet;
    }

    // Get the C++ object pointer held by the Java object
    DetPtr const getDetPtr(JNIEnv *env, jobject thiz) {
        std::lock_guard<std::mutex> lock(gLock);
        return getJNI_FaceDet(env)->getDetectorPtrFromJava(env, thiz);
    }

    // The function to set a pointer to java and delete it if newPtr is empty.
    // After the C++ object is created with new, its pointer is stored in the
    // Java object as a long
    void setDetPtr(JNIEnv *env, jobject thiz, DetPtr newPtr) {
        std::lock_guard<std::mutex> lock(gLock);
        DetPtr oldPtr = getJNI_FaceDet(env)->getDetectorPtrFromJava(env, thiz);
        if (oldPtr != JAVA_NULL) {
            delete oldPtr;
        }
        getJNI_FaceDet(env)->setDetectorPtrToJava(env, thiz, (jlong) newPtr);
    }

}  // end unnamed namespace

#ifdef __cplusplus
extern "C" {
#endif

#define DLIB_FACE_JNI_METHOD(METHOD_NAME) Java_com_lightweh_dlib_FaceDet_##METHOD_NAME

void JNIEXPORT DLIB_FACE_JNI_METHOD(jniNativeClassInit)(JNIEnv *env, jclass _this) {}

// Build the result array to return to Java
jobjectArray getRecResult(JNIEnv *env, DetPtr faceDetector, const int &size) {
    // Create a jobjectArray sized to the number of detected faces
    jobjectArray jDetRetArray = JNI_VisionDetRet::createJObjectArray(env, size);
    for (int i = 0; i < size; i++) {
        // Create a VisionDetRet instance for each detected face and put it into the array
        jobject jDetRet = JNI_VisionDetRet::createJObject(env);
        env->SetObjectArrayElement(jDetRetArray, i, jDetRet);
        dlib::rectangle rect = faceDetector->getDetResultRects()[i];
        // Copy the face rectangle into the jobject instance
        g_pJNI_VisionDetRet->setRect(env, jDetRet, rect.left(), rect.top(),
                                     rect.right(), rect.bottom());
    }
    return jDetRetArray;
}

JNIEXPORT jobjectArray JNICALL
DLIB_FACE_JNI_METHOD(jniBitmapDet)(JNIEnv *env, jobject thiz, jobject bitmap) {
    cv::Mat rgbaMat;
    cv::Mat bgrMat;
    jniutils::ConvertBitmapToRGBAMat(env, bitmap, rgbaMat, true);
    cv::cvtColor(rgbaMat, bgrMat, cv::COLOR_RGBA2BGR);
    // Get the face detector pointer
    DetPtr mDetPtr = getDetPtr(env, thiz);
    // Run detection and get the number of detected faces
    jint size = mDetPtr->Detect(bgrMat);
    // Return the results
    return getRecResult(env, mDetPtr, size);
}

jint JNIEXPORT JNICALL
DLIB_FACE_JNI_METHOD(jniInit)(JNIEnv *env, jobject thiz) {
    DetPtr mDetPtr = new FaceDetector();
    // Store the face detector pointer in the Java object
    setDetPtr(env, thiz, mDetPtr);
    return JNI_OK;
}

jint JNIEXPORT JNICALL
DLIB_FACE_JNI_METHOD(jniDeInit)(JNIEnv *env, jobject thiz) {
    // Reset the stored pointer to 0 (this also deletes the old detector)
    setDetPtr(env, thiz, JAVA_NULL);
    return JNI_OK;
}

#ifdef __cplusplus
}
#endif
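A practical note on performance: running the HOG detector on a full-resolution preview frame can take several hundred milliseconds on a phone. A common optimization, which is not part of the original demo, is to run detection on a downscaled copy of the frame and map the rectangles back to full resolution afterwards. A hedged sketch of a helper that jniBitmapDet could call instead of Detect (the 0.5 factor is an assumption to tune against your accuracy needs):

#include <opencv2/imgproc/imgproc.hpp>
#include <dlib/geometry.h>
#include <vector>

// Hypothetical helper built on the FaceDetector class defined above: run the
// detector on a downscaled copy of the frame and map the resulting rectangles
// back to the original resolution.
std::vector<dlib::rectangle> DetectDownscaled(FaceDetector &detector,
                                              const cv::Mat &bgr,
                                              double scale = 0.5) {
    cv::Mat small;
    cv::resize(bgr, small, cv::Size(), scale, scale, cv::INTER_LINEAR);

    detector.Detect(small);

    std::vector<dlib::rectangle> mapped;
    for (const dlib::rectangle &r : detector.getDetResultRects()) {
        // Scale each rectangle back up to frame coordinates
        mapped.push_back(dlib::rectangle(
                static_cast<long>(r.left() / scale),
                static_cast<long>(r.top() / scale),
                static_cast<long>(r.right() / scale),
                static_cast<long>(r.bottom() / scale)));
    }
    return mapped;
}

With this variant, jniBitmapDet would fill the result array from the mapped rectangles rather than from getDetResultRects().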
5 Calling the face detector from Java

Before face detection can start, a custom BoundingBoxView is layered over the camera's AutoFitTextureView to draw the detected face rectangles. The view is implemented as follows:
public class BoundingBoxView extends SurfaceView implements SurfaceHolder.Callback {
    protected SurfaceHolder mSurfaceHolder;
    private Paint mPaint;
    private boolean mIsCreated;

    public BoundingBoxView(Context context, AttributeSet attrs) {
        super(context, attrs);
        mSurfaceHolder = getHolder();
        mSurfaceHolder.addCallback(this);
        mSurfaceHolder.setFormat(PixelFormat.TRANSPARENT);
        setZOrderOnTop(true);

        mPaint = new Paint();
        mPaint.setAntiAlias(true);
        mPaint.setColor(Color.RED);
        mPaint.setStrokeWidth(5f);
        mPaint.setStyle(Paint.Style.STROKE);
    }

    @Override
    public void surfaceChanged(SurfaceHolder surfaceHolder, int format, int width, int height) {
    }

    @Override
    public void surfaceCreated(SurfaceHolder surfaceHolder) {
        mIsCreated = true;
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder surfaceHolder) {
        mIsCreated = false;
    }

    public void setResults(List<VisionDetRet> detRets) {
        if (!mIsCreated) {
            return;
        }
        Canvas canvas = mSurfaceHolder.lockCanvas();
        // Clear the boxes drawn in the previous frame
        canvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR);
        canvas.drawColor(Color.TRANSPARENT);
        for (VisionDetRet detRet : detRets) {
            Rect rect = new Rect(detRet.getLeft(), detRet.getTop(), detRet.getRight(), detRet.getBottom());
            canvas.drawRect(rect, mPaint);
        }
        mSurfaceHolder.unlockCanvasAndPost(canvas);
    }
}
The corresponding BoundingBoxView also has to be added to the layout file so that it exactly overlaps the AutoFitTextureView:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".CameraFragment">

    <com.lightweh.facedetection.AutoFitTextureView
        android:id="@+id/textureView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerVertical="true"
        android:layout_centerHorizontal="true" />

    <com.lightweh.facedetection.BoundingBoxView
        android:id="@+id/boundingBoxView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignLeft="@+id/textureView"
        android:layout_alignTop="@+id/textureView"
        android:layout_alignRight="@+id/textureView"
        android:layout_alignBottom="@+id/textureView" />

</RelativeLayout>
With BoundingBoxView in place, the face detection code can be added to CameraFragment:
private class detectAsync extends AsyncTask<Bitmap, Void, List<VisionDetRet>> {

    @Override
    protected void onPreExecute() {
        mIsDetecting = true;
        super.onPreExecute();
    }

    protected List<VisionDetRet> doInBackground(Bitmap... bp) {
        List<VisionDetRet> results;
        // Run detection and return the results
        results = mFaceDet.detect(bp[0]);
        return results;
    }

    protected void onPostExecute(List<VisionDetRet> results) {
        // Draw the detected face rectangles
        mBoundingBoxView.setResults(results);
        mIsDetecting = false;
    }
}
Then the face detector is created and released in onResume and onPause respectively:
@Override
public void onResume() {
    super.onResume();
    startBackgroundThread();
    mFaceDet = new FaceDet();
    if (mTextureView.isAvailable()) {
        openCamera(mTextureView.getWidth(), mTextureView.getHeight());
    } else {
        mTextureView.setSurfaceTextureListener(mSurfaceTextureListener);
    }
}

@Override
public void onPause() {
    closeCamera();
    stopBackgroundThread();
    if (mFaceDet != null) {
        mFaceDet.release();
    }
    super.onPause();
}
Finally, detection is triggered from the TextureView callback onSurfaceTextureUpdated:
@Override
public void onSurfaceTextureUpdated(SurfaceTexture texture) {
    if (!mIsDetecting) {
        Bitmap bp = mTextureView.getBitmap();
        // Keep the bitmap orientation consistent with the preview
        bp = Bitmap.createBitmap(bp, 0, 0, bp.getWidth(), bp.getHeight(),
                mTextureView.getTransform(null), true);
        new detectAsync().execute(bp);
    }
}
That covers how to implement real-time face detection on Android with dlib and OpenCV. Hopefully the material above is helpful; if questions remain, you can follow the Yisu Cloud industry news channel to learn more.