
A Development Example of Android Face Detection

2012-09-18 23:55
Android can perform face detection directly on a bitmap. The Android SDK provides two classes for this: android.media.FaceDetector and android.media.FaceDetector.Face.

Face detection means locating the position and size of every face in an image or a video frame. It is a key stage in face recognition systems and can also be applied on its own, for example in video surveillance. With digital media everywhere today, face detection also helps us quickly filter the pictures that contain faces out of huge photo collections. In current digital cameras, face detection drives autofocus, the so-called "face-priority focus". Face-priority focus has been described as the most important photographic innovation in the twenty years since auto exposure and autofocus appeared: on consumer cameras the vast majority of photos have people as the subject, so the camera's exposure and focus should be based on them.

Building a Face Detection Activity

You can start from an ordinary Android Activity. We extend the base class ImageView into MyImageView, and the bitmap containing the faces to detect must be in RGB_565 format for the API to work. Each detected face carries a confidence measure, and the threshold for a reliable detection is defined as android.media.FaceDetector.Face.CONFIDENCE_THRESHOLD.
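The listings in this post do not actually filter on that threshold, but gating detections on it is straightforward. Below is a minimal sketch of the pattern (our own, not from the original code; `source` stands for any already-decoded Bitmap, and MAX_FACES matches the constant used in the listings):

Bitmap rgb565 = source.copy(Bitmap.Config.RGB_565, true); // FaceDetector only accepts RGB_565

FaceDetector detector = new FaceDetector(rgb565.getWidth(), rgb565.getHeight(), MAX_FACES);
FaceDetector.Face[] faces = new FaceDetector.Face[MAX_FACES];
int found = detector.findFaces(rgb565, faces);

for (int i = 0; i < found; i++) {
    // keep only detections the API itself considers reliable
    if (faces[i].confidence() >= FaceDetector.Face.CONFIDENCE_THRESHOLD) {
        // use faces[i] ...
    }
}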

The key logic lives in setFace(): it instantiates a FaceDetector object, calls findFaces() to store the results in the faces array, and hands each face's midpoint over to MyImageView. The code follows:

import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.PointF;
import android.media.FaceDetector;
import android.os.Bundle;
import android.util.Log;
import android.view.ViewGroup.LayoutParams;

public class TutorialOnFaceDetect1 extends Activity {
    private MyImageView mIV;
    private Bitmap mFaceBitmap;
    private int mFaceWidth = 200;
    private int mFaceHeight = 200;
    private static final int MAX_FACES = 1;
    private static String TAG = "TutorialOnFaceDetect";

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        mIV = new MyImageView(this);
        setContentView(mIV, new LayoutParams(LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT));

        // load the photo and convert it to the RGB_565 format required by FaceDetector
        Bitmap b = BitmapFactory.decodeResource(getResources(), R.drawable.face3);
        mFaceBitmap = b.copy(Bitmap.Config.RGB_565, true);
        b.recycle();

        mFaceWidth = mFaceBitmap.getWidth();
        mFaceHeight = mFaceBitmap.getHeight();
        mIV.setImageBitmap(mFaceBitmap);

        // perform face detection and set the feature points
        setFace();

        mIV.invalidate();
    }

    public void setFace() {
        FaceDetector fd;
        FaceDetector.Face[] faces = new FaceDetector.Face[MAX_FACES];
        PointF midpoint = new PointF();
        int[] fpx = null;
        int[] fpy = null;
        int count = 0;

        try {
            fd = new FaceDetector(mFaceWidth, mFaceHeight, MAX_FACES);
            count = fd.findFaces(mFaceBitmap, faces);
        } catch (Exception e) {
            Log.e(TAG, "setFace(): " + e.toString());
            return;
        }

        // check if we detected any faces
        if (count > 0) {
            fpx = new int[count];
            fpy = new int[count];

            for (int i = 0; i < count; i++) {
                try {
                    // retrieve the midpoint (between the eyes) of each face
                    faces[i].getMidPoint(midpoint);

                    fpx[i] = (int) midpoint.x;
                    fpy[i] = (int) midpoint.y;
                } catch (Exception e) {
                    Log.e(TAG, "setFace(): face " + i + ": " + e.toString());
                }
            }
        }

        mIV.setDisplayPoints(fpx, fpy, count, 0);
    }
}
Next, we add setDisplayPoints() to MyImageView so it can render markers on the detected faces.

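The MyImageView source is not reproduced in the post, so here is a minimal sketch consistent with the setDisplayPoints(fpx, fpy, count, style) calls it receives. The field names, colors, and the exact meaning of the style parameter are our assumptions (the listings pass 0 for face midpoints and 1 for eye locations):

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.widget.ImageView;

public class MyImageView extends ImageView {
    private int[] mFpx = null;   // x coordinates of the feature points
    private int[] mFpy = null;   // y coordinates of the feature points
    private int mCount = 0;      // number of valid points
    private int mStyle = 0;      // assumed meaning: 0 = face midpoints, 1 = eye locations
    private final Paint mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public MyImageView(Context context) {
        super(context);
        mPaint.setStyle(Paint.Style.STROKE);
        mPaint.setStrokeWidth(3);
    }

    // Store the points to draw. No invalidate() here: in the threaded version
    // later in this post, setFace() runs on a worker thread, where invalidate()
    // is not allowed, so the Activity posts the redraw to the UI thread itself.
    public void setDisplayPoints(int[] fpx, int[] fpy, int count, int style) {
        mFpx = fpx;
        mFpy = fpy;
        mCount = count;
        mStyle = style;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas); // draw the bitmap set via setImageBitmap() first
        if (mFpx == null || mFpy == null) {
            return;
        }
        mPaint.setColor(mStyle == 0 ? Color.RED : Color.GREEN);
        for (int i = 0; i < mCount; i++) {
            canvas.drawCircle(mFpx[i], mFpy[i], 10f, mPaint);
        }
    }
}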
Detecting Multiple Faces

FaceDetector lets you set an upper bound on the number of faces to detect. For example, to detect at most 10 faces:

private static final int MAX_FACES = 10;
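Note that findFaces() returns the number of faces it actually found, which may be smaller than MAX_FACES; entries beyond that count in a freshly allocated array are never filled and stay null, so loops must be bounded by the return value. A short sketch reusing the names from the listings:

FaceDetector.Face[] faces = new FaceDetector.Face[MAX_FACES]; // room for up to 10 faces
int count = fd.findFaces(mFaceBitmap, faces);                 // number actually detected

for (int i = 0; i < count; i++) {
    // faces[i] is valid here; entries from count to MAX_FACES - 1 remain null
}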
(The original post includes a screenshot here showing several faces detected at once; image not reproduced.)



Locating the Eye Centers

Android face detection also returns other useful information, such as eyesDistance, pose, and confidence. We can use eyesDistance together with the face midpoint to locate the center of each eye.

In the code below, we move the setFace() call into doLengthyCalc(), which runs it on a background thread and posts the result back to the UI thread through a Handler.

import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.PointF;
import android.media.FaceDetector;
import android.os.Bundle;
import android.os.Handler;
import android.os.Message;
import android.util.Log;
import android.view.ViewGroup.LayoutParams;

public class TutorialOnFaceDetect extends Activity {
    private MyImageView mIV;
    private Bitmap mFaceBitmap;
    private int mFaceWidth = 200;
    private int mFaceHeight = 200;
    private static final int MAX_FACES = 10;
    private static String TAG = "TutorialOnFaceDetect";
    private static boolean DEBUG = false;

    protected static final int GUIUPDATE_SETFACE = 999;
    protected Handler mHandler = new Handler() {
        @Override
        public void handleMessage(Message msg) {
            // detection finished on the worker thread; redraw on the UI thread
            mIV.invalidate();

            super.handleMessage(msg);
        }
    };

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        mIV = new MyImageView(this);
        setContentView(mIV, new LayoutParams(LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT));

        // load the photo and convert it to the RGB_565 format required by FaceDetector
        Bitmap b = BitmapFactory.decodeResource(getResources(), R.drawable.face3);
        mFaceBitmap = b.copy(Bitmap.Config.RGB_565, true);
        b.recycle();

        mFaceWidth = mFaceBitmap.getWidth();
        mFaceHeight = mFaceBitmap.getHeight();
        mIV.setImageBitmap(mFaceBitmap);
        mIV.invalidate();

        // perform face detection in setFace() in a background thread
        doLengthyCalc();
    }

    public void setFace() {
        FaceDetector fd;
        FaceDetector.Face[] faces = new FaceDetector.Face[MAX_FACES];
        PointF eyescenter = new PointF();
        float eyesdist = 0.0f;
        int[] fpx = null;
        int[] fpy = null;
        int count = 0;

        try {
            fd = new FaceDetector(mFaceWidth, mFaceHeight, MAX_FACES);
            count = fd.findFaces(mFaceBitmap, faces);
        } catch (Exception e) {
            Log.e(TAG, "setFace(): " + e.toString());
            return;
        }

        // check if we detected any faces
        if (count > 0) {
            // two feature points (left and right eye) per face
            fpx = new int[count * 2];
            fpy = new int[count * 2];

            for (int i = 0; i < count; i++) {
                try {
                    faces[i].getMidPoint(eyescenter);
                    eyesdist = faces[i].eyesDistance();

                    // set up left eye location
                    fpx[2 * i] = (int) (eyescenter.x - eyesdist / 2);
                    fpy[2 * i] = (int) eyescenter.y;

                    // set up right eye location
                    fpx[2 * i + 1] = (int) (eyescenter.x + eyesdist / 2);
                    fpy[2 * i + 1] = (int) eyescenter.y;

                    if (DEBUG) {
                        Log.e(TAG, "setFace(): face " + i + ": confidence = " + faces[i].confidence()
                                + ", eyes distance = " + faces[i].eyesDistance()
                                + ", pose = (" + faces[i].pose(FaceDetector.Face.EULER_X) + ","
                                + faces[i].pose(FaceDetector.Face.EULER_Y) + ","
                                + faces[i].pose(FaceDetector.Face.EULER_Z) + ")"
                                + ", eyes midpoint = (" + eyescenter.x + "," + eyescenter.y + ")");
                    }
                } catch (Exception e) {
                    Log.e(TAG, "setFace(): face " + i + ": " + e.toString());
                }
            }
        }

        mIV.setDisplayPoints(fpx, fpy, count * 2, 1);
    }

    private void doLengthyCalc() {
        Thread t = new Thread() {
            Message m = new Message();

            public void run() {
                try {
                    setFace();
                    m.what = TutorialOnFaceDetect.GUIUPDATE_SETFACE;
                    TutorialOnFaceDetect.this.mHandler.sendMessage(m);
                } catch (Exception e) {
                    Log.e(TAG, "doLengthyCalc(): " + e.toString());
                }
            }
        };

        t.start();
    }
}
(The original post closes with a screenshot showing the located eye centers; image not reproduced.)
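As a design note, the Handler/Message pattern above was the standard way at the time to hand results back to the UI thread. The same hand-off can also be written with Activity.runOnUiThread; this is an equivalent sketch of ours, not the article's code:

private void doLengthyCalc() {
    new Thread(new Runnable() {
        public void run() {
            setFace(); // heavy work stays off the UI thread
            runOnUiThread(new Runnable() {
                public void run() {
                    mIV.invalidate(); // UI updates must happen on the main thread
                }
            });
        }
    }).start();
}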