
CoreImage's CIDetector: Face Detection on iOS

2016-03-13
I recently left my last company, so I've been job hunting. I had added filters to video and to still images before, but I had never built a beautify (skin-smoothing) feature. Some companies did ask about it in interviews, so I worked out an approach on my own; I'll publish a concrete demo later.

Here is the idea in brief. As it turns out, my first instinct was right.

To build a beautify feature you need to know CoreImage; even without beautify, you need it for general image processing. You could also use GPUImage, but it has no face-detection class. It does ship a beautify filter, which I believe is called something like "GPUFaceBeauty" (I'm not sure I have the name right offhand; look it up and let me know). Back to the point: the beautify work here relies on CIDetector, CIFeature, and related classes such as the CIFaceBalance filter, all from the CoreImage framework.

CIDetector

Apple's API documentation:

A CIDetector object uses image processing to search for and identify notable features (faces, rectangles, and barcodes) in a still image or video. Detected features are represented by CIFeature objects that provide more information about each feature.

This class can maintain many state variables that can impact performance. So for best performance, reuse CIDetector instances instead of creating new ones.

That passage alone explains why we need CIDetector and CIFeature. Let's try a simple example.

Code:

// Add the image. Note: Core Image uses a bottom-left origin, so this
// sample flips the whole window vertically to make the detected
// coordinates line up with the view.
UIImage *image = [UIImage imageNamed:@"test.jpg"];
UIImageView *testImage = [[UIImageView alloc] initWithImage:image];
[testImage setTransform:CGAffineTransformMakeScale(1, -1)];
[[[UIApplication sharedApplication] delegate].window
    setTransform:CGAffineTransformMakeScale(1, -1)];
[testImage setFrame:CGRectMake(0, 0, testImage.image.size.width,
                               testImage.image.size.height)];
[self.view addSubview:testImage];

// Detect faces in the image:
CIImage *ciimage = [CIImage imageWithCGImage:image.CGImage];
NSDictionary *opts = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                 forKey:CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:opts];
NSArray *features = [detector featuresInImage:ciimage];

// Mark the face, eyes, and mouth:
for (CIFaceFeature *faceFeature in features) {
    // Outline the face
    CGFloat faceWidth = faceFeature.bounds.size.width;
    UIView *faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
    faceView.layer.borderWidth = 1;
    faceView.layer.borderColor = [[UIColor redColor] CGColor];
    [self.view addSubview:faceView];

    // Mark the left eye
    if (faceFeature.hasLeftEyePosition) {
        UIView *leftEyeView = [[UIView alloc] initWithFrame:
            CGRectMake(faceFeature.leftEyePosition.x - faceWidth * 0.15,
                       faceFeature.leftEyePosition.y - faceWidth * 0.15,
                       faceWidth * 0.3, faceWidth * 0.3)];
        [leftEyeView setBackgroundColor:
            [[UIColor blueColor] colorWithAlphaComponent:0.3]];
        [leftEyeView setCenter:faceFeature.leftEyePosition];
        leftEyeView.layer.cornerRadius = faceWidth * 0.15;
        [self.view addSubview:leftEyeView];
    }

    // Mark the right eye
    if (faceFeature.hasRightEyePosition) {
        UIView *rightEyeView = [[UIView alloc] initWithFrame:
            CGRectMake(faceFeature.rightEyePosition.x - faceWidth * 0.15,
                       faceFeature.rightEyePosition.y - faceWidth * 0.15,
                       faceWidth * 0.3, faceWidth * 0.3)];
        [rightEyeView setBackgroundColor:
            [[UIColor blueColor] colorWithAlphaComponent:0.3]];
        [rightEyeView setCenter:faceFeature.rightEyePosition];
        rightEyeView.layer.cornerRadius = faceWidth * 0.15;
        [self.view addSubview:rightEyeView];
    }

    // Mark the mouth
    if (faceFeature.hasMouthPosition) {
        UIView *mouthView = [[UIView alloc] initWithFrame:
            CGRectMake(faceFeature.mouthPosition.x - faceWidth * 0.2,
                       faceFeature.mouthPosition.y - faceWidth * 0.2,
                       faceWidth * 0.4, faceWidth * 0.4)];
        [mouthView setBackgroundColor:
            [[UIColor greenColor] colorWithAlphaComponent:0.3]];
        [mouthView setCenter:faceFeature.mouthPosition];
        mouthView.layer.cornerRadius = faceWidth * 0.2;
        [self.view addSubview:mouthView];
    }
}

That covers simple face detection. To do beautify, we also need the CIFaceBalance filter.

CIFaceBalance

This filter adjusts the color of a face. Note that once detection succeeds, we don't process the pixels directly; instead we mark the detected regions, and the processing happens on the layer.

I'll post the demo later.

Still researching.