iOS Study Notes (4): Implementing a Custom Camera
2014-08-18 15:36
1. Invoking the system camera
In iOS development, invoking Apple's built-in camera is really just a modal view transition:
UIImagePickerController *wImagePickerController = [[UIImagePickerController alloc] init];
wImagePickerController.delegate = self;
wImagePickerController.sourceType = UIImagePickerControllerSourceTypeCamera;
//presentModalViewController:animated: is deprecated since iOS 6
[self presentViewController:wImagePickerController animated:YES completion:nil];
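To actually receive the photo, the presenting controller must conform to UIImagePickerControllerDelegate and UINavigationControllerDelegate and implement the picker callbacks, which the snippet above does not show. A minimal sketch:

```objectivec
// The presenting view controller adopts:
// <UIImagePickerControllerDelegate, UINavigationControllerDelegate>

- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    // The captured photo is delivered in the info dictionary
    UIImage *image = info[UIImagePickerControllerOriginalImage];
    // ... use the image ...
    [picker dismissViewControllerAnimated:YES completion:nil];
}

- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker
{
    [picker dismissViewControllerAnimated:YES completion:nil];
}
```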
2. Implementing a custom camera
The system camera cannot satisfy every need. For example, I may want to overlay hint text on the camera view, or add a recognition frame when doing OCR. In such cases we have to implement the camera ourselves.
(1) Video capture involves input devices and output devices, and the program coordinates the flow of data between them through an instance of AVCaptureSession. A camera implementation therefore needs, at minimum:
● An instance of AVCaptureDevice to represent the input device, such as a camera or microphone
● An instance of a concrete subclass of AVCaptureInput to configure the ports from the input device
● An instance of a concrete subclass of AVCaptureOutput to manage the output to a movie file or still image
● An instance of AVCaptureSession to coordinate the data flow from the input to the output
(2) The role of AVCaptureSession. The documentation says:
To perform a real-time capture, a client may instantiate AVCaptureSession and add appropriate
AVCaptureInputs, such as AVCaptureDeviceInput, and outputs, such as AVCaptureMovieFileOutput.
It is a real-time capture coordinator: a single AVCaptureSession can coordinate multiple input and output devices. Inputs and outputs are attached to the session via its addInput: and addOutput: methods.
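Putting these pieces together, the session setup can be sketched as follows (the captureSession property and the wVideoInput/wOutput variables are assumed from the steps below):

```objectivec
// Create the session that coordinates input -> output data flow
self.captureSession = [[AVCaptureSession alloc] init];

// Inputs and outputs are attached via addInput:/addOutput:;
// checking canAddInput:/canAddOutput: first avoids runtime exceptions
if ([self.captureSession canAddInput:wVideoInput])
    [self.captureSession addInput:wVideoInput];
if ([self.captureSession canAddOutput:wOutput])
    [self.captureSession addOutput:wOutput];
```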
(3) Adding an input to the AVCaptureSession:
Create the input:
//get the back-facing camera
self.captureDevice = [self backCamera];
NSError *error = nil;
AVCaptureDeviceInput *wVideoInput = [AVCaptureDeviceInput deviceInputWithDevice:self.captureDevice error:&error];
//deviceInputWithDevice:error: returns nil on failure, so check before adding
if (wVideoInput)
    [self.captureSession addInput:wVideoInput];
The helper that returns the back camera:
- (AVCaptureDevice *)backCamera
{
    NSArray *cameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in cameras)
    {
        if (device.position == AVCaptureDevicePositionBack)
            return device;
    }
    //fall back to the default video device if no back camera exists
    return [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}
The media types an input device can provide include:
AVMediaTypeVideo
AVMediaTypeAudio
(4) Adding an output to the AVCaptureSession:
AVCaptureVideoDataOutput *wOutput = [[AVCaptureVideoDataOutput alloc] init];
//set the session's quality preset
self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;
[self.captureSession addOutput:wOutput];
The available output classes include:
AVCaptureMovieFileOutput: writes the captured data to a movie file
AVCaptureVideoDataOutput: for processing captured video frames
AVCaptureAudioDataOutput: for processing captured audio data
AVCaptureStillImageOutput: for capturing still images along with their metadata
(5) The AVCaptureVideoDataOutputSampleBufferDelegate protocol requires implementing the following method. Its documented behavior:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;
Delegates receive this message whenever the output captures and outputs a new video frame, decoding or re-encoding it
as specified by its videoSettings property. Delegates can use the provided video frame in conjunction with other APIs
for further processing. This method will be called on the dispatch queue specified by the output's
sampleBufferCallbackQueue property. This method is called periodically, so it must be efficient to prevent capture
performance problems, including dropped frames.
In short, this method runs repeatedly, once per captured video frame (decoded or re-encoded according to the output's videoSettings), on the dispatch queue set as the output's sampleBufferCallbackQueue. Because frames arrive continuously, the handler must process or discard each sample buffer quickly; a slow handler causes the capture pipeline to drop frames.
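One step the snippets above omit is wiring the delegate and its queue to the video data output; without this the callback never fires. A sketch, assuming a custom serial queue (the queue label is illustrative):

```objectivec
// Deliver frames as 32-bit BGRA so they can later be wrapped
// in a CGBitmapContext for still-image extraction
wOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                           @(kCVPixelFormatType_32BGRA) };

// Frame callbacks arrive serially on this queue, not on the main thread
dispatch_queue_t queue =
    dispatch_queue_create("com.example.videoQueue", DISPATCH_QUEUE_SERIAL);
[wOutput setSampleBufferDelegate:self queue:queue];
```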
(6) Capturing a photo at a given moment
Since the whole point of the camera is to grab a photo at a specific instant, I simply discard every frame that captureOutput:didOutputSampleBuffer:fromConnection: delivers until the user taps the capture button:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (NO == isbeginCapture)
    {
        return; //discard this frame
    }
    [self cutRectSet];
    isbeginCapture = NO;
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    //lock the buffer before touching its memory
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    //base address of the pixel data (assumes the output delivers 32BGRA)
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    //bytes (not bits) per row
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    //pixel width and height
    pixelWidth = CVPixelBufferGetWidth(imageBuffer);
    pixelHeight = CVPixelBufferGetHeight(imageBuffer);
    //create a device RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    //wrap the pixel data in a bitmap context
    CGContextRef context = CGBitmapContextCreate(baseAddress, pixelWidth, pixelHeight, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef wCGImageRef = CGBitmapContextCreateImage(context);
    //crop out the recognition frame
    CGImageRef cgimg = CGImageCreateWithImageInRect(wCGImageRef, captureRect);
    //keep the cropped image
    self.targetImage = [UIImage imageWithCGImage:cgimg];
    //release everything created above
    CGImageRelease(cgimg);
    CGImageRelease(wCGImageRef);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    //unlock the buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
(7) Starting and stopping capture:
- (void)startRunning; starts the flow of data from the inputs to the outputs
- (void)stopRunning; stops the flow of data from the inputs to the outputs
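To let the user see the live camera feed, an AVCaptureVideoPreviewLayer is typically attached to the session before startRunning is called; a minimal sketch:

```objectivec
// Preview layer mirrors whatever the session is capturing
AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];

// Begin the flow of data from inputs to outputs
[self.captureSession startRunning];
```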
(8) The finished effect (the original post's screenshot is not preserved here).
3. References
http://course.gdou.com/blog/Blog.pzs/archive/2011/12/14/10882.html