
iOS Study Notes (4): Implementing a Custom Camera

2014-08-18 15:36
1. Invoking Apple's system camera

In iOS development, invoking Apple's built-in camera is essentially just presenting a modal view controller:

UIImagePickerController *wImagePickerController = [[UIImagePickerController alloc] init];
wImagePickerController.delegate = self;
wImagePickerController.sourceType = UIImagePickerControllerSourceTypeCamera;
// presentModalViewController:animated: is deprecated; use the modern presentation API
[self presentViewController:wImagePickerController animated:YES completion:nil];
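Since the controller's delegate is set to self, the presenting view controller should conform to both UIImagePickerControllerDelegate and UINavigationControllerDelegate and implement the finish/cancel callbacks. A minimal sketch:

// The presenting controller adopts both protocols, e.g.:
// @interface MyViewController () <UIImagePickerControllerDelegate, UINavigationControllerDelegate>

- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    // The captured photo is delivered in the info dictionary
    UIImage *image = info[UIImagePickerControllerOriginalImage];
    // ... use the image ...
    [picker dismissViewControllerAnimated:YES completion:nil];
}

- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker
{
    [picker dismissViewControllerAnimated:YES completion:nil];
}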

2. Implementing a custom camera

The system camera does not always meet our requirements. For example, I may need to overlay hint text on the camera view, or draw a recognition frame for OCR. In those cases we have to implement the camera ourselves.

(1) Video capture involves input devices and output devices, and the program coordinates the flow of data between them through an instance of AVCaptureSession. A camera implementation therefore needs at least the following:

● An instance of AVCaptureDevice to represent the input device, such as a camera or microphone
● An instance of a concrete subclass of AVCaptureInput to configure the ports from the input device
● An instance of a concrete subclass of AVCaptureOutput to manage the output to a movie file or still image
● An instance of AVCaptureSession to coordinate the data flow from the input to the output
(2) The role of AVCaptureSession, as described in the documentation:

To perform a real-time capture, a client may instantiate AVCaptureSession and add appropriate AVCaptureInputs, such as AVCaptureDeviceInput, and outputs, such as AVCaptureMovieFileOutput.

In other words, it is a real-time capture coordinator: a single AVCaptureSession can coordinate multiple input and output devices. Inputs and outputs are attached to the session with its addInput: and addOutput: methods, as the sketch below shows.
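A minimal sketch of that wiring, covering all four pieces from the list above (error handling abbreviated; names are illustrative):

AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;

// Input: wrap a capture device in an AVCaptureDeviceInput
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (input && [session canAddInput:input]) {
    [session addInput:input];
}

// Output: a concrete AVCaptureOutput subclass, here one that delivers video frames
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
if ([session canAddOutput:output]) {
    [session addOutput:output];
}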

(3) Adding an input device to the AVCaptureSession:

Create the input:

// Get the back camera
self.captureDevice = [self backCamera];
NSError *error = nil;
AVCaptureDeviceInput *wVideoInput = [AVCaptureDeviceInput deviceInputWithDevice:self.captureDevice error:&error];
[self.captureSession addInput:wVideoInput];

The helper that returns the back camera:

- (AVCaptureDevice *)backCamera
{
    NSArray *cameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in cameras)
    {
        if (device.position == AVCaptureDevicePositionBack)
            return device;
    }
    // Fall back to the default video device if no back camera is found
    return [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}

The media types available for input devices include:

AVMediaTypeVideo

AVMediaTypeAudio
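For example, if the session should also capture sound, a microphone input can be added the same way (a sketch, assuming the session from above is held in self.captureSession):

// Get the default audio device (the microphone) and wrap it in an input
AVCaptureDevice *microphone = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *audioError = nil;
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:microphone
                                                                         error:&audioError];
if (audioInput && [self.captureSession canAddInput:audioInput]) {
    [self.captureSession addInput:audioInput];
}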

(4) Adding an output to the AVCaptureSession:

AVCaptureVideoDataOutput *wOutput = [[AVCaptureVideoDataOutput alloc] init];
// Set the session's capture quality preset
self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;
[self.captureSession addOutput:wOutput];
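Note that the delegate method in step (6) below reads the pixel data as 32-bit BGRA, so the output's videoSettings and callback queue should be configured before the session starts. A sketch (the queue label is illustrative):

// Ask for 32-bit BGRA frames so the CGBitmapContext code in step (6) works as written
wOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                               @(kCVPixelFormatType_32BGRA) };

// Frames are delivered to the delegate on this serial queue
dispatch_queue_t queue = dispatch_queue_create("com.example.videoFrames", DISPATCH_QUEUE_SERIAL);
[wOutput setSampleBufferDelegate:self queue:queue];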

The available output classes include:

AVCaptureMovieFileOutput: writes the output to a movie file

AVCaptureVideoDataOutput: for processing captured video frames

AVCaptureAudioDataOutput: for processing captured audio data

AVCaptureStillImageOutput: for capturing still images together with their metadata

(5) The AVCaptureVideoDataOutputSampleBufferDelegate protocol defines the following method, which we need to implement; the documentation describes its role:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;

Delegates receive this message whenever the output captures and outputs a new video frame, decoding or re-encoding it as specified by its videoSettings property. Delegates can use the provided video frame in conjunction with other APIs for further processing. This method will be called on the dispatch queue specified by the output's sampleBufferCallbackQueue property. This method is called periodically, so it must be efficient to prevent capture performance problems, including dropped frames.

This method is called over and over: whenever the output captures a new video frame, it decodes or re-encodes it according to the videoSettings property and delivers it in a sample buffer. Because frames arrive continuously, the delegate must handle each buffer quickly; otherwise capture performance suffers and frames get dropped.

(6) Capturing the photo at a chosen moment

Since the whole point of a camera is to grab the photo at one particular moment, I simply discard every frame that captureOutput:didOutputSampleBuffer:fromConnection: delivers until the user taps the shutter button.
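The shutter button only has to raise a flag (the action name here is illustrative; isbeginCapture is the instance variable used below). The full delegate method that consumes the flag follows:

- (IBAction)shutterButtonTapped:(id)sender
{
    // The next frame delivered to the delegate below will be processed
    isbeginCapture = YES;
}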

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (NO == isbeginCapture)
    {
        return; // Discard the frame until the shutter is tapped
    }
    [self cutRectSet]; // Updates the captureRect/cutRect crop rectangles
    isbeginCapture = NO;

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the buffer before reading its pixel data
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the base address of the pixel data
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the width and height in pixels
    pixelWidth = CVPixelBufferGetWidth(imageBuffer);
    pixelHeight = CVPixelBufferGetHeight(imageBuffer);
    // Create an RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Create a bitmap context over the BGRA pixel data
    CGContextRef context = CGBitmapContextCreate(baseAddress, pixelWidth, pixelHeight, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef wCGImageRef = CGBitmapContextCreateImage(context);
    UIImage *wImage = [UIImage imageWithCGImage:wCGImageRef];
    // Crop the frame to the capture rectangle
    CGImageRef cgimg = CGImageCreateWithImageInRect([wImage CGImage], captureRect);
    UIGraphicsBeginImageContext(captureRect.size);
    CGContextRef context1 = UIGraphicsGetCurrentContext();
    CGContextDrawImage(context1, cutRect, cgimg);
    // Keep the cropped picture
    self.targetImage = [UIImage imageWithCGImage:cgimg];
    UIGraphicsEndImageContext();
    CGImageRelease(cgimg);
    CGImageRelease(wCGImageRef);
    // Unlock the buffer and release the Core Graphics objects
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
}

(7) Starting and stopping capture:

- (void)startRunning; starts the flow of data from the inputs to the outputs

- (void)stopRunning; stops the flow of data from the inputs to the outputs
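The usual pattern is to tie these to the view controller's appearance callbacks; a minimal sketch, assuming the session lives in self.captureSession:

- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    // Begin delivering frames from the inputs to the outputs
    [self.captureSession startRunning];
}

- (void)viewWillDisappear:(BOOL)animated
{
    [super viewWillDisappear:animated];
    // Stop capture when the camera screen goes away
    [self.captureSession stopRunning];
}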

(8) When everything is in place, the finished camera looks like this:

[Screenshots from the original post are not preserved.]
3. References

http://course.gdou.com/blog/Blog.pzs/archive/2011/12/14/10882.html