How to capture video frames from the camera as images using AV Foundation
2012-03-09 23:16
Q: How do I capture video frames from the camera as images using AV Foundation?
A: To perform a real-time capture, first create a capture session by instantiating an `AVCaptureSession` object. You use an `AVCaptureSession` object to coordinate the flow of data from AV input devices to outputs.
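For example, a minimal sketch of this first step (the choice of `AVCaptureSessionPresetMedium` is an assumption; `canSetSessionPreset:` verifies the device supports the preset before it is applied):

```objc
// Sketch: create the capture session that will coordinate input-to-output flow.
AVCaptureSession *session = [[AVCaptureSession alloc] init];

// Assumption: medium quality is sufficient for frame-by-frame processing.
if ([session canSetSessionPreset:AVCaptureSessionPresetMedium]) {
    session.sessionPreset = AVCaptureSessionPresetMedium;
}
```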
Next, create an input data source that provides video data to the capture session by instantiating an `AVCaptureDeviceInput` object. Call `addInput:` to add that input to the `AVCaptureSession` object.
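A sketch of that step, continuing from the session created above; the `canAddInput:` check is a defensive addition that is not in Listing 1:

```objc
// Sketch: wrap the default video camera in a device input and attach it.
NSError *error = nil;
AVCaptureDevice *device =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];

if (input && [session canAddInput:input]) {
    [session addInput:input];
} else {
    NSLog(@"Could not add video input: %@", error);
}
```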
Create an output destination by instantiating an `AVCaptureVideoDataOutput` object, and add it to the capture session using `addOutput:`. `AVCaptureVideoDataOutput` is used to process uncompressed frames from the video being captured. An instance of `AVCaptureVideoDataOutput` produces video frames you can process using other media APIs. You can access the frames with the `captureOutput:didOutputSampleBuffer:fromConnection:` delegate method. Use `setSampleBufferDelegate:queue:` to set the sample buffer delegate and the queue on which callbacks should be invoked. The delegate of an `AVCaptureVideoDataOutput` object must adopt the `AVCaptureVideoDataOutputSampleBufferDelegate` protocol. Use the session's `sessionPreset` property to customize the quality of the output.
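A sketch of the output setup just described, written for ARC and modern Objective-C literals (unlike the pre-ARC listing below); `alwaysDiscardsLateVideoFrames` is an optional extra that the original listing does not set:

```objc
// Sketch: create the video data output and deliver BGRA frames to a serial queue.
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                              @(kCVPixelFormatType_32BGRA) };

// Assumption: dropping frames that arrive while the delegate is still busy
// is acceptable for live processing.
output.alwaysDiscardsLateVideoFrames = YES;

// Callbacks arrive on this serial queue, not the main thread.
dispatch_queue_t queue = dispatch_queue_create("sampleBufferQueue",
                                               DISPATCH_QUEUE_SERIAL);
[output setSampleBufferDelegate:self queue:queue];

if ([session canAddOutput:output]) {
    [session addOutput:output];
}
```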
You invoke the capture session's `startRunning` method to start the flow of data from the inputs to the outputs, and `stopRunning` to stop the flow.
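In practice, starting and stopping are often paired with the view lifecycle. A sketch under that assumption, using the `session` property that Listing 1 assigns in `setSession:`:

```objc
// Sketch: start capture when the view appears, stop it when the view goes away.
- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    [[self session] startRunning];  // begin delivering sample buffers
}

- (void)viewWillDisappear:(BOOL)animated
{
    [super viewWillDisappear:animated];
    [[self session] stopRunning];   // halt the flow of data
}
```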
Listing 1 shows an example of this. `setupCaptureSession` creates a capture session, adds a video input to provide video frames, adds an output destination to access the captured frames, and then starts the flow of data from the inputs to the outputs. While the capture session is running, the captured video sample buffers are sent to the sample buffer delegate using `captureOutput:didOutputSampleBuffer:fromConnection:`. Each sample buffer (`CMSampleBufferRef`) is then converted to a `UIImage` in `imageFromSampleBuffer:`.
Listing 1: Configuring a capture device to record video with AV Foundation and saving the frames as `UIImage` objects.
```objc
#import <AVFoundation/AVFoundation.h>

// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice
                               defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output =
        [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format
    output.videoSettings =
        [NSDictionary dictionaryWithObject:
            [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                    forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // If you wish to cap the frame rate to a known value, such as 15 fps, set
    // minFrameDuration.
    output.minFrameDuration = CMTimeMake(1, 15);

    // Start the session running to start the flow of data
    [session startRunning];

    // Assign session to an ivar.
    [self setSession:session];
}

// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

    // < Add your code here that uses the image >
}

// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
```
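Note that the `minFrameDuration` property of `AVCaptureVideoDataOutput` used in Listing 1 was deprecated in later iOS releases; on iOS 7 and newer, the frame-rate cap is set on the capture device instead. A hedged sketch of that replacement, reusing the `device` variable from Listing 1:

```objc
// Sketch (iOS 7+): cap the frame rate to 15 fps by configuring the device,
// since activeVideoMin/MaxFrameDuration replace the deprecated
// minFrameDuration property on the output.
NSError *error = nil;
if ([device lockForConfiguration:&error]) {
    device.activeVideoMinFrameDuration = CMTimeMake(1, 15);
    device.activeVideoMaxFrameDuration = CMTimeMake(1, 15);
    [device unlockForConfiguration];
} else {
    NSLog(@"Could not lock device for configuration: %@", error);
}
```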
Document Revision History
| Date | Notes |
| --- | --- |
| 2010-09-29 | Updated the imageFromSampleBuffer code to correctly create a UIImage from the sample buffer data. |
| 2010-07-20 | New document that shows how to capture video frames from the camera as images using AV Foundation. |