
Advanced iOS Development - Custom Video Recording with AVFoundation

2016-05-25


The system-provided video recording UI rarely satisfies designers and project managers, so a custom video recorder becomes essential. Below I will walk you through building your own video recording interface.





Introduction

Custom video recording relies mainly on the AVFoundation and CoreMedia frameworks, covering video input, video output, and file reading and writing. The classes we will use are listed below:

AVCaptureSession

AVCaptureVideoPreviewLayer

AVCaptureDeviceInput

AVCaptureConnection

AVCaptureVideoDataOutput

AVCaptureAudioDataOutput

AVAssetWriter

AVAssetWriterInput


Each class, together with its code implementation, is described in detail below.



AVCaptureSession

AVCaptureSession is the central hub of AVFoundation's capture classes, so we start with it. For video capture, a client instantiates an AVCaptureSession and adds the appropriate AVCaptureInputs (such as AVCaptureDeviceInput) and outputs (such as AVCaptureMovieFileOutput). Calling [AVCaptureSession startRunning] starts the flow of data from the inputs to the outputs, and [AVCaptureSession stopRunning] stops that flow. A client can also set the sessionPreset property to customize the quality level or the output bit rate of the recording.

//capture session used for recording
- (AVCaptureSession *)recordSession {
    if (_recordSession == nil) {
        _recordSession = [[AVCaptureSession alloc] init];
        //add the back camera input
        if ([_recordSession canAddInput:self.backCameraInput]) {
            [_recordSession addInput:self.backCameraInput];
        }
        //add the microphone input
        if ([_recordSession canAddInput:self.audioMicInput]) {
            [_recordSession addInput:self.audioMicInput];
        }
        //add the video output
        if ([_recordSession canAddOutput:self.videoOutput]) {
            [_recordSession addOutput:self.videoOutput];
            //read the actual capture resolution from the output's videoSettings
            NSDictionary *actual = self.videoOutput.videoSettings;
            _cx = [[actual objectForKey:@"Height"] integerValue];
            _cy = [[actual objectForKey:@"Width"] integerValue];
        }
        //add the audio output
        if ([_recordSession canAddOutput:self.audioOutput]) {
            [_recordSession addOutput:self.audioOutput];
        }
        //set the recording orientation
        self.videoConnection.videoOrientation = AVCaptureVideoOrientationPortrait;
    }
    return _recordSession;
}
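
As noted above, data only flows from the inputs to the outputs once startRunning is called. Below is a minimal sketch of starting and stopping the session from a view controller; the viewWillAppear/viewWillDisappear placement and the AVCaptureSessionPresetHigh preset are assumptions for illustration rather than part of the demo code.

//start the session when the recording screen appears (placement is an assumption)
- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    //optionally pick a preset to control the quality level / bit rate
    if ([self.recordSession canSetSessionPreset:AVCaptureSessionPresetHigh]) {
        self.recordSession.sessionPreset = AVCaptureSessionPresetHigh;
    }
    [self.recordSession startRunning];
}

//stop the session when leaving the screen
- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    [self.recordSession stopRunning];
}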


AVCaptureDevice

Each instance of AVCaptureDevice corresponds to a physical device, such as a camera or a microphone. AVCaptureDevice instances cannot be created directly; the available devices are obtained through the class methods devicesWithMediaType: and defaultDeviceWithMediaType:, and a device can provide one or more streams of a given media type. An AVCaptureDevice instance is then used to create an AVCaptureDeviceInput, which serves as an input source for an AVCaptureSession.

//returns the front camera
- (AVCaptureDevice *)frontCamera {
    return [self cameraWithPosition:AVCaptureDevicePositionFront];
}

//returns the back camera
- (AVCaptureDevice *)backCamera {
    return [self cameraWithPosition:AVCaptureDevicePositionBack];
}

//returns the camera at the given position (front or back)
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position {
    //all default devices capable of capturing video
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    //find the device that matches the requested position
    for (AVCaptureDevice *device in devices) {
        if ([device position] == position) {
            return device;
        }
    }
    return nil;
}

//turn the torch (flashlight) on
- (void)openFlashLight {
    AVCaptureDevice *backCamera = [self backCamera];
    if (backCamera.torchMode == AVCaptureTorchModeOff) {
        [backCamera lockForConfiguration:nil];
        backCamera.torchMode = AVCaptureTorchModeOn;
        backCamera.flashMode = AVCaptureFlashModeOn;
        [backCamera unlockForConfiguration];
    }
}

//turn the torch (flashlight) off
- (void)closeFlashLight {
    AVCaptureDevice *backCamera = [self backCamera];
    if (backCamera.torchMode == AVCaptureTorchModeOn) {
        [backCamera lockForConfiguration:nil];
        backCamera.torchMode = AVCaptureTorchModeOff;
        backCamera.flashMode = AVCaptureFlashModeOff;
        [backCamera unlockForConfiguration];
    }
}


AVCaptureDeviceInput

AVCaptureDeviceInput is the input source of an AVCaptureSession: it feeds media data from a device into the session. It is created from an AVCaptureDevice instance — here the front and back cameras, which we obtain via [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo].

//back camera input
- (AVCaptureDeviceInput *)backCameraInput {
    if (_backCameraInput == nil) {
        NSError *error;
        _backCameraInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self backCamera] error:&error];
        if (error) {
            [SVProgressHUD showErrorWithStatus:@"Failed to get the back camera"];
        }
    }
    return _backCameraInput;
}

//front camera input
- (AVCaptureDeviceInput *)frontCameraInput {
    if (_frontCameraInput == nil) {
        NSError *error;
        _frontCameraInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self frontCamera] error:&error];
        if (error) {
            [SVProgressHUD showErrorWithStatus:@"Failed to get the front camera"];
        }
    }
    return _frontCameraInput;
}
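
With both camera inputs available as lazy properties, switching between the front and back camera only requires swapping the inputs inside a configuration block. Below is a minimal sketch; the method name and the isFront parameter are assumptions for illustration, not part of the demo.

//switch between the front and back camera (hypothetical helper)
- (void)switchCameraToFront:(BOOL)isFront {
    [self.recordSession beginConfiguration];
    //remove the current input and add the other one if the session accepts it
    if (isFront) {
        [self.recordSession removeInput:self.backCameraInput];
        if ([self.recordSession canAddInput:self.frontCameraInput]) {
            [self.recordSession addInput:self.frontCameraInput];
        }
    } else {
        [self.recordSession removeInput:self.frontCameraInput];
        if ([self.recordSession canAddInput:self.backCameraInput]) {
            [self.recordSession addInput:self.backCameraInput];
        }
    }
    [self.recordSession commitConfiguration];
}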


AVCaptureVideoPreviewLayer

AVCaptureVideoPreviewLayer is a CALayer subclass from Core Animation that previews the video output of an AVCaptureSession; simply put, it is the layer on which the captured video is displayed.

//layer on which the captured video is displayed
- (AVCaptureVideoPreviewLayer *)previewLayer {
    if (_previewLayer == nil) {
        //initialized with the AVCaptureSession
        AVCaptureVideoPreviewLayer *preview = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.recordSession];
        //fill the whole screen, cropping if necessary
        preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
        _previewLayer = preview;
    }
    return _previewLayer;
}
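
The preview layer still has to be sized and attached to the view hierarchy before the camera feed appears on screen. A minimal sketch, assuming this is done in the recording view controller's viewDidLoad:

//attach the preview layer to the view (placement in viewDidLoad is an assumption)
- (void)viewDidLoad {
    [super viewDidLoad];
    self.previewLayer.frame = self.view.bounds;
    //insert below any controls so the buttons stay on top of the video
    [self.view.layer insertSublayer:self.previewLayer atIndex:0];
}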


AVCaptureMovieFileOutput

AVCaptureMovieFileOutput is a subclass of AVCaptureFileOutput that writes captured media to QuickTime movie files. On the iPhone, however, it can neither pause a recording nor let you choose the output file type, so it is not used here; instead, the more flexible AVCaptureVideoDataOutput and AVCaptureAudioDataOutput are used to implement recording.

AVCaptureVideoDataOutput

AVCaptureVideoDataOutput is a subclass of AVCaptureOutput that outputs the captured video frames, compressed or uncompressed. The frames it produces can be processed with whatever media APIs are appropriate; an application receives the frame data through the captureOutput:didOutputSampleBuffer:fromConnection: delegate method.

//video output
- (AVCaptureVideoDataOutput *)videoOutput {
    if (_videoOutput == nil) {
        _videoOutput = [[AVCaptureVideoDataOutput alloc] init];
        [_videoOutput setSampleBufferDelegate:self queue:self.captureQueue];
        //deliver frames as 4:2:0 bi-planar YCbCr pixel buffers
        NSDictionary *setcapSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                        [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey,
                                        nil];
        _videoOutput.videoSettings = setcapSettings;
    }
    return _videoOutput;
}


AVCaptureAudioDataOutput

AVCaptureAudioDataOutput is a subclass of AVCaptureOutput that outputs the captured audio samples, compressed or uncompressed. The samples it produces can be processed with whatever media APIs are appropriate; an application receives the audio data through the captureOutput:didOutputSampleBuffer:fromConnection: delegate method.

//audio output
- (AVCaptureAudioDataOutput *)audioOutput {
    if (_audioOutput == nil) {
        _audioOutput = [[AVCaptureAudioDataOutput alloc] init];
        [_audioOutput setSampleBufferDelegate:self queue:self.captureQueue];
    }
    return _audioOutput;
}
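
Both data outputs deliver their sample buffers on self.captureQueue, which is not shown above. A minimal sketch of that property, assuming a serial background queue (the queue label is an assumption):

//serial queue on which both data outputs deliver their sample buffers
- (dispatch_queue_t)captureQueue {
    if (_captureQueue == nil) {
        //label is hypothetical; any unique reverse-DNS string works
        _captureQueue = dispatch_queue_create("com.wcl.record.captureQueue", DISPATCH_QUEUE_SERIAL);
    }
    return _captureQueue;
}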


AVCaptureConnection

An AVCaptureConnection represents a connection, within an AVCaptureSession, between one or more AVCaptureInputPorts and either an AVCaptureOutput or an AVCaptureVideoPreviewLayer.

//video connection
- (AVCaptureConnection *)videoConnection {
    _videoConnection = [self.videoOutput connectionWithMediaType:AVMediaTypeVideo];
    return _videoConnection;
}

//audio connection
- (AVCaptureConnection *)audioConnection {
    if (_audioConnection == nil) {
        _audioConnection = [self.audioOutput connectionWithMediaType:AVMediaTypeAudio];
    }
    return _audioConnection;
}


AVAssetWriter

AVAssetWriter provides services for writing media data to a new file. An AVAssetWriter instance specifies the format of the output file, such as the QuickTime movie file format or the MPEG-4 file format. It can write several parallel tracks of media data — at minimum a video track and an audio track, which are introduced below. A single AVAssetWriter instance can only be used to write a single file; clients that want to write multiple files must use a new AVAssetWriter instance for each one.

//initializer
- (instancetype)initPath:(NSString *)path Height:(NSInteger)cy width:(NSInteger)cx channels:(int)ch samples:(Float64)rate {
    self = [super init];
    if (self) {
        self.path = path;
        //delete any file already at this path so the recording is written from scratch
        [[NSFileManager defaultManager] removeItemAtPath:self.path error:nil];
        NSURL *url = [NSURL fileURLWithPath:self.path];
        //create a writer that produces an MP4 file
        _writer = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeMPEG4 error:nil];
        //make the file better suited for playback over a network
        _writer.shouldOptimizeForNetworkUse = YES;
        //set up the video input
        [self initVideoInputHeight:cy width:cx];
        //only set up the audio input once the sample rate and channel count are known
        if (rate != 0 && ch != 0) {
            [self initAudioInputChannels:ch samples:rate];
        }
    }
    return self;
}


AVAssetWriterInput

AVAssetWriterInput appends media samples, packaged as CMSampleBuffer instances, to a single track of an AVAssetWriter's output file. When there are multiple inputs, the AVAssetWriter tries to write the media data in the pattern that is ideal for storage and playback efficiency. Whether an input can accept media data is indicated by its readyForMoreMediaData property: only when readyForMoreMediaData is YES may you append media data to that input.

//set up the video input
- (void)initVideoInputHeight:(NSInteger)cy width:(NSInteger)cx {
    //video settings: resolution, codec, and so on
    NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
                              AVVideoCodecH264, AVVideoCodecKey,
                              [NSNumber numberWithInteger:cx], AVVideoWidthKey,
                              [NSNumber numberWithInteger:cy], AVVideoHeightKey,
                              nil];
    //create the video writer input
    _videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:settings];
    //tells the input to tune its processing for a real-time data source
    _videoInput.expectsMediaDataInRealTime = YES;
    //attach the video input to the writer
    [_writer addInput:_videoInput];
}

//set up the audio input
- (void)initAudioInputChannels:(int)ch samples:(Float64)rate {
    //audio settings: format (AAC here), channel count, sample rate and bit rate
    NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
                              [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
                              [NSNumber numberWithInt:ch], AVNumberOfChannelsKey,
                              [NSNumber numberWithFloat:rate], AVSampleRateKey,
                              [NSNumber numberWithInt:128000], AVEncoderBitRateKey,
                              nil];
    //create the audio writer input
    _audioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:settings];
    //tells the input to tune its processing for a real-time data source
    _audioInput.expectsMediaDataInRealTime = YES;
    //attach the audio input to the writer
    [_writer addInput:_audioInput];
}



The classes and configuration above are everything needed before recording; the next part shows how the captured data is received and how it is written to a file.



Writing the data

#pragma mark - writing data
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    BOOL isVideo = YES;
    @synchronized(self) {
        if (!self.isCapturing || self.isPaused) {
            return;
        }
        if (captureOutput != self.videoOutput) {
            isVideo = NO;
        }
        //create the encoder once both the audio and video parameters are known
        if ((self.recordEncoder == nil) && !isVideo) {
            CMFormatDescriptionRef fmt = CMSampleBufferGetFormatDescription(sampleBuffer);
            [self setAudioFormat:fmt];
            NSString *videoName = [NSString getUploadFile_type:@"video" fileType:@"mp4"];
            self.videoPath = [[self getVideoCachePath] stringByAppendingPathComponent:videoName];
            self.recordEncoder = [WCLRecordEncoder encoderForPath:self.videoPath Height:_cy width:_cx channels:_channels samples:_samplerate];
        }
        //check whether the recording was interrupted by a pause
        if (self.discont) {
            if (isVideo) {
                return;
            }
            self.discont = NO;
            //calculate how long the pause lasted
            CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            CMTime last = isVideo ? _lastVideo : _lastAudio;
            if (last.flags & kCMTimeFlags_Valid) {
                if (_timeOffset.flags & kCMTimeFlags_Valid) {
                    pts = CMTimeSubtract(pts, _timeOffset);
                }
                CMTime offset = CMTimeSubtract(pts, last);
                if (_timeOffset.value == 0) {
                    _timeOffset = offset;
                } else {
                    _timeOffset = CMTimeAdd(_timeOffset, offset);
                }
            }
            _lastVideo.flags = 0;
            _lastAudio.flags = 0;
        }
        //retain the sample buffer so it is not released while we adjust or modify it
        CFRetain(sampleBuffer);
        if (_timeOffset.value > 0) {
            CFRelease(sampleBuffer);
            //shift the timestamps by the accumulated pause offset
            sampleBuffer = [self adjustTime:sampleBuffer by:_timeOffset];
        }
        //remember the timestamp of the last sample so a later pause can be measured
        CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        CMTime dur = CMSampleBufferGetDuration(sampleBuffer);
        if (dur.value > 0) {
            pts = CMTimeAdd(pts, dur);
        }
        if (isVideo) {
            _lastVideo = pts;
        } else {
            _lastAudio = pts;
        }
    }
    CMTime dur = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (self.startTime.value == 0) {
        self.startTime = dur;
    }
    CMTime sub = CMTimeSubtract(dur, self.startTime);
    self.currentRecordTime = CMTimeGetSeconds(sub);
    if (self.currentRecordTime > self.maxRecordTime) {
        if (self.currentRecordTime - self.maxRecordTime < 0.1) {
            if ([self.delegate respondsToSelector:@selector(recordProgress:)]) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    [self.delegate recordProgress:self.currentRecordTime / self.maxRecordTime];
                });
            }
        }
        return;
    }
    if ([self.delegate respondsToSelector:@selector(recordProgress:)]) {
        dispatch_async(dispatch_get_main_queue(), ^{
            [self.delegate recordProgress:self.currentRecordTime / self.maxRecordTime];
        });
    }
    //hand the sample buffer to the encoder
    [self.recordEncoder encodeFrame:sampleBuffer isVideo:isVideo];
    CFRelease(sampleBuffer);
}
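
The delegate method above also relies on a getVideoCachePath helper (and a getUploadFile_type:fileType: category on NSString) that this post does not list. A minimal sketch of what the cache-path helper might look like; the directory name is an assumption, not the demo's actual implementation.

//hypothetical sketch of the directory used to store recorded videos
- (NSString *)getVideoCachePath {
    NSString *cacheDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) firstObject];
    NSString *videoCache = [cacheDir stringByAppendingPathComponent:@"videos"];
    BOOL isDir = NO;
    NSFileManager *fileManager = [NSFileManager defaultManager];
    //create the directory on first use
    if (![fileManager fileExistsAtPath:videoCache isDirectory:&isDir] || !isDir) {
        [fileManager createDirectoryAtPath:videoCache withIntermediateDirectories:YES attributes:nil error:nil];
    }
    return videoCache;
}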

//read the audio format (sample rate and channel count)
- (void)setAudioFormat:(CMFormatDescriptionRef)fmt {
    const AudioStreamBasicDescription *asbd = CMAudioFormatDescriptionGetStreamBasicDescription(fmt);
    _samplerate = asbd->mSampleRate;
    _channels = asbd->mChannelsPerFrame;
}

//shift the timing of a sample buffer by the given offset
- (CMSampleBufferRef)adjustTime:(CMSampleBufferRef)sample by:(CMTime)offset {
    CMItemCount count;
    CMSampleBufferGetSampleTimingInfoArray(sample, 0, nil, &count);
    CMSampleTimingInfo *pInfo = malloc(sizeof(CMSampleTimingInfo) * count);
    CMSampleBufferGetSampleTimingInfoArray(sample, count, pInfo, &count);
    for (CMItemCount i = 0; i < count; i++) {
        pInfo[i].decodeTimeStamp = CMTimeSubtract(pInfo[i].decodeTimeStamp, offset);
        pInfo[i].presentationTimeStamp = CMTimeSubtract(pInfo[i].presentationTimeStamp, offset);
    }
    CMSampleBufferRef sout;
    CMSampleBufferCreateCopyWithNewTiming(nil, sample, count, pInfo, &sout);
    free(pInfo);
    return sout;
}

//all data is written through this method
- (BOOL)encodeFrame:(CMSampleBufferRef)sampleBuffer isVideo:(BOOL)isVideo {
    //only proceed once the sample buffer's data is ready
    if (CMSampleBufferDataIsReady(sampleBuffer)) {
        //while the writer status is still unknown, wait for a video frame so the file starts with video
        if (_writer.status == AVAssetWriterStatusUnknown && isVideo) {
            //timestamp at which the writing session starts
            CMTime startTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            //start writing
            [_writer startWriting];
            [_writer startSessionAtSourceTime:startTime];
        }
        //writing failed
        if (_writer.status == AVAssetWriterStatusFailed) {
            NSLog(@"writer error %@", _writer.error.localizedDescription);
            return NO;
        }
        //video sample
        if (isVideo) {
            //is the video input ready to accept more media data?
            if (_videoInput.readyForMoreMediaData == YES) {
                //append the sample
                [_videoInput appendSampleBuffer:sampleBuffer];
                return YES;
            }
        } else {
            //is the audio input ready to accept more media data?
            if (_audioInput.readyForMoreMediaData) {
                //append the sample
                [_audioInput appendSampleBuffer:sampleBuffer];
                return YES;
            }
        }
    }
    return NO;
}
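
The capture delegate reads the isCapturing, isPaused, and discont flags, but the methods that set them are not listed above. Here is a minimal sketch of how start, pause, and resume could flip those flags; the method names follow the demo's style but are assumptions here.

//start a new recording: reset the timing state the capture delegate relies on
- (void)startCapture {
    @synchronized(self) {
        if (!self.isCapturing) {
            self.recordEncoder = nil;
            self.isPaused = NO;
            self.discont = NO;
            _timeOffset = CMTimeMake(0, 0);
            self.isCapturing = YES;
        }
    }
}

//pause recording: samples are dropped until resumeCapture is called
- (void)pauseCapture {
    @synchronized(self) {
        if (self.isCapturing) {
            self.isPaused = YES;
            //mark that the time gap has to be compensated when recording resumes
            self.discont = YES;
        }
    }
}

//resume recording after a pause
- (void)resumeCapture {
    @synchronized(self) {
        if (self.isPaused) {
            self.isPaused = NO;
        }
    }
}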


Finishing the recording and saving it to the photo library

//stop recording
- (void)stopCaptureHandler:(void (^)(UIImage *movieImage))handler {
    @synchronized(self) {
        if (self.isCapturing) {
            NSString *path = self.recordEncoder.path;
            NSURL *url = [NSURL fileURLWithPath:path];
            self.isCapturing = NO;
            dispatch_async(_captureQueue, ^{
                [self.recordEncoder finishWithCompletionHandler:^{
                    self.isCapturing = NO;
                    self.recordEncoder = nil;
                    //save the finished file to the photo library
                    [[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
                        [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:url];
                    } completionHandler:^(BOOL success, NSError * _Nullable error) {
                        if (success) {
                            NSLog(@"video saved successfully");
                        }
                    }];
                    [self movieToImageHandler:handler];
                }];
            });
        }
    }
}

//grab the first frame of the recorded video as a thumbnail
- (void)movieToImageHandler:(void (^)(UIImage *movieImage))handler {
    NSURL *url = [NSURL fileURLWithPath:self.videoPath];
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:url options:nil];
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    generator.appliesPreferredTrackTransform = TRUE;
    CMTime thumbTime = CMTimeMakeWithSeconds(0, 60);
    generator.apertureMode = AVAssetImageGeneratorApertureModeEncodedPixels;
    AVAssetImageGeneratorCompletionHandler generatorHandler =
    ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error) {
        if (result == AVAssetImageGeneratorSucceeded) {
            UIImage *thumbImg = [UIImage imageWithCGImage:im];
            if (handler) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    handler(thumbImg);
                });
            }
        }
    };
    [generator generateCGImagesAsynchronouslyForTimes:
     [NSArray arrayWithObject:[NSValue valueWithCMTime:thumbTime]] completionHandler:generatorHandler];
}

//called when the recording is complete
- (void)finishWithCompletionHandler:(void (^)(void))handler {
    [_writer finishWritingWithCompletionHandler:handler];
}


That is everything for this post. If you have any questions, feel free to ask me. A demo project is attached so you can see exactly how everything is used; if you find it helpful, a star would be appreciated. Thanks for reading!

Link to my demo
