Implementing voice and video calls with WebRTC
2017-03-06 00:00
Abstract: voice calls, video calls, WebRTC
Introduction to WebRTC
WebRTC, short for Web Real-Time Communication, is a technology that lets web browsers hold real-time voice and video conversations; Google open-sourced it in May 2011.
WebRTC provides the core technology for video conferencing, including audio/video capture, encoding and decoding, network transport, and rendering, and it is cross-platform: Windows, Linux, macOS, and Android.
1. Add the WebRTC dependency via CocoaPods
pod 'libjingle_peerconnection'
2. Set up an RTC server for signaling. You can use the open-source project on GitHub: https://github.com/LingyuCoder/SkyRTC
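SkyRTC's actual wire protocol is socket.io based; purely as an illustration of what the signaling server does conceptually, here is a minimal in-memory relay sketch in Python. The class and message shapes are assumptions for illustration, not SkyRTC's API:

```python
import json

class SignalingRelay:
    """Minimal in-memory stand-in for a signaling server: it never
    inspects SDP, it only forwards opaque messages between peers."""

    def __init__(self):
        self.inboxes = {}  # peer_id -> list of queued JSON strings

    def register(self, peer_id):
        self.inboxes[peer_id] = []

    def send(self, to_peer, message):
        # Relay the message verbatim; offers, answers, and ICE
        # candidates all travel through the same path.
        self.inboxes[to_peer].append(json.dumps(message))

    def poll(self, peer_id):
        # Drain and return the peer's inbox.
        pending, self.inboxes[peer_id] = self.inboxes[peer_id], []
        return [json.loads(m) for m in pending]

relay = SignalingRelay()
relay.register("caller")
relay.register("callee")
relay.send("callee", {"type": "offer", "sdp": "v=0 ..."})
```

The point is that the server is a dumb pipe: all session intelligence lives in the two peers.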
3. Import the WebRTC headers

#import "RTCICEServer.h"
#import "RTCICECandidate.h"
#import "RTCMediaConstraints.h"
#import "RTCMediaStream.h"
#import "RTCPair.h"
#import "RTCPeerConnection.h"
#import "RTCPeerConnectionDelegate.h"
#import "RTCPeerConnectionFactory.h"
#import "RTCSessionDescription.h"
#import "RTCSessionDescriptionDelegate.h"
#import "RTCVideoRenderer.h"
#import "RTCVideoCapturer.h"
#import "RTCVideoTrack.h"
#import "RTCAudioTrack.h"
#import "RTCEAGLVideoView.h"
#import <AVFoundation/AVFoundation.h>
4. Link the required frameworks and libraries

AudioToolbox.framework
VideoToolbox.framework
QuartzCore.framework
OpenGLES.framework
CoreGraphics.framework
CoreVideo.framework
CoreMedia.framework
CoreAudio.framework
AVFoundation.framework
GLKit.framework
CFNetwork.framework
Security.framework
libsqlite3.tbd
libicucore.tbd
libc.tbd
libstdc++.6.0.9.tbd
5. Implementing the voice/video call

1. Initialization

@property (nonatomic, strong) RTCICEServer *iceServer;
@property (nonatomic, strong) RTCPeerConnectionFactory *pcFactory;
@property (nonatomic, strong) RTCPeerConnection *peerConnection;
@property (nonatomic, strong) RTCVideoTrack *localVideoTrack;
@property (nonatomic, strong) RTCVideoTrack *remoteVideoTrack;
@property (nonatomic, strong) RTCMediaConstraints *sdpConstraints;
@property (nonatomic, strong) RTCAudioTrack *localAudioTrack;
@property (nonatomic, strong) RTCAudioTrack *remoteAudioTrack;

NSString *videoEnable = self.audioOrVideoType == CXIMMediaCallTypeAudio ? @"false" : @"true";
self.sdpConstraints = [[RTCMediaConstraints alloc]
    initWithMandatoryConstraints:@[
        [[RTCPair alloc] initWithKey:@"OfferToReceiveAudio" value:@"true"],
        [[RTCPair alloc] initWithKey:@"OfferToReceiveVideo" value:videoEnable]
    ]
    optionalConstraints:nil];
self.iceServer = [[RTCICEServer alloc] initWithURI:[NSURL URLWithString:kICEServer_URL]
                                          username:kICEServer_UserName
                                          password:kICEServer_Password];
[RTCPeerConnectionFactory initializeSSL];
self.pcFactory = [[RTCPeerConnectionFactory alloc] init];
RTCMediaConstraints *constraints = [[RTCMediaConstraints alloc] init];
// Task 2, step 1 (caller): the delegate passed in here is notified
// whenever a new ICE candidate is found.
_peerConnection = [self.pcFactory peerConnectionWithICEServers:@[self.iceServer]
                                                   constraints:constraints
                                                      delegate:self];
// Add the local media stream
RTCMediaStream *localStream = [self.pcFactory mediaStreamWithLabel:@"ARDAMS"];
RTCAudioTrack *audioTrack = [self.pcFactory audioTrackWithID:@"ARDAMSa0"];
self.localAudioTrack = audioTrack;
[localStream addAudioTrack:audioTrack];
if (self.audioOrVideoType == CXIMMediaCallTypeVideo) {
    // Pick the front camera for video calls
    AVCaptureDevice *device;
    for (AVCaptureDevice *captureDevice in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (captureDevice.position == AVCaptureDevicePositionFront) {
            device = captureDevice;
            break;
        }
    }
    if (device) {
        RTCVideoCapturer *capturer = [RTCVideoCapturer capturerWithDeviceName:device.localizedName];
        RTCVideoSource *videoSource = [self.pcFactory videoSourceWithCapturer:capturer constraints:constraints];
        RTCVideoTrack *videoTrack = [self.pcFactory videoTrackWithID:@"ARDAMSv0" source:videoSource];
        self.localVideoTrack = videoTrack;
        [localStream addVideoTrack:videoTrack];
    }
}
[_peerConnection addStream:localStream];
self.rtcIsClosed = NO;
2. Task 1, step 1: the caller creates an offer (an SDP session description)

[self.peerConnection createOfferWithDelegate:self constraints:self.sdpConstraints];
3. Step 2: after the offer is created successfully, the caller sets it as the local description

- (void)peerConnection:(RTCPeerConnection *)peerConnection didCreateSessionDescription:(RTCSessionDescription *)sdp error:(NSError *)error {
    if (error) {
        NSLog(@"%@", error);
        return;
    }
    // Set the local description from the offer
    [peerConnection setLocalDescriptionWithDelegate:self sessionDescription:sdp];
}
4. Step 3: after the local description is set successfully, the caller sends the local offer to the callee

- (void)peerConnection:(RTCPeerConnection *)peerConnection didSetSessionDescriptionWithError:(NSError *)error {
    if (error) {
        NSLog(@"%@", error.userInfo[@"error"]);
        return;
    }
    if (peerConnection.signalingState == RTCSignalingHaveLocalOffer) {
        NSDictionary *sdp = @{
            @"type": peerConnection.localDescription.type,
            @"sdp": peerConnection.localDescription.description
        };
        // Send the local offer to the callee here, over the signaling server
    }
}
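The dictionary above still has to cross the signaling channel, which usually means serializing it to JSON. A minimal sketch (the key names mirror the Objective-C dictionary; the transport itself depends on your server):

```python
import json

def encode_description(desc_type, sdp_text):
    # Same keys as the NSDictionary built in the delegate above.
    return json.dumps({"type": desc_type, "sdp": sdp_text})

def decode_description(payload):
    # The receiving side recovers the type and raw SDP text.
    msg = json.loads(payload)
    return msg["type"], msg["sdp"]

wire = encode_description("offer", "v=0\r\no=- 0 2 IN IP4 127.0.0.1\r\n")
```

The SDP text itself is treated as an opaque string end to end; only the peers' WebRTC stacks parse it.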
5. Step 4: the callee sets the remote description from the caller's offer

RTCSessionDescription *remoteSdp = [[RTCSessionDescription alloc] initWithType:sdp[@"type"] sdp:sdp[@"sdp"]];
[self.peerConnection setRemoteDescriptionWithDelegate:self sessionDescription:remoteSdp];
6. Step 5: after the remote description is set successfully, the callee creates an answer

- (void)peerConnection:(RTCPeerConnection *)peerConnection didSetSessionDescriptionWithError:(NSError *)error {
    if (error) {
        NSLog(@"%@", error.userInfo[@"error"]);
        return;
    } else if (peerConnection.signalingState == RTCSignalingHaveRemoteOffer) {
        // Create the answer
        [peerConnection createAnswerWithDelegate:self constraints:self.sdpConstraints];
    }
}
7. Step 6: after the answer is created, the callee sets it as the local description

- (void)peerConnection:(RTCPeerConnection *)peerConnection didCreateSessionDescription:(RTCSessionDescription *)sdp error:(NSError *)error {
    if (error) {
        NSLog(@"%@", error);
        return;
    }
    // Set the local description from the answer
    [peerConnection setLocalDescriptionWithDelegate:self sessionDescription:sdp];
}
8. Step 7: after the local description is set successfully, the callee sends the answer to the caller

- (void)peerConnection:(RTCPeerConnection *)peerConnection didSetSessionDescriptionWithError:(NSError *)error {
    if (error) {
        NSLog(@"%@", error.userInfo[@"error"]);
        return;
    } else if (peerConnection.signalingState == RTCSignalingStable) {
        if (self.initiateOrAcceptCallType == SDIMCallAcceptType) {
            NSDictionary *sdp = @{
                @"type": peerConnection.localDescription.type,
                @"sdp": peerConnection.localDescription.description
            };
            // Send the local answer to the caller here, over the signaling server
        }
    }
}
9. Step 8: the caller receives the callee's answer and sets it as the remote description

RTCSessionDescription *remoteSdp = [[RTCSessionDescription alloc] initWithType:sdp[@"type"] sdp:sdp[@"sdp"]];
[self.peerConnection setRemoteDescriptionWithDelegate:self sessionDescription:remoteSdp];
10. Step 9: once the caller sets the answer as the remote description successfully, media negotiation is complete

- (void)peerConnection:(RTCPeerConnection *)peerConnection didSetSessionDescriptionWithError:(NSError *)error {
    if (error) {
        NSLog(@"%@", error.userInfo[@"error"]);
        return;
    }
}
11. Task 2, step 1: when an ICE candidate is found, send it to the remote peer

// A new ICE candidate has been found.
- (void)peerConnection:(RTCPeerConnection *)peerConnection gotICECandidate:(RTCICECandidate *)candidate {
    NSLog(@"%s", __func__);
    NSDictionary *candidateInfo = @{
        @"sdpMid": candidate.sdpMid,
        @"sdpMLineIndex": @(candidate.sdpMLineIndex),
        @"candidate": candidate.sdp
    };
    // Send the ICE candidate to the remote peer here
}
12. Step 2: when the remote peer receives an ICE candidate, add it to the peer connection

NSDictionary *candidateInfo = body.data[@"candidate"];
NSString *sdpMid = candidateInfo[@"sdpMid"];
NSInteger sdpMLineIndex = [candidateInfo[@"sdpMLineIndex"] integerValue];
NSString *sdp = candidateInfo[@"candidate"];
RTCICECandidate *candidate = [[RTCICECandidate alloc] initWithMid:sdpMid index:sdpMLineIndex sdp:sdp];
[self.peerConnection addICECandidate:candidate];
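One detail the snippet above glosses over: a candidate can arrive over signaling before the remote description has been set, and adding it at that point can fail, so implementations commonly buffer early candidates and flush them once the description lands. A language-neutral sketch in Python (CandidateBuffer and add_fn are illustrative names, not WebRTC API):

```python
class CandidateBuffer:
    """Buffer ICE candidates that arrive before the remote description
    is set, then flush them once it is. add_fn stands in for
    -[RTCPeerConnection addICECandidate:]."""

    def __init__(self, add_fn):
        self.add_fn = add_fn
        self.remote_description_set = False
        self.pending = []

    def on_candidate(self, candidate):
        if self.remote_description_set:
            self.add_fn(candidate)
        else:
            self.pending.append(candidate)  # too early: hold it back

    def on_remote_description_set(self):
        # Remote description just landed: flush everything we queued.
        self.remote_description_set = True
        for candidate in self.pending:
            self.add_fn(candidate)
        self.pending = []

added = []
buf = CandidateBuffer(added.append)
buf.on_candidate({"sdpMid": "audio", "sdpMLineIndex": 0, "candidate": "candidate:..."})
early = len(added)          # still 0: the candidate was buffered
buf.on_remote_description_set()
```

Whether you need this depends on how your signaling orders messages; with strictly ordered delivery after the offer/answer exchange it may never trigger.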
13. Step 3: both caller and callee are notified of the connection result

- (void)peerConnection:(RTCPeerConnection *)peerConnection iceConnectionChanged:(RTCICEConnectionState)newState {
    // Connected successfully
    if (newState == RTCICEConnectionConnected) {
    }
    // Connection failed, dropped, or closed
    else if (newState == RTCICEConnectionFailed ||
             newState == RTCICEConnectionDisconnected ||
             newState == RTCICEConnectionClosed) {
    }
}
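Taken together, the nine offer/answer steps reduce to a fixed ordering. A Python walkthrough of just the sequencing (state names mirror the signaling states used above; the SDP strings and the send callback are placeholders):

```python
def run_handshake(send):
    """Replay the offer/answer ordering from the steps above.
    send(to, msg) stands in for the signaling server."""
    # Caller: createOffer -> setLocalDescription -> send offer
    caller_state = "have-local-offer"
    send("callee", {"type": "offer", "sdp": "<caller sdp>"})

    # Callee: setRemoteDescription(offer) -> createAnswer
    #         -> setLocalDescription(answer) -> send answer
    callee_state = "have-remote-offer"
    callee_state = "stable"
    send("caller", {"type": "answer", "sdp": "<callee sdp>"})

    # Caller: setRemoteDescription(answer) -> negotiation complete
    caller_state = "stable"
    return caller_state, callee_state

messages = []
states = run_handshake(lambda to, msg: messages.append((to, msg["type"])))
```

Both sides end in the stable state, and the signaling server carries exactly one offer and one answer; ICE candidates (task 2) flow in parallel with this exchange.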