
iOS native speech recognition, using the Baidu Translate API, and native text-to-speech playback

2018-04-02 10:42
If anything here is incorrect, please don't hesitate to point it out. Thanks!

Requirements for native speech recognition:

First, add the following keys to the Info.plist file:

Privacy - Speech Recognition Usage Description

Siri is used for speech recognition

Privacy - Microphone Usage Description

After granting access, you can use the speech-translation features

For the Baidu Translate API (documentation: http://api.fanyi.baidu.com/api/trans/product/apidoc), the official docs also call for:

Privacy - Camera Usage Description

Access to the camera

Privacy - Photo Library Usage Description

Access to the photo library
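In the raw Info.plist XML, these privacy entries correspond to the following keys; the description strings below are only examples and should be adapted to your app:

```xml
<key>NSSpeechRecognitionUsageDescription</key>
<string>Siri is used for speech recognition</string>
<key>NSMicrophoneUsageDescription</key>
<string>The microphone records your voice for translation</string>
<key>NSCameraUsageDescription</key>
<string>Access to the camera</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>Access to the photo library</string>
```

If a key is missing, the corresponding permission prompt never appears and the API calls fail silently.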

Native speech recognition

Request authorization for speech recognition; the main code is as follows:

[SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
    switch (status) {
        case SFSpeechRecognizerAuthorizationStatusAuthorized:
            NSLog(@"Speech recognition authorized");
            break;
        case SFSpeechRecognizerAuthorizationStatusDenied:
            NSLog(@"User denied speech recognition access");
            if (self.errorHandler) {
                self.errorHandler(self, TSSpeechHelperErrorTypeUserRefuse);
            }
            break;
        case SFSpeechRecognizerAuthorizationStatusRestricted:
            NSLog(@"Speech recognition is restricted on this device");
            if (self.errorHandler) {
                self.errorHandler(self, TSSpeechHelperErrorTypeNoNotPossible);
            }
            break;
        case SFSpeechRecognizerAuthorizationStatusNotDetermined:
            NSLog(@"Speech recognition not yet authorized");
            if (self.errorHandler) {
                self.errorHandler(self, TSSpeechHelperErrorTypeNoPermission);
            }
            break;
        default:
            break;
    }
}];


Recognition is done with SFSpeechRecognizer from the Speech framework, which Apple introduced in 2016 with iOS 10.

- (SFSpeechRecognizer *)speechRecognizer{
    if (_speechRecognizer == nil) {
        // NSLocale provides localization info, mainly the "language" and "region format" settings
        NSLocale *cale = [[NSLocale alloc] initWithLocaleIdentifier:self.languageString]; // e.g. en_US for English
        _speechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:cale];
        _speechRecognizer.delegate = self;
    }
    return _speechRecognizer;
}


Start recording and recognizing

// Start recording
- (void)startRecording{
    if (self.recognitionTask) {
        [self.recognitionTask cancel];
        self.recognitionTask = nil;
    }
    NSError *error = nil;
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryRecord error:&error];
    [audioSession setMode:AVAudioSessionModeMeasurement error:&error];
    [audioSession setActive:true withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error];
    if (error != nil) {
        // Report a device-not-supported error
        // TSSpeechHelperErrorTypeNoNotPossible
        return;
    }
    self.recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    AVAudioInputNode *inputNode = self.audioEngine.inputNode;
    self.recognitionRequest.shouldReportPartialResults = true;

    // Start the recognition task
    self.recognitionTask = [self.speechRecognizer recognitionTaskWithRequest:self.recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
        BOOL isFinal = false;
        if (result) {
            NSString *bestString = [[result bestTranscription] formattedString];
            isFinal = [result isFinal];
            NSLog(@"Recognition result: %@", bestString);
        }
        if (error || isFinal) {
            [self stopRecording];
        }
    }];
    AVAudioFormat *recordingFormat = [inputNode outputFormatForBus:0];
    [inputNode installTapOnBus:0 bufferSize:1024 format:recordingFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
        [self.recognitionRequest appendAudioPCMBuffer:buffer];
    }];
    [self.audioEngine prepare];
    if (![self.audioEngine startAndReturnError:nil]) {
        // Failed to start recording: TSSpeechHelperErrorTypeAudioStartError
    }
}

// Stop recording
- (void)stopRecording{
    [_audioEngine stop];
    [self.recognitionRequest endAudio];
    [_audioEngine.inputNode removeTapOnBus:0];
    _speechRecognizer = nil;
}


Using the Baidu Translate API

Text is translated through the HTTP interface provided by the Baidu Translate API.

// Translate text
- (void)translateText:(NSString *)query
                 from:(NSString *)fromCode
                   to:(NSString *)toCode
              success:(void (^)(NSString *translationString))success
              failure:(void (^)(NSError *error))failure {

    // Concatenate appid + q + salt + secret key
    NSString *appendStr = [NSString stringWithFormat:@"%@%@%@%@", AppIdKey, query, SaltKey, SecretKey];

    // MD5 the concatenation to produce the signature.
    // Pitfall: do NOT URL-encode the text before signing
    NSString *md5Str = [self md5:appendStr];

    // URL-encode the text for the query string only
    NSString *qEncoding = [query stringByAddingPercentEncodingWithAllowedCharacters:[NSCharacterSet URLQueryAllowedCharacterSet]];

    // Build the request URL
    NSString *bdTranslationurlString = [NSString stringWithFormat:@"http://api.fanyi.baidu.com/api/trans/vip/translate?q=%@&from=%@&to=%@&appid=%@&salt=%@&sign=%@", qEncoding, fromCode, toCode, AppIdKey, SaltKey, md5Str];

    // Issue the request with AFNetworking
    AFHTTPSessionManager *manager = [AFHTTPSessionManager manager];
    [manager GET:bdTranslationurlString parameters:nil progress:nil success:^(NSURLSessionDataTask * _Nonnull task, id _Nullable responseObject) {
        // Check whether the translation succeeded
        if (responseObject == nil) {
            success(@"Translation failed, please try again later!");
            return;
        }
        // Extract the translated string, e.g. {"from":"en","to":"zh","trans_result":[{"src":"apple","dst":"\u82f9\u679c"}]}
        NSString *resStr = [[responseObject objectForKey:@"trans_result"] firstObject][@"dst"];
        success(resStr);
    } failure:^(NSURLSessionDataTask * _Nullable task, NSError * _Nonnull error) {
        // Translation failed; invoke the failure block
        failure(error);
    }];
}
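The signing step is the part most people get wrong, so here is a minimal Python sketch of the sign generation, URL construction, and response parsing. The appid/salt/secret values in a real request come from your Baidu developer account; the function names here are purely illustrative:

```python
import hashlib
import json
from urllib.parse import quote

def baidu_sign(appid, query, salt, secret_key):
    """Baidu Translate signature: MD5 over appid + q + salt + secret.

    Pitfall from the post: hash the RAW text; URL-encode q only when
    building the request URL, never before hashing.
    """
    raw = f"{appid}{query}{salt}{secret_key}"
    return hashlib.md5(raw.encode("utf-8")).hexdigest()

def build_url(appid, query, salt, secret_key, from_code, to_code):
    # q is percent-encoded here, after the signature has been computed
    sign = baidu_sign(appid, query, salt, secret_key)
    return ("http://api.fanyi.baidu.com/api/trans/vip/translate"
            f"?q={quote(query)}&from={from_code}&to={to_code}"
            f"&appid={appid}&salt={salt}&sign={sign}")

def parse_response(body):
    # Pull the first translation out of a response body like:
    # {"from":"en","to":"zh","trans_result":[{"src":"apple","dst":"苹果"}]}
    data = json.loads(body)
    return data["trans_result"][0]["dst"]
```

If the sign is computed over the already-encoded text, the server rejects the request with an invalid-signature error, which is exactly the trap the Objective-C comment above warns about.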


Text-to-speech playback

NSString *string = @"你好";
NSString *yzString = @"zh-CN";
AVSpeechSynthesizer *player = [[AVSpeechSynthesizer alloc] init];
AVSpeechUtterance *speechUtterance = [[AVSpeechUtterance alloc] initWithString:string]; // text to speak
speechUtterance.voice  = [AVSpeechSynthesisVoice voiceWithLanguage:yzString]; // language
speechUtterance.rate   = 0.5;  // speech rate
speechUtterance.volume = 1.0;  // volume (0.0-1.0, default 1.0)
speechUtterance.pitchMultiplier    = 0.5; // pitch (0.5-2.0)
speechUtterance.postUtteranceDelay = 1;   // brief pause before the synthesizer speaks the next utterance
[player speakUtterance:speechUtterance];


Note: the main playback code is shown above. If you want to speak text immediately after speech recognition finishes, you must reconfigure the AVAudioSession before playback:

AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryPlayback error:nil];
[audioSession setMode:AVAudioSessionModeMeasurement error:nil];
[audioSession setActive:true withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:nil];