

Using RemoteIO audio unit

By Michael Tyson | Published: November 4, 2008

I’ve had a nasty old time trying to get some audio stuff going on the iPhone, no thanks to Apple’s lack of documentation. I have a feeling it’s all still quite hush-hush, so no details here, but if you’re an iPhone developer interested in getting RemoteIO/IO Remote/whatever it’s called working on the iPhone… do I have good news for you.

Read on here, at the Developer forums.

Drop me a line if you find this helpful.

Update: I’m told the new NDA is pretty much all-good with blog postings. So, read on for the goods.

Update 2: Thanks to Joel Reymont, we now have an explanation for the “CrashIfClientProvidedBogusAudioBufferList” iPhone simulator bug: the simulator doesn’t like mono audio. Thanks, Joel!


Update 3: Happily, Apple have now created some excellent documentation on Remote IO, with some good sample projects. I recommend using that as a resource now that it’s there, as it will continue to be updated.


So: we need to obtain an instance of the RemoteIO audio unit, configure it, and hook it up to a recording callback, which notifies you that there is data ready to be grabbed, and is where you pull the data from the audio unit.


Overview

1. Identify the audio component (kAudioUnitType_Output / kAudioUnitSubType_RemoteIO / kAudioUnitManufacturer_Apple)

2. Use AudioComponentFindNext(NULL, &descriptionOfAudioComponent) to obtain the AudioComponent, which is like the factory with which you obtain the audio unit

3. Use AudioComponentInstanceNew(ourComponent, &audioUnit) to make an instance of the audio unit

4. Enable IO for recording and possibly playback with AudioUnitSetProperty

5. Describe the audio format in an AudioStreamBasicDescription structure, and apply the format using AudioUnitSetProperty

6. Provide a callback for recording, and possibly playback, again using AudioUnitSetProperty

7. Allocate some buffers

8. Initialise the audio unit

9. Start the audio unit

10. Rejoice

Here’s my code: I’m using both recording and playback. Use what applies to you!


Initialisation

Initialisation looks like this. We have a member variable of type AudioComponentInstance which will contain our audio unit.

The audio format described below uses SInt16 for samples (i.e. signed, 16 bits per sample)
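
The code calls a checkStatus() helper that never gets defined in the post. A minimal sketch, assuming all you want is the failing OSStatus logged (the function and its message format are my assumption, not part of the original):

static void checkStatus(OSStatus status) {
    // Log Core Audio errors so failures aren't silent; look the code up in the headers
    if (status != noErr) {
        printf("Audio unit error: %d\n", (int)status);
    }
}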

#define kOutputBus 0
#define kInputBus 1
// ...

OSStatus status;
AudioComponentInstance audioUnit;
AudioStreamBasicDescription audioFormat; // declared here; the original used it below without declaring it

// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

// Get audio unit
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
checkStatus(status);

// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Input,
                              kInputBus,
                              &flag,
                              sizeof(flag));
checkStatus(status);

// Enable IO for playback
status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Output,
                              kOutputBus,
                              &flag,
                              sizeof(flag));
checkStatus(status);

// Describe format: mono, 16-bit signed integer, 44.1 kHz
audioFormat.mSampleRate = 44100.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;

// Apply format to the output scope of the input bus (the recorded data we read)...
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &audioFormat,
                              sizeof(audioFormat));
checkStatus(status);

// ...and to the input scope of the output bus (the data we supply for playback)
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input,
                              kOutputBus,
                              &audioFormat,
                              sizeof(audioFormat));
checkStatus(status);

// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_SetInputCallback,
                              kAudioUnitScope_Global,
                              kInputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
checkStatus(status);

// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Global,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
checkStatus(status);

// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_ShouldAllocateBuffer,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &flag,
                              sizeof(flag));
checkStatus(status);

// TODO: Allocate our own buffers if we want

// Initialise
status = AudioUnitInitialize(audioUnit);
checkStatus(status);

Then, when you’re ready to start:

OSStatus status = AudioOutputUnitStart(audioUnit);
checkStatus(status);


And to stop:

OSStatus status = AudioOutputUnitStop(audioUnit);
checkStatus(status);


Then, when we’re finished:

AudioUnitUninitialize(audioUnit);
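
When you are done with the unit for good (my addition; the post stops at uninitialise), the matching tear-down for AudioComponentInstanceNew is:

AudioComponentInstanceDispose(audioUnit);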

And now for our callbacks.

RECORDING

static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {

    // TODO: Use inRefCon to access our interface object to do stuff
    // Then, use inNumberFrames to figure out how much data is available, and make
    // that much space available in buffers in an AudioBufferList.

    AudioBufferList *bufferList; // <- Fill this in with buffers (you will want to malloc it, as it's a dynamic-length list)

    // Then: obtain the recorded samples
    OSStatus status;
    status = AudioUnitRender([audioInterface audioUnit], // audioInterface: the object holding our unit, typically recovered from inRefCon
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             bufferList);
    checkStatus(status);

    // Now, we have the samples we just read sitting in buffers in bufferList
    DoStuffWithTheRecordedAudio(bufferList);
    return noErr;
}
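
The buffer allocation is left as an exercise above. Here is a minimal sketch, assuming the mono SInt16 format configured earlier (the allocation strategy is mine, not the original’s):

// Sketch: allocate an AudioBufferList holding one mono 16-bit buffer big
// enough for inNumberFrames frames. In real code, allocate once up front
// (e.g. at init time) rather than inside the render callback.
AudioBufferList *bufferList = malloc(sizeof(AudioBufferList));
bufferList->mNumberBuffers = 1;
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
bufferList->mBuffers[0].mData = malloc(inNumberFrames * sizeof(SInt16));
// ...pass bufferList to AudioUnitRender() as above, then free() both
// allocations when you no longer need them.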

PLAYBACK

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    // Notes: ioData contains buffers (may be more than one!)
    // Fill them up as much as you can. Remember to set the size value in each buffer to match how
    // much data is in the buffer.
    return noErr;
}
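
As a trivial concrete example of “filling them up” (mine, not from the original post): writing zeroes into every buffer plays silence; a real app would copy its own SInt16 samples into mData instead.

// Sketch of a playbackCallback body that outputs silence (assumes <string.h>):
for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
    AudioBuffer *buffer = &ioData->mBuffers[i];
    memset(buffer->mData, 0, buffer->mDataByteSize); // zeroed samples = silence
}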

Finally, rejoice with me in this discovery ;)

Resources that helped

http://pastie.org/pastes/219616

http://developer.apple.com/samplecode/CAPlayThrough/listing8.html

http://listas.apesol.org/pipermail/svn-libsdl.org/2008-July/000797.html

No thanks at all to Apple for their lack of accessible documentation on this topic; they really have a long way to go here! Also, boo to them for their lack of a search engine, and their refusal to open their docs up to Google. It’s a jungle out there!

Update: You can adjust the latency of RemoteIO (and, in fact, any other audio framework) by setting the kAudioSessionProperty_PreferredHardwareIOBufferDuration property:

float aBufferLength = 0.005; // In seconds
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, 
                        sizeof(aBufferLength), &aBufferLength);


This adjusts the length of the buffers that are passed to you: if the buffer length was originally, say, 1024 samples, then halving the number of samples halves the amount of time taken to process them.
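
To put numbers on that (my arithmetic, not from the original post): at the 44100 Hz sample rate used above, a preferred duration of 0.005 seconds asks for roughly 44100 × 0.005 ≈ 220 frames per callback. Note that this is only a preference; the hardware typically rounds the actual duration to a value it supports.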