
Android Study Notes (9): AudioTrack (2)

2015-10-12
Summary: AudioTrack

AudioTrack(2)

In the native layer, android_media_AudioTrack_native_setup creates an AudioTrack object with `AudioTrack* lpTrack = new AudioTrack();`. The constructor, found in AudioTrack.cpp, does almost nothing:

```cpp
AudioTrack::AudioTrack()
    : mStatus(NO_INIT)
{
}
```
Next, look at set(). android_media_AudioTrack_native_setup passes the following parameters for MODE_STREAM:
```cpp
if (memoryMode == javaAudioTrackFields.MODE_STREAM) {
    lpTrack->set(
        atStreamType,                   // stream type, e.g. STREAM_MUSIC
        sampleRateInHertz,
        format,                         // word length; PCM -> PCM_16
        channels,                       // e.g. 2 for stereo
        frameCount,
        0,                              // flags
        audioCallback,
        &(lpJniStorage->mCallbackData), // callback, callback data (user)
        0,                              // notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
        0,                              // shared mem
        true,                           // thread can call Java
        sessionId);                     // audio session ID
}
```
The source of set() is shown below. A word on the `output` variable first: audio_io_handle_t is an int defined via typedef, and its value ties together AudioFlinger and AudioPolicyService. It is used mainly by AudioFlinger as the index of one of its internal worker threads. AudioFlinger creates several worker threads as needed; AudioSystem::getOutput picks a suitable one based on the stream type and the other parameters, and returns that thread's index within AudioFlinger.

```cpp
status_t AudioTrack::set(
        int streamType,
        uint32_t sampleRate,
        int format,
        int channels,
        int frameCount,
        uint32_t flags,
        callback_t cbf,
        void* user,
        int notificationFrames,
        const sp<IMemory>& sharedBuffer,
        bool threadCanCallJava,
        int sessionId)
{
    ...
    // Select a suitable AudioFlinger worker thread and return its index.
    audio_io_handle_t output = AudioSystem::getOutput(
            (AudioSystem::stream_type)streamType,
            sampleRate, format, channels,
            (AudioSystem::output_flags)flags);
    if (output == 0) {
        LOGE("Could not get audio output for stream type %d", streamType);
        return BAD_VALUE;
    }
    ...
    // create the IAudioTrack
    status_t status = createTrack(streamType, sampleRate, format, channelCount,
                                  frameCount, flags, sharedBuffer, output, true);
    if (status != NO_ERROR) {
        return status;
    }
    if (cbf != 0) {
        mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
        if (mAudioTrackThread == 0) {
            LOGE("Could not create callback thread");
            return NO_INIT;
        }
    }
    ...
    return NO_ERROR;
}
```
Now analyze createTrack(). A few points before the code:

- AudioSystem::get_audio_flinger() returns AudioFlinger's Binder proxy (BpAudioFlinger).
- audioFlinger->createTrack sends a createTrack request to AudioFlinger. In STREAM mode, sharedBuffer is null and output is the AudioFlinger worker-thread index obtained from AudioSystem::getOutput. The call returns an IAudioTrack (concretely a BpAudioTrack); all subsequent interaction between AudioTrack and AudioFlinger goes through this IAudioTrack. Note that in STREAM mode no shared memory is created on the AudioTrack side: the shared memory the two sides use is ultimately created inside AudioFlinger's createTrack.
- cblk is short for "control block". IMemory::pointer() returns the base address of the shared memory as a void*, which is cast to audio_track_cblk_t*.
- When sharedBuffer == 0, `buffers` points to the data area, which starts right after the audio_track_cblk_t header.

```cpp
status_t AudioTrack::createTrack(
        int streamType, uint32_t sampleRate, int format, int channelCount,
        int frameCount, uint32_t flags, const sp<IMemory>& sharedBuffer,
        audio_io_handle_t output, bool enforceFrameCount)
{
    status_t status;
    // Get AudioFlinger's Binder proxy (BpAudioFlinger).
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    ...
    // Ask AudioFlinger to create the track; returns an IAudioTrack
    // (actually a BpAudioTrack).
    sp<IAudioTrack> track = audioFlinger->createTrack(getpid(),
            streamType, sampleRate, format, channelCount, frameCount,
            ((uint16_t)flags) << 16, sharedBuffer, output, &mSessionId, &status);
    ...
    sp<IMemory> cblk = track->getCblk();   // cblk: "control block"
    ...
    mAudioTrack.clear();
    mAudioTrack = track;
    mCblkMemory.clear();
    mCblkMemory = cblk;
    // pointer() returns the base address of the shared memory (void*).
    mCblk = static_cast<audio_track_cblk_t*>(cblk->pointer());
    mCblk->flags |= CBLK_DIRECTION_OUT;
    if (sharedBuffer == 0) {
        // Data area starts right after the audio_track_cblk_t header.
        mCblk->buffers = (char*)mCblk + sizeof(audio_track_cblk_t);
    } else {
        mCblk->buffers = sharedBuffer->pointer();
        // Force buffer full condition as data is already present in shared memory
        mCblk->stepUser(mCblk->frameCount);
    }
    mCblk->volumeLR = (uint32_t(uint16_t(mVolume[RIGHT] * 0x1000)) << 16) |
                      uint16_t(mVolume[LEFT] * 0x1000);
    mCblk->sendLevel = uint16_t(mSendLevel * 0x1000);
    mAudioTrack->attachAuxEffect(mAuxEffectId);
    mCblk->bufferTimeoutMs = MAX_STARTUP_TIMEOUT_MS;
    mCblk->waitTimeMs = 0;
    mRemainingFrames = mNotificationFramesAct;
    mLatency = afLatency + (1000 * mCblk->frameCount) / sampleRate;
    return NO_ERROR;
}
```
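As an aside, the `volumeLR` assignment above packs both channel gains into a single word in 4.12 fixed point (1.0 maps to 0x1000), with the right channel in the high half-word. A minimal standalone sketch of that packing (the helper name `packVolumeLR` is ours, not from AOSP):

```cpp
#include <cstdint>

// 4.12 fixed-point volume packing, mirroring the mCblk->volumeLR expression.
// packVolumeLR is a hypothetical helper name, not an AOSP function.
uint32_t packVolumeLR(float left, float right) {
    uint16_t l = static_cast<uint16_t>(left  * 0x1000);  // 1.0f -> 0x1000
    uint16_t r = static_cast<uint16_t>(right * 0x1000);
    return (static_cast<uint32_t>(r) << 16) | l;         // right in high half
}
```

For example, full left volume with half right volume packs to 0x08001000.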
The audio_track_cblk_t structure:

```cpp
struct audio_track_cblk_t
{
    // The data members are grouped so that members accessed frequently and in
    // the same context are in the same line of data cache.
    Mutex       lock;
    Condition   cv;            // sync primitives, initialized as process-shared
    volatile uint32_t user;    // current write position (volatile: cross-process)
    volatile uint32_t server;  // current read position
    uint32_t    userBase;
    uint32_t    serverBase;
    void*       buffers;       // base address of the data buffer
    uint32_t    frameCount;    // total buffer size, in frames
    // Cache line boundary
    uint32_t    loopStart;     // playback loop start point
    uint32_t    loopEnd;       // playback loop end point
    int         loopCount;     // number of times to loop
    volatile union {
        uint16_t volume[2];
        uint32_t volumeLR;
    };                         // volume
    uint32_t    sampleRate;    // sample rate
    // NOTE: audio_track_cblk_t::frameSize is not equal to AudioTrack::frameSize() for
    // 8 bit PCM data: in this case, mCblk->frameSize is based on a sample size of
    // 16 bit because data is converted to 16 bit before being stored in buffer
    uint8_t     frameSize;     // size of one frame, in bytes
    uint8_t     channelCount;  // number of channels
    uint16_t    flags;
    uint16_t    bufferTimeoutMs; // Maximum cumulated timeout before restarting audioflinger
    uint16_t    waitTimeMs;      // Cumulated wait time
    uint16_t    sendLevel;
    uint16_t    reserved;
    // Cache line boundary (32 bytes)
                audio_track_cblk_t();
    uint32_t    stepUser(uint32_t frameCount);
    bool        stepServer(uint32_t frameCount);
    void*       buffer(uint32_t offset) const;
    uint32_t    framesAvailable();
    uint32_t    framesAvailable_l();
    uint32_t    framesReady();
};
```
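The `user`/`server` pair works like a classic single-producer ring buffer: both are monotonically increasing frame counters, and their difference is the fill level. Here is a minimal standalone sketch of that accounting (our own simplification; it ignores the `userBase`/`serverBase` wraparound handling and all locking in the real struct):

```cpp
#include <cstdint>

// Simplified model of the audio_track_cblk_t counters (not AOSP code).
struct CblkSketch {
    uint32_t user = 0;      // frames written so far (producer position)
    uint32_t server = 0;    // frames consumed so far (consumer position)
    uint32_t frameCount;    // total buffer capacity, in frames

    explicit CblkSketch(uint32_t frames) : frameCount(frames) {}

    // Frames the producer may still write without clobbering unread data.
    uint32_t framesAvailable() const { return frameCount - (user - server); }

    // Frames the consumer can read right now.
    uint32_t framesReady() const { return user - server; }

    // Advance the positions after a write / read, like stepUser/stepServer.
    void stepUser(uint32_t n)   { user += n; }
    void stepServer(uint32_t n) { server += n; }
};
```

Because the counters only ever grow, `user - server` stays correct even after unsigned overflow, which is part of why the real structure can be shared lock-free between processes for these queries.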
When callback_t cbf is non-null, set() creates a thread to run the callback: `mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);`.
This thread is tied to how data is fed in from outside. AudioTrack supports two input modes:

Push mode: the user actively calls write() to supply data, effectively pushing it into the AudioTrack. MediaPlayerService generally provides data this way.

Pull mode: AudioTrackThread uses the callback, invoked with EVENT_MORE_DATA, to actively pull data from the user. ToneGenerator feeds AudioTrack this way.
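The difference between the two modes can be sketched with a toy sink (our own illustration, not AOSP code): in push mode the client calls write(); in pull mode the sink calls back into the client for each chunk it needs:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Toy audio sink illustrating push vs. pull data delivery.
struct SinkSketch {
    std::vector<int16_t> buffer;

    // Push model: the caller hands us samples (cf. AudioTrack::write()).
    void write(const int16_t* data, size_t frames) {
        buffer.insert(buffer.end(), data, data + frames);
    }

    // Pull model: we ask the client for each sample (cf. EVENT_MORE_DATA,
    // where the callback fills a whole Buffer at a time).
    void pull(size_t frames, const std::function<int16_t(size_t)>& moreData) {
        for (size_t i = 0; i < frames; ++i)
            buffer.push_back(moreData(i));
    }
};
```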

The event types available in the AudioTrack callback:
```cpp
enum event_type {
    EVENT_MORE_DATA  = 0,  // AudioTrack needs more data
    EVENT_UNDERRUN   = 1,  // the audio hardware has run out of data (underrun)
    EVENT_LOOP_END   = 2,  // the playback loop end point has been reached
    EVENT_MARKER     = 3,  // playback passed the marker set via setMarkerPosition()
    EVENT_NEW_POS    = 4,  // periodic position update; period set via setPositionUpdatePeriod()
    EVENT_BUFFER_END = 5   // all the data has been consumed
};
```
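A client-side callback typically switches on these events. Here is a sketch (the Buffer layout is simplified and all names are ours, not AOSP's) of handling EVENT_MORE_DATA by filling the requested bytes; leaving `size` unchanged tells the track everything requested was produced, while shrinking it signals a partial write:

```cpp
#include <cstddef>
#include <cstdint>

// Simplified stand-in for AudioTrack::Buffer (hypothetical layout).
struct BufferSketch {
    size_t   size;   // in: bytes requested; out: bytes actually filled
    int16_t* i16;    // destination samples
};

size_t gProduced = 0;  // demo state: total bytes handed over so far

void audioCallbackSketch(int event, void* /*user*/, void* info) {
    switch (event) {
    case 0: {  // EVENT_MORE_DATA
        BufferSketch* b = static_cast<BufferSketch*>(info);
        size_t n = b->size / sizeof(int16_t);
        for (size_t i = 0; i < n; ++i) b->i16[i] = 0;  // silence, for the demo
        gProduced += b->size;  // b->size left as-is: full request satisfied
        break;
    }
    case 1:  // EVENT_UNDERRUN: the sink ran dry; nothing to refill here
    default:
        break;
    }
}
```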
The definition of AudioTrackThread:
```cpp
class AudioTrackThread : public Thread
{
public:
    AudioTrackThread(AudioTrack& receiver, bool bCanCallJava = false);
private:
    friend class AudioTrack;
    virtual bool threadLoop();
    virtual status_t readyToRun();
    virtual void onFirstRef();
    AudioTrack& mReceiver;
    Mutex       mLock;
};
```
`AudioTrack& mReceiver;` is the AudioTrack that created this thread.
```cpp
bool AudioTrack::AudioTrackThread::threadLoop()
{
    return mReceiver.processAudioBuffer(this);
}
```
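The Thread base class keeps invoking threadLoop() for as long as it returns true, which is why processAudioBuffer() returns true to keep the worker alive. A minimal std::thread sketch of that contract (our own, not the AOSP Thread class):

```cpp
#include <atomic>
#include <thread>

// Toy version of the "loop while the body returns true" contract.
class LoopThreadSketch {
public:
    template <typename F>
    explicit LoopThreadSketch(F body)
        : mThread([body] { while (body()) {} }) {}
    ~LoopThreadSketch() { mThread.join(); }  // wait for the loop to finish
private:
    std::thread mThread;
};
```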
Now analyze mReceiver.processAudioBuffer():
```cpp
bool AudioTrack::processAudioBuffer(const sp<AudioTrackThread>& thread)
{
    Buffer audioBuffer;
    uint32_t frames;
    size_t writtenSize;

    // Handle the underrun case.
    if (mActive && (mCblk->framesReady() == 0)) {
        if ((mCblk->flags & CBLK_UNDERRUN_MSK) == CBLK_UNDERRUN_OFF) {
            mCbf(EVENT_UNDERRUN, mUserData, 0);
            // server is the read position; when it reaches frameCount,
            // all the data in the buffer has been consumed.
            if (mCblk->server == mCblk->frameCount) {
                mCbf(EVENT_BUFFER_END, mUserData, 0);
            }
            mCblk->flags |= CBLK_UNDERRUN_ON;
            if (mSharedBuffer != 0) return false;
        }
    }

    // Loop-playback notifications: each iteration reports one finished loop,
    // and loopCount tells the client how many loops remain.
    while (mLoopCount > mCblk->loopCount) {
        int loopCount = -1;
        mLoopCount--;
        if (mLoopCount >= 0) loopCount = mLoopCount;
        mCbf(EVENT_LOOP_END, mUserData, (void *)&loopCount);
    }

    // Marker notification.
    if (!mMarkerReached && (mMarkerPosition > 0)) {
        if (mCblk->server >= mMarkerPosition) {
            mCbf(EVENT_MARKER, mUserData, (void *)&mMarkerPosition);
            mMarkerReached = true;
        }
    }

    // Manage new position callback. The period is in frames: if it is 500
    // frames and the consumer reads 1500 frames at once, this loop fires
    // three times in a row.
    if (mUpdatePeriod > 0) {
        while (mCblk->server >= mNewPosition) {
            mCbf(EVENT_NEW_POS, mUserData, (void *)&mNewPosition);
            mNewPosition += mUpdatePeriod;
        }
    }

    // If Shared buffer is used, no data is requested from client.
    if (mSharedBuffer != 0) {
        frames = 0;
    } else {
        frames = mRemainingFrames;
    }

    do {
        audioBuffer.frameCount = frames;
        // Obtain a writable chunk of the buffer.
        status_t err = obtainBuffer(&audioBuffer, 1);
        ...
        size_t reqSize = audioBuffer.size;
        // Pull data from the user; mCbf is the callback.
        mCbf(EVENT_MORE_DATA, mUserData, &audioBuffer);
        writtenSize = audioBuffer.size;

        // Sanity check on returned size
        if (ssize_t(writtenSize) <= 0) {
            // The callback is done filling buffers
            // Keep this thread going to handle timed events and
            // still try to get more data in intervals of WAIT_PERIOD_MS
            // but don't just loop and block the CPU, so wait
            usleep(WAIT_PERIOD_MS*1000);
            break;
        }
        if (writtenSize > reqSize) writtenSize = reqSize;

        if (mFormat == AudioSystem::PCM_8_BIT &&
                !(mFlags & AudioSystem::OUTPUT_FLAG_DIRECT)) {
            // 8 to 16 bit conversion, done in place from the end backwards.
            const int8_t *src = audioBuffer.i8 + writtenSize-1;
            int count = writtenSize;
            int16_t *dst = audioBuffer.i16 + writtenSize-1;
            while (count--) {
                *dst-- = (int16_t)(*src-- ^ 0x80) << 8;
            }
            writtenSize <<= 1;
        }

        audioBuffer.size = writtenSize;
        // NOTE: mCblk->frameSize is not equal to AudioTrack::frameSize() for
        // 8 bit PCM data: in this case, mCblk->frameSize is based on a sample
        // size of 16 bit.
        audioBuffer.frameCount = writtenSize / mCblk->frameSize;
        frames -= audioBuffer.frameCount;
        releaseBuffer(&audioBuffer);
    } while (frames);

    if (frames == 0) {
        mRemainingFrames = mNotificationFramesAct;
    } else {
        mRemainingFrames = frames;
    }
    return true;
}
```
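The 8-to-16-bit expansion deserves a closer look: 8-bit PCM is unsigned with silence at 0x80, so XOR-ing with 0x80 recenters the sample to signed, and the left shift scales it to the 16-bit range. Copying backwards from the end is what makes the in-place expansion safe, since each write lands beyond the bytes still to be read. A standalone sketch (separate source and destination buffers here for clarity; the function name is ours):

```cpp
#include <cstddef>
#include <cstdint>

// Expand unsigned 8-bit PCM samples to signed 16-bit, mirroring the
// conversion loop in processAudioBuffer(). expand8to16 is our name.
void expand8to16(const uint8_t* src8, size_t count, int16_t* dst16) {
    const uint8_t* src = src8 + count - 1;   // walk backwards, as AOSP does,
    int16_t* dst = dst16 + count - 1;        // so in-place expansion is safe
    while (count--) {
        // ^0x80 recenters (unsigned -> signed), <<8 scales to 16-bit range.
        *dst-- = static_cast<int16_t>((*src-- ^ 0x80) << 8);
    }
}
```

For example, 0x80 (8-bit silence) maps to 0, and 0xFF maps to 0x7F00.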