
Android Camera System Architecture Source Code Analysis (4): Where Camera Data Comes From and How the Camera Is Managed

2017-11-14 14:55
Where Camera Data Comes From and How the Camera Is Managed

Continuing from part 3, we return to step (4) of Cam1DeviceBase::startPreview(), mpCamAdapter->startPreview(). Before covering (4), let us first look at (1) onStartPreview().

onStartPreview() is implemented in DefaultCam1Device.cpp:

[code]
// DefaultCam1Device.cpp
DefaultCam1Device::onStartPreview()
{
    // Initialize Camera Adapter.
    // The function below is implemented in Cam1DeviceBase.cpp
    initCameraAdapter();
}

// Cam1DeviceBase.cpp
Cam1DeviceBase::initCameraAdapter()
{
    // (1) Check to see if CamAdapter has existed or not.
    // ...
    // (2) Create & init a new CamAdapter.
    // createInstance is implemented in BaseCamAdapter.Instance.cpp.
    // In createInstance our s8AppMode is "default", so MtkDefaultCamAdapter is used.
    // createInstance only initializes variables, so we do not dig into it.
    mpCamAdapter = ICamAdapter::createInstance(mDevName, mi4OpenId, mpParamsMgr);
    if (mpCamAdapter != 0 && mpCamAdapter->init())
    {
        // (.1) init. Here MtkDefaultCamAdapter's init() is called.
        mpCamAdapter->setCallbacks(mpCamMsgCbInfo);
        // Enable MsgType; any msgType that is not enabled will be ignored.
        mpCamAdapter->enableMsgType(mpCamMsgCbInfo->mMsgEnabled);
        // (.2) Invoke its setParameters. This seems not to be implemented.
        mpCamAdapter->setParameters();
        // (.3) Send to-do commands. According to mTodoCmdMap, perform some
        // feature operations, e.g. enabling the shutter sound or video autofocus.
        for (size_t i = 0; i < mTodoCmdMap.size(); i++)
        {
            CommandInfo const& rCmdInfo = mTodoCmdMap.valueAt(i);
            MY_LOGD("send queued cmd(%#d), args(%d, %d)", rCmdInfo.cmd, rCmdInfo.arg1, rCmdInfo.arg2);
            mpCamAdapter->sendCommand(rCmdInfo.cmd, rCmdInfo.arg1, rCmdInfo.arg2);
        }
        mTodoCmdMap.clear();
        // Here both clients get mpCamAdapter as their ProviderClient.
        // This was analyzed earlier, so we do not repeat it.
        // (.4) [DisplayClient] set Image Buffer Provider Client if needed.
        mpDisplayClient->setImgBufProviderClient(mpCamAdapter);
        // (.5) [CamClient] set Image Buffer Provider Client if needed.
        mpCamClient->setImgBufProviderClient(mpCamAdapter);
    }
}

// MtkDefaultCamAdapter.cpp
// Note: every element created below is important.
CamAdapter::init()
{
    // PreviewBufMgr
    mpPreviewBufMgr = IPreviewBufMgr::createInstance(mpImgBufProvidersMgr);
    // PreviewCmdQueThread; note that the first argument is the PreviewBufMgr
    mpPreviewCmdQueThread = IPreviewCmdQueThread::createInstance(mpPreviewBufMgr, getOpenId(), mpParamsMgr);
    mpPreviewCmdQueThread->run();
    // CaptureCmdQueThread
    mpCaptureCmdQueThread = ICaptureCmdQueThread::createInstance(this);
    mpCaptureCmdQueThread->run();
    // The camera's 3 "autos": auto exposure, auto focus, auto white balance
    init3A();
    mpVideoSnapshotScenario = IVideoSnapshotScenario::createInstance();
    mpVideoSnapshotScenario->setCallback(this);
    mpResourceLock = ResourceLock::CreateInstance();
    mpResourceLock->Init();
}
[/code]
With that, initialization is complete. At this point we can make a bold guess: if CamClient and DisplayClient are the operators of two features, then CamAdapter is the manager of the whole Camera, responsible for communicating with the lower layers, reading buffers, distributing buffers to each feature operator, and managing the Camera's various attributes and algorithms, such as 3A and autofocus. Let us check whether this is actually the case.
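The wiring just described, where the adapter plays the single buffer-provider role handed to both clients, can be sketched in plain C++. All names below are illustrative stand-ins, not the real MTK interfaces:

```cpp
#include <cassert>
#include <memory>

// Illustrative stand-in for the provider interface (assumed shape).
struct IImgBufProviderClient {
    virtual ~IImgBufProviderClient() = default;
};

// The adapter plays the provider role for every client.
struct CamAdapterSketch : IImgBufProviderClient {};

// Each client simply stores the provider it was handed, exactly like
// setImgBufProviderClient(mpCamAdapter) in initCameraAdapter().
class ClientSketch {
public:
    void setImgBufProviderClient(std::shared_ptr<IImgBufProviderClient> p) {
        provider_ = std::move(p);
    }
    IImgBufProviderClient* provider() const { return provider_.get(); }
private:
    std::shared_ptr<IImgBufProviderClient> provider_;
};

// Wire both clients to the one adapter, as the source does.
inline bool wireClients(ClientSketch& display, ClientSketch& cam) {
    auto adapter = std::make_shared<CamAdapterSketch>();
    display.setImgBufProviderClient(adapter);
    cam.setImgBufProviderClient(adapter);
    // Both clients now point at the same provider instance.
    return display.provider() == cam.provider();
}
```

The point of this sketch is only that DisplayClient and CamClient never talk to the hardware directly; they each hold the same adapter reference and pull buffers through it.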

We continue with step (4) of Cam1DeviceBase::startPreview(), mpCamAdapter->startPreview():

[code]
// MtkDefaultCamAdapter.cpp
CamAdapter::startPreview()
{
    // Note that "this" is passed in.
    // This calls onStartPreview() in state.cpp.
    // StateManager manages the state of the whole Camera; we skip it here.
    return mpStateManager->getCurrentState()->onStartPreview(this);
}

// state.cpp
StateIdle::onStartPreview(IStateHandler* pHandler)
{
    // Call the handler, i.e. MtkDefaultCamAdapter's onHandleStartPreview()
    status = pHandler->onHandleStartPreview();
}

CamAdapter::onHandleStartPreview()
{
    // Post 3 commands to PreviewCmdQueThread
    mpPreviewCmdQueThread->postCommand(PrvCmdCookie::eStart,  PrvCmdCookie::eSemAfter);
    mpPreviewCmdQueThread->postCommand(PrvCmdCookie::eDelay,  PrvCmdCookie::eSemAfter);
    mpPreviewCmdQueThread->postCommand(PrvCmdCookie::eUpdate, PrvCmdCookie::eSemBefore);
}

// PreviewCmdQueThread.cpp receives the commands posted above
PreviewCmdQueThread::threadLoop()
{
    sp<PrvCmdCookie> pCmdCookie;
    //
    if (getCommand(pCmdCookie))
    {
        switch (pCmdCookie->getCmd())
        {
        case PrvCmdCookie::eStart:
            isvalid = start();
            break;
        case PrvCmdCookie::eDelay:
            isvalid = delay(EQueryType_Init);
            break;
        case PrvCmdCookie::eUpdate:
            isvalid = update();
            break;
        case PrvCmdCookie::ePrecap:
            isvalid = precap();
            break;
        case PrvCmdCookie::eStop:
            isvalid = stop();
            break;
        case PrvCmdCookie::eExit:
        default:
            break;
        }
    }
}
[/code]
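The post/dispatch pattern above, where onHandleStartPreview() posts commands and threadLoop() drains them, is a classic command-queue thread. A minimal sketch of what postCommand()/getCommand() plausibly do (names and semantics are assumed, and the real code adds eSemBefore/eSemAfter synchronization that is omitted here):

```cpp
#include <cassert>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <vector>

// Illustrative command IDs mirroring PrvCmdCookie (names assumed).
enum class PrvCmd { eStart, eDelay, eUpdate, eStop, eExit };

// A minimal post/get command queue protected by a mutex + condition variable.
class CmdQueueSketch {
public:
    void postCommand(PrvCmd c) {
        { std::lock_guard<std::mutex> lk(m_); q_.push_back(c); }
        cv_.notify_one();
    }
    PrvCmd getCommand() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });  // block until a command arrives
        PrvCmd c = q_.front();
        q_.pop_front();
        return c;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<PrvCmd> q_;
};

// The thread loop dispatches commands until eExit, like threadLoop() above.
inline std::vector<PrvCmd> runLoop(CmdQueueSketch& q) {
    std::vector<PrvCmd> handled;
    for (;;) {
        PrvCmd c = q.getCommand();
        if (c == PrvCmd::eExit) break;
        handled.push_back(c);   // start()/delay()/update() would run here
    }
    return handled;
}
```

This decouples the caller (the adapter, on the binder thread) from the worker (the preview thread): startPreview() returns quickly while the heavy start/update work happens asynchronously.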

Next we focus on the 3 functions corresponding to the 3 commands PreviewCmdQueThread receives: start(), delay(), and update(). All of them live in PreviewCmdQueThread.cpp.

start()

[code]
// PreviewCmdQueThread.cpp
PreviewCmdQueThread::start()
{
    vector<IhwScenario::PortImgInfo> vimgInfo;
    vector<IhwScenario::PortBufInfo> vBufPass1Out;
    ImgBufQueNode Pass1Node;
    IhwScenario::PortBufInfo BufInfo;
    // Preview mode
    EIspProfile_T eIspProfile = (mspParamsMgr->getRecordingHint()) ? EIspProfile_VideoPreview : EIspProfile_NormalPreview;
    ECmd_T eCmd = (mspParamsMgr->getRecordingHint()) ? ECmd_CamcorderPreviewStart : ECmd_CameraPreviewStart;
    // Recording hint
    mbRecordingHint = (mspParamsMgr->getRecordingHint()) ? true : false;
    // (0) scenario ID is decided by recording hint
    int32_t eScenarioId = (mspParamsMgr->getRecordingHint()) ? ACDK_SCENARIO_ID_VIDEO_PREVIEW : ACDK_SCENARIO_ID_CAMERA_PREVIEW;
    // (1) sensor (singleton): configure the sensor
    ret = mSensorInfo.init((ACDK_SCENARIO_ID_ENUM)eScenarioId);
    // (2) Hw scenario
    mpHwScenario = IhwScenario::createInstance(eHW_VSS, mSensorInfo.getSensorType(),
                                               mSensorInfo.meSensorDev, mSensorInfo.mSensorBitOrder);
    // This calls into VSSScenario.cpp; it mainly initializes ICamIOPipe and IPostProcPipe
    mpHwScenario->init();
    // (2.1) hw config: get the image sensor's info
    getCfg(eID_Pass1In | eID_Pass1Out, vimgInfo);
    // Configure mpHwScenario with that info
    getHw()->setConfig(&vimgInfo);
    // (2.2) enque pass1 buffer
    // must do this earlier than hw start. This calls PreviewBufMgr.cpp to
    // create PASS1BUFCNT + PASS1BUFCNT_VSS buffers
    mspPreviewBufHandler->allocBuffer(mSensorInfo.getImgWidth(), mSensorInfo.getImgHeight(),
                                      mSensorInfo.getImgFormat(), PASS1BUFCNT + PASS1BUFCNT_VSS);
    for (int32_t i = 0; i < PASS1BUFCNT; i++)
    {
        // Take out the buffers just created
        mspPreviewBufHandler->dequeBuffer(eID_Pass1Out, Pass1Node);
        // Get the buf info
        mapNode2BufInfo(eID_Pass1Out, Pass1Node, BufInfo);
        // Collect the buf info
        vBufPass1Out.push_back(BufInfo);
    }
    // Enqueue the buffers into mpHwScenario's queue
    getHw()->enque(NULL, &vBufPass1Out);
    // Take out the VSS buffer
#if VSS_ENABLE
    mspPreviewBufHandler->dequeBuffer(eID_Pass1Out, Pass1Node);
    mapNode2BufInfo(eID_Pass1Out, Pass1Node, BufInfo);
    mvBufPass1OutVss.clear();
    mvBufPass1OutVss.push_back(BufInfo);
#endif
    // (3) 3A
    // !! must be set after hw->enque; otherwise, over-exposure.
    mp3AHal = Hal3ABase::createInstance(DevMetaInfo::queryHalSensorDev(gInfo.openId));
    mp3AHal->setZoom(100, 0, 0, mSensorInfo.getImgWidth(), mSensorInfo.getImgHeight());
    mp3AHal->setIspProfile(eIspProfile);
    mp3AHal->sendCommand(eCmd);
    // (4) EIS
    mpEisHal = EisHalBase::createInstance("mtkdefaultAdapter");
    if (mpEisHal != NULL)
    {
        eisHal_config_t eisHalConfig;
        eisHalConfig.imageWidth  = mSensorInfo.getImgWidth();
        eisHalConfig.imageHeight = mSensorInfo.getImgHeight();
        mpEisHal->configEIS(eHW_VSS, eisHalConfig);
    }
    //
#if VSS_ENABLE
    mpVideoSnapshotScenario = IVideoSnapshotScenario::createInstance();
#endif
    // (5) hw start
    // !! enable pass1 SHOULD BE last step !! This enables the ISP
    getHw()->start();
}

// PreviewCmdQueThread.cpp
sensorInfo::init(ACDK_SCENARIO_ID_ENUM scenarioId)
{
    // (1) init
    mpSensor = SensorHal::createInstance();
    // (2) main or sub: fetch the sensor info that was added in enumDeviceLocked
    meSensorDev = (halSensorDev_e)DevMetaInfo::queryHalSensorDev(gInfo.openId);
    // Set the sensor info
    mpSensor->sendCommand(meSensorDev, SENSOR_CMD_SET_SENSOR_DEV);
    mpSensor->init();
    // (3) Read the camera type: raw, yuv, rgb565...
    mpSensor->sendCommand(meSensorDev, SENSOR_CMD_GET_SENSOR_TYPE, (int32_t)&meSensorType);
    // (4) tg / mem size
    uint32_t u4TgInW = 0;
    uint32_t u4TgInH = 0;
    switch (scenarioId)
    {
    case ACDK_SCENARIO_ID_CAMERA_PREVIEW:
    {
        mpSensor->sendCommand(meSensorDev, SENSOR_CMD_GET_SENSOR_PRV_RANGE, (int32_t)&u4TgInW, (uint32_t)&u4TgInH);
        break;
    }
    case ACDK_SCENARIO_ID_VIDEO_PREVIEW:
        // Similar to the case above; both only fetch the sensor resolution
    }
    //
    mu4TgOutW = ROUND_TO_2X(u4TgInW);   // in case sensor returns odd width
    mu4TgOutH = ROUND_TO_2X(u4TgInH);   // in case sensor returns odd height
    mu4MemOutW = mu4TgOutW;
    mu4MemOutH = mu4TgOutH;
    // "Scenario" means the hardware usage scenario; its exact purpose at this
    // point is not entirely clear
    IhwScenario* pHwScenario = IhwScenario::createInstance(eHW_VSS, meSensorType,
                                                           meSensorDev, mSensorBitOrder);
    pHwScenario->getHwValidSize(mu4MemOutW, mu4MemOutH);
    pHwScenario->destroyInstance();
    pHwScenario = NULL;
    // Configure the sensor info
    halSensorIFParam_t sensorCfg[2];
    int idx = meSensorDev == SENSOR_DEV_MAIN ? 0 : 1;
    sensorCfg[idx].u4SrcW = u4TgInW;
    sensorCfg[idx].u4SrcH = u4TgInH;
    sensorCfg[idx].u4CropW = mu4TgOutW;
    sensorCfg[idx].u4CropH = mu4TgOutH;
    sensorCfg[idx].u4IsContinous = 1;
    sensorCfg[idx].u4IsBypassSensorScenario = 0;
    sensorCfg[idx].u4IsBypassSensorDelay = 1;
    sensorCfg[idx].scenarioId = scenarioId;
    mpSensor->setConf(sensorCfg);
    //
    // (5) format
    halSensorRawImageInfo_t sensorFormatInfo;
    memset(&sensorFormatInfo, 0, sizeof(halSensorRawImageInfo_t));
    mpSensor->sendCommand(meSensorDev, SENSOR_CMD_GET_RAW_INFO,
                          (MINT32)&sensorFormatInfo, 1, 0);
    mSensorBitOrder = (ERawPxlID)sensorFormatInfo.u1Order;
    if (meSensorType == SENSOR_TYPE_RAW)    // RAW
    {
        switch (sensorFormatInfo.u4BitDepth)
        {
        case 8:
            mFormat = MtkCameraParameters::PIXEL_FORMAT_BAYER8;
            break;
        case 10:
        default:
            mFormat = MtkCameraParameters::PIXEL_FORMAT_BAYER10;
            break;
        }
    }
    else if (meSensorType == SENSOR_TYPE_YUV) {
        switch (sensorFormatInfo.u1Order)
        {
        case SENSOR_OUTPUT_FORMAT_UYVY:
        case SENSOR_OUTPUT_FORMAT_CbYCrY:
            mFormat = MtkCameraParameters::PIXEL_FORMAT_YUV422I_UYVY;
            break;
        case SENSOR_OUTPUT_FORMAT_VYUY:
        case SENSOR_OUTPUT_FORMAT_CrYCbY:
            mFormat = MtkCameraParameters::PIXEL_FORMAT_YUV422I_VYUY;
            break;
        case SENSOR_OUTPUT_FORMAT_YVYU:
        case SENSOR_OUTPUT_FORMAT_YCrYCb:
            mFormat = MtkCameraParameters::PIXEL_FORMAT_YUV422I_YVYU;
            break;
        case SENSOR_OUTPUT_FORMAT_YUYV:
        case SENSOR_OUTPUT_FORMAT_YCbYCr:
        default:
            mFormat = CameraParameters::PIXEL_FORMAT_YUV422I;
            break;
        }
    }
}
[/code]
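The ROUND_TO_2X macro used above is not shown in the excerpt. Rounding down to an even value is one plausible definition that matches its "in case sensor returns odd width/height" comments (this is an assumption, not the verified MTK source):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical equivalent of ROUND_TO_2X: clear the lowest bit so an odd
// sensor-reported dimension becomes the even value the ISP pipeline expects.
static inline uint32_t round_to_2x(uint32_t v) { return v & ~1u; }
```

Even dimensions matter here because YUV 4:2:2 layouts subsample chroma horizontally, so an odd width would leave a dangling chroma sample.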
The code above initializes several elements: IhwScenario, vBufPass1Out, and meSensorDev. delay() does essentially nothing. The following update() is our focus: it covers how the Sensor's data is read and how that data is fed into the ISP for processing. After this series of processing steps, PreviewCmdQueThread hands the data to our DisplayClient, which is responsible for display.
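As a mental model of the two-pass flow about to be shown (purely illustrative; the real "buffers" are hardware DMA targets and the passes are ISP engine runs, not function calls):

```cpp
#include <cassert>
#include <deque>
#include <string>

// Toy model of the preview pipeline: pass1 moves a raw sensor frame into
// memory; pass2 turns it into a display-ready frame.
struct FrameSketch { int id; std::string stage; };

// pass1: sensor ---> ISP ---> DRAM (IMGO): produce a raw frame.
inline FrameSketch pass1(int id) { return FrameSketch{id, "raw"}; }

// pass2: DRAM (IMGI) ---> ISP/CDP ---> DRAM (DISPO): yield a yuv frame.
inline FrameSketch pass2(const FrameSketch& raw) { return FrameSketch{raw.id, "yuv"}; }

// One updateOne()-style iteration: run both passes, then hand the result to
// the display queue (standing in for mspPreviewBufHandler->enqueBuffer(dispNode)).
inline void updateOnce(int id, std::deque<FrameSketch>& displayQueue) {
    displayQueue.push_back(pass2(pass1(id)));
}
```

Keep this two-stage picture in mind while reading updateOne() below; everything else in it is buffer bookkeeping around these two passes.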

[code]
PreviewCmdQueThread::update()
{
    // Loop: check if the next command is coming.
    // The next command can be {stop, precap}.
    // Do at least 1 frame (in case of going to precapture directly)
    // --> this works when AE updates each frame (instead of every 3 frames)
    do {
        // (1) read, process and dispatch buffers
        updateOne();
        mFrameCnt++;
        // (2) handle zoom callback
        handleCallback();
        // (3) update check
        updateCheck1();
        updateCheck2();
    } while (!isNextCommand());
}

PreviewCmdQueThread::updateOne()
{
    bool ret = true;
    int32_t pass1LatestBufIdx = -1;
    int64_t pass1LatestTimeStamp = 0;
    nsecs_t passTime = 0, pass1Time = 0, pass2Time = 0, vssTime = 0;
#if VSS_ENABLE
    vector<IhwScenario::PortBufInfo> vDeBufPass1OutVss;
#endif
    vector<IhwScenario::PortQTBufInfo> vDeBufPass1Out;
    vector<IhwScenario::PortQTBufInfo> vDeBufPass2Out;
    vector<IhwScenario::PortBufInfo> vEnBufPass2In;
    vector<IhwScenario::PortBufInfo> vEnBufPass2Out;
    vector<IhwScenario::PortImgInfo> vPass2Cfg;
    passTime = systemTime();
    //*************************************************************
    // (1) [PASS1] sensor ---> ISP --> DRAM (IMGO)
    //*************************************************************
    pass1Time = passTime;
    // Get buffer info from ISP_DMA_IMGO
    getHw()->deque(eID_Pass1Out, &vDeBufPass1Out);
    pass1Time = systemTime() - pass1Time;
    //
    mapQT2BufInfo(eID_Pass2In, vDeBufPass1Out, vEnBufPass2In);
    mp3AHal->sendCommand(ECmd_Update);
    mpEisHal->doEIS();
    //*************************************************************
    // (2) [PASS2] DRAM (IMGI) --> ISP --> CDP --> DRAM (DISPO, VIDO)
    //     if no buffer is available, return immediately.
    //*************************************************************
    int32_t flag = 0;
    // (.1) PASS2-IN
    getCfg(eID_Pass2In, vPass2Cfg);
    // (.2) PASS2-OUT
    ImgBufQueNode dispNode;
    ImgBufQueNode vidoNode;
    mspPreviewBufHandler->dequeBuffer(eID_Pass2DISPO, dispNode);
    mspPreviewBufHandler->dequeBuffer(eID_Pass2VIDO, vidoNode);
    if (dispNode.getImgBuf() != 0)
    {
        flag |= eID_Pass2DISPO;
        IhwScenario::PortBufInfo BufInfo;
        IhwScenario::PortImgInfo ImgInfo;
        mapNode2BufInfo(eID_Pass2DISPO, dispNode, BufInfo);
        mapNode2ImgInfo(eID_Pass2DISPO, dispNode, ImgInfo);
        vEnBufPass2Out.push_back(BufInfo);
        vPass2Cfg.push_back(ImgInfo);
    }
    if (vidoNode.getImgBuf() != 0)
    {
        if (vidoNode.getCookieDE() == IPreviewBufMgr::eBuf_Rec && !mbRecording)
        {
            MY_LOGW("VR has been stopped, do not enque buf to pass2");
        }
        else
        {
            flag = flag | eID_Pass2VIDO;
            IhwScenario::PortBufInfo BufInfo;
            IhwScenario::PortImgInfo ImgInfo;
            mapNode2BufInfo(eID_Pass2VIDO, vidoNode, BufInfo);
            mapNode2ImgInfo(eID_Pass2VIDO, vidoNode, ImgInfo);
            vEnBufPass2Out.push_back(BufInfo);
            vPass2Cfg.push_back(ImgInfo);
        }
    }
    // (.3) no buffer ==> return immediately.
    // ...
    // (.4) has buffer ==> do pass2 en/deque
    // Note: config must be set earlier than en/de-que
    //
    updateZoom(vPass2Cfg);
    getHw()->setConfig(&vPass2Cfg);
    getHw()->enque(&vEnBufPass2In, &vEnBufPass2Out);
    pass2Time = systemTime();
    getHw()->deque((EHwBufIdx)flag, &vDeBufPass2Out);
    pass2Time = systemTime() - pass2Time;
    //*************************************************************
    // (3) return buffer
    //*************************************************************
    if (vDeBufPass1Out.size() > 0 && vDeBufPass1Out[0].bufInfo.vBufInfo.size() > 0)
    {
        pass1LatestBufIdx = vDeBufPass1Out[0].bufInfo.vBufInfo.size() - 1;
        pass1LatestTimeStamp = vDeBufPass1Out[0].bufInfo.vBufInfo[pass1LatestBufIdx].getTimeStamp_ns();
        //MY_LOGD("pass1LatestBufIdx(%d), pass1LatestTimeStamp(%lld)", pass1LatestBufIdx, pass1LatestTimeStamp);
    }
    //
    if (ret)
    {
        if (checkDumpPass1())
        {
            dumpBuffer(vDeBufPass1Out, "pass1", "raw", mFrameCnt);
        }
        if (flag & eID_Pass2DISPO)
        {
            if (checkDumpPass2Dispo())
            {
                dumpImg((MUINT8*)(dispNode.getImgBuf()->getVirAddr()),
                        dispNode.getImgBuf()->getBufSize(), "pass2_dispo", "yuv", mFrameCnt);
            }
        }
        if (flag & eID_Pass2VIDO)
        {
            if (checkDumpPass2Vido())
            {
                dumpImg((MUINT8*)(vidoNode.getImgBuf()->getVirAddr()),
                        vidoNode.getImgBuf()->getBufSize(), "pass2_vido", "yuv", mFrameCnt);
            }
        }
    }
    // (.1) return PASS1
#if VSS_ENABLE
    if (mpVideoSnapshotScenario->getStatus() == IVideoSnapshotScenario::Status_WaitImage &&
        pass1LatestBufIdx >= 0)
    {
        vector<IhwScenario::PortImgInfo> vPass1OutCfg;
        IVideoSnapshotScenario::ImageInfo vssImage;
        //
        getCfg(eID_Pass1Out, vPass1OutCfg);
        //
        vssImage.size.width  = vPass1OutCfg[0].u4Width;
        vssImage.size.height = vPass1OutCfg[0].u4Height;
        vssImage.size.stride = vPass1OutCfg[0].u4Stride[0];
        vssImage.mem.id   = vDeBufPass1Out[0].bufInfo.vBufInfo[pass1LatestBufIdx].memID;
        vssImage.mem.vir  = vDeBufPass1Out[0].bufInfo.vBufInfo[pass1LatestBufIdx].u4BufVA;
        vssImage.mem.phy  = vDeBufPass1Out[0].bufInfo.vBufInfo[pass1LatestBufIdx].u4BufPA;
        vssImage.mem.size = vDeBufPass1Out[0].bufInfo.vBufInfo[pass1LatestBufIdx].u4BufSize;
        vssImage.crop.x = vPass2Cfg[0].crop.x;
        vssImage.crop.y = vPass2Cfg[0].crop.y;
        vssImage.crop.w = vPass2Cfg[0].crop.w;
        vssImage.crop.h = vPass2Cfg[0].crop.h;
        mpVideoSnapshotScenario->setImage(vssImage);
        //
        mapQT2BufInfo(eID_Pass1Out, vDeBufPass1Out, vDeBufPass1OutVss);
        getHw()->replaceQue(&vDeBufPass1OutVss, &mvBufPass1OutVss);
        // Save the current pass1 buffer for the next VSS.
        mvBufPass1OutVss.clear();
        mvBufPass1OutVss.push_back(vDeBufPass1OutVss[0]);
    }
    else
    {
#endif
        getHw()->enque(vDeBufPass1Out);
#if VSS_ENABLE
    }
#endif
    //
#if VSS_ENABLE
    vssTime = systemTime();
    mpVideoSnapshotScenario->process();
    vssTime = systemTime() - vssTime;
#endif
    // (.2) return PASS2
    if (flag & eID_Pass2DISPO)
    {
        dispNode.getImgBuf()->setTimestamp(pass1LatestTimeStamp);
        // Notify the client's receive queue to pick up the buffer
        mspPreviewBufHandler->enqueBuffer(dispNode);
    }
    //
    if (flag & eID_Pass2VIDO)
    {
        vidoNode.getImgBuf()->setTimestamp(pass1LatestTimeStamp);
        mspPreviewBufHandler->enqueBuffer(vidoNode);
    }
    // [T.B.D]
    // '0': "SUPPOSE" DISPO and VIDO get the same timestamp
    if (vDeBufPass2Out.size() > 1)
    {
        MY_LOGW_IF(vDeBufPass2Out.at(0).bufInfo.getTimeStamp_ns() != vDeBufPass2Out.at(1).bufInfo.getTimeStamp_ns(),
                   "DISP(%f), VIDO(%f)", vDeBufPass2Out.at(0).bufInfo.getTimeStamp_ns(), vDeBufPass2Out.at(1).bufInfo.getTimeStamp_ns());
    }
}
[/code]
Good, we have now found where buffers are fetched and how the receiver is notified. Combined with the earlier analysis of how DisplayClient receives buffers, we can roughly see the overall data flow. As for how the Sensor and the ISP are connected, how data is fetched from each of them, what Pass1 and Pass2 actually are, and why each has an "in" and an "out" side... there seems to be a lot going on here, and it deserves a dedicated in-depth article when time allows. For now we continue with mspPreviewBufHandler->enqueBuffer(dispNode); and will soon finish this article's task.

mspPreviewBufHandler is the PreviewBufMgr, so this is equivalent to calling PreviewBufMgr's enqueBuffer():

[code]
PreviewBufMgr::enqueBuffer(ImgBufQueNode const& node)
{
    // (1) set DONE tag into package
    const_cast<ImgBufQueNode*>(&node)->setStatus(ImgBufQueNode::eSTATUS_DONE);
    // (2) choose the correct "client"
    switch (node.getCookieDE())
    {
    case eBuf_Pass1:
        // ...
    case eBuf_Disp:
        {
            // Find the provider with id eID_DISPLAY in the providers manager;
            // where it is registered was analyzed earlier.
            sp<IImgBufProvider> bufProvider = mspImgBufProvidersMgr->getDisplayPvdr();
            /**
             * In DisplayClient::waitAndHandleReturnBuffers(), ImgBufQueue.cpp's
             * dequeProcessor() is called. Inside dequeProcessor(),
             * mDoneImgBufQueCond.wait() keeps waiting until the function below
             * is called, which puts the buffer into the queue and calls
             * mDoneImgBufQueCond.broadcast() to notify the receiver.
             */
            bufProvider->enqueProvider(node);
        }
        break;
    //
    case eBuf_AP:
        // ...
    case eBuf_FD:
        // ...
    case eBuf_Rec:
        // ...
    }
}
[/code]
This confirms our earlier guess. CamAdapter manages the initialization and scheduling of hardware such as the image sensor and the ISP, the 3A algorithms, and the allocation, routing, and processing of data. Although CamAdapter does not itself execute the Preview, Capture, or Record operations, it controls where their data comes from and where it goes, and it also contains the buffer managers and the state of the whole Camera. For example, within CamAdapter the management of Preview involves PreviewCmdQueThread and PreviewBufMgr: once PreviewCmdQueThread's thread is notified, it reads data from the Sensor, hands it to the ISP and 3A for processing, and then passes the buffer to PreviewBufMgr, which dispatches it to the appropriate operation. If a buffer received by PreviewBufMgr needs to be displayed, PreviewBufMgr uses ImgBufProvidersManager to find the ImgBufQueue inside DisplayClient, inserts the buffer into that queue, and ImgBufQueue notifies the display.
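The wait/broadcast handoff described in the comments above (mDoneImgBufQueCond) is a standard producer/consumer pattern, and can be sketched as a tiny "done queue". The names mirror the source's enqueProvider()/dequeProcessor(), but the implementation below is an assumption, not the real ImgBufQueue.cpp:

```cpp
#include <cassert>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>

// Sketch of the DONE-buffer handoff: the producer side (PreviewBufMgr via
// enqueProvider) marks a buffer done and broadcasts; the consumer side
// (DisplayClient via dequeProcessor) blocks until a done buffer exists.
class DoneQueueSketch {
public:
    void enqueProvider(int bufId) {
        { std::lock_guard<std::mutex> lk(m_); done_.push_back(bufId); }
        cond_.notify_all();   // like mDoneImgBufQueCond.broadcast()
    }
    int dequeProcessor() {
        std::unique_lock<std::mutex> lk(m_);
        cond_.wait(lk, [this] { return !done_.empty(); });  // like mDoneImgBufQueCond.wait()
        int id = done_.front();
        done_.pop_front();
        return id;
    }
private:
    std::mutex m_;
    std::condition_variable cond_;
    std::deque<int> done_;
};
```

This is why DisplayClient's thread can sit idle in waitAndHandleReturnBuffers() consuming no CPU: it only wakes when PreviewBufMgr actually delivers a finished frame.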