
Using OpenCV in Ogre to Load Video as a Texture

2011-09-21 11:06
Augmented reality frequently needs to composite real 2D video frames with virtual objects (such as 3D models) to achieve the augmented effect. Here we implement this by combining Ogre with OpenCV.

Before starting, note that texture dimensions must be powers of two (2^n), so if the loaded video frames do not already have such dimensions, they must be resized first.
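As a minimal sketch of that constraint, the following hypothetical helper (not part of the article's code) rounds a frame dimension up to the next power of two; a 640x480 frame would map to a 1024x512 (or, as in this article, a fixed 1024x1024) texture:

```cpp
#include <cassert>
#include <cstddef>

// Round n up to the next power of two.
// Hypothetical helper for choosing a texture size; not from the article.
static std::size_t nextPowerOfTwo(std::size_t n)
{
    std::size_t p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

// e.g. nextPowerOfTwo(640) == 1024, nextPowerOfTwo(480) == 512
```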

1. Create the dynamic texture and the background rectangle, and add them to the scene

Add the following code in CreateScene:

void CreateScene()
{
    // Create a material whose single texture unit references the
    // dynamic texture "DynamicBg" created later in InitVideoQuery
    MaterialPtr material = MaterialManager::getSingleton().create(
        "DynamicTextureMaterial", // name
        ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);
    material->getTechnique(0)->getPass(0)->createTextureUnitState("DynamicBg");
    material->getTechnique(0)->getPass(0)->setDepthCheckEnabled(false);
    material->getTechnique(0)->getPass(0)->setDepthWriteEnabled(false);
    material->getTechnique(0)->getPass(0)->setLightingEnabled(false);

    // Create background rectangle covering the whole screen
    Rectangle2D* rect = new Rectangle2D(true);
    rect->setCorners(-1.0, 1.0, 1.0, -1.0);
    rect->setMaterial("DynamicTextureMaterial");

    // Render the background before everything else
    rect->setRenderQueueGroup(RENDER_QUEUE_BACKGROUND);

    // Hacky, but we need to set the bounding box to something big
    // so the rectangle is never frustum-culled
    // NOTE: If you are using Eihort (v1.4), see the Ogre wiki note
    // on setting the bounding box
    rect->setBoundingBox(AxisAlignedBox(-100000.0*Vector3::UNIT_SCALE, 100000.0*Vector3::UNIT_SCALE));

    // Attach background to the scene
    SceneNode* node = mSceneMgr->getRootSceneNode()->createChildSceneNode("Background");
    node->attachObject(rect);

    // add other scene nodes here
}


2. Initialize the video capture data

void InitVideoQuery(char *szVideoPath)
{
    m_szVideoPath = szVideoPath;
    m_pFileCamer = cvCreateFileCapture(szVideoPath);
    m_pVideoImage = cvQueryFrame(m_pFileCamer);

    // Texture dimensions must be powers of two, so every frame
    // is resized into this 1024x1024 scratch image
    m_pDrawImage = cvCreateImage(cvSize(1024, 1024), IPL_DEPTH_8U, 3);

    // PF_A8R8G8B8 matches the 0x00RRGGBB packing used in
    // UpdateTextureImage; the texture is rewritten every frame,
    // so request a dynamic, discardable write-only buffer
    m_pTex = TextureManager::getSingleton().createManual("DynamicBg", "General",
        TEX_TYPE_2D, m_pDrawImage->width, m_pDrawImage->height, 1, 0,
        PF_A8R8G8B8, TU_DYNAMIC_WRITE_ONLY_DISCARDABLE);
}




3. Update the hardware pixel buffer

void UpdateTextureImage()
{
    m_pVideoImage = cvQueryFrame(m_pFileCamer);
    if (m_pVideoImage == 0) // end of video: reopen the file and loop
    {
        cvReleaseCapture(&m_pFileCamer);
        m_pFileCamer = cvCreateFileCapture(m_szVideoPath);
        m_pVideoImage = cvQueryFrame(m_pFileCamer);
    }
    cvShowImage("img", m_pVideoImage);

    // Scale the frame to the power-of-two scratch image
    cvResize(m_pVideoImage, m_pDrawImage);

    HardwarePixelBufferSharedPtr buffer = m_pTex->getBuffer(0, 0);
    buffer->lock(HardwareBuffer::HBL_DISCARD);
    const PixelBox &pb = buffer->getCurrentLock();
    uint32 *data = static_cast<uint32*>(pb.data);
    size_t pitch = pb.rowPitch; // number of pixels between rows of the locked buffer
    for (int y = 0; y < m_pDrawImage->height; ++y)
    {
        unsigned char *pImgLine = (unsigned char *)(m_pDrawImage->imageData + y * m_pDrawImage->widthStep);
        for (int x = 0; x < m_pDrawImage->width; ++x)
        {
            // IplImage stores pixels as BGR; repack each one as 0x00RRGGBB
            unsigned char B = pImgLine[3*x];
            unsigned char G = pImgLine[3*x + 1];
            unsigned char R = pImgLine[3*x + 2];
            uint32 pixel = (R << 16) | (G << 8) | B;
            data[pitch*y + x] = pixel;
        }
    }

    buffer->unlock();
}

It is worth noting that during the pixel-buffer update, i.e. between lock and unlock, operations on the hardware buffer are slow, so to improve performance you should minimize the amount of work done inside that window. One option is to pack the data into a staging buffer beforehand, then simply memcpy it into the locked buffer during the update.
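A minimal sketch of that approach, with the BGR-to-0x00RRGGBB conversion from UpdateTextureImage pulled out into a plain C++ function (the function name and the std::vector staging buffer are illustrative choices, not from the article):

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Convert a BGR frame into a contiguous XRGB32 staging buffer *before*
// locking the hardware buffer. width/height/srcStep mirror the IplImage
// fields (width, height, widthStep) used in the article.
void packBGRtoXRGB(const unsigned char* src, int width, int height, int srcStep,
                   std::vector<std::uint32_t>& staging)
{
    staging.resize(static_cast<std::size_t>(width) * height);
    for (int y = 0; y < height; ++y)
    {
        const unsigned char* line = src + y * srcStep;
        for (int x = 0; x < width; ++x)
        {
            std::uint32_t b = line[3*x];
            std::uint32_t g = line[3*x + 1];
            std::uint32_t r = line[3*x + 2];
            staging[static_cast<std::size_t>(y) * width + x] = (r << 16) | (g << 8) | b;
        }
    }
}
```

Then the only work between lock and unlock is a single copy (assuming pb.rowPitch equals the image width; otherwise copy row by row):

    std::memcpy(pb.data, staging.data(), staging.size() * sizeof(std::uint32_t));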

