
ffmpeg Development Guide (using libavformat and libavcodec)

The libavformat and libavcodec libraries that come with ffmpeg are a great way of accessing a large variety of video file formats. Unfortunately, there is no real documentation on using these libraries in your own programs (at least I couldn't find any), and
the example programs aren't really very helpful either.

This situation meant that, when I used libavformat/libavcodec on a recent project, it took quite a lot of experimentation to find out how to use them. Here's what I learned - hopefully I'll be able to save others from having to go through the same trial-and-error
process. There's also a small demo program that you can download. The code I'll present works with libavformat/libavcodec as included in version 0.4.8 of ffmpeg (the most recent version as I'm writing this). If you find that later versions break the code,
please let me know.

In this document, I'll only cover how to read video streams from a file; audio streams work pretty much the same way, but I haven't actually used them, so I can't present any example code.

In case you're wondering why there are two libraries, libavformat and libavcodec: Many video file formats (AVI being a prime example) don't actually specify which codec(s) should be used to encode audio and video data; they merely define how an audio and a
video stream (or, potentially, several audio/video streams) should be combined into a single file. This is why sometimes, when you open an AVI file, you get only sound, but no picture - because the right video codec isn't installed on your system. Thus, libavformat
deals with parsing video files and separating the streams contained in them, and libavcodec deals with decoding raw audio and video streams.

Opening a Video File

First things first - let's look at how to open a video file and get at the streams contained in it. The first thing we need to do is to initialize libavformat/libavcodec:

av_register_all();

This registers all available file formats and codecs with the library so they will be used automatically when a file with the corresponding format/codec is opened. Note that you only need to call av_register_all() once, so it's probably best to do this somewhere
in your startup code. If you like, it's possible to register only certain individual file formats and codecs, but there's usually no reason why you would have to do that.
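
As a minimal illustration, the one-time initialization could sit right at the top of main(); the scaffolding around the call below is my sketch, not part of the original demo:

#include <ffmpeg/avformat.h>   // or plain <avformat.h>, depending on your install (see the update below)

int main(int argc, char *argv[])
{
    // Register all formats and codecs once, before any file is opened
    av_register_all();

    // ... open files and decode as shown in the rest of this article ...
    return 0;
}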

Next up, opening the file:

AVFormatContext *pFormatCtx;
const char      *filename="myvideo.mpg";

// Open video file
if(av_open_input_file(&pFormatCtx, filename, NULL, 0, NULL)!=0)
    handle_error(); // Couldn't open file

The last three parameters specify the file format, buffer size and format parameters; by simply specifying NULL or 0 we ask libavformat to auto-detect the format and use a default buffer size. Replace handle_error() with appropriate error handling code for
your application.
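
For a simple command-line tool, the placeholder could be fleshed out like the following minimal sketch (a real application would also release whatever has already been allocated):

#include <stdio.h>
#include <stdlib.h>

// Hypothetical stand-in for the handle_error() placeholder used throughout
// this article: report the failure and bail out
static void handle_error(void)
{
    fprintf(stderr, "Fatal error - exiting\n");
    exit(1);
}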

Next, we need to retrieve information about the streams contained in the file:

// Retrieve stream information
if(av_find_stream_info(pFormatCtx)<0)
    handle_error(); // Couldn't find stream information

This fills the streams field of the AVFormatContext with valid information. As a debugging aid, we'll dump this information onto standard error, but of course you don't have to do this in a production application:

dump_format(pFormatCtx, 0, filename, false);

As mentioned in the introduction, we'll handle only video streams, not audio streams. To make things nice and easy, we simply use the first video stream we find:

int i, videoStream;
AVCodecContext *pCodecCtx;

// Find the first video stream
videoStream=-1;
for(i=0; i<pFormatCtx->nb_streams; i++)
    if(pFormatCtx->streams[i]->codec.codec_type==CODEC_TYPE_VIDEO)
    {
        videoStream=i;
        break;
    }
if(videoStream==-1)
    handle_error(); // Didn't find a video stream

// Get a pointer to the codec context for the video stream
pCodecCtx=&pFormatCtx->streams[videoStream]->codec;

OK, so now we've got a pointer to the so-called codec context for our video stream, but we still have to find the actual codec and open it:

AVCodec *pCodec;

// Find the decoder for the video stream
pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
if(pCodec==NULL)
    handle_error(); // Codec not found

// Inform the codec that we can handle truncated bitstreams -- i.e.,
// bitstreams where frame boundaries can fall in the middle of packets
if(pCodec->capabilities & CODEC_CAP_TRUNCATED)
    pCodecCtx->flags|=CODEC_FLAG_TRUNCATED;

// Open codec
if(avcodec_open(pCodecCtx, pCodec)<0)
    handle_error(); // Could not open codec

(So what's up with those "truncated bitstreams"? Well, as we'll see in a moment, the data in a video stream is split up into packets. Since the amount of data per video frame can vary, the boundary between two video frames need not coincide with a packet boundary.
Here, we're telling the codec that we can handle this situation.)

One important piece of information that is stored in the AVCodecContext structure is the frame rate of the video. To allow for non-integer frame rates (like NTSC's 29.97 fps), the rate is stored as a fraction, with the numerator in pCodecCtx->frame_rate and
the denominator in pCodecCtx->frame_rate_base. While testing the library with different video files, I noticed that some codecs (notably ASF) seem to fill these fields incorrectly (frame_rate_base contains 1 instead of 1000). The following hack fixes this:

// Hack to correct wrong frame rates that seem to be generated by
// some codecs
if(pCodecCtx->frame_rate>1000 && pCodecCtx->frame_rate_base==1)
    pCodecCtx->frame_rate_base=1000;

Note that it shouldn't be a problem to leave this fix in place even if the bug is corrected some day - it's unlikely that a video would have a frame rate of more than 1000 fps.
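
Since the rate is stored as a fraction, the effective frames per second is simply the ratio of the two fields; a small sketch:

// Effective frame rate as a floating-point value (e.g. 29.97 for NTSC)
double fps = (double)pCodecCtx->frame_rate / pCodecCtx->frame_rate_base;
printf("Frame rate: %.3f fps\n", fps);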

One more thing left to do: Allocate a video frame to store the decoded images in:

AVFrame *pFrame;

pFrame=avcodec_alloc_frame();

That's it! Now let's start decoding some video.

Decoding Video Frames

As I've already mentioned, a video file can contain several audio and video streams, and each of those streams is split up into packets of a particular size. Our job is to read these packets one by one using libavformat, filter out all those that aren't part
of the video stream we're interested in, and hand them on to libavcodec for decoding. In doing this, we'll have to take care of the fact that the boundary between two frames can occur in the middle of a packet.

Sound complicated? Luckily, we can encapsulate this whole process in a routine that simply returns the next video frame:

bool GetNextFrame(AVFormatContext *pFormatCtx, AVCodecContext *pCodecCtx,
    int videoStream, AVFrame *pFrame)
{
    static AVPacket packet;
    static int      bytesRemaining=0;
    static uint8_t  *rawData;
    static bool     fFirstTime=true;
    int             bytesDecoded;
    int             frameFinished;

    // First time we're called, set packet.data to NULL to indicate it
    // doesn't have to be freed
    if(fFirstTime)
    {
        fFirstTime=false;
        packet.data=NULL;
    }

    // Decode packets until we have decoded a complete frame
    while(true)
    {
        // Work on the current packet until we have decoded all of it
        while(bytesRemaining > 0)
        {
            // Decode the next chunk of data
            bytesDecoded=avcodec_decode_video(pCodecCtx, pFrame,
                &frameFinished, rawData, bytesRemaining);

            // Was there an error?
            if(bytesDecoded < 0)
            {
                fprintf(stderr, "Error while decoding frame\n");
                return false;
            }

            bytesRemaining-=bytesDecoded;
            rawData+=bytesDecoded;

            // Did we finish the current frame? Then we can return
            if(frameFinished)
                return true;
        }

        // Read the next packet, skipping all packets that aren't for this
        // stream
        do
        {
            // Free old packet
            if(packet.data!=NULL)
                av_free_packet(&packet);

            // Read new packet
            if(av_read_packet(pFormatCtx, &packet)<0)
                goto loop_exit;
        } while(packet.stream_index!=videoStream);

        bytesRemaining=packet.size;
        rawData=packet.data;
    }

loop_exit:

    // Decode the rest of the last frame
    bytesDecoded=avcodec_decode_video(pCodecCtx, pFrame, &frameFinished,
        rawData, bytesRemaining);

    // Free last packet
    if(packet.data!=NULL)
        av_free_packet(&packet);

    return frameFinished!=0;
}

Now, all we have to do is sit in a loop, calling GetNextFrame() until it returns false. Just one more thing to take care of: Most codecs return images in YUV 420 format (one luminance and two chrominance channels, with the chrominance channels sampled at half the spatial resolution of the luminance channel). Depending on what you want to do with the video data, you may want to convert this to RGB. (Note, though, that this is not necessary if all you want to do is display the video data; take a look at the X11 Xvideo extension, which does YUV-to-RGB conversion and scaling in hardware.) Fortunately, libavcodec provides a conversion routine called img_convert, which does conversion between YUV and RGB as well as a variety of other image formats. The loop that decodes the video thus becomes:

while(GetNextFrame(pFormatCtx, pCodecCtx, videoStream, pFrame))
{
    img_convert((AVPicture *)pFrameRGB, PIX_FMT_RGB24, (AVPicture*)pFrame,
        pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);

    // Process the video frame (save to disk etc.)
    DoSomethingWithTheImage(pFrameRGB);
}
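
To make DoSomethingWithTheImage() concrete, here is one possible implementation, a sketch of my own rather than part of the original demo, that writes an RGB24 frame out as a binary PPM file; it takes the image size and a frame counter as extra parameters:

// Hypothetical frame handler: dump one RGB24 frame to a PPM file.
// Assumes pFrameRGB was filled via img_convert with PIX_FMT_RGB24 as above.
void DoSomethingWithTheImage(AVFrame *pFrameRGB, int width, int height,
    int frameNumber)
{
    char szFilename[32];
    FILE *pFile;
    int  y;

    sprintf(szFilename, "frame%d.ppm", frameNumber);
    pFile=fopen(szFilename, "wb");
    if(pFile==NULL)
        return;

    // PPM header: magic number, image dimensions, maximum color value
    fprintf(pFile, "P6\n%d %d\n255\n", width, height);

    // Write the pixels row by row; linesize[0] may be larger than width*3
    for(y=0; y<height; y++)
        fwrite(pFrameRGB->data[0]+y*pFrameRGB->linesize[0], 1, width*3, pFile);

    fclose(pFile);
}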

The RGB image pFrameRGB (of type AVFrame *) is allocated like this:

AVFrame *pFrameRGB;
int     numBytes;
uint8_t *buffer;

// Allocate an AVFrame structure
pFrameRGB=avcodec_alloc_frame();
if(pFrameRGB==NULL)
    handle_error();

// Determine required buffer size and allocate buffer
numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
    pCodecCtx->height);
buffer=new uint8_t[numBytes];

// Assign appropriate parts of buffer to image planes in pFrameRGB
avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
    pCodecCtx->width, pCodecCtx->height);

Cleaning up

OK, we've read and processed our video, now all that's left for us to do is clean up after ourselves:

// Free the RGB image
delete [] buffer;
av_free(pFrameRGB);

// Free the YUV frame
av_free(pFrame);

// Close the codec
avcodec_close(pCodecCtx);

// Close the video file
av_close_input_file(pFormatCtx);

Done!
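
If you want to build a standalone test program from these fragments, the link line is similar to the 0.4.9 command given in the update below; something like the following may suffice for 0.4.8, although the exact library list depends on how your ffmpeg was configured:

g++ -o avcodec_sample avcodec_sample.cpp -lavformat -lavcodec -lz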

admin posted on 2006-4-4 10:48 AM

Update (April 26, 2005): A reader informs me that to compile the example programs on Kanotix (a Debian derivative) and possibly Debian itself, the include directives for avcodec.h and avformat.h have to be prefixed with "ffmpeg", like this:

#include <ffmpeg/avcodec.h>
#include <ffmpeg/avformat.h>

Also, the library libdts has to be linked in when compiling the programs, like this:

g++ -o avcodec_sample.0.4.9 avcodec_sample.0.4.9.cpp \
    -lavformat -lavcodec -ldts -lz

A few months ago, I wrote an article on using the libavformat and libavcodec libraries that come with ffmpeg. Since then, I have received a number of comments, and a new prerelease version of ffmpeg (0.4.9-pre1) has recently become available, adding support
for seeking in video files, new file formats, and a simplified interface for reading video frames. These changes have been in the CVS for a while, but now is the first time we get to see them in a release. (Thanks by the way to Silviu Minut for sharing the
results of long hours of studying the CVS versions of ffmpeg - his page with ffmpeg information and a demo program is here.)

In this article, I'll describe only the differences between the previous release (0.4.8) and the new one, so if you're new to libavformat / libavcodec, I suggest you read the original article first.

First, a word about compiling the new release. On my compiler (gcc 3.3.1 on SuSE), I get an internal compiler error while compiling the source file ffv1.c. I suspect this particular version of gcc is a little flaky - I've had the same thing happen to me when
compiling OpenCV - but at any rate, a quick fix is to compile this one file without optimizations. The easiest way to do this is to do a make, then when the build hits the compiler error, change to the libavcodec subdirectory (since this is where ffv1.c lives),
copy the gcc command to compile ffv1.c from your terminal window, paste it back in, edit out the "-O3" compiler switch and then run gcc using that command. After that, you can change back to the main ffmpeg directory and restart make, and it should complete
the build.

What's New?

So what's new? From a programmer's point of view, the biggest change is probably the simplified interface for reading individual video frames from a video file. In ffmpeg 0.4.8 and earlier, data is read from the video file in packets using the routine av_read_packet(). Usually, the information for one video frame is spread out over several packets, and the situation is made even more complicated by the fact that the boundary between two video frames can come in the middle of a packet. Thankfully, ffmpeg 0.4.9 introduces a new routine called av_read_frame(), which returns all of the data for a video frame in a single packet. The old way of reading video data using av_read_packet() is still supported but deprecated - I say: good riddance.

So let's take a look at how to access video data using the new API. In my original article (with the old 0.4.8 API), the main decode loop looked like this:

while(GetNextFrame(pFormatCtx, pCodecCtx, videoStream, pFrame))
{
    img_convert((AVPicture *)pFrameRGB, PIX_FMT_RGB24, (AVPicture*)pFrame,
        pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);

    // Process the video frame (save to disk etc.)
    DoSomethingWithTheImage(pFrameRGB);
}

GetNextFrame() is a helper routine that handles the process of assembling all of the packets that make up one video frame. The new API simplifies things to the point that we can do the actual reading and decoding of data directly in our main loop:

AVPacket packet;
int      frameFinished;

while(av_read_frame(pFormatCtx, &packet)>=0)
{
    // Is this a packet from the video stream?
    if(packet.stream_index==videoStream)
    {
        // Decode video frame
        avcodec_decode_video(pCodecCtx, pFrame, &frameFinished,
            packet.data, packet.size);

        // Did we get a video frame?
        if(frameFinished)
        {
            // Convert the image from its native format to RGB
            img_convert((AVPicture *)pFrameRGB, PIX_FMT_RGB24,
                (AVPicture*)pFrame, pCodecCtx->pix_fmt, pCodecCtx->width,
                pCodecCtx->height);

            // Process the video frame (save to disk etc.)
            DoSomethingWithTheImage(pFrameRGB);
        }
    }

    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
}

At first sight, it looks as if things have actually gotten more complex - but that is just because this piece of code does things that used to be hidden in the GetNextFrame() routine (checking whether the packet belongs to the video stream, decoding the frame and freeing the packet). Overall, because we can eliminate GetNextFrame() completely, things have gotten a lot easier.

I've updated the demo program to use the new API. Simply comparing the number of lines (222 lines for the old version vs. 169 lines for the new one) shows that the new API has simplified things considerably.

Another important addition in the 0.4.9 release is the ability to seek to a certain timestamp in a video file. This is accomplished using the av_seek_frame() function, which takes three parameters: A pointer to the AVFormatContext, a stream index and the timestamp
to seek to. The function will then seek to the first key frame before the given timestamp. All of this is from the documentation - I haven't gotten round to actually testing av_seek_frame() yet, so I can't present any sample code either. If you've used av_seek_frame()
successfully, I'd be glad to hear about it.
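
Going by that description alone, a call would presumably look like the untested sketch below; targetTimestamp is a hypothetical variable, and its exact units should be checked against avformat.h for your version:

// Untested: seek to the first key frame before targetTimestamp
// in the video stream we found earlier
int64_t targetTimestamp = 0; /* some timestamp of interest */
if(av_seek_frame(pFormatCtx, videoStream, targetTimestamp) < 0)
    handle_error(); // Seek failed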

Frame Grabbing (Video4Linux and IEEE1394)

Toru Tamaki sent me some sample code that demonstrates how to grab frames from a Video4Linux or IEEE1394 video source using libavformat / libavcodec. For Video4Linux, the call to av_open_input_file() should be modified as follows:

AVFormatParameters formatParams;
AVInputFormat      *iformat;

formatParams.device = "/dev/video0";
formatParams.channel = 0;
formatParams.standard = "ntsc";
formatParams.width = 640;
formatParams.height = 480;
formatParams.frame_rate = 29;
formatParams.frame_rate_base = 1;
filename = "";
iformat = av_find_input_format("video4linux");

av_open_input_file(&ffmpegFormatContext,
    filename, iformat, 0, &formatParams);
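
One caveat: this fragment sets only some of the fields in formatParams, so anything not assigned explicitly holds whatever happens to be on the stack. A small defensive sketch of my own (not from Toru Tamaki's code) would zero the structure first:

AVFormatParameters formatParams;

// memset (from <string.h>) clears every field, so unset ones are 0/NULL
// rather than stack garbage
memset(&formatParams, 0, sizeof(formatParams));
formatParams.device = "/dev/video0";
// ... set the remaining fields as shown above ...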

For IEEE1394, call av_open_input_file() like this:

AVFormatParameters formatParams;
AVInputFormat      *iformat;

formatParams.device = "/dev/dv1394";
filename = "";
iformat = av_find_input_format("dv1394");

av_open_input_file(&ffmpegFormatContext,
    filename, iformat, 0, &formatParams);

To be continued...

If I come across additional interesting information about libavformat / libavcodec, I plan to publish it here. So, if you have any comments, please contact me at the address given at the top of this article.

Standard disclaimer: I assume no liability for the correct functioning of the code and techniques presented in this article.

stiphon posted on 2006-5-9 05:11 PM


Reply to admin's post (#2)

Does this example actually work? I couldn't get it to run.

admin posted on 2006-5-9 09:43 PM

The examples only illustrate how the API is used; some of it is pseudocode, so of course it won't run as-is. I suggest you look at the sample programs that ship with ffmpeg, such as output_example.c - those do run!

lsosa posted on 2006-5-20 12:54 PM

New here, so first a post to check in, heh...

My graduation project required a translation, so I picked this section of the ffmpeg guide and translated it into Chinese for reference.

admin posted on 2006-5-20 01:11 PM

Nice work!

Fastreaming posted on 2006-5-20 01:20 PM

"// 通知解码器我们能够处理截断的bit流--ie,

// bit流帧边界可以在包中

if(pCodec->capabilities & CODEC_CAP_TRUNCATED)

pCodecCtx->flags|=CODEC_FLAG_TRUNCATED;

"

This statement is wrong,take care it or just ignore it!

lsosa posted on 2006-5-20 02:17 PM

Originally posted by Fastreaming on 2006-5-20 01:20 PM:

"// Inform the codec that we can handle truncated bitstreams -- i.e.,
// bitstreams where frame boundaries can fall in the middle of packets
if(pCodec->capabilities & CODEC_CAP_TRUNCATED)
    pCodecCtx->flags|=CODEC_FLAG_TRUNCATED;
"

This s ...


Well, I'm about to start actually using this now; up to this point I've only translated it. I'm sure I'll run into problems along the way. In the end, I hope we can all trade notes, for everyone's benefit and my own...

rob posted on 2006-7-14 02:56 PM

Quite helpful for a newcomer like me.


go posted on 2006-7-27 04:34 PM

I can't follow this.

tang1007 posted on 2006-8-9 04:12 PM

Thanks for sharing!

xjqanswer posted on 2006-11-6 01:09 PM


How do I compile the sample program on its own?

I downloaded a program called "ffmpeg api sample.c" and compiled it the way the file itself suggests (gcc -o xxxx xxx.c -lavformat -lavcodec -lz), but many header files could not be found and the build fails. This is my first time programming against the ffmpeg API, so I would like to ask the more experienced folks here how an ffmpeg API program is normally compiled. The more detail the better. Thanks!

guoguotux posted on 2006-11-7 03:50 PM

Thanks.

vincente13 posted on 2006-12-13 12:38 AM

I've just used Martin Böhme's example to decode and re-encode audio.

Here is my program:

http://www.ntu.edu.sg/home2003/y030004/ffmpeg/src/ffmpeg-audio-decode.cpp

The main part of the program:

while(av_read_frame(pFormatCtx, &packet)>=0)
{
    // Is this a packet from the audio stream?
    if(packet.stream_index==audioStream)
    {
        avcodec_decode_audio(pCodecCtx, buffer, &frameFinished, packet.data, packet.size);

        // Did we get an audio frame?
        if(frameFinished)
        {
            /* encode the samples */
            pkt.size= avcodec_encode_audio(c, outbuf, packet.size, buffer);
            //pkt.pts= av_rescale_q(c->coded_frame->pts, c->time_base, audio_st->time_base);
            pkt.pts = packet.pts;
            pkt.data= outbuf;

            /* write the compressed frame in the media file */
            if (av_write_frame(oc, &pkt) != 0)
            {
                fprintf(stderr, "Error while writing audio frame\n");
                exit(1);
            }
        }
    }

    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
    av_free_packet(&pkt);
}