
An analysis of AVPacket, the packet structure of the C/C++ audio/video library FFmpeg

2017-03-15 07:34



FFmpeg download: http://www.ffmpeg.club/

AVPacket is the structure FFmpeg uses to hold encoded (compressed) frame data. Let's analyze this structure, starting with its declaration from the FFmpeg 3.2 sources:
typedef struct AVPacket {
    /**
     * A reference to the reference-counted buffer where the packet data is
     * stored.
     * May be NULL, then the packet data is not reference-counted.
     */
    AVBufferRef *buf;
    /**
     * Presentation timestamp in AVStream->time_base units; the time at which
     * the decompressed packet will be presented to the user.
     * Can be AV_NOPTS_VALUE if it is not stored in the file.
     * pts MUST be larger or equal to dts as presentation cannot happen before
     * decompression, unless one wants to view hex dumps. Some formats misuse
     * the terms dts and pts/cts to mean something different. Such timestamps
     * must be converted to true pts/dts before they are stored in AVPacket.
     */
    int64_t pts;
    /**
     * Decompression timestamp in AVStream->time_base units; the time at which
     * the packet is decompressed.
     * Can be AV_NOPTS_VALUE if it is not stored in the file.
     */
    int64_t dts;
    uint8_t *data;
    int size;
    int stream_index;
    /**
     * A combination of AV_PKT_FLAG values
     */
    int flags;
    /**
     * Additional packet data that can be provided by the container.
     * Packet can contain several types of side information.
     */
    AVPacketSideData *side_data;
    int side_data_elems;

    /**
     * Duration of this packet in AVStream->time_base units, 0 if unknown.
     * Equals next_pts - this_pts in presentation order.
     */
    int64_t duration;

    int64_t pos; ///< byte position in stream, -1 if unknown

#if FF_API_CONVERGENCE_DURATION
    /**
     * @deprecated Same as the duration field, but as int64_t. This was required
     * for Matroska subtitles, whose duration values could overflow when the
     * duration field was still an int.
     */
    attribute_deprecated
    int64_t convergence_duration;
#endif
} AVPacket;
Let's go through the fields one by one.

AVBufferRef *buf;

A reference to the reference-counted buffer holding this packet's data. If the packet is not reference-counted, this is NULL. It matters when several packet objects share the same underlying frame data.
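FFmpeg's actual reference counting lives in AVBufferRef (libavutil/buffer.h) and is driven through calls such as av_packet_ref / av_packet_unref. The toy model below is not FFmpeg code; it only sketches the idea that several owners share one payload and the last one to release it frees the memory:

```c
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-in for AVBufferRef: NOT FFmpeg's real code,
 * just a sketch of the reference-counting idea. */
typedef struct ToyBuffer {
    uint8_t *data;
    int      size;
    int      refcount;   /* how many "packets" currently share this buffer */
} ToyBuffer;

static ToyBuffer *toy_buffer_alloc(int size)
{
    ToyBuffer *b = malloc(sizeof(*b));
    b->data = malloc(size);
    b->size = size;
    b->refcount = 1;     /* the creator holds the first reference */
    return b;
}

static ToyBuffer *toy_buffer_ref(ToyBuffer *b)   /* like av_packet_ref: share, don't copy */
{
    b->refcount++;
    return b;
}

static void toy_buffer_unref(ToyBuffer **pb)     /* like av_packet_unref */
{
    ToyBuffer *b = *pb;
    *pb = NULL;
    if (--b->refcount == 0) {                    /* last owner frees the payload */
        free(b->data);
        free(b);
    }
}
```

This is why copying an AVPacket with av_packet_ref is cheap: only the counter changes, not the (possibly large) compressed data.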

int64_t pts;

The presentation timestamp of this frame, and a key field: both seeking and playback-progress display depend on it. pts is only a count in units of AVStream->time_base; it must be multiplied by the time_base to get an actual time. Audio and video streams generally have different time_bases, so when synchronizing audio and video you must convert first and never compare raw pts values directly.

For example, converting to milliseconds:

AVFormatContext *ic = NULL;

// Convert an AVRational to a double, guarding against division by zero
static double r2d(AVRational r)
{
    return r.num == 0 || r.den == 0 ? 0. : (double)r.num / (double)r.den;
}

// ...

// pts of this packet in milliseconds, using the time_base of its own stream
int pts_ms = (int)(pkt->pts * r2d(ic->streams[pkt->stream_index]->time_base) * 1000);
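The conversion above can be checked with concrete numbers. The sketch below mirrors the two-field shape of FFmpeg's AVRational (int num, den, as in libavutil/rational.h) under a different name so it compiles without the FFmpeg headers; `pts_to_ms` is a hypothetical helper wrapping the same arithmetic:

```c
#include <stdint.h>

/* Mirrors FFmpeg's AVRational so this compiles without FFmpeg headers. */
typedef struct MsRational { int num, den; } MsRational;

static double r2d(MsRational r)
{
    return r.num == 0 || r.den == 0 ? 0. : (double)r.num / (double)r.den;
}

/* pts (in time_base units) -> milliseconds.
 * The +0.5 rounds to nearest (fine for the non-negative pts used here). */
static int64_t pts_to_ms(int64_t pts, MsRational time_base)
{
    return (int64_t)(pts * r2d(time_base) * 1000 + 0.5);
}
```

With a typical 90 kHz video time_base {1, 90000}, pts 90000 means 1000 ms; with a 48 kHz audio time_base {1, 48000}, pts 48000 also means 1000 ms. Equal raw pts values therefore do not imply equal times, which is exactly why synchronization must convert first.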

int64_t dts;

Much the same kind of value as pts, except dts is the decoding time rather than the display time, and decoded frames are buffered until presentation. With H.264, for example, if B-frames are present, the reference frame they predict from must be decoded before them even though it is displayed after them, so decode order differs from display order.
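The dts/pts relationship can be illustrated with made-up timestamps (not taken from any real file). Decode order here is I P B B, while display order by pts is I B B P; note that dts increases monotonically, pts does not, and pts >= dts always holds, as the struct's own comment requires:

```c
#include <stdint.h>

/* Illustrative timestamps for a 4-frame sequence, in time_base units. */
typedef struct GopPkt { int64_t dts, pts; char type; } GopPkt;

static const GopPkt decode_order[4] = {
    { 0, 1, 'I' },   /* decoded 1st, displayed 1st */
    { 1, 4, 'P' },   /* decoded 2nd, displayed last: the B-frames need it */
    { 2, 2, 'B' },   /* decoded after P, displayed before it */
    { 3, 3, 'B' },
};

/* Check the two invariants: pts >= dts for every packet, and dts
 * strictly increasing in decode order. */
static int timestamps_consistent(const GopPkt *p, int n)
{
    for (int i = 0; i < n; i++) {
        if (p[i].pts < p[i].dts) return 0;
        if (i > 0 && p[i].dts <= p[i - 1].dts) return 0;
    }
    return 1;
}
```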

uint8_t *data; int size;

The packet's data and its size in bytes.

int stream_index;

Index of the stream this packet belongs to, used to tell audio, video, and subtitle packets apart.
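A typical demuxing loop dispatches on stream_index. The sketch below uses mock packets and hypothetical stream numbers; in real code you would obtain the indexes from av_find_best_stream or by scanning ic->streams:

```c
/* Hypothetical layout: stream 0 = video, stream 1 = audio. */
enum { VIDEO_STREAM = 0, AUDIO_STREAM = 1 };

typedef struct IdxPkt { int stream_index; } IdxPkt;

/* Route each packet by its stream_index, the same test a real loop
 * uses to send packets to the right decoder. Here we just count. */
static void count_packets(const IdxPkt *pkts, int n, int *video, int *audio)
{
    *video = 0;
    *audio = 0;
    for (int i = 0; i < n; i++) {
        if (pkts[i].stream_index == VIDEO_STREAM)
            (*video)++;
        else if (pkts[i].stream_index == AUDIO_STREAM)
            (*audio)++;
        /* other indexes would be subtitles, data streams, ... */
    }
}
```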

int flags;

A bitmask of AV_PKT_FLAG values; when the AV_PKT_FLAG_KEY bit is set, the packet is a keyframe.

AV_PKT_FLAG_KEY 0x0001 keyframe
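Since flags is a bitmask, the keyframe test must use a bitwise AND, not equality. The stand-in struct below holds just the one field we need (real code reads flags straight from an AVPacket); the 0x0001 value is the one quoted above:

```c
#define AV_PKT_FLAG_KEY 0x0001  /* value as documented above */

/* Minimal stand-in for the field we need; real code uses AVPacket. */
typedef struct KeyPkt { int flags; } KeyPkt;

static int is_keyframe(const KeyPkt *pkt)
{
    /* Test the bit: other flag bits may be set at the same time. */
    return (pkt->flags & AV_PKT_FLAG_KEY) != 0;
}
```

Seeking code typically resumes decoding at the nearest keyframe, because inter-predicted frames cannot be decoded without one.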

AVPacketSideData *side_data;

int side_data_elems;

Extra data supplied by the container; a packet can carry several kinds of side information.

int64_t duration;

The next frame's pts minus this frame's pts, i.e. the interval between two frames, in AVStream->time_base units (0 if unknown).
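Like pts, duration is in time_base units, so converting it to seconds is the same multiplication. The sketch below again mirrors AVRational's two-field shape so it compiles without FFmpeg headers; the 3600/90000 numbers are an illustrative 25 fps case, not from any particular file:

```c
#include <stdint.h>

/* Same two-field shape as FFmpeg's AVRational. */
typedef struct DurRational { int num, den; } DurRational;

/* Interval covered by one packet, in seconds: duration * time_base.
 * E.g. duration 3600 with a 1/90000 time_base is 0.04 s, i.e. 25 fps. */
static double packet_interval_sec(int64_t duration, DurRational tb)
{
    if (tb.num == 0 || tb.den == 0)
        return 0.;
    return (double)duration * tb.num / tb.den;
}
```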

int64_t pos;

Byte position of this packet in the file, used when seeking by file offset; for network streams such as RTSP it is unavailable (-1).

For more material, see my video courses on CSDN:
Xia Caojun's course column: http://edu.csdn.net/lecturer/961
Build a video player step by step:
http://edu.csdn.net/course/detail/3300

