
live555 Source Code Analysis: Handling the DESCRIBE Command

2014-04-08 10:12
Reposted from: http://blog.csdn.net/gavinr/article/details/7026497

live555's handling of the DESCRIBE command is fairly involved; the detailed process is as follows.

1. The DESCRIBE handler function

void RTSPServer::RTSPClientSession
::handleCmd_DESCRIBE(char const* cseq,
                     char const* urlPreSuffix, char const* urlSuffix,
                     char const* fullRequestStr) {
  ...
    // Authentication check; returns True by default
    if (!authenticationOK("DESCRIBE", cseq, urlTotalSuffix, fullRequestStr)) break;

    // We should really check that the request contains an "Accept:" #####
    // for "application/sdp", because that's what we're sending back #####

    // Look up the session for this resource (1.1)
    // Begin by looking up the "ServerMediaSession" object for the specified "urlTotalSuffix":
    ServerMediaSession* session = fOurServer.lookupServerMediaSession(urlTotalSuffix);
    ...
    // Get the SDP description (1.2)
    // Then, assemble a SDP description for this session:
    sdpDescription = session->generateSDPDescription();
    ...
    // Assemble the response packet
    // Also, generate our RTSP URL, for the "Content-Base:" header
    // (which is necessary to ensure that the correct URL gets used in
    // subsequent "SETUP" requests).
    rtspURL = fOurServer.rtspURL(session, fClientInputSocket);

    snprintf((char*)fResponseBuffer, sizeof fResponseBuffer,
             "RTSP/1.0 200 OK\r\nCSeq: %s\r\n"
             "%s"
             "Content-Base: %s/\r\n"
             "Content-Type: application/sdp\r\n"
             "Content-Length: %d\r\n\r\n"
             "%s",
             cseq,
             dateHeader(),
             rtspURL,
             sdpDescriptionSize,
             sdpDescription);
  } while (0);
  ...
}

DynamicRTSPServer::lookupServerMediaSession is called to look up the session for the requested resource (URL); this step instantiates the objects needed to serve the stream (session, subsessions, and later the sink and source). The session's generateSDPDescription is then called to produce the SDP description. The SDP usually needs some information taken from the media file itself, so somewhere in this step the file must be read. Finally, the response packet is assembled.

2. Looking up the session for a resource (1.1)

ServerMediaSession*
DynamicRTSPServer::lookupServerMediaSession(char const* streamName) {
  // First, check whether the specified "streamName" exists as a local file:
  FILE* fid = fopen(streamName, "rb");
  Boolean fileExists = fid != NULL;

  // Next, check whether we already have a "ServerMediaSession" for this file:
  ServerMediaSession* sms = RTSPServer::lookupServerMediaSession(streamName); // look up the corresponding session
  Boolean smsExists = sms != NULL;

  // Handle the four possibilities for "fileExists" and "smsExists":
  if (!fileExists) {
    if (smsExists) {
      // "sms" was created for a file that no longer exists. Remove it:
      removeServerMediaSession(sms); // the file no longer exists, so remove its session
    }
    return NULL;
  } else {
    if (!smsExists) {
      // Create a new "ServerMediaSession" object for streaming from the named file.
      sms = createNewSMS(envir(), streamName, fid); // no session yet, so create one (2.2)
      addServerMediaSession(sms); // add it to the session table (2.1)
    }
    fclose(fid);
    return sms;
  }
}

DynamicRTSPServer is a subclass of RTSPServer; the inheritance chain is DynamicRTSPServer->RTSPServerSupportingHTTPStreaming->RTSPServer.

RTSPServer::lookupServerMediaSession is called to check whether a session already exists for streamName. If it does not, a new session is created and added to the session table; otherwise the existing session is returned directly.
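The on-demand media server builds these sessions lazily, as shown above, but an application can also register a ServerMediaSession up front, in the style of the testOnDemandRTSPServer demo. A minimal sketch (the port number, stream name and file name are placeholders):

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

int main() {
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
  if (rtspServer == NULL) {
    *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
    return 1;
  }

  // One ServerMediaSession per stream name, one subsession per elementary stream:
  ServerMediaSession* sms
    = ServerMediaSession::createNew(*env, "h264Test", "test.264", "H.264 test stream");
  sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(*env, "test.264", False));
  rtspServer->addServerMediaSession(sms);

  env->taskScheduler().doEventLoop(); // does not return
  return 0;
}

A DESCRIBE for rtsp://<server>:8554/h264Test would then find this pre-registered session instead of going through createNewSMS.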

3. Session management (2.1)

Sessions are stored in a hash table managed by the RTSPServer class. First, the session lookup function:

ServerMediaSession* RTSPServer::lookupServerMediaSession(char const* streamName) {
  return (ServerMediaSession*)(fServerMediaSessions->Lookup(streamName));
}

fServerMediaSessions is a member of RTSPServer; it is a hash table, declared as HashTable* fServerMediaSessions.

Adding a session to the hash table:

void RTSPServer::addServerMediaSession(ServerMediaSession* serverMediaSession) {
  if (serverMediaSession == NULL) return;

  char const* sessionName = serverMediaSession->streamName();
  if (sessionName == NULL) sessionName = "";
  ServerMediaSession* existingSession
    = (ServerMediaSession*)(fServerMediaSessions->Add(sessionName, (void*)serverMediaSession));
  removeServerMediaSession(existingSession); // if any
}

If a session with the same name already exists in the hash table, the old session is removed.
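For reference, the table uses the stream name as a string key; in the RTSPServer constructor it is created along these lines (a sketch from memory of the live555 source, not an exact quote):

#include "HashTable.hh"

// Keys are the stream names (C strings); values are ServerMediaSession pointers:
HashTable* fServerMediaSessions = HashTable::create(STRING_HASH_KEYS);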

4. Creating a session (2.2)

Different media types require different kinds of session, so to keep this easy to modify, the session-creation code lives in a separate function, createNewSMS.

static ServerMediaSession* createNewSMS(UsageEnvironment& env,
                                        char const* fileName, FILE* /*fid*/) {
  // Use the file name extension to determine the type of "ServerMediaSession":
  char const* extension = strrchr(fileName, '.');
  if (extension == NULL) return NULL;

  ServerMediaSession* sms = NULL;
  Boolean const reuseSource = False;
  if (strcmp(extension, ".aac") == 0) {
    // Assumed to be an AAC Audio (ADTS format) file:
    NEW_SMS("AAC Audio");
    sms->addSubsession(ADTSAudioFileServerMediaSubsession::createNew(env, fileName, reuseSource));
  } else if (strcmp(extension, ".amr") == 0) {
    // Assumed to be an AMR Audio file:
    NEW_SMS("AMR Audio");
    sms->addSubsession(AMRAudioFileServerMediaSubsession::createNew(env, fileName, reuseSource));
  } else if (strcmp(extension, ".ac3") == 0) {
    // Assumed to be an AC-3 Audio file:
    NEW_SMS("AC-3 Audio");
    sms->addSubsession(AC3AudioFileServerMediaSubsession::createNew(env, fileName, reuseSource));
  } else if (strcmp(extension, ".m4e") == 0) {
    // Assumed to be a MPEG-4 Video Elementary Stream file:
    NEW_SMS("MPEG-4 Video");
    sms->addSubsession(MPEG4VideoFileServerMediaSubsession::createNew(env, fileName, reuseSource));
  } else if (strcmp(extension, ".264") == 0) {
    // Assumed to be a H.264 Video Elementary Stream file:
    NEW_SMS("H.264 Video");
    OutPacketBuffer::maxSize = 100000; // allow for some possibly large H.264 frames
    sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(env, fileName, reuseSource));
  }
  ...
  else if (strcmp(extension, ".mpg") == 0) {
    // Assumed to be a MPEG-1 or 2 Program Stream (audio+video) file:
    NEW_SMS("MPEG-1 or 2 Program Stream");
    MPEG1or2FileServerDemux* demux
      = MPEG1or2FileServerDemux::createNew(env, fileName, reuseSource);
    sms->addSubsession(demux->newVideoServerMediaSubsession());
    sms->addSubsession(demux->newAudioServerMediaSubsession());
  }
  // other media formats
  ...
  return sms;
}

The actual session creation is hidden inside the NEW_SMS macro, defined as follows:

#define NEW_SMS(description) do {\
  char const* descStr = description\
    ", streamed by the LIVE555 Media Server";\
  sms = ServerMediaSession::createNew(env, fileName, fileName, descStr);\
} while(0)

ServerMediaSession::createNew instantiates a ServerMediaSession object.
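For reference, the factory declared in ServerMediaSession.hh looks roughly like this (parameter names and default values are approximate), which shows that NEW_SMS passes the file name as both the stream name and the SDP info string:

// Sketch of the declaration, not an exact quote:
static ServerMediaSession* createNew(UsageEnvironment& env,
                                     char const* streamName = NULL,
                                     char const* info = NULL,
                                     char const* description = NULL,
                                     Boolean isSSM = False,
                                     char const* miscSDPLines = NULL);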

5. Creating subsessions (4)

The only composite container formats live555 supports are *.mpg, *.mkv and *.webm. As the code shows, the program creates one subsession for each stream in the container, and then adds each subsession to the ServerMediaSession via ServerMediaSession::addSubsession.

First, let's look at how subsessions are managed:

Boolean
ServerMediaSession::addSubsession(ServerMediaSubsession* subsession) {
  if (subsession->fParentSession != NULL) return False; // it's already used

  if (fSubsessionsTail == NULL) {
    fSubsessionsHead = subsession;
  } else {
    fSubsessionsTail->fNext = subsession;
  }
  fSubsessionsTail = subsession;

  subsession->fParentSession = this;
  subsession->fTrackNumber = ++fSubsessionCounter;
  return True;
}

As this code shows, the ServerMediaSession keeps its subsessions in a linked list.
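Elsewhere in live555 (for example when handling SETUP), this list is walked with a ServerMediaSubsessionIterator. A minimal sketch of that pattern, assuming env and sms already exist:

// "env" is a UsageEnvironment*, "sms" an existing ServerMediaSession* (placeholders):
ServerMediaSubsessionIterator iter(*sms);
ServerMediaSubsession* subsession;
while ((subsession = iter.next()) != NULL) {
  // trackId() returns "track1", "track2", ..., based on fTrackNumber set above
  *env << "subsession: " << subsession->trackId() << "\n";
}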

As seen in (4), each type of media stream has its own subsession implementation. The code in (4) that instantiates the H.264 subsession is:

H264VideoFileServerMediaSubsession::createNew(env, fileName, reuseSource)

Note the reuseSource parameter, which indicates whether the source should be reused (shared across clients); here it is set to False.

Inheritance chain: H264VideoFileServerMediaSubsession->FileServerMediaSubsession->OnDemandServerMediaSubsession->ServerMediaSubsession->Medium

6. Generating the SDP description (1.2)

char* ServerMediaSession::generateSDPDescription() {
  ...
  char* rangeLine = NULL; // for now
  char* sdp = NULL; // for now
  do {
    ...
    // Unless subsessions have differing durations, we also have a "a=range:" line:
    float dur = duration();
    if (dur == 0.0) {
      rangeLine = strDup("a=range:npt=0-\r\n");
    } else if (dur > 0.0) {
      char buf[100];
      sprintf(buf, "a=range:npt=0-%.3f\r\n", dur);
      rangeLine = strDup(buf);
    } else { // subsessions have differing durations, so "a=range:" lines go there
      rangeLine = strDup("");
    }

    char const* const sdpPrefixFmt =
      "v=0\r\n"
      "o=- %ld%06ld %d IN IP4 %s\r\n"
      "s=%s\r\n"
      "i=%s\r\n"
      "t=0 0\r\n"
      "a=tool:%s%s\r\n"
      "a=type:broadcast\r\n"
      "a=control:*\r\n"
      "%s"
      "%s"
      "a=x-qt-text-nam:%s\r\n"
      "a=x-qt-text-inf:%s\r\n"
      "%s";
    sdpLength += strlen(sdpPrefixFmt)
      + 20 + 6 + 20 + ipAddressStrSize
      + strlen(fDescriptionSDPString)
      + strlen(fInfoSDPString)
      + strlen(libNameStr) + strlen(libVersionStr)
      + strlen(sourceFilterLine)
      + strlen(rangeLine)
      + strlen(fDescriptionSDPString)
      + strlen(fInfoSDPString)
      + strlen(fMiscSDPLines);
    sdp = new char[sdpLength];
    if (sdp == NULL) break;

    // Generate the SDP prefix (session-level lines):
    sprintf(sdp, sdpPrefixFmt,
            fCreationTime.tv_sec, fCreationTime.tv_usec, // o= <session id>
            1, // o= <version> // (needs to change if params are modified)
            ipAddressStr, // o= <address>
            fDescriptionSDPString, // s= <description>
            fInfoSDPString, // i= <info>
            libNameStr, libVersionStr, // a=tool:
            sourceFilterLine, // a=source-filter: incl (if a SSM session)
            rangeLine, // a=range: line
            fDescriptionSDPString, // a=x-qt-text-nam: line
            fInfoSDPString, // a=x-qt-text-inf: line
            fMiscSDPLines); // miscellaneous session SDP lines (if any)

    // Generate the media-level part of the SDP (6.1)
    // Then, add the (media-level) lines for each subsession:
    char* mediaSDP = sdp;
    for (subsession = fSubsessionsHead; subsession != NULL;
         subsession = subsession->fNext) {
      mediaSDP += strlen(mediaSDP);
      sprintf(mediaSDP, "%s", subsession->sdpLines());
    }
  } while (0);

  delete[] rangeLine; delete[] sourceFilterLine; delete[] ipAddressStr;
  return sdp;
}

The code above generates the SDP description; the media-level part has to be obtained from each subsession.

7. Generating the media-level part of the SDP (6.1)

sdpLines is a pure virtual function declared in ServerMediaSubsession; it is implemented in OnDemandServerMediaSubsession::sdpLines().

char const*
OnDemandServerMediaSubsession::sdpLines() {
  if (fSDPLines == NULL) {
    // We need to construct a set of SDP lines that describe this
    // subsession (as a unicast stream). To do so, we first create
    // dummy (unused) source and "RTPSink" objects,
    // whose parameters we use for the SDP lines:
    unsigned estBitrate;
    FramedSource* inputSource = createNewStreamSource(0, estBitrate); // instantiate the source (7.1); note that the first parameter (clientSessionId) is 0 here
    if (inputSource == NULL) return NULL; // file not found

    struct in_addr dummyAddr;
    dummyAddr.s_addr = 0;
    Groupsock dummyGroupsock(envir(), dummyAddr, 0, 0); // instantiate a Groupsock, needed to create the RTPSink
    unsigned char rtpPayloadType = 96 + trackNumber()-1; // if dynamic // what is the significance of this payload type?
    RTPSink* dummyRTPSink
      = createNewRTPSink(&dummyGroupsock, rtpPayloadType, inputSource); // instantiate the RTPSink (7.2)

    setSDPLinesFromRTPSink(dummyRTPSink, inputSource, estBitrate); // get the SDP lines from the RTPSink (7.3)
    Medium::close(dummyRTPSink); // close the RTPSink
    closeStreamSource(inputSource); // close the source
  }

  return fSDPLines;
}

This function is short, but it makes some important calls, creating the source, RTPSink and Groupsock instances.

8. Instantiating the source (7.1)

The createNewStreamSource function called here is declared as a pure virtual function of the OnDemandServerMediaSubsession class; each media type provides its own implementation. For H.264 it is:

FramedSource* H264VideoFileServerMediaSubsession::createNewStreamSource(unsigned /*clientSessionId*/, unsigned& estBitrate) {
  estBitrate = 500; // kbps, estimate

  // Create the video source:
  ByteStreamFileSource* fileSource = ByteStreamFileSource::createNew(envir(), fFileName); // create a byte-stream source for reading the file
  if (fileSource == NULL) return NULL;
  fFileSize = fileSource->fileSize();

  // Create a framer for the Video Elementary Stream:
  return H264VideoStreamFramer::createNew(envir(), fileSource); // wrap it in an H.264 framer source
}

In live555, reading directly from a file is essentially always done through the ByteStreamFileSource class.

The last line creates an H264VideoStreamFramer instance, whose inheritance chain is:

H264VideoStreamFramer->MPEGVideoStreamFramer->FramedFilter->FramedSource->MediaSource
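The framer is a filter that sits on top of the raw byte-stream source and parses it into NAL units; the same source -> framer -> sink chaining appears in the testH264VideoStreamer demo. A rough sketch of that pattern (the file name, env, videoSink and afterPlaying callback are placeholders):

#include "liveMedia.hh"

static void afterPlaying(void* /*clientData*/) {
  // called once the sink has consumed the whole stream; close the source or loop here
}

// "env" is a UsageEnvironment*, "videoSink" an already-created RTPSink* (placeholders):
void play(UsageEnvironment* env, RTPSink* videoSink) {
  ByteStreamFileSource* fileSource = ByteStreamFileSource::createNew(*env, "test.264");
  if (fileSource == NULL) {
    *env << "Unable to open file \"test.264\"\n";
    return;
  }
  // The framer parses the raw byte stream into discrete H.264 NAL units:
  H264VideoStreamFramer* videoSource = H264VideoStreamFramer::createNew(*env, fileSource);
  // Hand the framed stream to the sink; reading proceeds asynchronously in the event loop:
  videoSink->startPlaying(*videoSource, afterPlaying, videoSink);
}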

9. Instantiating the RTPSink (7.2)

createNewRTPSink is declared as a virtual function of OnDemandServerMediaSubsession; for H.264 it is implemented in the H264VideoFileServerMediaSubsession class:

RTPSink* H264VideoFileServerMediaSubsession
::createNewRTPSink(Groupsock* rtpGroupsock,
                   unsigned char rtpPayloadTypeIfDynamic,
                   FramedSource* /*inputSource*/) {
  return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
}

This creates an H264VideoRTPSink instance, whose inheritance chain is:

H264VideoRTPSink->VideoRTPSink->MultiFramedRTPSink->RTPSink->MediaSink

10. Getting the SDP lines from the RTPSink (7.3)

What is generated here is the media-level (per-track) description:

void OnDemandServerMediaSubsession
::setSDPLinesFromRTPSink(RTPSink* rtpSink, FramedSource* inputSource, unsigned estBitrate) {
  if (rtpSink == NULL) return;

  char const* mediaType = rtpSink->sdpMediaType();
  unsigned char rtpPayloadType = rtpSink->rtpPayloadType();
  struct in_addr serverAddrForSDP; serverAddrForSDP.s_addr = fServerAddressForSDP;
  char* const ipAddressStr = strDup(our_inet_ntoa(serverAddrForSDP));
  char* rtpmapLine = rtpSink->rtpmapLine();
  char const* rangeLine = rangeSDPLine();
  char const* auxSDPLine = getAuxSDPLine(rtpSink, inputSource); // optional auxiliary SDP attribute line (10.1)
  if (auxSDPLine == NULL) auxSDPLine = "";

  char const* const sdpFmt =
    "m=%s %u RTP/AVP %d\r\n"
    "c=IN IP4 %s\r\n"
    "b=AS:%u\r\n"
    "%s"
    "%s"
    "%s"
    "a=control:%s\r\n";
  unsigned sdpFmtSize = strlen(sdpFmt)
    + strlen(mediaType) + 5 /* max short len */ + 3 /* max char len */
    + strlen(ipAddressStr)
    + 20 /* max int len */
    + strlen(rtpmapLine)
    + strlen(rangeLine)
    + strlen(auxSDPLine)
    + strlen(trackId());
  char* sdpLines = new char[sdpFmtSize];
  sprintf(sdpLines, sdpFmt,
          mediaType, // m= <media>
          fPortNumForSDP, // m= <port>
          rtpPayloadType, // m= <fmt list>
          ipAddressStr, // c= address
          estBitrate, // b=AS:<bandwidth>
          rtpmapLine, // a=rtpmap:... (if present)
          rangeLine, // a=range:... (if present)
          auxSDPLine, // optional extra SDP line
          trackId()); // a=control:<track-id>
  delete[] (char*)rangeLine; delete[] rtpmapLine; delete[] ipAddressStr;

  fSDPLines = strDup(sdpLines);
  delete[] sdpLines;
}

In the code above, the point to note is how the auxiliary SDP attribute line is obtained; it depends on the specific media type.

The media-level SDP fields are as follows:

m = (media name and transport address)

i = * (media title)

c = * (connection information; optional here if already given at the session level)

b = * (bandwidth information)

k = * (encryption key)

a = * (zero or more media attribute lines)
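Putting the session-level prefix from (6) together with the media-level lines from (10), the SDP returned for a .264 file looks roughly like this (the address, timestamps and sprop-parameter-sets values are made-up placeholders):

v=0
o=- 1397700000000000 1 IN IP4 192.168.1.10
s=H.264 Video, streamed by the LIVE555 Media Server
i=test.264
t=0 0
a=tool:LIVE555 Streaming Media v2014.04.08
a=type:broadcast
a=control:*
a=range:npt=0-
a=x-qt-text-nam:H.264 Video, streamed by the LIVE555 Media Server
a=x-qt-text-inf:test.264
m=video 0 RTP/AVP 96
c=IN IP4 0.0.0.0
b=AS:500
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1;profile-level-id=42C01E;sprop-parameter-sets=Z0LAHtkDxWhAAAADAEAAAAwDxYuS,aMuMsg==
a=control:track1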

11. Getting the optional, media-specific auxiliary SDP line (10.1)

char const* OnDemandServerMediaSubsession
::getAuxSDPLine(RTPSink* rtpSink, FramedSource* /*inputSource*/) {
  // Default implementation:
  return rtpSink == NULL ? NULL : rtpSink->auxSDPLine();
}

The default implementation, RTPSink::auxSDPLine(), returns NULL:

char const* RTPSink::auxSDPLine() {
  return NULL; // by default
}

For H.264 it is overridden in H264VideoRTPSink:

char const* H264VideoRTPSink::auxSDPLine() {
  // Generate a new "a=fmtp:" line each time, using parameters from
  // our framer source (in case they've changed since the last time that
  // we were called):
  if (fOurFragmenter == NULL) return NULL; // we don't yet have a fragmenter (and therefore not a source)
  H264VideoStreamFramer* framerSource = (H264VideoStreamFramer*)(fOurFragmenter->inputSource()); // note the H264FUAFragmenter here
  if (framerSource == NULL) return NULL; // we don't yet have a source

  u_int8_t* sps; unsigned spsSize;
  u_int8_t* pps; unsigned ppsSize;
  framerSource->getSPSandPPS(sps, spsSize, pps, ppsSize); // get the SPS and PPS from the H264VideoStreamFramer
  if (sps == NULL || pps == NULL) return NULL; // our source isn't ready

  u_int32_t profile_level_id;
  if (spsSize < 4) { // sanity check
    profile_level_id = 0;
  } else {
    profile_level_id = (sps[1]<<16)|(sps[2]<<8)|sps[3]; // profile_idc|constraint_setN_flag|level_idc
  }

  // Set up the "a=fmtp:" SDP line for this stream:
  char* sps_base64 = base64Encode((char*)sps, spsSize); // base64-encode
  char* pps_base64 = base64Encode((char*)pps, ppsSize);
  char const* fmtpFmt =
    "a=fmtp:%d packetization-mode=1"
    ";profile-level-id=%06X"
    ";sprop-parameter-sets=%s,%s\r\n";
  unsigned fmtpFmtSize = strlen(fmtpFmt)
    + 3 /* max char len */
    + 6 /* 3 bytes in hex */
    + strlen(sps_base64) + strlen(pps_base64);
  char* fmtp = new char[fmtpFmtSize];
  sprintf(fmtp, fmtpFmt,
          rtpPayloadType(),
          profile_level_id,
          sps_base64, pps_base64);
  delete[] sps_base64;
  delete[] pps_base64;

  delete[] fFmtpSDPLine; fFmtpSDPLine = fmtp;
  return fFmtpSDPLine;
}

For H.264 the most important pieces are the PPS (picture parameter set) and SPS (sequence parameter set), which are base64-encoded into the fmtp line. A new class appears in the code above, H264FUAFragmenter, which inherits from FramedFilter; it implements the fragmentation of H.264 NAL units into RTP packets.
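On the client side, the SPS and PPS can be recovered from this sprop-parameter-sets attribute; live555 itself offers parseSPropParameterSets (declared in H264VideoRTPSource.hh) for that purpose. A minimal sketch, assuming the attribute value has already been extracted from the fmtp line:

#include <stdio.h>
#include "H264VideoRTPSource.hh" // declares parseSPropParameterSets() and SPropRecord

// "spropStr" is the value of "sprop-parameter-sets" taken from the a=fmtp: line (placeholder)
void dumpParameterSets(char const* spropStr) {
  unsigned numRecords;
  SPropRecord* records = parseSPropParameterSets(spropStr, numRecords);
  for (unsigned i = 0; i < numRecords; ++i) {
    // Typically records[0] is the SPS and records[1] the PPS, already base64-decoded:
    printf("parameter set %u: %u bytes\n", i, records[i].sPropLength);
  }
  delete[] records;
}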

There is still one question about the code above: it uses H264VideoStreamFramer::getSPSandPPS to fetch the SPS and PPS, but the H264VideoStreamFramer code shows that its SPS and PPS are empty by default and must be read from the file. Clearly the code above never reads the file, so where does the read happen?

12. Reading the SPS and PPS from the H.264 video file (10.1)

It turns out that H264VideoFileServerMediaSubsession overrides getAuxSDPLine:

char const* H264VideoFileServerMediaSubsession::getAuxSDPLine(RTPSink* rtpSink, FramedSource* inputSource) {
  if (fAuxSDPLine != NULL) return fAuxSDPLine; // it's already been set up (for a previous client)

  if (fDummyRTPSink == NULL) { // we're not already setting it up for another, concurrent stream
    // Note: For H264 video files, the 'config' information ("profile-level-id" and "sprop-parameter-sets") isn't known
    // until we start reading the file. This means that "rtpSink"s "auxSDPLine()" will be NULL initially,
    // and we need to start reading data from our file until this changes.
    fDummyRTPSink = rtpSink;

    // Start reading the file:
    fDummyRTPSink->startPlaying(*inputSource, afterPlayingDummy, this); // start reading the file

    // Check whether the sink's 'auxSDPLine()' is ready:
    checkForAuxSDPLine(this);
  }

  envir().taskScheduler().doEventLoop(&fDoneFlag); // enter the event loop

  return fAuxSDPLine;
}

调用RTPSink::startPlaying开始播放(其实只是为了读取文件中的SPS及PPS),然后进行入事件循环,具体过各这里不再说明了,因为比较复杂,将另外讨论。

Now look at the code of checkForAuxSDPLine:

static void checkForAuxSDPLine(void* clientData) {
  H264VideoFileServerMediaSubsession* subsess = (H264VideoFileServerMediaSubsession*)clientData;
  subsess->checkForAuxSDPLine1();
}

void H264VideoFileServerMediaSubsession::checkForAuxSDPLine1() {
  char const* dasl;
  if (fAuxSDPLine != NULL) {
    // Signal the event loop that we're done:
    setDoneFlag();
  } else if (fDummyRTPSink != NULL && (dasl = fDummyRTPSink->auxSDPLine()) != NULL) {
    fAuxSDPLine = strDup(dasl);
    fDummyRTPSink = NULL;

    // Signal the event loop that we're done:
    setDoneFlag();
  } else {
    // try again after a brief delay:
    int uSecsToDelay = 100000; // 100 ms
    nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecsToDelay,
                  (TaskFunc*)checkForAuxSDPLine, this);
  }
}

The fAuxSDPLine string is re-checked every 100 ms until it becomes available.
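The wait ends through the fDoneFlag watch variable that getAuxSDPLine passed to doEventLoop; setDoneFlag simply makes it non-zero. Roughly, from the H264VideoFileServerMediaSubsession header (a sketch from memory, not an exact quote):

// doEventLoop(&fDoneFlag) returns once this watch variable becomes non-zero:
char fDoneFlag;
void setDoneFlag() { fDoneFlag = ~0; }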

13. Summary

Handling the DESCRIBE command is a fairly complex process; briefly:

1) A session and its subsessions are created: one session per media file, and one subsession per stream inside the file. The session keeps a linked list of its subsessions.

2) A great deal of work goes into obtaining the SDP description: not only are sink and source instances created, but information also has to be read out of the media file. Note that the sink and source created here are only temporary; they exist solely to produce the SDP description.