
Media Foundation (Using the Source Reader to Process Media Data)

Posted 2017-12-01 10:43
Overview of the Media Foundation Architecture
https://msdn.microsoft.com/en-us/library/windows/desktop/ff819455(v=vs.85).aspx


This topic describes the general design of Microsoft Media Foundation. For information about using Media Foundation for specific programming tasks, see

Media Foundation Programming Guide.

The following diagram shows a high-level view of the Media Foundation architecture.



Media Foundation provides two distinct programming models. The first model, shown on the left side of the diagram, uses an end-to-end pipeline for media data. The application initializes the pipeline—for example, by providing the URL of a file to play—and
then calls methods to control streaming. In the second model, shown on the right side of the diagram, the application either pulls data from a source, or pushes it to a destination (or both). This model is particularly useful if you need to process the data,
because the application has direct access to the data stream.

Primitives and Platform

Starting from the bottom of the diagram, the primitives are helper objects used throughout the Media Foundation API:

Attributes are a generic way to store information inside an object, as a list of key/value pairs.
Media Types describe the format of a media data stream.
Media Buffers hold chunks of media data, such as video frames and audio samples, and are used to transport data between objects.
Media Samples are containers for media buffers. They also contain metadata about the buffers, such as time stamps.
The
Media Foundation Platform APIs provide some core functionality that is used by the Media Foundation pipeline, such as asynchronous callbacks and work queues. Certain applications might need to call these APIs directly; also, you will need them if you implement
a custom source, transform, or sink for Media Foundation.
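As an illustration of the attributes primitive, the following minimal sketch stores and reads back one key/value pair. It is Windows-only and abbreviated; MF_MT_AVG_BITRATE is simply one convenient UINT32-valued key used for demonstration, not a required choice.

```cpp
#include <mfapi.h>

// Sketch: an attribute store is a list of GUID-keyed, variant-typed values.
HRESULT AttributeExample()
{
    IMFAttributes *pAttributes = NULL;
    HRESULT hr = MFCreateAttributes(&pAttributes, 1);   // Initial size: 1 pair.
    if (SUCCEEDED(hr))
    {
        // Store a UINT32 value under a GUID key.
        hr = pAttributes->SetUINT32(MF_MT_AVG_BITRATE, 128000);
        if (SUCCEEDED(hr))
        {
            UINT32 bitrate = 0;
            // Read the value back by the same key.
            hr = pAttributes->GetUINT32(MF_MT_AVG_BITRATE, &bitrate);
        }
        pAttributes->Release();
    }
    return hr;
}
```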

Media Pipeline

The media pipeline contains three types of object that generate or process media data:

Media Sources introduce data into the pipeline. A media source might get data from a local file, such as a video file; from a network stream; or
from a hardware capture device.
Media Foundation Transforms (MFTs) process data from a stream. Encoders and decoders are implemented as MFTs.
Media Sinks consume the data; for example, by showing video on the display, playing audio, or writing the data to a media file.

Third parties can implement their own custom sources, sinks, and MFTs; for example, to support new media file formats.

The
Media Session controls the flow of data through the pipeline, and handles tasks such as quality control, audio/video synchronization, and responding to format changes.

Source Reader and Sink Writer

The
Source Reader and
Sink Writer provide an alternative way to use the basic Media Foundation components (media sources, transforms, and media sinks). The source reader hosts a media source and zero or more decoders, while the sink writer hosts a media sink and zero or more
encoders. You can use the source reader to get compressed or uncompressed data from a media source, and use the sink writer to encode data and send the data to a media sink.

Note  The source reader and sink writer are available in Windows 7.
 
This programming model gives the application more control over the flow of data, and also gives the application direct access to the data from the source.

Related topics

Media Foundation: Essential Concepts
Media Foundation Architecture

Source:
https://msdn.microsoft.com/en-us/library/windows/desktop/dd389281(v=vs.85).aspx

Using the Source Reader to Process Media Data

This topic describes how to use the
Source Reader to process media data.

To use the Source Reader, follow these basic steps:

Create an instance of the Source Reader.
Enumerate the possible output formats.
Set the actual output format for each stream.
Process data from the source.
The remainder of this topic describes these steps in detail.

Creating the Source Reader
Enumerating Output Formats
Setting Output Formats
Processing Media Data
Draining the Data Pipeline
Getting the File Duration
Seeking
Playback Rate
Hardware Acceleration
Related topics

Creating the Source Reader

To create an instance of the Source Reader, call one of the following functions:

MFCreateSourceReaderFromURL

Takes a URL as input. This function uses the
Source Resolver to create a media source from the URL.

MFCreateSourceReaderFromByteStream

Takes a pointer to a byte stream. This function also uses the Source Resolver to create the media source.

MFCreateSourceReaderFromMediaSource

Takes a pointer to a media source that has already been created. This function is useful for media sources that the Source Resolver cannot create, such as capture devices or custom media sources.

 

Typically, for media files, use
MFCreateSourceReaderFromURL. For devices, such as webcams, use MFCreateSourceReaderFromMediaSource.
(For more information about capture devices in Microsoft Media Foundation, see Audio/Video Capture.)

Each of these functions takes an optional
IMFAttributes pointer, which is used to set various options on the Source Reader, as described in the reference topics for these functions. To get the default behavior, set this parameter to NULL. Each function
returns an
IMFSourceReader pointer as an output parameter. You must call CoInitialize(Ex) and the
MFStartup function before calling any of these functions.

The following code creates the Source Reader from a URL.

C++

int __cdecl wmain(int argc, __in_ecount(argc) PCWSTR* argv)
{
    if (argc < 2)
    {
        return 1;
    }

    const WCHAR *pszURL = argv[1];

    // Initialize the COM runtime.
    HRESULT hr = CoInitializeEx(0, COINIT_MULTITHREADED);
    if (SUCCEEDED(hr))
    {
        // Initialize the Media Foundation platform.
        hr = MFStartup(MF_VERSION);
        if (SUCCEEDED(hr))
        {
            // Create the source reader.
            IMFSourceReader *pReader;
            hr = MFCreateSourceReaderFromURL(pszURL, NULL, &pReader);
            if (SUCCEEDED(hr))
            {
                // Process the file. (Application-defined; not shown here.)
                ReadMediaFile(pReader);
                pReader->Release();
            }
            // Shut down Media Foundation.
            MFShutdown();
        }
        CoUninitialize();
    }
    return SUCCEEDED(hr) ? 0 : 1;
}


Enumerating Output Formats

Every media source has at least one stream. For example, a video file might contain a video stream and an audio stream. The format of each stream is described using a media type, represented by the IMFMediaType
interface. For more information about media types, see Media Types. You must examine the media type to understand the format of the
data that you get from the Source Reader.

Initially, every stream has a default format, which you can find by calling the IMFSourceReader::GetCurrentMediaType
method.

For each stream, the media source offers a list of possible media types for that stream. The number of types depends on the source. If the source represents a media file, there is typically only one type per stream. A webcam, on the other hand, might be
able to stream video in several different formats. In that case, the application can select which format to use from the list of media types.

To get one of the media types for a stream, call the
IMFSourceReader::GetNativeMediaType method. This method takes two index parameters: the index of the stream, and an index into the list of media types for the stream. To enumerate all the types for a stream, increment the list
index while keeping the stream index constant. When the list index goes out of bounds, GetNativeMediaType returns
MF_E_NO_MORE_TYPES.

C++

HRESULT EnumerateTypesForStream(IMFSourceReader *pReader, DWORD dwStreamIndex)
{
    HRESULT hr = S_OK;
    DWORD dwMediaTypeIndex = 0;

    while (SUCCEEDED(hr))
    {
        IMFMediaType *pType = NULL;
        hr = pReader->GetNativeMediaType(dwStreamIndex, dwMediaTypeIndex, &pType);
        if (hr == MF_E_NO_MORE_TYPES)
        {
            hr = S_OK;
            break;
        }
        else if (SUCCEEDED(hr))
        {
            // Examine the media type. (Not shown.)

            pType->Release();
        }
        ++dwMediaTypeIndex;
    }
    return hr;
}


To enumerate the media types for every stream, increment the stream index. When the stream index goes out of bounds, GetNativeMediaType
returns MF_E_INVALIDSTREAMNUMBER.

C++

HRESULT EnumerateMediaTypes(IMFSourceReader *pReader)
{
    HRESULT hr = S_OK;
    DWORD dwStreamIndex = 0;

    while (SUCCEEDED(hr))
    {
        hr = EnumerateTypesForStream(pReader, dwStreamIndex);
        if (hr == MF_E_INVALIDSTREAMNUMBER)
        {
            hr = S_OK;
            break;
        }
        ++dwStreamIndex;
    }
    return hr;
}


Setting Output Formats

To change the output format, call the
IMFSourceReader::SetCurrentMediaType method. This method takes the stream index, a reserved parameter (set to NULL), and the media type:

hr = pReader->SetCurrentMediaType(dwStreamIndex, NULL, pMediaType);


Which media type to set depends on whether you want to insert a decoder.

To get data directly from the source without decoding it, use one of the types returned by GetNativeMediaType.
To decode the stream, create a new media type that describes the desired uncompressed format.
In the decoder case, create the media type as follows:

Call
MFCreateMediaType to create a new media type.
Set the
MF_MT_MAJOR_TYPE attribute to specify audio or video.
Set the MF_MT_SUBTYPE attribute to specify the subtype of the decoding format. (See Audio Subtype GUIDs and Video Subtype GUIDs.)
Call IMFSourceReader::SetCurrentMediaType.
The Source Reader will automatically load the decoder. To get the complete details of the decoded format, call IMFSourceReader::GetCurrentMediaType after the call to SetCurrentMediaType.

The following code configures the video stream for RGB-32 and the audio stream for PCM audio.

C++

HRESULT ConfigureDecoder(IMFSourceReader *pReader, DWORD dwStreamIndex)
{
    IMFMediaType *pNativeType = NULL;
    IMFMediaType *pType = NULL;

    // Find the native format of the stream.
    HRESULT hr = pReader->GetNativeMediaType(dwStreamIndex, 0, &pNativeType);
    if (FAILED(hr))
    {
        return hr;
    }

    GUID majorType, subtype;

    // Find the major type.
    hr = pNativeType->GetGUID(MF_MT_MAJOR_TYPE, &majorType);
    if (FAILED(hr))
    {
        goto done;
    }

    // Define the output type.
    hr = MFCreateMediaType(&pType);
    if (FAILED(hr))
    {
        goto done;
    }

    hr = pType->SetGUID(MF_MT_MAJOR_TYPE, majorType);
    if (FAILED(hr))
    {
        goto done;
    }

    // Select a subtype.
    if (majorType == MFMediaType_Video)
    {
        subtype = MFVideoFormat_RGB32;
    }
    else if (majorType == MFMediaType_Audio)
    {
        subtype = MFAudioFormat_PCM;
    }
    else
    {
        // Unrecognized type. Skip.
        goto done;
    }

    hr = pType->SetGUID(MF_MT_SUBTYPE, subtype);
    if (FAILED(hr))
    {
        goto done;
    }

    // Set the uncompressed format.
    hr = pReader->SetCurrentMediaType(dwStreamIndex, NULL, pType);
    if (FAILED(hr))
    {
        goto done;
    }

done:
    SafeRelease(&pNativeType);
    SafeRelease(&pType);
    return hr;
}


Processing Media Data

To get media data from the source, call the
IMFSourceReader::ReadSample method, as shown in the following code.

C++

DWORD streamIndex, flags;
LONGLONG llTimeStamp;
IMFSample *pSample = NULL;

hr = pReader->ReadSample(
    MF_SOURCE_READER_ANY_STREAM,    // Stream index.
    0,                              // Flags.
    &streamIndex,                   // Receives the actual stream index.
    &flags,                         // Receives status flags.
    &llTimeStamp,                   // Receives the time stamp.
    &pSample                        // Receives the sample or NULL.
    );


The first parameter is the index of the stream for which you want to get data. You can also specify MF_SOURCE_READER_ANY_STREAM to get the next available data from any stream. The second parameter contains optional flags; see MF_SOURCE_READER_CONTROL_FLAG
for a list of these. The third parameter receives the index of the stream that actually produces the data. You will need this information if you set the first parameter to MF_SOURCE_READER_ANY_STREAM. The fourth parameter receives status flags,
indicating various events that can occur while reading the data, such as format changes in the stream. For a list of status flags, see MF_SOURCE_READER_FLAG.

If the media source is able to produce data for the requested stream, the last parameter of ReadSample
receives a pointer to the
IMFSample interface of a media sample object. Use the media sample to:

Get a pointer to the media data.
Get the presentation time and sample duration.
Get attributes that describe interlacing, field dominance, and other aspects of the sample.
The contents of the media data depend on the format of the stream. For an uncompressed video stream, each media sample contains a single video frame. For an uncompressed audio stream, each media sample contains a sequence of audio frames.

The
ReadSample method can return S_OK and yet not return a media sample in the pSample parameter. For example, when you reach the end of the file,
ReadSample sets the MF_SOURCE_READERF_ENDOFSTREAM flag in dwFlags and sets
pSample to NULL. In this case, the ReadSample method returns
S_OK because no error has occurred, even though the pSample parameter is set to
NULL. Therefore, always check the value of pSample before you dereference it.

The following code shows how to call
ReadSample in a loop and check the information returned by the method, until the end of the media file is reached.

C++

HRESULT ProcessSamples(IMFSourceReader *pReader)
{
    HRESULT hr = S_OK;
    IMFSample *pSample = NULL;
    size_t  cSamples = 0;

    bool quit = false;
    while (!quit)
    {
        DWORD streamIndex, flags;
        LONGLONG llTimeStamp;

        hr = pReader->ReadSample(
            MF_SOURCE_READER_ANY_STREAM,    // Stream index.
            0,                              // Flags.
            &streamIndex,                   // Receives the actual stream index.
            &flags,                         // Receives status flags.
            &llTimeStamp,                   // Receives the time stamp.
            &pSample                        // Receives the sample or NULL.
            );

        if (FAILED(hr))
        {
            break;
        }

        wprintf(L"Stream %d (%I64d)\n", streamIndex, llTimeStamp);
        if (flags & MF_SOURCE_READERF_ENDOFSTREAM)
        {
            wprintf(L"\tEnd of stream\n");
            quit = true;
        }
        if (flags & MF_SOURCE_READERF_NEWSTREAM)
        {
            wprintf(L"\tNew stream\n");
        }
        if (flags & MF_SOURCE_READERF_NATIVEMEDIATYPECHANGED)
        {
            wprintf(L"\tNative type changed\n");
        }
        if (flags & MF_SOURCE_READERF_CURRENTMEDIATYPECHANGED)
        {
            wprintf(L"\tCurrent type changed\n");
        }
        if (flags & MF_SOURCE_READERF_STREAMTICK)
        {
            wprintf(L"\tStream tick\n");
        }

        if (flags & MF_SOURCE_READERF_NATIVEMEDIATYPECHANGED)
        {
            // The format changed. Reconfigure the decoder.
            hr = ConfigureDecoder(pReader, streamIndex);
            if (FAILED(hr))
            {
                break;
            }
        }

        if (pSample)
        {
            ++cSamples;
        }

        SafeRelease(&pSample);
    }

    if (FAILED(hr))
    {
        wprintf(L"ProcessSamples FAILED, hr = 0x%x\n", hr);
    }
    else
    {
        wprintf(L"Processed %Iu samples\n", cSamples);
    }
    SafeRelease(&pSample);
    return hr;
}


Draining the Data Pipeline

During data processing, a decoder or other transform might buffer input samples. In the following diagram, the application calls ReadSample
and receives a sample with a presentation time equal to t1. The decoder is holding samples for
t2 and t3.


On the next call to
ReadSample, the Source Reader might give t4 to the decoder and return t2 to the application.

If you want to decode all of the samples that are currently buffered in the decoder, without passing any new samples to the decoder, set the MF_SOURCE_READER_CONTROLF_DRAIN flag in the
dwControlFlags parameter of ReadSample. Continue to do this in a loop until ReadSample
returns a NULL sample pointer. Depending on how the decoder buffers samples, that might happen immediately or after several calls to ReadSample.
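Put together, the drain pattern described above looks roughly like the following. This is a sketch rather than a complete program: pReader is assumed to be a configured IMFSourceReader and dwStreamIndex a valid, selected stream.

```cpp
// Drain the decoder: request buffered output without feeding it new input.
for (;;)
{
    DWORD dwActualIndex = 0, dwFlags = 0;
    LONGLONG llTimeStamp = 0;
    IMFSample *pSample = NULL;

    HRESULT hr = pReader->ReadSample(
        dwStreamIndex,
        MF_SOURCE_READER_CONTROLF_DRAIN,    // Drain; do not pass new samples in.
        &dwActualIndex, &dwFlags, &llTimeStamp, &pSample);

    if (FAILED(hr))
    {
        break;
    }
    if (pSample == NULL)
    {
        // A NULL sample signals that the decoder is fully drained.
        break;
    }
    // Process the drained sample. (Not shown.)
    pSample->Release();
}
```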

Getting the File Duration

To get the duration of a media file, call the
IMFSourceReader::GetPresentationAttribute method and request the MF_PD_DURATION
attribute, as shown in the following code.

C++

HRESULT GetDuration(IMFSourceReader *pReader, LONGLONG *phnsDuration)
{
    PROPVARIANT var;
    HRESULT hr = pReader->GetPresentationAttribute(MF_SOURCE_READER_MEDIASOURCE,
        MF_PD_DURATION, &var);
    if (SUCCEEDED(hr))
    {
        hr = PropVariantToInt64(var, phnsDuration);
        PropVariantClear(&var);
    }
    return hr;
}


The function shown here gets the duration in 100-nanosecond units. Divide by 10,000,000 to get the duration in seconds.

Seeking

A media source that gets data from a local file can usually seek to arbitrary positions in the file. Capture devices such as webcams generally cannot seek, because the data is live. A source that streams data over a network might be able to seek, depending
on the network streaming protocol.

To find out whether a media source can seek, call
IMFSourceReader::GetPresentationAttribute and request the MF_SOURCE_READER_MEDIASOURCE_CHARACTERISTICS
attribute, as shown in the following code:

C++

HRESULT GetSourceFlags(IMFSourceReader *pReader, ULONG *pulFlags)
{
    ULONG flags = 0;

    PROPVARIANT var;
    PropVariantInit(&var);

    HRESULT hr = pReader->GetPresentationAttribute(
        MF_SOURCE_READER_MEDIASOURCE,
        MF_SOURCE_READER_MEDIASOURCE_CHARACTERISTICS,
        &var);

    if (SUCCEEDED(hr))
    {
        hr = PropVariantToUInt32(var, &flags);
    }
    if (SUCCEEDED(hr))
    {
        *pulFlags = flags;
    }

    PropVariantClear(&var);
    return hr;
}


This function gets a set of capability flags from the source. These flags are defined in the MFMEDIASOURCE_CHARACTERISTICS
enumeration. Two flags relate to seeking:

MFMEDIASOURCE_CAN_SEEK

The source can seek.

MFMEDIASOURCE_HAS_SLOW_SEEK

Seeking might take a long time to complete. For example, the source might need to download the entire file before it can seek. (There are no strict criteria for a source to return this flag.)

 

The following code tests for the MFMEDIASOURCE_CAN_SEEK flag.

C++

BOOL SourceCanSeek(IMFSourceReader *pReader)
{
    BOOL bCanSeek = FALSE;
    ULONG flags;
    if (SUCCEEDED(GetSourceFlags(pReader, &flags)))
    {
        bCanSeek = ((flags & MFMEDIASOURCE_CAN_SEEK) == MFMEDIASOURCE_CAN_SEEK);
    }
    return bCanSeek;
}


To seek, call the
IMFSourceReader::SetCurrentPosition method, as shown in the following code.

C++

HRESULT SetPosition(IMFSourceReader *pReader, const LONGLONG& hnsPosition)
{
    PROPVARIANT var;
    HRESULT hr = InitPropVariantFromInt64(hnsPosition, &var);
    if (SUCCEEDED(hr))
    {
        hr = pReader->SetCurrentPosition(GUID_NULL, var);
        PropVariantClear(&var);
    }
    return hr;
}


The first parameter gives the time format that you are using to specify the seek position. All media sources in Media Foundation are required to support 100-nanosecond units, indicated by the value GUID_NULL. The second parameter is a
PROPVARIANT that contains the seek position. For 100-nanosecond time units, the data type is LONGLONG.

Be aware that not every media source provides frame-accurate seeking. The accuracy of seeking depends on several factors, such as the key frame interval, whether the media file contains an index, and whether the data has a constant or variable bit rate.
Therefore, after you seek to a position in a file, there is no guarantee that the time stamp on the next sample will exactly match the requested position. Generally, the actual position will not be later than the requested position, so you can discard samples
until you reach the desired point in the stream.

Playback Rate

Although you can set the playback rate using the Source Reader, doing so is typically not very useful, for the following reasons:

The Source Reader does not support reverse playback, even if the media source does.
The application controls the presentation times, so the application can implement fast or slow play without setting the rate on the source.
Some media sources support thinning mode, where the source delivers fewer samples, typically just the key frames. However, if you want to drop non-key frames, you can check each sample for the MFSampleExtension_CleanPoint
attribute.
To set the playback rate using the Source Reader, call the
IMFSourceReader::GetServiceForStream method to get the IMFRateSupport
and
IMFRateControl interfaces from the media source.
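A hedged sketch of that call sequence follows. It assumes pReader is a valid IMFSourceReader; MF_RATE_CONTROL_SERVICE identifies the rate-control service on the media source, and the chosen rate (2x, no thinning) is only an example.

```cpp
// Sketch: obtain IMFRateControl from the media source via the Source Reader.
IMFRateControl *pRateControl = NULL;

HRESULT hr = pReader->GetServiceForStream(
    MF_SOURCE_READER_MEDIASOURCE,   // Query the media source itself.
    MF_RATE_CONTROL_SERVICE,        // Service identifier.
    IID_PPV_ARGS(&pRateControl));

if (SUCCEEDED(hr))
{
    // Request 2x forward playback without thinning.
    hr = pRateControl->SetRate(FALSE, 2.0f);
    pRateControl->Release();
}
```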

Hardware Acceleration

The Source Reader is compatible with Microsoft DirectX Video Acceleration (DXVA) 2.0 for hardware accelerated video decoding. To use DXVA with the Source Reader, perform the following steps.

Create a Microsoft Direct3D device.
Call the DXVA2CreateDirect3DDeviceManager9 function to create the Direct3D device manager. This function gets a pointer to the IDirect3DDeviceManager9 interface.
Call the IDirect3DDeviceManager9::ResetDevice method with a pointer to the Direct3D device.
Create an attribute store by calling the MFCreateAttributes function.
Set the MF_SOURCE_READER_D3D_MANAGER attribute on the attribute store. The attribute value is a pointer to the IDirect3DDeviceManager9 interface.
Create the Source Reader. Pass the attribute store in the pAttributes parameter of the creation function.
When you provide a Direct3D device, the Source Reader allocates video samples that are compatible with the DXVA video processor API. You can use DXVA video processing to perform hardware deinterlacing or video mixing. For more information, see DXVA
Video Processing. Also, if the decoder supports DXVA 2.0, it will use the Direct3D device to perform hardware-accelerated decoding.
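The steps above can be condensed into the following sketch. It is abbreviated and Windows-only: pD3DDevice is assumed to be an existing IDirect3DDevice9 pointer, the file name is a placeholder, and cleanup is omitted.

```cpp
UINT resetToken = 0;
IDirect3DDeviceManager9 *pD3DManager = NULL;
IMFAttributes *pAttributes = NULL;
IMFSourceReader *pReader = NULL;

// Create the device manager and associate it with the Direct3D device.
HRESULT hr = DXVA2CreateDirect3DDeviceManager9(&resetToken, &pD3DManager);
if (SUCCEEDED(hr))
{
    hr = pD3DManager->ResetDevice(pD3DDevice, resetToken);
}

// Hand the device manager to the Source Reader through the attribute store.
if (SUCCEEDED(hr))
{
    hr = MFCreateAttributes(&pAttributes, 1);
}
if (SUCCEEDED(hr))
{
    hr = pAttributes->SetUnknown(MF_SOURCE_READER_D3D_MANAGER, pD3DManager);
}
if (SUCCEEDED(hr))
{
    hr = MFCreateSourceReaderFromURL(L"video.mp4", pAttributes, &pReader);
}
```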

Important  Beginning in Windows 8,
IMFDXGIDeviceManager can be used instead of IDirect3DDeviceManager9.
For Windows Store apps, you must use IMFDXGIDeviceManager. For more information, see the Direct3D 11 Video APIs.
 


Using the Source Reader in Asynchronous Mode

This topic describes how to use the
Source Reader in asynchronous mode. In asynchronous mode, the application provides a callback interface, which is used to notify the application that data is available.

This topic assumes that you have already read the topic Using the Source Reader to Process
Media Data.

Using Asynchronous Mode

The Source Reader operates either in synchronous mode or asynchronous mode. The code examples shown in the previous topic assume that the Source Reader is using synchronous mode, which is the default. In synchronous mode, the IMFSourceReader::ReadSample
method blocks while the media source produces the next sample. A media source typically acquires data from some external source (such as a local file or a network connection), so the method can block the calling thread for a noticeable amount of time.

In asynchronous mode,
ReadSample returns immediately and the work is performed on another thread. After the operation is complete, the Source Reader calls the application through the IMFSourceReaderCallback
callback interface. To use asynchronous mode, you must provide a callback pointer when you first create the Source Reader, as follows:
Create an attribute store by calling the
MFCreateAttributes function.
Set the
MF_SOURCE_READER_ASYNC_CALLBACK attribute on the attribute store. The attribute value is a pointer to your callback object.
When you create the Source Reader, pass the attribute store to the creation function in the pAttributes parameter. All of the functions to create the Source Reader have this parameter.
The following example shows these steps.

C++

HRESULT CreateSourceReaderAsync(
    PCWSTR pszURL,
    IMFSourceReaderCallback *pCallback,
    IMFSourceReader **ppReader)
{
    HRESULT hr = S_OK;
    IMFAttributes *pAttributes = NULL;

    hr = MFCreateAttributes(&pAttributes, 1);
    if (FAILED(hr))
    {
        goto done;
    }

    hr = pAttributes->SetUnknown(MF_SOURCE_READER_ASYNC_CALLBACK, pCallback);
    if (FAILED(hr))
    {
        goto done;
    }

    hr = MFCreateSourceReaderFromURL(pszURL, pAttributes, ppReader);

done:
    SafeRelease(&pAttributes);
    return hr;
}


After you create the Source Reader, you cannot switch modes between synchronous and asynchronous.

To get data in asynchronous mode, call the
ReadSample method but set the last four parameters to NULL, as shown in the following example.

C++

// Request the first sample.
hr = pReader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM,
    0, NULL, NULL, NULL, NULL);


When the
ReadSample method completes asynchronously, the Source Reader calls your IMFSourceReaderCallback::OnReadSample
method. This method has five parameters:

hrStatus: Contains an HRESULT value. This is the same value that ReadSample
would return in synchronous mode. If hrStatus contains an error code, you can ignore the remaining parameters.
dwStreamIndex, dwStreamFlags, llTimestamp, and pSample: These four parameters are equivalent to the
last four parameters of ReadSample. They contain the stream index, status flags, time stamp, and IMFSample
pointer, respectively.

C++

STDMETHODIMP OnReadSample(HRESULT hrStatus, DWORD dwStreamIndex,
    DWORD dwStreamFlags, LONGLONG llTimestamp, IMFSample *pSample);


In addition, the callback interface defines two other methods:

OnEvent. Notifies the application when certain events occur in the media source, such as buffering or network connection events.
OnFlush. Called when the Flush method completes.

Implementing the Callback Interface

The callback interface must be thread-safe, because
OnReadSample and the other callback methods are called from worker threads.

There are several different approaches you can take when you implement the callback. For example, you can do all of the work inside the callback, or you can use the callback to notify the application (for example, by signaling an event handle) and then do
work from the application thread.

The
OnReadSample method will be called once for every call that you make to the IMFSourceReader::ReadSample
method. To get the next sample, call ReadSample again. If an error occurs,
OnReadSample is called with an error code in the hrStatus parameter.

The following example shows a minimal implementation of the callback interface. First, here is the declaration of a class that implements the interface.

C++

#include <shlwapi.h>

class SourceReaderCB : public IMFSourceReaderCallback
{
public:
    SourceReaderCB(HANDLE hEvent) :
        m_nRefCount(1), m_hEvent(hEvent), m_bEOS(FALSE), m_hrStatus(S_OK)
    {
        InitializeCriticalSection(&m_critsec);
    }

    // IUnknown methods
    STDMETHODIMP QueryInterface(REFIID iid, void** ppv)
    {
        static const QITAB qit[] =
        {
            QITABENT(SourceReaderCB, IMFSourceReaderCallback),
            { 0 },
        };
        return QISearch(this, qit, iid, ppv);
    }
    STDMETHODIMP_(ULONG) AddRef()
    {
        return InterlockedIncrement(&m_nRefCount);
    }
    STDMETHODIMP_(ULONG) Release()
    {
        ULONG uCount = InterlockedDecrement(&m_nRefCount);
        if (uCount == 0)
        {
            delete this;
        }
        return uCount;
    }

    // IMFSourceReaderCallback methods
    STDMETHODIMP OnReadSample(HRESULT hrStatus, DWORD dwStreamIndex,
        DWORD dwStreamFlags, LONGLONG llTimestamp, IMFSample *pSample);

    STDMETHODIMP OnEvent(DWORD, IMFMediaEvent *)
    {
        return S_OK;
    }

    STDMETHODIMP OnFlush(DWORD)
    {
        return S_OK;
    }

public:
    HRESULT Wait(DWORD dwMilliseconds, BOOL *pbEOS)
    {
        *pbEOS = FALSE;

        DWORD dwResult = WaitForSingleObject(m_hEvent, dwMilliseconds);
        if (dwResult == WAIT_TIMEOUT)
        {
            return E_PENDING;
        }
        else if (dwResult != WAIT_OBJECT_0)
        {
            return HRESULT_FROM_WIN32(GetLastError());
        }

        *pbEOS = m_bEOS;
        return m_hrStatus;
    }

private:

    // Destructor is private. Caller should call Release.
    virtual ~SourceReaderCB()
    {
        DeleteCriticalSection(&m_critsec);
    }

    void NotifyError(HRESULT hr)
    {
        wprintf(L"Source Reader error: 0x%X\n", hr);
    }

private:
    long                m_nRefCount;        // Reference count.
    CRITICAL_SECTION    m_critsec;
    HANDLE              m_hEvent;
    BOOL                m_bEOS;
    HRESULT             m_hrStatus;
};


In this example, we are not interested in the
OnEvent and
OnFlush methods, so they simply return
S_OK. The class uses an event handle to signal the application; this handle is provided through the constructor.

In this minimal example, the
OnReadSample method just prints the time stamp to the console window. Then it stores the status code and the end-of-stream flag, and signals the event handle:

C++

HRESULT SourceReaderCB::OnReadSample(
    HRESULT hrStatus,
    DWORD /* dwStreamIndex */,
    DWORD dwStreamFlags,
    LONGLONG llTimestamp,
    IMFSample *pSample      // Can be NULL
    )
{
    EnterCriticalSection(&m_critsec);

    if (SUCCEEDED(hrStatus))
    {
        if (pSample)
        {
            // Do something with the sample.
            wprintf(L"Frame @ %I64d\n", llTimestamp);
        }
    }
    else
    {
        // Streaming error.
        NotifyError(hrStatus);
    }

    if (MF_SOURCE_READERF_ENDOFSTREAM & dwStreamFlags)
    {
        // Reached the end of the stream.
        m_bEOS = TRUE;
    }
    m_hrStatus = hrStatus;

    LeaveCriticalSection(&m_critsec);
    SetEvent(m_hEvent);
    return S_OK;
}


The following code shows how the application would use this callback class to read all of the video frames from a media file:

C++

HRESULT ReadMediaFile(PCWSTR pszURL)
{
    HRESULT hr = S_OK;

    IMFSourceReader *pReader = NULL;
    SourceReaderCB *pCallback = NULL;

    HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
    if (hEvent == NULL)
    {
        hr = HRESULT_FROM_WIN32(GetLastError());
        goto done;
    }

    // Create an instance of the callback object.
    pCallback = new (std::nothrow) SourceReaderCB(hEvent);
    if (pCallback == NULL)
    {
        hr = E_OUTOFMEMORY;
        goto done;
    }

    // Create the Source Reader.
    hr = CreateSourceReaderAsync(pszURL, pCallback, &pReader);
    if (FAILED(hr))
    {
        goto done;
    }

    hr = ConfigureDecoder(pReader, MF_SOURCE_READER_FIRST_VIDEO_STREAM);
    if (FAILED(hr))
    {
        goto done;
    }

    // Request the first sample.
    hr = pReader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM,
        0, NULL, NULL, NULL, NULL);
    if (FAILED(hr))
    {
        goto done;
    }

    while (SUCCEEDED(hr))
    {
        BOOL bEOS;
        hr = pCallback->Wait(INFINITE, &bEOS);
        if (FAILED(hr) || bEOS)
        {
            break;
        }
        hr = pReader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM,
            0, NULL, NULL, NULL, NULL);
    }

done:
    SafeRelease(&pReader);
    SafeRelease(&pCallback);
    if (hEvent)
    {
        CloseHandle(hEvent);
    }
    return hr;
}


Related topics

Source Reader 