
FFmpeg 3.3.2 + SDL2: Streaming Audio Playback

2017-07-30 16:24
***** My video course: Building a Universal Android Audio Player with FFmpeg *****

Preface:

I love listening to the radio (the late-night show "千里共良宵" on China National Radio's Voice of China). The show is full of warm, soothing talk, but it really does reach something deep inside us night owls and strikes a chord; give it a listen sometime. It feels even better through a player you built yourself, though, so here is my build process, offered in the hope of drawing out better ideas.

Main text:

If this post feels like it drops you in at the deep end, read these three posts first; this one will then be much easier to follow:

1. Porting FFmpeg (3.3.2) to the Android platform

2. Creating an FFmpeg project in Android Studio with CMake

3. Building SDL2 with CMake in Android Studio

Alright, enough talk; screenshots first:

This project uses a few of my other open-source projects:

1. RxjavaRetrofit (network access)

2. AdViewPager (banner carousel)

3. RecyclerViewHeaderAndFooter



Figure 1: Radio station list



Figure 2: Playback page



Figure 3: Background playback, shown in the status bar
Not bad, right? (The UI imitates Qingting FM, and the data is scraped from China National Radio (CRN).) But this post is not about building that app; the app is only the outer shell. What we are after is its inside, the player wrapper, and the UI below is what we actually built and debugged with:



The code in this post was developed in Eclipse; if you have read my other posts on porting FFmpeg and SDL2 to Android Studio, moving it over to AS should be no problem at all. Let's begin.
I. First, the overall flow and the background you will need (some good learning links are listed at the end)
1. The FFmpeg workflow
1) Register the codecs and initialize networking
        av_register_all();
        avformat_network_init();

2) Declare the format (demuxer) context
        AVFormatContext
3) Open the input
        avformat_open_input

4) Read the stream info (video, audio, and subtitle streams) from the file
        avformat_find_stream_info

5) Loop over the streams to pick out the one we need (there may be several audio streams, plus video, subtitles, and so on); pseudocode:

for (; i < pFormatCtx->nb_streams; i++)
{
if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
{
playerState->audioStreamIndex = i;
break;
}
}
6) Grab the codec context of the stream to be decoded; pseudocode:

pFormatCtx->streams[playerState->audioStreamIndex]->codec;
7) Find the decoder from the codec context

        avcodec_find_decoder

8) Open the decoder
        avcodec_open2

9) Loop reading data from the stream into AVPacket structs, then decode, resample, add effects, and so on before playing or rendering (a consolidated sketch of all nine steps follows this list)
      for(av_read_frame){...}
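
Glued together, steps 1) through 9) come out roughly like this. This is a minimal sketch against the FFmpeg 3.x API used in this post (error handling and cleanup trimmed; the function name is only for illustration):

#include "libavformat/avformat.h"
#include "libavcodec/avcodec.h"

int open_and_read(const char *url)
{
    av_register_all();                        /* 1) register codecs       */
    avformat_network_init();                  /* 1) initialize networking */

    AVFormatContext *pFormatCtx = NULL;       /* 2) format context        */
    if (avformat_open_input(&pFormatCtx, url, NULL, NULL) != 0)   /* 3) */
        return -1;
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0)          /* 4) */
        return -1;

    int audioIndex = -1;                      /* 5) pick the audio stream */
    for (int i = 0; i < pFormatCtx->nb_streams; i++) {
        if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO) {
            audioIndex = i;
            break;
        }
    }
    if (audioIndex == -1)
        return -1;

    AVCodecContext *cctx = pFormatCtx->streams[audioIndex]->codec; /* 6) */
    AVCodec *codec = avcodec_find_decoder(cctx->codec_id);         /* 7) */
    if (!codec || avcodec_open2(cctx, codec, NULL) < 0)            /* 8) */
        return -1;

    AVPacket packet;                          /* 9) the read loop         */
    while (av_read_frame(pFormatCtx, &packet) >= 0) {
        if (packet.stream_index == audioIndex) {
            /* decode / resample / queue for playback here */
        }
        av_packet_unref(&packet);
    }
    avformat_close_input(&pFormatCtx);
    return 0;
}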

2. The SDL workflow
1) Initialize the subsystems you need (SDL_SetMainReady is already taken care of for the main function in SDL_android_main.c)
        SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)

2) Set the sample rate and the audio callback

playerState->wanted_spec.freq = audioCodecCtx->sample_rate;
playerState->wanted_spec.format = AUDIO_S16SYS;
playerState->wanted_spec.channels = 2;
playerState->wanted_spec.silence = 0;
playerState->wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE;
playerState->wanted_spec.callback = audio_callback;
playerState->wanted_spec.userdata = playerState->audioCodecCtx;


3) Open the audio playback device and un-pause it (a short sketch follows the snippet below)
        SDL_OpenAudio
        SDL_PauseAudio(0)// 0 = play, non-zero = pause
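
For completeness, here is a minimal sketch of step 3), assuming wanted_spec was filled in as in step 2) above; the real callback is the audio_callback shown in player.c below:

/* SDL calls this on its own audio thread whenever the device needs len bytes. */
static void audio_callback(void *userdata, Uint8 *stream, int len)
{
    SDL_memset(stream, 0, len);   /* silence for now; player.c fills real PCM here */
}

/* ... after wanted_spec has been set up: */
if (SDL_OpenAudio(&playerState->wanted_spec, &playerState->spec) < 0)
    return -1;                    /* the device rejected the requested spec */
SDL_PauseAudio(0);                /* 0 starts the callback; non-zero pauses it */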
3. You also need some C background on queues, threads, and the like (a short sketch follows the next paragraph).
In short, FFmpeg decodes the audio or video and hands the result to SDL for playback and display; add audio/video synchronization on top of that and you have normal playback.
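
On the queue/thread point, below is the mutex plus condition-variable pattern that player.c's PacketQueue is built on, reduced to a toy sketch in which an int counter stands in for the packet list (create q_mutex/q_cond with SDL_CreateMutex()/SDL_CreateCond() first, as packet_queue_init() below does):

#include "SDL.h"

static SDL_mutex *q_mutex;
static SDL_cond  *q_cond;
static int        q_count = 0;          /* stand-in for the packet list */

static void q_put(void)                 /* producer: the decode thread */
{
    SDL_LockMutex(q_mutex);
    q_count++;                          /* append a packet here */
    SDL_CondSignal(q_cond);             /* wake one blocked consumer */
    SDL_UnlockMutex(q_mutex);
}

static void q_get(void)                 /* consumer: the audio-callback path */
{
    SDL_LockMutex(q_mutex);
    while (q_count == 0)
        SDL_CondWait(q_cond, q_mutex);  /* unlocks, sleeps, relocks */
    q_count--;                          /* pop a packet here */
    SDL_UnlockMutex(q_mutex);
}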
II. Reworking SDL2's default SDLActivity: extract the relevant methods and native methods into separate files so that nothing has to extend the Activity any more. This yields a player-control class (WlPlayer.java) and a surface class (SDLSurface.java).



WlPlayer.java (I added my own playback-control methods and callbacks):

package com.ywl5320.wlsdk.player;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

import com.ywl5320.wlsdk.player.SDLSurface.OnSurfacePrepard;

import android.app.Activity;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;
import android.os.Build;
import android.util.Log;
import android.view.InputDevice;
import android.view.MotionEvent;
import android.view.Surface;
import android.view.View;

public class WlPlayer {

private static OnPlayerPrepard mOnPlayerPrepard;
private static OnPlayerInfoListener onPlayerInfoListener;
private static OnErrorListener onErrorListener;
private static OnCompleteListener onCompleteListener;

private static String url;

public static Activity mSingleton;

// Keep track of the paused state
public static boolean mIsPaused, mIsSurfaceReady, mHasFocus;
public static boolean mExitCalledFromJava;

// This is what SDL runs in. It invokes SDL_main(), eventually
protected static Thread mSDLThread;

// Audio
protected static AudioTrack mAudioTrack;
protected static AudioRecord mAudioRecord;

/**
* If shared libraries (e.g. SDL or the native application) could not be
* loaded.
*/
public static boolean mBrokenLibraries;

// If we want to separate mouse and touch events.
// This is only toggled in native code when a hint is set!
public static boolean mSeparateMouseAndTouch;

public static SDLSurface mSurface;
protected static SDLJoystickHandler mJoystickHandler;

public static void initPlayer(Activity activity)
{
loadLibraries();
initialize();
mSingleton = activity;

if (Build.VERSION.SDK_INT >= 12) {
mJoystickHandler = new SDLJoystickHandler_API12();
} else {
mJoystickHandler = new SDLJoystickHandler();
}
}

/**
* This method is called by SDL before loading the native shared libraries.
* It can be overridden to provide names of shared libraries to be loaded.
* The default implementation returns the defaults. It never returns null.
* An array returned by a new implementation must at least contain "SDL2".
* Also keep in mind that the order the libraries are loaded may matter.
*
* @return names of shared libraries to be loaded (e.g. "SDL2", "main").
*/
protected static String[] getLibraries() {
return new String[] {
"SDL2",
"avutil-55",
"swresample-2",
"avcodec-57",
"avformat-57",
"swscale-4",
"postproc-54",
"avfilter-6",
"avdevice-57",
"wlPlayer" };
}

// Load the .so
public static void loadLibraries() {
for (String lib : getLibraries()) {
System.loadLibrary(lib);
}
}

public static void initialize() {
// The static nature of the singleton and Android quirkiness force us to
// initialize everything here
// Otherwise, when exiting the app and returning to it, these variables
// *keep* their pre exit values
mSingleton = null;
mSurface = null;
// mTextEdit = null;
// mLayout = null;
mJoystickHandler = null;
mSDLThread = null;
mAudioTrack = null;
mAudioRecord = null;
mExitCalledFromJava = false;
mBrokenLibraries = false;
mIsPaused = false;
mIsSurfaceReady = false;
mHasFocus = true;
}

public static void initSurface(SDLSurface surface)
{
mSurface = surface;
}

public static void setPrepardListener(OnPlayerPrepard onPlayerPrepard) {
mOnPlayerPrepard = onPlayerPrepard;
}

public static void setOnErrorListener(OnErrorListener error)
{
onErrorListener = error;
}

public static void setOnCompleteListener(OnCompleteListener onComplete)
{
onCompleteListener = onComplete;
}

public static void setDataSource(String source)
{
url = source;
}

/**
* Called by onPause or surfaceDestroyed. Even if surfaceDestroyed is the
* first to be called, mIsSurfaceReady should still be set to 'true' during
* the call to onPause (in a usual scenario).
*/
public static void handlePause() {
if (!mIsPaused && mIsSurfaceReady) {
mIsPaused = true;
nativePause();
}
}

/**
* Called by onResume or surfaceCreated. An actual resume should be done
* only when the surface is ready. Note: Some Android variants may send
* multiple surfaceChanged events, so we don't need to resume every time we
* get one of those events, only if it comes after surfaceDestroyed
*/
public static void handleResume() {
if (WlPlayer.mIsPaused && WlPlayer.mIsSurfaceReady && WlPlayer.mHasFocus) {
WlPlayer.mIsPaused = false;
WlPlayer.nativeResume();
}
}

/* The native thread has finished */
public static void handleNativeExit() {
WlPlayer.mSDLThread = null;
mSingleton.finish();
}

/**
* This method is called by SDL using JNI.
*/
public static void pollInputDevices() {
if (WlPlayer.mSDLThread != null) {
mJoystickHandler.pollInputDevices();
}
}

// Check if a given device is considered a possible SDL joystick
public static boolean isDeviceSDLJoystick(int deviceId) {
InputDevice device = InputDevice.getDevice(deviceId);
// We cannot use InputDevice.isVirtual before API 16, so let's accept
// only nonnegative device ids (VIRTUAL_KEYBOARD equals -1)
if ((device == null) || (deviceId < 0)) {
return false;
}
int sources = device.getSources();
return (((sources & InputDevice.SOURCE_CLASS_JOYSTICK) == InputDevice.SOURCE_CLASS_JOYSTICK)
|| ((sources & InputDevice.SOURCE_DPAD) == InputDevice.SOURCE_DPAD)
|| ((sources & InputDevice.SOURCE_GAMEPAD) == InputDevice.SOURCE_GAMEPAD));
}

// Joystick glue code, just a series of stubs that redirect to the
// SDLJoystickHandler instance
public static boolean handleJoystickMotionEvent(MotionEvent event) {
return mJoystickHandler.handleMotionEvent(event);
}

/**
* This method is called by SDL using JNI.
*/
public static Surface getNativeSurface() {
return WlPlayer.mSurface.getNativeSurface();
}

// Audio

/**
* This method is called by SDL using JNI.
*/
public static int audioOpen(int sampleRate, boolean is16Bit, boolean isStereo, int desiredFrames) {
int channelConfig = isStereo ? AudioFormat.CHANNEL_CONFIGURATION_STEREO
: AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioFormat = is16Bit ? AudioFormat.ENCODING_PCM_16BIT : AudioFormat.ENCODING_PCM_8BIT;
int frameSize = (isStereo ? 2 : 1) * (is16Bit ? 2 : 1);

Log.v("wlfm", "SDL audio: wanted " + (isStereo ? "stereo" : "mono") + " " + (is16Bit ? "16-bit" : "8-bit") + " "
+ (sampleRate / 1000f) + "kHz, " + desiredFrames + " frames buffer");

// Let the user pick a larger buffer if they really want -- but ye
// gods they probably shouldn't, the minimums are horrifyingly high
// latency already
desiredFrames = Math.max(desiredFrames,
(AudioTrack.getMinBufferSize(sampleRate, channelConfig, audioFormat) + frameSize - 1) / frameSize);

if (mAudioTrack == null) {
mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfig, audioFormat,
desiredFrames * frameSize, AudioTrack.MODE_STREAM);

// Instantiating AudioTrack can "succeed" without an exception and
// the track may still be invalid
// Ref: https://android.googlesource.com/platform/frameworks/base/+/refs/heads/master/media/java/android/media/AudioTrack.java
// Ref: http://developer.android.com/reference/android/media/AudioTrack.html#getState()
if (mAudioTrack.getState() != AudioTrack.STATE_INITIALIZED) {
Log.e("wlfm", "Failed during initialization of Audio Track");
mAudioTrack = null;
return -1;
}

mAudioTrack.play();
}

Log.v("wlfm",
"SDL audio: got " + ((mAudioTrack.getChannelCount() >= 2) ? "stereo" : "mono") + " "
+ ((mAudioTrack.getAudioFormat() == AudioFormat.ENCODING_PCM_16BIT) ? "16-bit" : "8-bit") + " "
+ (mAudioTrack.getSampleRate() / 1000f) + "kHz, " + desiredFrames + " frames buffer");

return 0;
}

/**
* This method is called by SDL using JNI.
*/
public static void audioWriteShortBuffer(short[] buffer) {
for (int i = 0; i < buffer.length;) {
int result = mAudioTrack.write(buffer, i, buffer.length - i);
if (result > 0) {
i += result;
} else if (result == 0) {
try {
Thread.sleep(1);
} catch (InterruptedException e) {
// Nom nom
}
} else {
Log.w("wlfm", "SDL audio: error return from write(short)");
return;
}
}
}

/**
* This method is called by SDL using JNI.
*/
public static void audioWriteByteBuffer(byte[] buffer) {
for (int i = 0; i < buffer.length;) {
int result = mAudioTrack.write(buffer, i, buffer.length - i);
if (result > 0) {
i += result;
} else if (result == 0) {
try {
Thread.sleep(1);
} catch (InterruptedException e) {
// Nom nom
}
} else {
Log.w("wlfm", "SDL audio: error return from write(byte)");
return;
}
}
}

/** This method is called by SDL using JNI. */
public static void audioClose() {
if (mAudioTrack != null) {
mAudioTrack.stop();
mAudioTrack.release();
mAudioTrack = null;
}
}

/** This method is called by SDL using JNI. */
public static void captureClose() {
if (mAudioRecord != null) {
mAudioRecord.stop();
mAudioRecord.release();
mAudioRecord = null;
}
}

/**
* This method is called by SDL using JNI.
*/
public static boolean sendMessage(int command, int param) {
return false;
}

/**
* This method is called by SDL using JNI.
*/
public static int captureOpen(int sampleRate, boolean is16Bit, boolean isStereo, int desiredFrames) {
int channelConfig = isStereo ? AudioFormat.CHANNEL_CONFIGURATION_STEREO
: AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioFormat = is16Bit ? AudioFormat.ENCODING_PCM_16BIT : AudioFormat.ENCODING_PCM_8BIT;
int frameSize = (isStereo ? 2 : 1) * (is16Bit ? 2 : 1);

Log.v("wlfm", "SDL capture: wanted " + (isStereo ? "stereo" : "mono") + " " + (is16Bit ? "16-bit" : "8-bit")
+ " " + (sampleRate / 1000f) + "kHz, " + desiredFrames + " frames buffer");

// Let the user pick a larger buffer if they really want -- but ye
// gods they probably shouldn't, the minimums are horrifyingly high
// latency already
desiredFrames = Math.max(desiredFrames,
(AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat) + frameSize - 1) / frameSize);

if (mAudioRecord == null) {
mAudioRecord = new AudioRecord(MediaRecorder.AudioSource.DEFAULT, sampleRate, channelConfig, audioFormat,
desiredFrames * frameSize);

// see notes about AudioTrack state in audioOpen(), above. Probably
// also applies here.
if (mAudioRecord.getState() != AudioRecord.STATE_INITIALIZED) {
Log.e("wlfm", "Failed during initialization of AudioRecord");
mAudioRecord.release();
mAudioRecord = null;
return -1;
}

mAudioRecord.startRecording();
}

Log.v("wlfm",
"SDL capture: got " + ((mAudioRecord.getChannelCount() >= 2) ? "stereo" : "mono") + " "
+ ((mAudioRecord.getAudioFormat() == AudioFormat.ENCODING_PCM_16BIT) ? "16-bit" : "8-bit") + " "
+ (mAudioRecord.getSampleRate() / 1000f) + "kHz, " + desiredFrames + " frames buffer");

return 0;
}

/**
* This method is called by SDL using JNI.
* @return an array which may be empty but is never null.
*/
public static int[] inputGetInputDeviceIds(int sources) {
int[] ids = InputDevice.getDeviceIds();
int[] filtered = new int[ids.length];
int used = 0;
for (int i = 0; i < ids.length; ++i) {
InputDevice device = InputDevice.getDevice(ids[i]);
if ((device != null) && ((device.getSources() & sources) != 0)) {
filtered[used++] = device.getId();
}
}
return Arrays.copyOf(filtered, used);
}

/** This method is called by SDL using JNI. */
public static int captureReadShortBuffer(short[] buffer, boolean blocking) {
// !!! FIXME: this is available in API Level 23. Until then, we always
// block. :(
// return mAudioRecord.read(buffer, 0, buffer.length, blocking ?
// AudioRecord.READ_BLOCKING : AudioRecord.READ_NON_BLOCKING);
return mAudioRecord.read(buffer, 0, buffer.length);
}

/** This method is called by SDL using JNI. */
public static int captureReadByteBuffer(byte[] buffer, boolean blocking) {
// !!! FIXME: this is available in API Level 23. Until then, we always
// block. :(
// return mAudioRecord.read(buffer, 0, buffer.length, blocking ?
// AudioRecord.READ_BLOCKING : AudioRecord.READ_NON_BLOCKING);
return mAudioRecord.read(buffer, 0, buffer.length);
}

public static void setOnPlayerInfoListener(OnPlayerInfoListener onInfoListener)
{
onPlayerInfoListener = onInfoListener;
}

// C functions we call
public static native int nativeInit(String arguments);

public static native void nativeLowMemory();

public static native void nativeQuit();

public static native void nativePause();

public static native void nativeResume();

public static native void onNativeDropFile(String filename);

public static native void onNativeResize(int x, int y, int format, float rate);

public static native int onNativePadDown(int device_id, int keycode);

public static native int onNativePadUp(int device_id, int keycode);

public static native void onNativeJoy(int device_id, int axis, float value);

public static native void onNativeHat(int device_id, int hat_id, int x, int y);

public static native void onNativeKeyDown(int keycode);

public static native void onNativeKeyUp(int keycode);

public static native void onNativeKeyboardFocusLost();

public static native void onNativeMouse(int button, int action, float x, float y);

public static native void onNativeTouch(int touchDevId, int pointerFingerId, int action, float x, float y, float p);

public static native void onNativeAccel(float x, float y, float z);

public static native void onNativeSurfaceChanged();

public static native void onNativeSurfaceDestroyed();

public static native int nativeAddJoystick(int device_id, String name, int is_accelerometer, int nbuttons,
int naxes, int nhats, int nballs);

public static native int nativeRemoveJoystick(int device_id);

public static native String nativeGetHint(String name);

/** my JNI call methods */
public static native void wlStart();
// pause
public static native void wlPause();
// resume playback
public static native void wlPlay();
// get the total duration in seconds
public static native int wlDuration();
// seek to a given second; returns 0 on success, -1 on failure
public static native int wlSeekTo(int sec);
// release resources
public static native void wlRealease();
// get the current playback position in seconds
public static native int wlNowTime();
// is the player still initializing? (-1: yes, 0: no)
public static native int wlIsInit();
// has the player been released? (-1: released)
public static native int wlIsRelease();

// prepared callback (invoked from native code)
public static void onPrepard()
{
if(mOnPlayerPrepard != null)
{
mOnPlayerPrepard.onPrepard();
}
}

// prepared listener
public interface OnPlayerPrepard
{
void onPrepard();
}
// start playback: run nativeInit(url) on the SDL thread
public static void prePard()
{
mSDLThread = new Thread(new Runnable() {

@Override
public void run() {
// TODO Auto-generated method stub
System.out.println("url:" + url);
WlPlayer.nativeInit(url);
}
}, "mainThread");
mSDLThread.start();
}

public static void next(String u)
{
if(wlIsInit() == -1)
{
if(onErrorListener != null)
{
onErrorListener.onError(0x2001, "player is initing, please try later!");
}
return;
}
url = u;
mSDLThread = new Thread(new Runnable() {

@Override
public void run() {
// TODO Auto-generated method stub
WlPlayer.wlRealease();
WlPlayer.nativeInit(url);
}
}, "mainThread");
mSDLThread.start();
}

public static void release()
{
if(wlIsRelease() != -1)
{
mSDLThread = new Thread(new Runnable() {

@Override
public void run() {
// TODO Auto-generated method stub
WlPlayer.wlRealease();
}
}, "mainThread");
mSDLThread.start();
}
else
{
if(onErrorListener != null)
{
onErrorListener.onError(0x2002, "player is already release!");
}
}
}
//
public interface OnPlayerInfoListener
{
void onLoad();
void onPlay();
}

public interface OnCompleteListener
{
void onConplete();
}

public interface OnErrorListener
{
void onError(int code, String msg);
}

// playback complete
public static void onCompleted()
{
if(onCompleteListener != null)
{
onCompleteListener.onConplete();
}
}

// playback error
public static void onError(int code, String msg)
{
if(onErrorListener != null)
{
onErrorListener.onError(code, msg);
}
}

// buffering
public static void onLoad()
{
if(onPlayerInfoListener != null)
{
onPlayerInfoListener.onLoad();
}
}

// playing
public static void onPlay()
{
if(onPlayerInfoListener != null)
{
onPlayerInfoListener.onPlay();
}
}

}

/*
* A null joystick handler for API level < 12 devices (the accelerometer is
* handled separately)
*/
class SDLJoystickHandler {

/**
* Handles given MotionEvent.
*
* @param event
*            the event to be handled.
* @return if given event was processed.
*/
public boolean handleMotionEvent(MotionEvent event) {
return false;
}

/**
* Handles adding and removing of input devices.
*/
public void pollInputDevices() {
}
}

/* Actual joystick functionality available for API >= 12 devices */
class SDLJoystickHandler_API12 extends SDLJoystickHandler {

static class SDLJoystick {
public int device_id;
public String name;
public ArrayList<InputDevice.MotionRange> axes;
public ArrayList<InputDevice.MotionRange> hats;
}

static class RangeComparator implements Comparator<InputDevice.MotionRange> {
@Override
public int compare(InputDevice.MotionRange arg0, InputDevice.MotionRange arg1) {
return arg0.getAxis() - arg1.getAxis();
}
}

private ArrayList<SDLJoystick> mJoysticks;

public SDLJoystickHandler_API12() {

mJoysticks = new ArrayList<SDLJoystick>();
}

@Override
public void pollInputDevices() {
int[] deviceIds = InputDevice.getDeviceIds();
// It helps processing the device ids in reverse order
// For example, in the case of the XBox 360 wireless dongle,
// so the first controller seen by SDL matches what the receiver
// considers to be the first controller

for (int i = deviceIds.length - 1; i > -1; i--) {
SDLJoystick joystick = getJoystick(deviceIds[i]);
if (joystick == null) {
joystick = new SDLJoystick();
InputDevice joystickDevice = InputDevice.getDevice(deviceIds[i]);
if (WlPlayer.isDeviceSDLJoystick(deviceIds[i])) {
joystick.device_id = deviceIds[i];
joystick.name = joystickDevice.getName();
joystick.axes = new ArrayList<InputDevice.MotionRange>();
joystick.hats = new ArrayList<InputDevice.MotionRange>();

List<InputDevice.MotionRange> ranges = joystickDevice.getMotionRanges();
Collections.sort(ranges, new RangeComparator());
for (InputDevice.MotionRange range : ranges) {
if ((range.getSource() & InputDevice.SOURCE_CLASS_JOYSTICK) != 0) {
if (range.getAxis() == MotionEvent.AXIS_HAT_X
|| range.getAxis() == MotionEvent.AXIS_HAT_Y) {
joystick.hats.add(range);
} else {
joystick.axes.add(range);
}
}
}

mJoysticks.add(joystick);
WlPlayer.nativeAddJoystick(joystick.device_id, joystick.name, 0, -1, joystick.axes.size(),
joystick.hats.size() / 2, 0);
}
}
}

/* Check removed devices */
ArrayList<Integer> removedDevices = new ArrayList<Integer>();
for (int i = 0; i < mJoysticks.size(); i++) {
int device_id = mJoysticks.get(i).device_id;
int j;
for (j = 0; j < deviceIds.length; j++) {
if (device_id == deviceIds[j])
break;
}
if (j == deviceIds.length) {
removedDevices.add(Integer.valueOf(device_id));
}
}

for (int i = 0; i < removedDevices.size(); i++) {
int device_id = removedDevices.get(i).intValue();
WlPlayer.nativeRemoveJoystick(device_id);
for (int j = 0; j < mJoysticks.size(); j++) {
if (mJoysticks.get(j).device_id == device_id) {
mJoysticks.remove(j);
break;
}
}
}
}

protected SDLJoystick getJoystick(int device_id) {
for (int i = 0; i < mJoysticks.size(); i++) {
if (mJoysticks.get(i).device_id == device_id) {
return mJoysticks.get(i);
}
}
return null;
}

@Override
public boolean handleMotionEvent(MotionEvent event) {
if ((event.getSource() & InputDevice.SOURCE_JOYSTICK) != 0) {
int actionPointerIndex = event.getActionIndex();
int action = event.getActionMasked();
switch (action) {
case MotionEvent.ACTION_MOVE:
SDLJoystick joystick = getJoystick(event.getDeviceId());
if (joystick != null) {
for (int i = 0; i < joystick.axes.size(); i++) {
InputDevice.MotionRange range = joystick.axes.get(i);
/* Normalize the value to -1...1 */
float value = (event.getAxisValue(range.getAxis(), actionPointerIndex) - range.getMin())
/ range.getRange() * 2.0f - 1.0f;
WlPlayer.onNativeJoy(joystick.device_id, i, value);
}
for (int i = 0; i < joystick.hats.size(); i += 2) {
int hatX = Math.round(event.getAxisValue(joystick.hats.get(i).getAxis(), actionPointerIndex));
int hatY = Math
.round(event.getAxisValue(joystick.hats.get(i + 1).getAxis(), actionPointerIndex));
WlPlayer.onNativeHat(joystick.device_id, i / 2, hatX, hatY);
}
}
break;
default:
break;
}
}
return true;
}
}

class SDLGenericMotionListener_API12 implements View.OnGenericMotionListener {
// Generic Motion (mouse hover, joystick...) events go here
@Override
public boolean onGenericMotion(View v, MotionEvent event) {
float x, y;
int action;

switch (event.getSource()) {
case InputDevice.SOURCE_JOYSTICK:
case InputDevice.SOURCE_GAMEPAD:
case InputDevice.SOURCE_DPAD:
return WlPlayer.handleJoystickMotionEvent(event);

case InputDevice.SOURCE_MOUSE:
action = event.getActionMasked();
switch (action) {
case MotionEvent.ACTION_SCROLL:
x = event.getAxisValue(MotionEvent.AXIS_HSCROLL, 0);
y = event.getAxisValue(MotionEvent.AXIS_VSCROLL, 0);
WlPlayer.onNativeMouse(0, action, x, y);
return true;

case MotionEvent.ACTION_HOVER_MOVE:
x = event.getX(0);
y = event.getY(0);

WlPlayer.onNativeMouse(0, action, x, y);
return true;

default:
break;
}
break;

default:
break;
}

// Event was not managed
return false;
}

}
SDLSurface.java:

package com.ywl5320.wlsdk.player;

import android.content.Context;
import android.content.pm.ActivityInfo;
import android.graphics.PixelFormat;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Build;
import android.util.AttributeSet;
import android.util.Log;
import android.view.*;

/**
* @author ywl
*
*/
public class SDLSurface extends SurfaceView implements SurfaceHolder.Callback{

private OnSurfacePrepard onSurfacePrepard;

// Sensors
protected static Display mDisplay;

// Keep track of the surface size to normalize touch events
protected static float mWidth, mHeight;

// Startup
public SDLSurface(Context context) {
this(context, null);

}

public SDLSurface(Context context, AttributeSet attrs)
{
super(context, attrs);
getHolder().addCallback(this);

setFocusable(true);
setFocusableInTouchMode(true);
requestFocus();

mDisplay = ((WindowManager) context.getSystemService(Context.WINDOW_SERVICE)).getDefaultDisplay();

// Some arbitrary defaults to avoid a potential division by zero
mWidth = 1.0f;
mHeight = 1.0f;
}

public void setOnSurfacePrepard(OnSurfacePrepard onSurfacePrepard) {
this.onSurfacePrepard = onSurfacePrepard;
}

public Surface getNativeSurface() {
return getHolder().getSurface();
}

// Called when we have a valid drawing surface
@Override
public void surfaceCreated(SurfaceHolder holder) {
Log.v("SDL", "surfaceCreated()");
holder.setType(SurfaceHolder.SURFACE_TYPE_GPU);
}

// Called when we lose the surface
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
Log.v("SDL", "surfaceDestroyed()");
// Call this *before* setting mIsSurfaceReady to 'false'
WlPlayer.handlePause();
WlPlayer.mIsSurfaceReady = false;
WlPlayer.onNativeSurfaceDestroyed();
}

// Called when the surface is resized
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
Log.v("SDL", "surfaceChanged()");

int sdlFormat = 0x15151002; // SDL_PIXELFORMAT_RGB565 by default
switch (format) {
case PixelFormat.A_8:
Log.v("SDL", "pixel format A_8");
break;
case PixelFormat.LA_88:
Log.v("SDL", "pixel format LA_88");
break;
case PixelFormat.L_8:
Log.v("SDL", "pixel format L_8");
break;
case PixelFormat.RGBA_4444:
Log.v("SDL", "pixel format RGBA_4444");
sdlFormat = 0x15421002; // SDL_PIXELFORMAT_RGBA4444
break;
case PixelFormat.RGBA_5551:
Log.v("SDL", "pixel format RGBA_5551");
sdlFormat = 0x15441002; // SDL_PIXELFORMAT_RGBA5551
break;
case PixelFormat.RGBA_8888:
Log.v("SDL", "pixel format RGBA_8888");
sdlFormat = 0x16462004; // SDL_PIXELFORMAT_RGBA8888
break;
case PixelFormat.RGBX_8888:
Log.v("SDL", "pixel format RGBX_8888");
sdlFormat = 0x16261804; // SDL_PIXELFORMAT_RGBX8888
break;
case PixelFormat.RGB_332:
Log.v("SDL", "pixel format RGB_332");
sdlFormat = 0x14110801; // SDL_PIXELFORMAT_RGB332
break;
case PixelFormat.RGB_565:
Log.v("SDL", "pixel format RGB_565");
sdlFormat = 0x15151002; // SDL_PIXELFORMAT_RGB565
break;
case PixelFormat.RGB_888:
Log.v("SDL", "pixel format RGB_888");
// Not sure this is right, maybe SDL_PIXELFORMAT_RGB24 instead?
sdlFormat = 0x16161804; // SDL_PIXELFORMAT_RGB888
break;
default:
Log.v("SDL", "pixel format unknown " + format);
break;
}

mWidth = width;
mHeight = height;
WlPlayer.onNativeResize(width, height, sdlFormat, mDisplay.getRefreshRate());
Log.v("ywl5320", "Window size: " + width + "x" + height);

boolean skip = false;
int requestedOrientation = WlPlayer.mSingleton.getRequestedOrientation();

if (requestedOrientation == ActivityInfo.SCREEN_ORIENTATION_UNSPECIFIED) {
// Accept any
} else if (requestedOrientation == ActivityInfo.SCREEN_ORIENTATION_PORTRAIT) {
if (mWidth > mHeight) {
skip = true;
}
} else if (requestedOrientation == ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE) {
if (mWidth < mHeight) {
skip = true;
}
}

// Special Patch for Square Resolution: Black Berry Passport
if (skip) {
double min = Math.min(mWidth, mHeight);
double max = Math.max(mWidth, mHeight);

if (max / min < 1.20) {
Log.v("SDL", "Don't skip on such aspect-ratio. Could be a square resolution.");
skip = false;
}
}

if (skip) {
Log.v("SDL", "Skip .. Surface is not ready.");
return;
}

// Set mIsSurfaceReady to 'true' *before* making a call to handleResume
WlPlayer.mIsSurfaceReady = true;
WlPlayer.onNativeSurfaceChanged();

if (WlPlayer.mHasFocus) {
WlPlayer.handleResume();
}

if(onSurfacePrepard != null)
{
onSurfacePrepard.onPrepard();
}
}

public interface OnSurfacePrepard
{
void onPrepard();
}
}
This splits everything out of the default SDLActivity.java, so nothing has to extend an Activity any more and the classes can be used on their own. Pay attention to the package names of the native methods: I changed them to my own package here, so the matching native method names in the SDL sources must be changed to your package too. Only two files need the change:
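
The reason is that a JNI export encodes the full Java package path in its C function name, so the C side must be renamed to match the new package. A hypothetical before/after (the two files are most likely SDL_android.c and SDL_android_main.c; also check any hard-coded "org/libsdl/app/..." class-path strings in them):

/* stock SDL2: bound to org.libsdl.app.SDLActivity */
JNIEXPORT void JNICALL Java_org_libsdl_app_SDLActivity_onNativeResize(
        JNIEnv *env, jclass cls, jint x, jint y, jint format, jfloat rate);

/* after the change: bound to com.ywl5320.wlsdk.player.WlPlayer */
JNIEXPORT void JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_onNativeResize(
        JNIEnv *env, jclass cls, jint x, jint y, jint format, jfloat rate);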





With that, the SDL port is done.

III. Writing our C file to decode the media stream

player.c source:

#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include <jni.h>

#include "SDL.h"
#include "SDL_thread.h"

#include <android/log.h>
#define LOGI(FORMAT,...) __android_log_print(ANDROID_LOG_INFO,"ywl5320",FORMAT,##__VA_ARGS__);
#define LOGE(FORMAT,...) __android_log_print(ANDROID_LOG_ERROR,"ywl5320",FORMAT,##__VA_ARGS__);

#include "libavformat/avformat.h"
#include "libavcodec/avcodec.h"
#include "libswscale/swscale.h"
#include "libswresample/swresample.h"
#include "libavutil/mathematics.h"
#include "libavutil/samplefmt.h"

#define SDL_AUDIO_BUFFER_SIZE 1024
#define AVCODEC_MAX_AUDIO_FRAME_SIZE 192000

int quit = 0;//0: playing 1: paused -1: stopped
int play = 0;//0: initializing 1: playing
int isOver = 0;//0: not finished 1: finished

void release();
void onErrorMsg(int code, char *msg);
// upcalls implemented in the JNI bridge (not shown here); they notify the Java layer
void onParpred();
void onComplete();
void onLoad();
void onPlay();
void onError(int code, char *msg);

typedef struct PacketQueue
{
AVPacketList *first_pkt, *last_pkt;
int nb_packets;
int size;
SDL_mutex *mutex;
SDL_cond *cond;
}PacketQueue;

void packet_queue_init(PacketQueue *q) {
memset(q, 0, sizeof(PacketQueue));
q->mutex = SDL_CreateMutex();
q->cond = SDL_CreateCond();
}

int packet_queue_put(PacketQueue *q, AVPacket *pkt) {

if(quit == -1)
return -1;
AVPacketList *pkt1;
if (av_dup_packet(pkt) < 0) {
return -1;
}
pkt1 = av_malloc(sizeof(AVPacketList));
if (!pkt1)
return -1;
pkt1->pkt = *pkt;
pkt1->next = NULL;

SDL_LockMutex(q->mutex);

if (!q->last_pkt)
q->first_pkt = pkt1;
else
q->last_pkt->next = pkt1;
q->last_pkt = pkt1;
q->nb_packets++;
q->size += pkt1->pkt.size;
SDL_CondSignal(q->cond);

SDL_UnlockMutex(q->mutex);
return 0;
}

static int packet_queue_get(PacketQueue *q, AVPacket *pkt) {
if(quit == -1)
return -1;
AVPacketList *pkt1;
int ret;
SDL_LockMutex(q->mutex);

for (;;) {
pkt1 = q->first_pkt;
if (pkt1) {
q->first_pkt = pkt1->next;
if (!q->first_pkt)
q->last_pkt = NULL;
q->nb_packets--;
q->size -= pkt1->pkt.size;
*pkt = pkt1->pkt;
av_free(pkt1);
ret = 1;
break;
}else if(quit == -1){
ret = -1;
break;
}
else {
SDL_CondWait(q->cond, q->mutex);
}
}
SDL_UnlockMutex(q->mutex);
return ret;
}

int getQueueSize(PacketQueue *q)
{
return q->nb_packets;
}

typedef struct PlayerState
{
char *url;
SDL_Thread *decodeThread;
AVFormatContext *pFormatCtx;//demuxer (format) context
int audioStreamIndex;//audio stream index (only one audio stream handled for now)
AVStream *audioStream;//audio stream
int audioDuration;//total duration in seconds
int audioPts;
SDL_AudioSpec wanted_spec, spec;
AVCodecContext *audioCodecCtx;//audio codec context
AVCodec *audioCodec;//audio decoder
PacketQueue audioq;//audio packet queue

}PlayerState;

PlayerState *playerState;

int audio_decode_frame(AVCodecContext *aCodecCtx, uint8_t *audio_buf, int buf_size) {

if(quit == -1)
{
return -1;
}
AVFrame *frame = av_frame_alloc();
int data_size = 0;
AVPacket pkt;
int got_frame_ptr;

SwrContext *swr_ctx;

if (packet_queue_get(&playerState->audioq, &pkt) < 0)
return -1;

int ret = avcodec_send_packet(aCodecCtx, &pkt);
if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
return -1;

ret = avcodec_receive_frame(aCodecCtx, frame);
if (ret < 0 && ret != AVERROR_EOF)
return -1;

// fill in channels or channel_layout when one of them is missing
if (frame->channels > 0 && frame->channel_layout == 0)
frame->channel_layout = av_get_default_channel_layout(frame->channels);
else if (frame->channels == 0 && frame->channel_layout > 0)
frame->channels = av_get_channel_layout_nb_channels(frame->channel_layout);

enum AVSampleFormat dst_format = AV_SAMPLE_FMT_S16;//av_get_packed_sample_fmt((AVSampleFormat)frame->format);

//resample to stereo
Uint64 dst_layout = AV_CH_LAYOUT_STEREO;
// set up the conversion parameters
swr_ctx = swr_alloc_set_opts(NULL, dst_layout, dst_format, frame->sample_rate,
frame->channel_layout, (enum AVSampleFormat)frame->format, frame->sample_rate, 0, NULL);
if (!swr_ctx || swr_init(swr_ctx) < 0)
return -1;

// compute the number of output samples: a * b / c
int dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, frame->sample_rate) + frame->nb_samples, frame->sample_rate, frame->sample_rate, AV_ROUND_INF);
// convert; the return value is the number of samples produced
int nb = swr_convert(swr_ctx, &audio_buf, dst_nb_samples, (const uint8_t**)frame->data, frame->nb_samples);

//get the channel count from the layout
int out_channels = av_get_channel_layout_nb_channels(dst_layout);
data_size = out_channels * nb * av_get_bytes_per_sample(dst_format);
playerState->audioPts = pkt.pts;
av_packet_unref(&pkt);
av_frame_free(&frame);
swr_free(&swr_ctx);
return data_size;
}

void audio_callback(void *userdata, Uint8 *stream, int len) {

AVCodecContext *aCodecCtx = (AVCodecContext *)userdata;

int len1, audio_size;

static uint8_t audio_buff[(AVCODEC_MAX_AUDIO_FRAME_SIZE * 3) / 2];
static unsigned int audio_buf_size = 0;
static unsigned int audio_buf_index = 0;

SDL_memset(stream, 0, len);

if(quit == 1 || quit == -1)
{
// paused or stopped: 'stream' was already zeroed above, so just emit silence
return;
}

//	LOGI("pkt nums: %d    queue size: %d\n", playerState->audioq.nb_packets, playerState->audioq.size);
while (len > 0)// send len bytes of data to the device
{
if (audio_buf_index >= audio_buf_size) // no data left in the buffer
{
// decode more data from the packet queue
audio_size = audio_decode_frame(aCodecCtx, audio_buff, sizeof(audio_buff));
if (audio_size < 0) // nothing decoded or an error: emit a block of silence
{
audio_buf_size = 1024;// was 0, which kept len1 at 0 and spun this loop forever
memset(audio_buff, 0, audio_buf_size);
}
else
audio_buf_size = audio_size;

audio_buf_index = 0;
}
len1 = audio_buf_size - audio_buf_index; // bytes left in the buffer
if (len1 > len) // send at most len bytes to the device
len1 = len;

SDL_MixAudio(stream, audio_buff + audio_buf_index, len1, SDL_MIX_MAXVOLUME);// mix len1 bytes, not len

len -= len1;
stream += len1;
audio_buf_index += len1;
}
}

int decodeFile(void *args)
{
//	LOGI("decode ...");
if (avformat_open_input(&playerState->pFormatCtx, playerState->url, NULL, NULL) != 0)
{
//		LOGE("can not open url:%s", playerState->url);
onErrorMsg(0x1002, "can not open the source url!");
return -1;
}
//	LOGI("here ...");
if (avformat_find_stream_info(playerState->pFormatCtx, NULL) < 0)
{
//		LOGE("can not find streams from %s", playerState->url);
onErrorMsg(0x1003, "can not find streams from the source url!");
return -1;
}
//	LOGI("here2 ...");
int i = 0;
for (; i < playerState->pFormatCtx->nb_streams; i++)
{
if (playerState->pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
{
playerState->audioStreamIndex = i;
break;
}
}
//	LOGI("here3 ...");
if (playerState->audioStreamIndex == -1)
{
//		LOGE("can not find audio streams from %s", playerState->url);
onErrorMsg(0x1004, "can not find audio streams from the source url!");
return -1;
}

playerState->audioStream = playerState->pFormatCtx->streams[playerState->audioStreamIndex];

playerState->audioDuration = playerState->pFormatCtx->duration / 1000000;
//	LOGI("duration:%d", playerState->audioDuration);
//	int h = playerState->audioDuration / 3600;
//	int m = (playerState->audioDuration - 3600 * h) / 60;
//	int s = playerState->audioDuration - 3600 * h - m * 60;
//	LOGI("%02d:%02d:%02d", h, m, s);

AVCodecContext* pCodecCtxOrg;
pCodecCtxOrg = playerState->pFormatCtx->streams[playerState->audioStreamIndex]->codec; // codec context
playerState->audioCodec = avcodec_find_decoder(pCodecCtxOrg->codec_id);
if (!playerState->audioCodec)
{
//		LOGE("can not find audio %d codecctx!", playerState->audioStreamIndex);
onErrorMsg(0x1005, "can not find audio codecctx!");
play = 1;
return -1;
}
//	LOGI("here4 ...");
// don't use the CodecContext taken from the AVFormatContext directly; make a copy
playerState->audioCodecCtx = avcodec_alloc_context3(playerState->audioCodec);
if (avcodec_copy_context(playerState->audioCodecCtx, pCodecCtxOrg) != 0)
{
//		LOGE("Could not copy codec context!");
onErrorMsg(0x1006, "Could not copy codec context!");
return -1;
}
// note: pCodecCtxOrg is owned by the AVFormatContext; freeing it here would cause a double free in avformat_close_input

//init SDL audio

playerState->wanted_spec.freq = playerState->audioCodecCtx->sample_rate;
playerState->wanted_spec.format = AUDIO_S16SYS;
playerState->wanted_spec.channels = 2;
playerState->wanted_spec.silence = 0;
playerState->wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE;
playerState->wanted_spec.callback = audio_callback;
playerState->wanted_spec.userdata = playerState->audioCodecCtx;
if(SDL_OpenAudio(&playerState->wanted_spec, &playerState->spec) < 0)
{
//		LOGE("sdl open audio failed:");
onErrorMsg(0x1007, "sdl open audio failed!");
return -1;
}
SDL_PauseAudio(0);

if(avcodec_open2(playerState->audioCodecCtx, playerState->audioCodec, NULL) != 0)
{
//		LOGE("open audio codec fail");
onErrorMsg(0x1008, "open audio codec fail!");
return -1;
}
onParpred();
AVPacket packet;
int index = 0;
while (1)
{
if(quit == -1)
{
break;
}
if(play == 0)
{
SDL_Delay(10);// wait for wlStart() to flip play to 1 without busy-spinning
continue;
}
if(playerState)
{
if (getQueueSize(&playerState->audioq) < 50)
{
int ret = av_read_frame(playerState->pFormatCtx, &packet);
if (ret == 0)
{
isOver = 0;
if (packet.stream_index == playerState->audioStreamIndex)
{
packet_queue_put(&playerState->audioq, &packet);
//					LOGI("code %d", index++);
}
else
{
av_packet_unref(&packet);
}
}
else if(ret == AVERROR_EOF)
{
isOver = 1;
if(getQueueSize(&playerState->audioq) == 0)
{
quit = 1;
onComplete();
return 0;
}
//					LOGE("right av_read_frame finished return %d", ret);
}
else
{
//					LOGE("av_read_frame finished return %d", ret);
}
}
}
}
//	LOGI("here7 ...");

return 0;
}

int avformat_interrupt_cb(void *ctx)
{
if(quit == -1)
return 1;
return 0;
}

int main(int argc, char* args[])
{
//	LOGI(".............come from main............");
//	LOGI("input url: %s", args[1]);
quit = 0;
play = 0;
if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER))
{
onErrorMsg(0x1001, "init sdl error!");
return -1;
}
av_register_all();
avformat_network_init();
playerState = malloc(sizeof(PlayerState));
packet_queue_init(&playerState->audioq);
playerState->url = args[1];
playerState->audioStreamIndex = -1;
playerState->pFormatCtx = avformat_alloc_context();
// set the interrupt callback before the decode thread can reach avformat_open_input
playerState->pFormatCtx->interrupt_callback.callback = avformat_interrupt_cb;
playerState->decodeThread = SDL_CreateThread(decodeFile, "decodeThread", NULL);

for(;;)
{
if(playerState)
{
if(quit == 0)
{
if(getQueueSize(&playerState->audioq) == 0)
{
if(isOver != 1)// not finished yet: we are still buffering
{
//						LOGI("loading....");
onLoad();
}
}
else
{
//					LOGI("plalying....");
onPlay();
}
}
}
else{
play = 1;
return 0;
}
SDL_Delay(10);
}
play = 1;
return 0;
}

void JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlStart(JNIEnv* env, jclass jcls)
{
if(play == 0)
{
play = 1;
}
}

void JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlPause(JNIEnv* env, jclass jcls)
{
//	LOGI("pause");
if(quit != 1 && isOver != 1)
{
quit = 1;
if(playerState && playerState->pFormatCtx)
{
av_read_pause(playerState->pFormatCtx);
}
}
}
void JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlPlay(JNIEnv* env, jclass jcls)
{
//	LOGI("play");
if(quit != 0 && isOver != 1)
{
quit = 0;
if(playerState && playerState->pFormatCtx)
{
av_read_play(playerState->pFormatCtx);
}
}
}

jint JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlDuration(JNIEnv *env, jclass jcls)
{
if(playerState)
{
if(playerState->audioDuration > 0)
{
return playerState->audioDuration;
}
}
return 0;
}

//
void JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlRealease(JNIEnv* env, jclass jcls)
{
//	LOGI("release");
release();

}

jint JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlNowTime(JNIEnv* env, jclass jcls)
{
if(playerState && playerState->audioStream && getQueueSize(&playerState->audioq) > 0 && playerState->audioStream->time_base.den > 0)
{
return playerState->audioPts / playerState->audioStream->time_base.den;
}
return 0;
}

jint JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlSeekTo(JNIEnv* env, jclass jcls, jint secds)
{
//	LOGI("wlSeekTo%d", secds);
if(playerState && playerState->audioStream && secds < playerState->audioDuration && isOver != 1)
{
quit = 1;
if(av_seek_frame(playerState->pFormatCtx, playerState->audioStreamIndex, secds * playerState->audioStream->time_base.den, AVSEEK_FLAG_ANY) >= 0)
{
playerState->audioq.first_pkt = NULL;
playerState->audioq.last_pkt = NULL;
playerState->audioq.nb_packets = 0;
playerState->audioq.size = 0;
}
quit = 0;
return 0;
}
return -1;
}

jint JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlIsInit(JNIEnv* env, jclass jcls)
{
if(play == 0)
{
return -1;
}
return 0;
}

jint JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlIsRelease(JNIEnv* env, jclass jcls)
{
return quit;
}

void release()
{
quit = -1;
play = 1;
SDL_CloseAudio();
av_free(playerState);
playerState = NULL;
SDL_Quit();
}

void onErrorMsg(int code, char *msg)
{
quit = -1;
onError(code, msg);
release();
}


Execution enters through main(), which initializes FFmpeg and SDL and then parses the media stream; the parsing itself runs on the worker thread SDL provides. The file above is my cleaned-up version. One piece not shown is how the Java side's nativeInit call actually reaches main(); a sketch of that follows, and after it comes the original test file I started with, which makes the flow easier to see.
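This is a plausible sketch of the adapted SDL_android_main.c entry point, with names and details assumed from stock SDL2 rather than taken from my project; the URL passed from Java becomes args[1] of main():

/* Hypothetical sketch: WlPlayer.nativeInit(url) lands here. */
#include "SDL.h"
#include <jni.h>

JNIEXPORT jint JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_nativeInit(
        JNIEnv *env, jclass cls, jstring url)
{
    const char *utf = (*env)->GetStringUTFChars(env, url, NULL);

    char *argv[3];
    argv[0] = SDL_strdup("wlPlayer");   /* conventional program name */
    argv[1] = SDL_strdup(utf);          /* read back as args[1] in main() */
    argv[2] = NULL;
    (*env)->ReleaseStringUTFChars(env, url, utf);

    SDL_SetMainReady();                 /* stock SDL_android_main.c does this too */
    int status = SDL_main(2, argv);     /* runs main() in player.c */

    SDL_free(argv[0]);
    SDL_free(argv[1]);
    return status;
}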

good_audio.c

#include <stdlib.h>
#include <stdio.h>
#include <time.h>

#include "SDL.h"
#include "SDL_thread.h"

#include <android/log.h>
#define LOGI(FORMAT,...) __android_log_print(ANDROID_LOG_INFO,"ywl5320",FORMAT,##__VA_ARGS__);
#define LOGE(FORMAT,...) __android_log_print(ANDROID_LOG_ERROR,"ywl5320",FORMAT,##__VA_ARGS__);

#include "libavformat/avformat.h"
#include "libavcodec/avcodec.h"
#include "libswscale/swscale.h"
#include "libswresample/swresample.h"
#include "libavutil/mathematics.h"
#include "libavutil/samplefmt.h"

#define SDL_AUDIO_BUFFER_SIZE 1024
#define AVCODEC_MAX_AUDIO_FRAME_SIZE 192000

typedef struct PacketQueue
{
AVPacketList *first_pkt, *last_pkt;
int nb_packets;
int size;
SDL_mutex *mutex;
SDL_cond *cond;
}PacketQueue;

PacketQueue audioq;
int quit = 0;

void packet_queue_init(PacketQueue *q) {
memset(q, 0, sizeof(PacketQueue));
q->mutex = SDL_CreateMutex();
q->cond = SDL_CreateCond();
}

int packet_queue_put(PacketQueue *q, AVPacket *pkt) {

AVPacketList *pkt1;
if (av_dup_packet(pkt) < 0) {
return -1;
}
pkt1 = av_malloc(sizeof(AVPacketList));
if (!pkt1)
return -1;
pkt1->pkt = *pkt;
pkt1->next = NULL;

SDL_LockMutex(q->mutex);

if (!q->last_pkt)
q->first_pkt = pkt1;
else
q->last_pkt->next = pkt1;
q->last_pkt = pkt1;
q->nb_packets++;
q->size += pkt1->pkt.size;
SDL_CondSignal(q->cond);

SDL_UnlockMutex(q->mutex);
return 0;
}

static int packet_queue_get(PacketQueue *q, AVPacket *pkt, int block) {
AVPacketList *pkt1;
int ret;

SDL_LockMutex(q->mutex);

for (;;) {

if (quit) {
ret = -1;
break;
}

pkt1 = q->first_pkt;
if (pkt1) {
q->first_pkt = pkt1->next;
if (!q->first_pkt)
q->last_pkt = NULL;
q->nb_packets--;
q->size -= pkt1->pkt.size;
*pkt = pkt1->pkt;
av_free(pkt1);
ret = 1;
break;
} else if (!block) {
ret = 0;
break;
} else {
SDL_CondWait(q->cond, q->mutex);
}
}
SDL_UnlockMutex(q->mutex);
return ret;
}

int getQueueSize(PacketQueue *q)
{
return q->nb_packets;
}

int audio_decode_frame(AVCodecContext *aCodecCtx, uint8_t *audio_buf, int buf_size) {

AVFrame *frame = av_frame_alloc();
int data_size = 0;
AVPacket pkt;
int got_frame_ptr;

SwrContext *swr_ctx;

if (quit)
return -1;
if (packet_queue_get(&audioq, &pkt, 1) < 0)
return -1;
int ret = avcodec_decode_audio4(aCodecCtx, frame, &got_frame_ptr, &pkt);
//int ret = avcodec_send_packet(aCodecCtx, &pkt);
if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
return -1;

//ret = avcodec_receive_frame(aCodecCtx, frame);
//if (ret < 0 && ret != AVERROR_EOF)
//	return -1;

// fill in channels or channel_layout when one of them is missing
if (frame->channels > 0 && frame->channel_layout == 0)
frame->channel_layout = av_get_default_channel_layout(frame->channels);
else if (frame->channels == 0 && frame->channel_layout > 0)
frame->channels = av_get_channel_layout_nb_channels(frame->channel_layout);

enum AVSampleFormat dst_format = AV_SAMPLE_FMT_S16;//av_get_packed_sample_fmt((AVSampleFormat)frame->format);

//resample to stereo
Uint64 dst_layout = AV_CH_LAYOUT_STEREO;
// set up the conversion parameters
swr_ctx = swr_alloc_set_opts(NULL, dst_layout, dst_format, frame->sample_rate,
frame->channel_layout, (enum AVSampleFormat)frame->format, frame->sample_rate, 0, NULL);
if (!swr_ctx || swr_init(swr_ctx) < 0)
return -1;

// compute the number of output samples: a * b / c
int dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, frame->sample_rate) + frame->nb_samples, frame->sample_rate, frame->sample_rate, 1);
// convert; the return value is the number of samples produced
int nb = swr_convert(swr_ctx, &audio_buf, dst_nb_samples, (const uint8_t**)frame->data, frame->nb_samples);

//get the channel count from the layout
int out_channels = av_get_channel_layout_nb_channels(dst_layout);
data_size = out_channels * nb * av_get_bytes_per_sample(dst_format);

av_frame_free(&frame);
swr_free(&swr_ctx);
return data_size;
}

void audio_callback(void *userdata, Uint8 *stream, int len) {

AVCodecContext *aCodecCtx = (AVCodecContext *)userdata;

int len1, audio_size;

static uint8_t audio_buff[(AVCODEC_MAX_AUDIO_FRAME_SIZE * 3) / 2];
static unsigned int audio_buf_size = 0;
static unsigned int audio_buf_index = 0;

SDL_memset(stream, 0, len);
if (getQueueSize(&audioq) > 0)
{
LOGI("pkt nums: %d    queue size: %d\n", audioq.nb_packets, audioq.size);
while (len > 0)// send len bytes of data to the device
{
if (audio_buf_index >= audio_buf_size) // no data left in the buffer
{
// decode more data from the packet queue
audio_size = audio_decode_frame(aCodecCtx, audio_buff, sizeof(audio_buff));
if (audio_size < 0) // nothing decoded or an error: emit a block of silence
{
audio_buf_size = 1024;// was 0, which kept len1 at 0 and spun this loop forever
memset(audio_buff, 0, audio_buf_size);
}
else
audio_buf_size = audio_size;

audio_buf_index = 0;
}
len1 = audio_buf_size - audio_buf_index; // bytes left in the buffer
if (len1 > len) // send at most len bytes to the device
len1 = len;

SDL_MixAudio(stream, audio_buff + audio_buf_index, len1, SDL_MIX_MAXVOLUME);// mix len1 bytes, not len

len -= len1;
stream += len1;
audio_buf_index += len1;
}
}
else
{
LOGI("pkt nums: %d    queue size: %d\n", audioq.nb_packets, audioq.size);
LOGI("play complete");
SDL_CloseAudio();
SDL_Quit();
}
}

int main(int argc, char* args[])
{

LOGI(".............come from main............");
LOGI("input url: %s", args[1]);
av_register_all();
avformat_network_init();

AVFormatContext *pFormatCtx;
if (avformat_open_input(&pFormatCtx, args[1], NULL, NULL) != 0)
return -1;

if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
return -1;

//	av_dump_format(pFormatCtx, 0, args[1], 0);

int audioStream = -1;
int i = 0;
for (; i < pFormatCtx->nb_streams; i++)
{
if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
{
audioStream = i;
break;
}
}

if (audioStream == -1)
return -1;

AVCodecContext* pCodecCtxOrg;
AVCodecContext* pCodecCtx;

AVCodec* pCodec;

pCodecCtxOrg = pFormatCtx->streams[audioStream]->codec; // codec context

// find the decoder for the audio stream
pCodec = avcodec_find_decoder(pCodecCtxOrg->codec_id);

if (!pCodec)
{
LOGE("Unsupported codec!")
return -1;
}

// don't use the CodecContext taken from the AVFormatContext directly; make a copy
pCodecCtx = avcodec_alloc_context3(pCodec);
if (avcodec_copy_context(pCodecCtx, pCodecCtxOrg) != 0)
{
LOGE("Could not copy codec context!");
return -1;
}

SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER);
// Set audio settings from codec info
SDL_AudioSpec wanted_spec, spec;
wanted_spec.freq = pCodecCtx->sample_rate;
wanted_spec.format = AUDIO_S16SYS;
wanted_spec.channels = pCodecCtx->channels;
wanted_spec.silence = 0;
wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE;
wanted_spec.callback = audio_callback;
wanted_spec.userdata = pCodecCtx;

if(SDL_OpenAudio(&wanted_spec, &spec) < 0)
{
LOGE("Open audio failed:");
return -1;
}

LOGI("come here...");

avcodec_open2(pCodecCtx, pCodec, NULL);

SDL_PauseAudio(0);

AVPacket packet;
while (1)
{
if (getQueueSize(&audioq) < 50)
{
int ret = av_read_frame(pFormatCtx, &packet);
if (ret >= 0)
{
if (packet.stream_index == audioStream)
packet_queue_put(&audioq, &packet);
else
av_packet_unref(&packet);
}
}
else
{
SDL_Delay(10);// the queue is full enough; yield instead of busy-spinning
}
}

avformat_close_input(&pFormatCtx);
return 0;
}
Video playback file:

good_video.c

#include <stdlib.h>
#include <stdio.h>
#include <time.h>

#include "SDL.h"
#include "SDL_thread.h"

#include <android/log.h>
#define LOGI(FORMAT,...) __android_log_print(ANDROID_LOG_INFO,"ywl5320",FORMAT,##__VA_ARGS__);
#define LOGE(FORMAT,...) __android_log_print(ANDROID_LOG_ERROR,"ywl5320",FORMAT,##__VA_ARGS__);

#include "libavformat/avformat.h"
#include "libavcodec/avcodec.h"
#include "libswscale/swscale.h"
#include "libswresample/swresample.h"
#include "libavutil/mathematics.h"
#include "libavutil/samplefmt.h"

static const int SDL_AUDIO_BUFFER_SIZE = 1024;
static const int MAX_AUDIO_FRAME_SIZE = 192000;

/*
packet queue
*/
typedef struct PackeQueue
{
AVPacketList *first_pkt, *last_pkt;

int data_size;
int nb_pkts;

SDL_mutex *mutex;
SDL_cond *cond;

}PackeQueue;

void init_Queue(PackeQueue *queue)
{
memset(queue, 0, sizeof(PackeQueue));// zero the caller's struct; malloc-ing into the parameter would be lost on return
queue->mutex = SDL_CreateMutex();
queue->cond = SDL_CreateCond();
}

int push_Queue(PackeQueue *queue, AVPacket *pkt)
{
AVPacketList *pkt1;
if (av_dup_packet(pkt) < 0) {
return -1;
}
pkt1 = (AVPacketList *)av_malloc(sizeof(AVPacketList));
if (!pkt1)
return -1;
pkt1->pkt = *pkt;
pkt1->next = NULL;

SDL_LockMutex(queue->mutex);

if (!queue->last_pkt)//last is NULL, meaning the queue is empty
{
queue->first_pkt = pkt1;
}
else
{
queue->last_pkt->next = pkt1;
}

queue->last_pkt = pkt1;
queue->nb_pkts++;
queue->data_size += pkt1->pkt.size;

SDL_CondSignal(queue->cond);
SDL_UnlockMutex(queue->mutex);
return 0;
}

int pop_Queue(PackeQueue *queue, AVPacket *pkt)
{
AVPacketList *pkt1;
SDL_LockMutex(queue->mutex);
pkt1 = queue->first_pkt;
int ret = -1;
for (;;)
{
if (pkt1)
{
queue->first_pkt = pkt1->next;
queue->data_size -= pkt1->pkt.size;
queue->nb_pkts--;
*pkt = pkt1->pkt;
av_free(pkt1);
ret = 0;
break;
}
else
{
SDL_CondWait(queue->cond, queue->mutex);
}
}
SDL_UnlockMutex(queue->mutex);
return ret;

}

int main(int argv, char* argc[])
{
//1. register the supported container formats and their codecs
av_register_all();
avformat_network_init();

//char* filenName = "http://live.g3proxy.lecloud.com/gslb?stream_id=lb_yxlm_1800&tag=live&ext=m3u8&sign=live_tv&platid=10&splatid=1009";
//char *filenName = "jxtg3.mkv";

// 2. open the video file
AVFormatContext* pFormatCtx = NULL;
// read the file header and store format info in the AVFormatContext
if (avformat_open_input(&pFormatCtx, argc[1], NULL, NULL) != 0)
return -1; // open failed

// probe the stream info
if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
return -1; // no stream information found

// dump file info to the console
av_dump_format(pFormatCtx, 0, argc[1], 0);

//find the first video stream
int videoStream = -1;
int i = 0;
for (; i < pFormatCtx->nb_streams; i++)
{
if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
{
videoStream = i;
break;
}
}

if (videoStream == -1)
return -1; // no video stream found

AVCodecContext* pCodecCtxOrg = NULL;
AVCodecContext* pCodecCtx = NULL;

AVCodec* pCodec = NULL;

pCodecCtxOrg = pFormatCtx->streams[videoStream]->codec; // codec context

// find the decoder for the video stream
pCodec = avcodec_find_decoder(pCodecCtxOrg->codec_id);

if (!pCodec)
{
//		cout << "Unsupported codec!" << endl;
return -1;
}

// don't use the CodecContext taken from the AVFormatContext directly; make a copy
pCodecCtx = avcodec_alloc_context3(pCodec);
if (avcodec_copy_context(pCodecCtx, pCodecCtxOrg) != 0)
{
//		cout << "Could not copy codec context!" << endl;
return -1;
}

// open codec
if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
return -1; // Could open codec

AVFrame* pFrame = NULL;
AVFrame* pFrameYUV = NULL;

pFrame = av_frame_alloc();
pFrameYUV = av_frame_alloc();

// size of the buffer we use
int numBytes = 0;
uint8_t* buffer = NULL;

numBytes = avpicture_get_size(AV_PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);
buffer = (uint8_t*)av_malloc(numBytes * sizeof(uint8_t));

avpicture_fill((AVPicture*)pFrameYUV, buffer, AV_PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);

struct SwsContext* sws_ctx = NULL;
sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
pCodecCtx->width, pCodecCtx->height, AV_PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);

///////////////////////////////////////////////////////////////////////////
//
// SDL2.0
//
//////////////////////////////////////////////////////////////////////////
SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER);
SDL_Window* window = SDL_CreateWindow("FFmpeg Decode", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
pCodecCtx->width, pCodecCtx->height, SDL_WINDOW_OPENGL);
SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, 0);
SDL_Texture* bmp = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12, SDL_TEXTUREACCESS_STREAMING,
pCodecCtx->width, pCodecCtx->height);
SDL_Rect rect;
rect.x = 0;
rect.y = 0;
rect.w = pCodecCtx->width;
rect.h = pCodecCtx->height;

SDL_Event event;

AVPacket packet;
while (av_read_frame(pFormatCtx, &packet) >= 0)
{
if (packet.stream_index == videoStream)
{
int frameFinished = 0;
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
if (frameFinished)
{
printf("pts:%d \n", pFrame->pkt_pts / 1000);

sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data, pFrame->linesize, 0,
pCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize);

//SDL_UpdateTexture(bmp, &rect, pFrameYUV->data[0], pFrameYUV->linesize[0]);
SDL_UpdateYUVTexture(bmp, &rect,
pFrameYUV->data[0], pFrameYUV->linesize[0],
pFrameYUV->data[1], pFrameYUV->linesize[1],
pFrameYUV->data[2], pFrameYUV->linesize[2]);// each plane must be paired with its own pitch
SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, bmp, &rect, &rect);
SDL_RenderPresent(renderer);
SDL_Delay(30);

}
}
av_free_packet(&packet);

SDL_PollEvent(&event);
switch (event.type)
{
case SDL_QUIT:
SDL_Quit();
av_free(buffer);
av_frame_free(&pFrame);
av_frame_free(&pFrameYUV);

avcodec_close(pCodecCtx);
avcodec_close(pCodecCtxOrg);

avformat_close_input(&pFormatCtx);
return 0;
}
}

av_free(buffer);
av_frame_free(&pFrame);
av_frame_free(&pFrameYUV);

avcodec_close(pCodecCtx);
avcodec_close(pCodecCtxOrg);

avformat_close_input(&pFormatCtx);

getchar();
return 0;
}


These last two files are standalone audio and video playback sources. I haven't had time to do audio/video synchronization yet; once it's done I'll share that too.
Summary:
With the code above you get the simplest possible player. FFmpeg and SDL are both excellent codebases, but they are genuinely huge, and so far I have only glimpsed a corner of them, so this is what I can share for now; there is plenty more to learn. If you have better ideas or structures, do share them so we can all improve together.
Demo download for this post (two versions: Eclipse, and an Android Studio library):
GitHub:FFmpeg-SDL2PlayerSDK

Learning resources
FFmpeg: Lei Xiaohua's blog, 雷霄骅(leixiaohua1020)的专栏
FFmpeg and SDL: An ffmpeg and SDL Tutorial