Building an HLS server with ffmpeg + nginx, and a simple HLS solution on ARM
2016-07-14 15:20
Previously I used ffmpeg + ffserver for HTTP streaming; this time the goal is an HLS streaming server built with ffmpeg + nginx.
Since everything runs on ARM, the first step is porting nginx. There are plenty of guides for cross-compiling ffmpeg; for nginx on ARM I recommend this walkthrough, which works if you follow it step by step: http://www.tuicool.com/articles/QZVJjez.
The only extra step for RTMP is that nginx needs the nginx-rtmp-module: download and unpack it, then enable it at configure time with --add-module when building nginx. Everything else is the same as the guide.
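A configure invocation along these lines is what the note above amounts to. This is a sketch only: the toolchain prefix, version number, install prefix, and module path are placeholders for your own environment, not values from the original post.

```
# Cross-compile nginx for ARM with the RTMP module enabled.
# Paths and toolchain prefix below are illustrative placeholders.
cd nginx-1.x.x
./configure \
    --prefix=/home/root/main/bin/nginx \
    --with-cc=arm-linux-gnueabihf-gcc \
    --add-module=../nginx-rtmp-module \
    --without-http_gzip_module
make && make install
```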
An example nginx configuration file, nginx.conf:
```
#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid  logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log  logs/access.log  main;
    access_log  off;
    error_log   /dev/null crit;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;   # note: the correct MIME type is video/mp2t, not mp2ts
            }
            root /tmp;
            # root /home/root/main/bin/nginx/;
            add_header Cache-Control no-cache;
        }

        #error_page  404  /404.html;

        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

rtmp {
    server {
        listen 1935;

        application myapp {
            live on;
        }

        application hls {
            live on;
            hls on;
            hls_path /tmp/hls;
            hls_fragment 2s;
            hls_playlist_length 4s;
        }
    }
}
```
The configuration has two parts: the http block and the rtmp block.
rtmp listens on port 1935; the hls application stores its segments in /tmp/hls with a fragment length of 2 s.
(The myapp application is for plain RTMP.)
The RTMP URL that ffmpeg pushes to is: rtmp://localhost:1935/hls/test2
Start nginx with: nginx -c nginx.conf
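To sanity-check the server without the FPGA pipeline, a stock ffmpeg can push a local file to the same application (the input file name here is a placeholder):

```
# Remux a local H.264 TS file into FLV and push it over RTMP;
# nginx-rtmp slices it into /tmp/hls automatically.
ffmpeg -re -i test.ts -c copy -f flv rtmp://localhost:1935/hls/test2

# nginx-rtmp names the playlist after the stream key, so play it with:
#   http://<server-ip>/hls/test2.m3u8
```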
What is different this time is that ffmpeg must read its input from an FPGA FIFO. Following leixiaohua1020's memory-I/O ffmpeg sample, the code below mainly changes the read callback and the delay handling.
```
/**
 * Remux from a memory FIFO (based on leixiaohua1020's memory-I/O ffmpeg
 * sample) and push the stream over RTMP to nginx.
 */
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <time.h>
#include <pthread.h>
#include <semaphore.h>

#define __STDC_CONSTANT_MACROS

#ifdef _WIN32
// Windows
extern "C" {
#include "libavformat/avformat.h"
#include "libavutil/mathematics.h"
#include "libavutil/time.h"
};
#else
// Linux...
#ifdef __cplusplus
extern "C" {
#endif
#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>
#include <libavutil/time.h>
#include "data_test.c"
#include <unistd.h>
#ifdef __cplusplus
};
#endif
#endif

#define QueueSize      32768*1024   // FIFO queue size
#define QueueFull      0            // returned when the FIFO is full
#define QueueEmpty     1            // returned when the FIFO is empty
#define QueueOperateOk 2            // returned when the operation succeeds

struct FifoQueue {
    unsigned int front;             // queue head
    unsigned int rear;              // queue tail
    unsigned int count;             // element count
    unsigned char dat[QueueSize];
};

struct FifoQueue MyQueue;
pthread_mutex_t Device_mutex;

// Queue Init
void QueueInit(struct FifoQueue *Queue)
{
    Queue->front = 0;
    Queue->rear  = 0;   // head and tail coincide at init
    Queue->count = 0;   // count starts at 0
}

// Queue In
unsigned char QueueIn(struct FifoQueue *Queue, unsigned char sdat)
{
    if ((Queue->front == Queue->rear) && (Queue->count == QueueSize)) {
        return QueueFull;
    } else {
        Queue->dat[Queue->rear++] = sdat;
        if (Queue->rear == QueueSize)
            Queue->rear = 0;
        Queue->count = Queue->count + 1;
        return QueueOperateOk;
    }
}

// Queue Out
unsigned char QueueOut(struct FifoQueue *Queue, unsigned char *sdat)
{
    if ((Queue->front == Queue->rear) && (Queue->count == 0)) {
        return QueueEmpty;
    } else {
        *sdat = Queue->dat[Queue->front++];
        if (Queue->front == QueueSize)
            Queue->front = 0;
        Queue->count = Queue->count - 1;
        return QueueOperateOk;
    }
}

// Producer thread: read 16-bit words from the FPGA FIFO registers and
// push the bytes into the queue
extern void* write_iobuffer(void *opaque)
{
    int i;
    unsigned char data_h, data_l;
    unsigned short *len_addr, *data_addr;
    len_addr  = (unsigned short*)(*(unsigned int*)opaque);
    data_addr = (unsigned short*)(*((unsigned int*)opaque + 1));
    while (1) {
        unsigned short len = *len_addr;
        for (i = 0; i < len; i++) {
            unsigned short data = *data_addr;
            data_h = (unsigned char)(data >> 8);
            data_l = (unsigned char)(data);
            if (QueueIn(&MyQueue, data_h) == QueueFull) break;
            if (QueueIn(&MyQueue, data_l) == QueueFull) break;
        }
        usleep(1);
    }
}

// Consumer: the avio read callback, filling ffmpeg's buffer from the queue
int fill_iobuffer(void *opaque, unsigned char *buff, int buff_size)
{
    unsigned int totsize = 0;
    unsigned char sh;
    while (totsize < buff_size) {
        pthread_mutex_lock(&Device_mutex);
        if (QueueOut(&MyQueue, &sh) == QueueEmpty) {
            pthread_mutex_unlock(&Device_mutex);  // fix: don't break with the lock held
            break;
        }
        buff[totsize++] = sh;
        pthread_mutex_unlock(&Device_mutex);
    }
    usleep(100);
    return totsize;
}

int main(int argc, char* argv[])
{
    FPGASet(argc, atoi(argv[1]), atoi(argv[2]), atoi(argv[3]), atoi(argv[4]), atoi(argv[5]));
    int fpga = atoi(argv[1]);
    int fd;
    unsigned short *addr[2];
    if ((fd = open("/dev/mem", O_RDWR | O_SYNC)) == -1) {
        printf("open /dev/mem failed!\n");
    }
    addr[0] = virt_map(fpga, fd, PIDFT_LEN);
    addr[1] = virt_map(fpga, fd, PIDFT_DAT);

    AVOutputFormat *ofmt = NULL;
    // Input AVFormatContext and Output AVFormatContext
    AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
    AVPacket pkt;
    const char *out_filename;
    int ret, i;
    int videoindex = -1;
    int frame_index = 0;
    int64_t start_time = 0;
    int flag = 1;

    //out_filename = "udp://@224.2.2.33:8888?overrun_nonfatal=1&fifo_size=50000000"; // [UDP]
    //out_filename = "rtmp://localhost:1935/myapp/test1"; // plain rtmp
    out_filename = "rtmp://localhost:1935/hls/test2";     // hls

    av_register_all();
    avformat_network_init();
    QueueInit(&MyQueue);  // initialize the buffer

    ifmt_ctx = avformat_alloc_context();
    unsigned char *iobuffer = (unsigned char *)av_malloc(32768);
    pthread_t thread1;
    pthread_mutex_init(&Device_mutex, NULL);
    if (pthread_create(&thread1, NULL, write_iobuffer, addr) == -1) {
        printf("create buf writer pthread error!\n");
        exit(1);
    }
    sleep(5);  // let the producer pre-fill the queue

    AVIOContext *avio = avio_alloc_context(iobuffer, 1024, 0, 0, fill_iobuffer, NULL, NULL);
    ifmt_ctx->pb = avio;

    if ((ret = avformat_open_input(&ifmt_ctx, NULL, NULL, NULL)) < 0) {
        printf("Could not open input file.\n");
        goto end;
    }
    if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
        printf("Failed to retrieve input stream information\n");
        goto end;
    }
    for (i = 0; i < ifmt_ctx->nb_streams; i++)
        if (ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            videoindex = i;
            break;
        }
    av_dump_format(ifmt_ctx, 0, "nothing", 0);

    // Output
    avformat_alloc_output_context2(&ofmt_ctx, NULL, "flv", out_filename);      // RTMP
    //avformat_alloc_output_context2(&ofmt_ctx, NULL, "mpegts", out_filename); // UDP
    if (!ofmt_ctx) {
        printf("Could not create output context\n");
        ret = AVERROR_UNKNOWN;
        goto end;
    }
    ofmt = ofmt_ctx->oformat;
    for (i = 0; i < ifmt_ctx->nb_streams; i++) {
        // Create output AVStream according to input AVStream
        AVStream *in_stream = ifmt_ctx->streams[i];
        AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
        if (!out_stream) {
            printf("Failed allocating output stream\n");
            ret = AVERROR_UNKNOWN;
            goto end;
        }
        // Copy the settings of AVCodecContext
        ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
        if (ret < 0) {
            printf("Failed to copy context from input to output stream codec context\n");
            goto end;
        }
        out_stream->codec->codec_tag = 0;
        if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
            out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
    }
    // Dump Format
    av_dump_format(ofmt_ctx, 0, out_filename, 1);
    // Open output URL
    if (!(ofmt->flags & AVFMT_NOFILE)) {
        ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
        if (ret < 0) {
            printf("Could not open output URL '%s'", out_filename);
            goto end;
        }
    }
    // Write file header
    ret = avformat_write_header(ofmt_ctx, NULL);
    if (ret < 0) {
        printf("Error occurred when opening output URL\n");
        goto end;
    }

    int64_t pts_start;
    start_time = av_gettime();
    while (1) {
        AVStream *in_stream, *out_stream;
        // Get an AVPacket
        ret = av_read_frame(ifmt_ctx, &pkt);
        if (ret < 0)
            break;

        // Important: delay. Pace the output against the first DTS seen,
        // sleeping at most 40 ms per packet.
        // (The original sample's commented-out PTS-generation block and its
        // fixed-5 ms delay variant are omitted here.)
        if (pkt.stream_index == videoindex) {
            AVRational time_base = ifmt_ctx->streams[videoindex]->time_base;
            AVRational time_base_q = {1, AV_TIME_BASE};
            while (flag) {
                pts_start = av_rescale_q(pkt.dts, time_base, time_base_q);
                flag = 0;
            }
            int64_t pts_time = av_rescale_q(pkt.dts, time_base, time_base_q) - pts_start;
            int64_t now_time = av_gettime() - start_time;
            if (pts_time > now_time) {
                int64_t delay_time = (pts_time - now_time > 40000) ? (40000) : (pts_time - now_time);
                av_usleep(delay_time);
            }
        }

        in_stream  = ifmt_ctx->streams[pkt.stream_index];
        out_stream = ofmt_ctx->streams[pkt.stream_index];
        /* copy packet: convert PTS/DTS */
        pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base,
                                   (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
        pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base,
                                   (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
        pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
        pkt.pos = -1;

        // Print to screen
        if (pkt.stream_index == videoindex) {
            printf("Send %8d video frames to output URL\n", frame_index);
            frame_index++;
        }
        //ret = av_write_frame(ofmt_ctx, &pkt);
        ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
        av_free_packet(&pkt);
    }
    // Write file trailer
    av_write_trailer(ofmt_ctx);

end:
    avformat_close_input(&ifmt_ctx);
    /* close output */
    if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
        avio_close(ofmt_ctx->pb);
    avformat_free_context(ofmt_ctx);
    if (ret < 0 && ret != AVERROR_EOF) {
        printf("Error occurred.\n");
        return -1;
    }
    return 0;
}
```
I wrestled with the read callback for several days: high-definition digital programs showed artifacts and stuttering. After many rounds of changes, playback finally looked good.
Just when I thought I was done, it turned out that all the standard-definition digital TV programs are MPEG-2 encoded, which cannot be muxed into FLV or MP4 and therefore cannot be carried over RTMP. What now? Transcoding? The ARM board simply cannot handle it. That leaves doing the slicing myself. One option is to have the existing program push a UDP stream and start a second ffmpeg instance to do the slicing; but then ffmpeg's initial stream probing takes about 15 s, a full ffmpeg process is hard to control, and the overall architecture becomes a mess. Another option is to study ffmpeg's segmenting API and rewrite the memory-I/O code to slice directly; worth investigating, but there was no time, so that is a project for the following week. The quick-and-dirty route: while reading from the FIFO, save a chunk every few seconds as a crude segment and write playlist.m3u8 by hand. When ffmpeg produces segments it inserts a lot of extra information to keep playback smooth, so this naive slicing may have problems, but judging by the playback results it works well.
The code is roughly as follows: five segments of 3 s each, written cyclically, with a 1-minute timeout after which the program closes itself.
```
// Simple hand-rolled HLS segmenter: read 16-bit words from the FPGA FIFO,
// cut a new .ts segment every 3 seconds, and append #EXTINF entries to
// playlist.m3u8. Five segment files are reused in a ring; after 20 cycles
// (~1 minute) the function exits.
int write_iobuffer(int fpga, int fp_time)
{
    int i;
    int fp_count = 0;                 // index of the ring slot being written
    unsigned short *len_addr, *data_addr;
    int totSize = 0;
    unsigned char data_buf[BUF_SIZE];
    double tpstart, tpend, duration, time_temp;
    struct timeval tv;
    int cycle_count = 0;

    int fd;
    if ((fd = open("/dev/mem", O_RDWR | O_SYNC)) == -1) {
        printf("open /dev/mem failed!\n");
        return -1;
    }
    len_addr  = virt_map(fpga, fd, PIDFT_LEN);
    data_addr = virt_map(fpga, fd, PIDFT_DAT);

    // Open the five segment files (the original used fp1..fp5; an array is
    // equivalent and shorter)
    FILE *fp[5];
    char path[64];
    for (i = 0; i < 5; i++) {
        sprintf(path, "/home/root/main/bin/nginx/hls/output%04d.ts", i);
        if ((fp[i] = fopen(path, "wb+")))
            printf("open file successful!");
        else
            return -1;
    }

    FILE *list_fp;
    if ((list_fp = fopen("/home/root/main/bin/nginx/hls/playlist.m3u8", "wt+")))
        printf("open file successful!");
    else
        return -1;
    fputs("#EXTM3U\n", list_fp);
    fputs("#EXT-X-VERSION:3\n", list_fp);
    fputs("#EXT-X-MEDIA-SEQUENCE:0\n", list_fp);
    fputs("#EXT-X-ALLOW-CACHE:YES\n", list_fp);
    fputs("#EXT-X-TARGETDURATION:4\n", list_fp);  // must exceed the 3 s segment length

    gettimeofday(&tv, NULL);
    time_temp = tv.tv_usec;
    tpstart = tv.tv_sec + time_temp / 1000000;

    while (1) {
        unsigned short len = *len_addr;
        printf("data length:%d\n", len);
        for (i = 0; i < len; i++) {
            unsigned short data = *data_addr;
            data_buf[totSize++] = (unsigned char)(data >> 8);
            data_buf[totSize++] = (unsigned char)(data);
        }
        usleep(1000);

        gettimeofday(&tv, NULL);
        time_temp = tv.tv_usec;
        tpend = tv.tv_sec + time_temp / 1000000;
        duration = tpend - tpstart;
        if (duration > 3) {
            // Overwrite the current ring slot with the buffered ~3 s of TS data
            ftruncate(fileno(fp[fp_count]), 0);
            fseek(fp[fp_count], 0L, SEEK_SET);
            fwrite(data_buf, sizeof(unsigned char), totSize, fp[fp_count]);
            // Append the segment entry to the playlist
            fseek(list_fp, 0L, SEEK_END);
            fprintf(list_fp, "#EXTINF:%f,\n", duration);
            fprintf(list_fp, "output%04d.ts\n", fp_count);
            if (fp_count == 1)
                play_stat = 1;        // two segments ready: playback may start
            fp_count = (fp_count + 1) % 5;
            totSize = 0;
            tpstart = tpend;

            cycle_count++;
            if (cycle_count == 20) break;   // ~1 minute timeout
        }
        if (pthread_stat == 0)
            break;
    }
    close(fd);
    for (i = 0; i < 5; i++)
        fclose(fp[i]);
    fclose(list_fp);
    return 0;
}
```
The playlist looks like this:
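The original capture of the playlist is not reproduced here; reconstructed from the header lines and #EXTINF entries the code above writes, it has this shape after the first two segments (the exact durations vary per cycle and are illustrative):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-ALLOW-CACHE:YES
#EXT-X-TARGETDURATION:4
#EXTINF:3.001000,
output0000.ts
#EXTINF:3.002000,
output0001.ts
```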
Note that EXT-X-TARGETDURATION must be larger than the segment duration, otherwise there are gaps between segments during playback.
The segments are written into the directory nginx serves for /hls.
OK!
The latency is two segments, i.e. 6 s. For lower latency, shorten the segment duration, or shorten just the first two segments.