Video Action Recognition -- Temporal Segment Networks: Towards Good Practices for Deep Action Recognition
2017-09-20 15:45
Temporal Segment Networks: Towards Good Practices for Deep Action Recognition
ECCV2016
https://github.com/yjxiong/temporal-segment-networks
This paper focuses on extracting long-range temporal structure from longer videos: some actions unfold over a long duration, so more frames must be observed before the action can be classified correctly.
1 Introduction
In action recognition, the key is how to exploit the appearances and dynamics in a video. Extracting these two kinds of information is difficult because of challenges such as scale variations, viewpoint changes, and camera motion, so the extracted features should on the one hand be robust to these challenges and on the other hand still preserve the information that discriminates between action classes. With the recent success of CNNs in image classification and other image-analysis tasks, it was natural to apply CNNs to action recognition, but the results were not as good as expected.
We attribute the unsatisfactory results to two causes: 1) long-range temporal structure plays an important role in action videos, but mainstream CNN architectures focus on appearances and short-term motions and thus lack the capacity to incorporate long-range temporal structure. Some works try to fix this with dense temporal sampling, but that makes the model computationally expensive, especially on long videos. 2) The training sets are small, so the models easily overfit.
We build on the classic two-stream architecture for video action recognition. For temporal structure modeling, a key observation is that consecutive frames are highly redundant, so dense temporal sampling is unnecessary; a sparse temporal sampling strategy is the better choice. We therefore propose the temporal segment network (TSN), which extracts short snippets from a long video with a sparse sampling scheme in which the samples are distributed uniformly along the temporal axis. A segmental structure is then employed to aggregate information from the sampled snippets. In this sense, temporal segment networks are capable of modeling long-range temporal structure over the whole video.
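The sparse sampling scheme can be sketched as follows: divide the frame indices into K equal-length segments and draw one snippet start index uniformly at random from each segment. This is a minimal Python sketch of the idea, not the repository's actual implementation; the function name and the snippet-length parameter are illustrative.

```python
import random

def sample_snippet_indices(num_frames, k, snippet_len=1, seed=None):
    """Sparsely sample one snippet start index from each of K equal segments.

    Consecutive frames are highly redundant, so one short snippet per
    segment is enough to cover the whole video at low cost.
    """
    rng = random.Random(seed)
    # Each segment spans num_frames // k frames; a snippet of snippet_len
    # frames must fit entirely inside its segment.
    seg_len = num_frames // k
    indices = []
    for s in range(k):
        start = s * seg_len
        # Last valid start position so the snippet stays inside the segment.
        last = start + max(seg_len - snippet_len, 0)
        indices.append(rng.randint(start, last))
    return indices

# Example: a 300-frame video split into K = 3 segments.
idx = sample_snippet_indices(300, 3, snippet_len=5, seed=0)
print(idx)  # one start index per segment, uniformly spread over the video
```

Because only K snippets are processed per video regardless of its length, the cost of a forward pass is fixed while the snippets still span the whole duration.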
To combat overfitting on small training sets, we rely on three techniques: 1) cross-modality pre-training; 2) regularization; 3) enhanced data augmentation.
3 Action Recognition with Temporal Segment Networks
3.1 Temporal Segment Networks
An obvious problem of the two-stream ConvNets in their current forms is their inability to model long-range temporal structure: they extract temporal context only from a short snippet of a few consecutive frames.
Our temporal segment network framework aims to exploit the visual information of the entire video to make a video-level prediction.
We divide a video into K segments and randomly sample one short snippet from each segment. Each snippet is processed by the two-stream ConvNets, and the per-snippet predictions are then combined by the segmental consensus function to form the segmental consensus.
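The aggregation step can be sketched as follows (assuming average consensus, one of the consensus functions studied in the paper): each snippet yields a vector of class scores, the consensus function averages them, and a softmax produces the video-level prediction. The snippet scores below are made up for illustration.

```python
import numpy as np

def segmental_consensus(snippet_scores):
    """G: combine per-snippet class scores into one vector (average consensus)."""
    return np.mean(snippet_scores, axis=0)

def video_prediction(snippet_scores):
    """H: softmax over the segmental consensus gives the video-level prediction."""
    g = segmental_consensus(snippet_scores)
    e = np.exp(g - g.max())          # subtract the max for numerical stability
    return e / e.sum()

# Three snippets, four action classes (scores are illustrative).
scores = np.array([[2.0, 0.5, 0.1, 0.0],
                   [1.5, 0.8, 0.2, 0.1],
                   [2.5, 0.4, 0.0, 0.3]])
probs = video_prediction(scores)
print(probs.argmax())  # class 0 dominates across all snippets
```

Because the consensus is a simple average, its gradient distributes evenly over the K snippet scores, so the whole segmental structure stays trainable end-to-end.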
Four input modalities: RGB frames, RGB differences, optical flow fields, and warped optical flow fields.
Network Training: coping with the limited number of training samples
1) Cross Modality Pre-training
2) Regularization Techniques: partial Batch Normalization, dropout
3) Data Augmentation
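Cross modality pre-training initializes the temporal stream from ImageNet-pretrained RGB weights: the first-layer filters are averaged across the three RGB channels and the result is replicated over the optical-flow input channels (2L channels for L stacked flow frames), while the remaining layers are copied directly. A minimal numpy sketch of the first-layer transfer; shapes and names are illustrative, not the repository's API.

```python
import numpy as np

def transfer_conv1_weights(rgb_w, flow_channels):
    """Adapt pretrained RGB conv1 weights to an optical-flow input.

    rgb_w: pretrained first-layer weights, shape (out, 3, kh, kw).
    flow_channels: number of flow input channels, e.g. 2 * L for
    L stacked horizontal/vertical flow frames.
    """
    # Average across the RGB channel axis, keeping the axis for broadcasting.
    mean_w = rgb_w.mean(axis=1, keepdims=True)        # (out, 1, kh, kw)
    # Replicate the averaged filter over every flow channel.
    return np.repeat(mean_w, flow_channels, axis=1)   # (out, 2L, kh, kw)

# Example: 64 filters of size 7x7 pretrained on RGB, adapted to L = 5 flow frames.
rgb_w = np.random.randn(64, 3, 7, 7)
flow_w = transfer_conv1_weights(rgb_w, flow_channels=10)
print(flow_w.shape)  # (64, 10, 7, 7)
```

This gives the flow network a sensible edge-like initialization instead of random weights, which is what makes ImageNet pre-training usable for the temporal stream.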
4 Experiments
UCF101 dataset
different input modalities
different segmental consensus functions
different very deep ConvNet architectures
Component analysis
Comparison with the state of the art
Visualization of ConvNet models for action recognition using DeepDraw