READING NOTE: Two-Stream Convolutional Networks for Action Recognition in Videos
2015-11-06 22:02
TITLE: Two-Stream Convolutional Networks for Action Recognition in Videos
AUTHOR: Simonyan, Karen and Zisserman, Andrew
FROM: NIPS2014
CONTRIBUTIONS
A two-stream ConvNet that combines spatial and temporal networks. A ConvNet trained on multi-frame dense optical flow achieves good performance despite the small training dataset.
A multi-task training procedure improves performance on different datasets.
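A minimal sketch of the multi-task idea: two softmax classification heads sit on top of one shared representation, and a sample contributes a loss only through its own dataset's head. All sizes and weights here are illustrative, not the paper's.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

# hypothetical sizes: shared feature of dim 8, two datasets with 5 and 3 classes
rng = np.random.default_rng(0)
feat = rng.standard_normal(8)                 # shared representation (last hidden layer)
W_a = rng.standard_normal((5, 8))             # head for dataset A
W_b = rng.standard_normal((3, 8))             # head for dataset B

# each head produces its own class distribution over its own label set
p_a = softmax(W_a @ feat)
p_b = softmax(W_b @ feat)

# a training sample from dataset A (true class 2, say) only back-propagates
# through head A; head B is untouched for this sample
loss_a = -np.log(p_a[2])
```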
METHOD
Two-stream convolutional network architecture:
1. Spatial stream ConvNet: takes a still frame as input and performs action recognition on this single frame.
2. Temporal stream ConvNet: takes a 2L-channel stack of optical flow (or trajectory) fields corresponding to the still frame as input and performs action recognition on this multi-channel input.
3. The outputs of the two streams are concatenated into a feature vector used to train an SVM classifier that fuses them.
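The temporal-stream input above can be sketched as follows: L consecutive optical-flow fields, each holding an (dx, dy) displacement per pixel, are stacked into a single 2L-channel array. The sizes are illustrative.

```python
import numpy as np

def stack_flow(flows):
    """Stack L consecutive optical-flow fields into one 2L-channel input.

    flows: list of L arrays of shape (H, W, 2), each holding the (dx, dy)
    displacement of every pixel between two consecutive frames.
    Returns an array of shape (H, W, 2L) with x/y components interleaved.
    """
    return np.concatenate(flows, axis=-1)

# toy example: L = 10 flow fields at 224x224 resolution
L = 10
flows = [np.random.randn(224, 224, 2).astype(np.float32) for _ in range(L)]
stack = stack_flow(flows)  # shape (224, 224, 20)
```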
SOME DETAILS
Mean flow subtraction is utilized to eliminate displacements caused by camera movement.
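A rough approximation of global camera motion is the mean displacement vector of each flow channel; subtracting it keeps only relative motion. A minimal sketch (the constant offset added below stands in for a camera pan):

```python
import numpy as np

def subtract_mean_flow(stack):
    """Subtract each channel's mean displacement from a 2L-channel flow stack.

    Removing the per-channel mean compensates for a global translation of the
    camera, which would otherwise dominate the flow field.
    """
    mean = stack.mean(axis=(0, 1), keepdims=True)  # one scalar mean per channel
    return stack - mean

# toy stack with a constant "camera pan" of 5 px added to every channel
stack = np.random.randn(32, 32, 20).astype(np.float32) + 5.0
compensated = subtract_mean_flow(stack)
```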
At test time, 25 frames (time points) are sampled and their corresponding 2L-channel flow stacks are fed to the network. In addition, 5 patches and their horizontal flips are extracted in the spatial domain, and the class scores are averaged over all samples.
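The spatial sampling above (5 patches plus their flips, i.e. 10 inputs per frame) can be sketched as a standard ten-crop: the four corners and the center, each with its horizontal mirror. Frame and crop sizes are illustrative.

```python
import numpy as np

def ten_crop(frame, size=224):
    """Return 10 patches from a frame: 4 corners + center, plus their flips."""
    H, W, _ = frame.shape
    s = size
    offsets = [(0, 0), (0, W - s), (H - s, 0), (H - s, W - s),
               ((H - s) // 2, (W - s) // 2)]          # 4 corners + center
    crops = [frame[y:y + s, x:x + s] for y, x in offsets]
    crops += [c[:, ::-1] for c in crops]              # horizontal flips
    return crops

# toy frame larger than the crop size
frame = np.random.rand(256, 340, 3).astype(np.float32)
patches = ten_crop(frame)  # 10 patches of shape (224, 224, 3)
```

At evaluation the network's class scores would be averaged over these 10 patches (and over the 25 sampled frames).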
ADVANTAGES
Mimics the two-pathway structure of the human visual cortex (ventral stream for appearance, dorsal stream for motion). Competitive performance with state-of-the-art representations despite the small training dataset.
A CNN with learned convolutional filters can generalize hand-crafted features.
DISADVANTAGES
Cannot localize actions in either the spatial or the temporal domain.