Paper notes: Going Deeper with Convolutions
2016-06-17 12:34
1. Motivation of Inception: making a CNN deeper and wider
comes with two drawbacks: it demands more training examples and more computational resources. ==> need to move from fully connected to sparsely connected architectures.
2. Solutions:
i. Use 1 x 1 convolutions before the 3 x 3 and 5 x 5 convolutions ==> remove redundant information and reduce dimensionality ('keep the network sparse at most places and compress the signals only whenever they have to be aggregated').
ii. Introduce auxiliary loss functions at intermediate hidden layers during training:
a. the model converges faster (a claim overturned later?)
b. weighting: loss of final layer : loss of each auxiliary classifier = 1 : 0.3
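The two tricks above can be checked with simple arithmetic. Below is a minimal sketch: the parameter count of a direct 5 x 5 convolution versus a 1 x 1 bottleneck followed by the 5 x 5, using hypothetical channel sizes (192 in, 32 out, 16 reduction channels, roughly in the range of the paper's early Inception modules), plus the 1 : 0.3 loss weighting with made-up loss values.

```python
def conv_params(in_ch, out_ch, k):
    """Weight count of a k x k convolution, ignoring biases."""
    return in_ch * out_ch * k * k

in_ch, out_ch, reduce_ch = 192, 32, 16  # hypothetical channel sizes

# Direct 5 x 5 convolution on all input channels.
direct = conv_params(in_ch, out_ch, 5)

# 1 x 1 reduction first, then 5 x 5 on the compressed signal.
bottleneck = conv_params(in_ch, reduce_ch, 1) + conv_params(reduce_ch, out_ch, 5)

print(direct)      # 153600
print(bottleneck)  # 15872  -> roughly 10x fewer parameters

# Training-time loss: auxiliary classifier losses are added with weight 0.3.
main_loss, aux1, aux2 = 2.0, 2.5, 2.4  # hypothetical loss values
total_loss = main_loss + 0.3 * (aux1 + aux2)
print(total_loss)  # 3.47
```

At test time the auxiliary classifiers are discarded; only the final classifier's output is used.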