Accelerating Deep Neural Networks with Spatial Bottleneck Modules
Core idea: decompose a convolution into two stages — first reduce the spatial resolution, then restore it to the desired size. This lowers the sampling density in the spatial domain, and is both independent of and complementary to channel-domain network-acceleration methods; by varying the sampling rate, one can trade off recognition accuracy against model complexity. As a basic building block, a spatial bottleneck can replace any single convolutional layer, or a combination of two convolutional layers, and the paper verifies its effectiveness in applications. For deep residual networks, spatial bottleneck achieves 2× and 4× speedups on regular residual blocks and channel-bottleneck residual blocks respectively, maintaining high accuracy on low-resolution image recognition and even improving accuracy on high-resolution images.
Input: W1 × H1 × D1
Filter: D2 kernels of size S × S × D1
Output: W2 × H2 × D2
FLOPs: W2 · H2 · D1 · D2 · S²
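The cost accounting above can be checked with a few lines of arithmetic (the example sizes are hypothetical, chosen only for illustration):

```python
def conv_flops(w2, h2, d1, d2, s):
    """FLOPs of a standard convolution: each of the W2*H2*D2 output
    values sums over an S x S x D1 window of the input."""
    return w2 * h2 * d1 * d2 * s * s

# e.g. a 3x3 convolution producing a 56x56 map with 64 output
# channels from 64 input channels
print(conv_flops(56, 56, 64, 64, 3))  # -> 115605504
```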
Three factors determine this cost: the kernel size (S²), the number of channel-domain connections (D1·D2), and the resolution of the output feature map (W2·H2). The proposed spatial bottleneck module aims to reduce the W2·H2 factor; it is independent of, and therefore complementary to, acceleration methods that reduce the channel-domain factor D1·D2.

Spatial bottleneck module:
The core idea of decreasing W2H2 is to first reduce the spatial resolution of the feature map, and then restore it to the desired size. In practice, we implement Y = f1(X) as a stride-K convolution, and Z = f2(Y) as a stride-K deconvolution. The width and height of Y are 1/K of those of X and Z, and so both of these operations need 1/K² of the computational cost of the original convolution. We denote these operations by conv_{(a,b)}^{S×S,K} and deconv_{(a,b)}^{S×S,K}, respectively, where (a, b) is a pair of integers satisfying 0 ≤ a, b < K and indicating the starting index of convolution. The entire spatial bottleneck module, denoted by SB_{(a,b)}^{S×S,K}, requires roughly 2·W2·H2·D1·D2·S²/K² FLOPs. Note that, although the original convolution Z = f(X) is decomposed into two layers, namely Z = f2 ∘ f1(X), as we do not insert nonlinearity between f1(·) and f2(·) when it is used to replace a normal convolution layer, spatial bottleneck is still a linear operation and thus the network depth remains unchanged.
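A back-of-the-envelope check of the 2·W2H2D1D2S²/K² figure (pure-Python arithmetic; the function names are my own, not from the paper):

```python
def conv_flops(w2, h2, d1, d2, s):
    """Cost of the original convolution Z = f(X)."""
    return w2 * h2 * d1 * d2 * s * s

def spatial_bottleneck_flops(w2, h2, d1, d2, s, k):
    """Cost of the SB module: a stride-K conv plus a stride-K deconv,
    each running at 1/K^2 of the original spatial resolution."""
    return 2 * w2 * h2 * d1 * d2 * s * s // (k * k)

# With K = 2 the module costs 2/K^2 = 1/2 of the original convolution,
# i.e. a 2x speedup for that layer.
orig = conv_flops(56, 56, 64, 64, 3)
sb = spatial_bottleneck_flops(56, 56, 64, 64, 3, k=2)
print(orig // sb)  # -> 2
```

Larger K gives a larger speedup (K = 4 yields K²/2 = 8×), at the price of a coarser spatial sampling and hence a potential accuracy drop — this is the accuracy/complexity trade-off mentioned above.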
Experiments: replace the 3×3 convolutions in ResNet and run comparison experiments on CIFAR-10 and CIFAR-100; both the error rate and the parameter count come out better.