
Accelerating Deep Neural Networks with Spatial Bottleneck Modules

2018-09-17 20:36

 

Core idea: decompose a convolution into two stages: first reduce the spatial resolution of the feature map, then restore it to the desired size. This lowers the sampling density in the spatial domain, an approach that is both independent of and complementary to channel-domain network acceleration methods; by choosing different sampling rates, one can trade off recognition accuracy against model complexity. As a basic building block, the spatial bottleneck can replace any single convolution layer, or a combination of two convolution layers, and its effectiveness is verified in applications. For deep residual networks, the spatial bottleneck achieves 2× and 4× speedups on regular residual blocks and channel-bottleneck residual blocks, respectively, while maintaining high accuracy on low-resolution image recognition and even improving accuracy on high-resolution images.

 

Input: feature map of size W1 × H1 × D1

Filter: D2 convolution kernels of size S × S × D1

Output: feature map of size W2 × H2 × D2

FLOPs: W2 · H2 · D1 · D2 · S²
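
As a rough sanity check of this cost formula, together with the 2·W2·H2·D1·D2·S²/K² cost of the spatial bottleneck module quoted further below, here is a small illustrative calculation; the concrete sizes are hypothetical and chosen only for this example:

```python
# Illustrative numbers only: a 3x3 convolution on a 32x32, 64-channel feature map.
w2, h2, d1, d2, s, k = 32, 32, 64, 64, 3, 2

conv_flops = w2 * h2 * d1 * d2 * s * s               # standard convolution
sb_flops = 2 * w2 * h2 * d1 * d2 * s * s // (k * k)   # spatial bottleneck: two stride-K layers

print(conv_flops)             # 37748736  (~37.7M)
print(sb_flops)               # 18874368  (~18.9M)
print(conv_flops / sb_flops)  # 2.0  -> a 2x reduction for K = 2
```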

The cost thus depends on three factors: the kernel size (S²), the number of connections in the channel domain (D1 · D2), and the resolution of the output feature map (W2 · H2). The proposed spatial bottleneck module aims to reduce the W2 · H2 factor; it is independent of, and therefore complementary to, acceleration methods that reduce the other two factors.

Spatial bottleneck module:

The core idea of decreasing W2·H2 is to first reduce the spatial resolution of the feature map, and then restore it to the desired size. In practice, Y = f1(X) is implemented as a stride-K convolution, and Z = f2(Y) as a stride-K deconvolution. The width and height of Y are 1/K of those of X and Z, so each of these operations requires 1/K² of the computational cost of the original convolution. These operations are denoted conv^{S×S,K}_{(a,b)} and deconv^{S×S,K}_{(a,b)}, respectively, where (a, b) is a pair of integers satisfying 0 ≤ a, b < K that indicates the starting index of the convolution. The entire spatial bottleneck module, denoted SB^{S×S,K}_{(a,b)}, requires roughly 2·W2·H2·D1·D2·S²/K² FLOPs. Note that, although the original convolution Z = f(X) is decomposed into two layers, Z = f2 ∘ f1(X), no nonlinearity is inserted between f1(·) and f2(·) when it replaces a normal convolution layer, so the spatial bottleneck remains a linear operation and the network depth is effectively unchanged.
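
To make the module concrete, here is a minimal PyTorch sketch, not the authors' code: the class name, padding, and output_padding choices are my assumptions. It pairs a stride-K convolution with a stride-K deconvolution and keeps no nonlinearity between them:

```python
import torch
import torch.nn as nn

class SpatialBottleneck(nn.Module):
    """Sketch of a spatial bottleneck block: a stride-K convolution followed by
    a stride-K deconvolution, with no nonlinearity in between, so it can drop in
    for a single S x S convolution layer."""
    def __init__(self, d1, d2, s=3, k=2):
        super().__init__()
        # Stride-K convolution: reduces width and height by a factor of K.
        self.reduce = nn.Conv2d(d1, d2, kernel_size=s, stride=k, padding=s // 2)
        # Stride-K deconvolution: restores the original spatial resolution.
        self.restore = nn.ConvTranspose2d(d2, d2, kernel_size=s, stride=k,
                                          padding=s // 2, output_padding=k - 1)

    def forward(self, x):
        # No activation between the two layers, so the module stays linear,
        # mirroring how the paper replaces a single convolution.
        return self.restore(self.reduce(x))

if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    y = SpatialBottleneck(64, 64, s=3, k=2)(x)
    print(y.shape)  # torch.Size([1, 64, 32, 32]) -- spatial size is restored
```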

The 3×3 convolutions in ResNet are replaced with spatial bottleneck modules, as sketched in the example below.
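
A hypothetical example of how such a replacement could look inside a ResNet basic block; this reuses the SpatialBottleneck class sketched above and assumes only the second 3×3 convolution is swapped, which may differ from the paper's exact configuration:

```python
import torch.nn as nn

class SBBasicBlock(nn.Module):
    """ResNet-style basic block in which the second 3x3 convolution is replaced
    by the SpatialBottleneck module sketched above (hypothetical placement)."""
    def __init__(self, channels, k=2):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.sb = SpatialBottleneck(channels, channels, s=3, k=k)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.sb(out))
        # Identity shortcut, as in a standard ResNet basic block.
        return self.relu(out + x)
```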

Comparative experiments are conducted on CIFAR-10 and CIFAR-100.

In the comparisons of error rate and parameter count, the spatial-bottleneck variants come out better on both.

 
