
【论文翻译】Deep Residual Learning for Image Recognition

2020-08-18 12:56


【论文题目】Deep Residual Learning for Image Recognition

【翻译人】

Deep Residual Learning for Image Recognition
[译]基于深度残差学习的图像识别
2016 IEEE Conference on Computer Vision and Pattern Recognition
Kaiming He Xiangyu Zhang Shaoqing Ren Jian Sun
Microsoft Research
{kahe, v-xiangz, v-shren, jiansun}@microsoft.com

Abstract

Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.

The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

摘要

更深的神经网络更难训练。我们提出了一种残差学习框架,以简化对比以往所用网络深得多的网络的训练。我们明确地将网络层重构为参照层输入学习残差函数,而不是学习无参照的函数。我们提供了详实的经验证据,表明这些残差网络更容易优化,并且能够从显著增加的深度中获得准确率提升。在ImageNet数据集上,我们评估了深达152层的残差网络——其深度是VGG网络[40]的8倍,但复杂度仍然更低。这些残差网络的集成模型在ImageNet测试集上取得了3.57%的错误率,该结果获得了2015年ILSVRC分类任务的第一名。此外,我们还给出了在CIFAR-10上对100层和1000层网络的分析。
  表示深度对于许多视觉识别任务至关重要。仅凭借极深的表示,我们就在COCO物体检测数据集上获得了28%的相对提升。深度残差网络是我们提交给ILSVRC和COCO 2015比赛的基础,我们还在ImageNet检测、ImageNet定位、COCO检测和COCO分割任务上获得了第一名。

1.Introduction

​ Deep convolutional neural networks [22, 21] have led to a series of breakthroughs for image classification [21, 49, 39]. Deep networks naturally integrate low/mid/high level features [49] and classifiers in an end-to-end multilayer fashion, and the “levels” of features can be enriched by the number of stacked layers (depth). Recent evidence [40, 43] reveals that network depth is of crucial importance, and the leading results [40, 43, 12, 16] on the challenging ImageNet dataset [35] all exploit “very deep” [40] models, with a depth of sixteen [40] to thirty [16]. Many other nontrivial visual recognition tasks [7, 11, 6, 32, 27] have also greatly benefited from very deep models.

深度卷积神经网络[22, 21]引发了图像分类领域的一系列突破[21, 49, 39]。深度网络以端到端的多层方式自然地将低/中/高级特征[49]与分类器集成在一起,并且特征的"级别"可以通过堆叠层的数量(深度)来丰富。最近的证据[40, 43]表明网络深度至关重要:在具有挑战性的ImageNet数据集[35]上领先的结果[40, 43, 12, 16]都采用了"非常深"[40]的模型,其深度从16层[40]到30层[16]不等。许多其他重要的视觉识别任务[7, 11, 6, 32, 27]也大大受益于非常深的模型。

​ Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients [14, 1, 8], which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization [23, 8, 36, 12] and intermediate normalization layers [16], which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation [22].

在深度重要性的驱动下,一个问题随之而来:学习更好的网络是否就像堆叠更多的层那样容易?回答这个问题的一个障碍是臭名昭著的梯度消失/爆炸问题[14, 1, 8],它从一开始就阻碍收敛。然而,这个问题已经在很大程度上被归一化初始化[23, 8, 36, 12]和中间归一化层[16]所解决,它们使得数十层的网络能够在带反向传播[22]的随机梯度下降(SGD)下开始收敛。

​ When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [10, 41] and thoroughly verified by our experiments. Fig. 1 shows a typical example.

​ 当更深层次的网络能够开始收敛时,一个退化问题也暴露出来:随着网络深度的增加,精确度提升将达到饱和(这可能并不奇怪),然后迅速下降。出乎意料的是,这种退化并不是由过拟合引起的,在一个合适的深度模型上增加更多的层次会导致更高的训练误差,这在[10,41]中有报道而且我们的实验也充分验证了这一点。图1给出了一个典型示例。

​ The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time).

(训练准确率的)退化表明并非所有系统都同样容易优化。考虑一个较浅的结构,以及在其上增加更多层而得到的较深的对应结构。对于较深的模型存在一种构造解:新增的层采用恒等映射(identity mapping),其余各层则直接从已学习好的较浅模型中复制。这种构造解的存在表明,较深模型产生的训练误差不应高于其较浅的对应模型。但实验表明,我们手头现有的求解器无法找到与这种构造解相当或更好的解(或者无法在可行的时间内找到)。

In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as $H(x)$, we let the stacked nonlinear layers fit another mapping of $F(x) := H(x) - x$. The original mapping is recast into $F(x) + x$. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.

在本文中,我们通过引入深度残差学习框架来解决退化问题。我们明确地让这些堆叠的网络层去拟合一个残差映射,而不是寄希望于每几个堆叠层直接拟合所期望的潜在映射。形式上,把所期望的潜在映射记为 $H(x)$,我们让堆叠的非线性层去拟合另一个映射 $F(x) := H(x) - x$,于是原始映射就改写为 $F(x) + x$。我们假设优化残差映射要比优化原始的、无参照的映射更容易。极端情况下,如果恒等映射是最优的,那么将残差推向零要比用一堆非线性层去拟合恒等映射容易得多。

The formulation of F(x)+x can be realized by feedforward neural networks with “shortcut connections” (Fig. 2). Shortcut connections [2, 33, 48] are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe [19]) without modifying the solvers.

公式 $F(x)+x$ 可以通过带有"捷径连接(shortcut connections)"的前馈神经网络来实现(见图2)。捷径连接[2, 33, 48]是指跳过一个或多个层的连接。在我们的情形中,捷径连接只执行恒等映射,其输出被加到堆叠层的输出上(见图2)。恒等捷径连接既不增加额外的参数,也不增加计算复杂度。整个网络仍然可以通过带反向传播的SGD进行端到端训练,并且无需修改求解器就可以用常用库(例如Caffe[19])轻松实现。
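
下面给出一个残差构造块的最小示意代码,用来直观说明"堆叠层的输出与恒等捷径逐元素相加,再经过非线性"的结构。原文基于Caffe实现;这里假设使用PyTorch,类名和层配置均为示意性假设,并非论文的官方实现:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicResidualBlock(nn.Module):
    """最小的残差构造块示意:两个3x3卷积层拟合残差函数F(x),再与恒等捷径相加。"""

    def __init__(self, channels):
        super().__init__()
        # 两个3x3卷积,输入输出通道数相同,因此可以直接使用恒等捷径
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        out = F.relu(self.conv1(x))   # 第一个非线性
        out = self.conv2(out)         # 残差函数 F(x)
        out = out + x                 # 恒等捷径:逐元素相加,不引入额外参数
        return F.relu(out)            # 相加之后的第二个非线性

# 用法示意:输入与输出形状完全一致
block = BasicResidualBlock(channels=64)
y = block(torch.randn(1, 64, 56, 56))   # 输出形状仍为 (1, 64, 56, 56)
```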

​ We present comprehensive experiments on ImageNet [35] to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.

我们在ImageNet[35]上进行了详细的实验,演示了退化问题,并评估了我们的方法,结果表明:1)我们提出的极深残差网络更易于优化,但是对应的平凡网络(即仅仅是由层次堆叠而成的网络)的训练误差随着网络层次的增加而变大。2)我们所提出的深度残差网络更容易在增加深度时获得精度的提高,明显优于之前的网络结构所得到的结果。

​ Similar phenomena are also shown on the CIFAR-10 set [20], suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers.

在CIFAR-10数据集[20]上也出现了类似的现象,这表明优化的困难以及我们方法的效果并不仅限于某个特定的数据集。我们在该数据集上成功训练了超过100层的模型,并探索了超过1000层的模型。

​ On the ImageNet classification dataset [35], we obtain excellent results by extremely deep residual nets. Our 152 layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets [40]. Our ensemble has 3.57% top-5 error on the ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems.

在ImageNet分类数据集[35]上,我们利用极深的残差网络获得了出色的结果。我们的152层残差网络是迄今为止在ImageNet上出现过的最深的网络,同时其复杂度仍低于VGG网络[40]。我们的集成模型在ImageNet测试集上取得了3.57%的top-5错误率,并获得了2015年ILSVRC分类比赛的第一名。极深的表示在其他识别任务上也有出色的泛化性能,使我们在2015年ILSVRC&COCO比赛中进一步赢得了ImageNet检测、ImageNet定位、COCO检测和COCO分割的第一名。这些有力的证据表明残差学习原则是通用的,我们期望它也能应用于其他视觉和非视觉问题。

2.Related Work

相关工作

Residual Representations. In image recognition, VLAD [18] is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector [30] can be formulated as a probabilistic version [18] of VLAD. Both of them are powerful shallow representations for image retrieval and classification [4, 47]. For vector quantization, encoding residual vectors [17] is shown to be more effective than encoding original vectors.

残差表示。在图像识别中,VLAD[18]是对基于词典的残差向量进行编码来表示的,Fisher向量[30]可以被认为是VLAD的概率版本[18]。它们都是非常好的用于图像检索和分类的浅层表示方法[4,47]。对于向量量化,对残差向量的编码[17]要比编码原始向量更加有效。

Fisher Vector Fisher 向量

formulated vt. 规划;用公式表示;明确地表达(formulate 的过去式和过去分词)

probabilistic version 概率版本

image retrieval and classification 图像检索与分类

vector quantization 向量量化

​ In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method [3] reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning [44, 45], which relies on variables that represent residual vectors between two scales. It has been shown [3, 44, 45] that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization.

在低级视觉和计算机图形学中,为了求解偏微分方程(Partial Differential Equations, PDEs),广泛使用的多重网格(Multigrid)法[3]将系统重构为多个尺度上的子问题,每个子问题负责求解较粗尺度与较细尺度之间的残差解。多重网格法的一种替代方案是分层基预处理(hierarchical basis preconditioning)[44, 45],它依赖于表示两个尺度之间残差向量的变量。已有研究[3, 44, 45]表明,这些求解器比不考虑解的残差性质的标准求解器收敛得快得多。这些方法表明,良好的重构或预处理可以简化优化。

Partial Differential Equations (PDEs) 偏微分方程

Multigrid method 多重网格法

coarser and a finer scale 较粗与较细的尺度

hierarchical(分层的) basis preconditioning

​ Shortcut Connections. Practices and theories that lead to shortcut connections [2, 33, 48] have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output [33, 48]. In [43, 24], a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of [38, 37, 31, 46] propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In [43], an “inception” layer is composed of a shortcut branch and a few deeper branches.

multi-layer perceptrons (MLPs) 多层感知机

捷径连接。导致捷径连接[2, 33, 48]的实践和理论已经被研究了很长时间。训练多层感知机(MLPs)的一个早期实践是在网络输入和输出之间添加一个线性层[33, 48]。在[43, 24]中,少量中间层被直接连接到辅助分类器上,以解决梯度消失/爆炸问题。文献[38, 37, 31, 46]提出了借助捷径连接实现的对层响应、梯度和传播误差进行中心化的方法。在文献[43]中,一个"inception"层由一个捷径分支和若干较深的分支组成。

Shortcut Connections 捷径连接

inception 起初;

​ Concurrent with our work, “highway networks” [41, 42] present shortcut connections with gating functions [15]. These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is “closed” (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers).

与我们的工作同期,"高速公路网络(highway networks)"[41, 42]提出了带有门控函数[15]的捷径连接。与我们无参数的恒等捷径不同,这些门依赖于数据并带有参数。当门控捷径"关闭"(趋于零)时,高速公路网络中的层表示的是非残差函数。相反,我们的公式始终学习残差函数;我们的恒等捷径从不关闭,所有信息总是被传递下去,同时还要学习额外的残差函数。此外,高速公路网络并没有展示出随着深度的极大增加(例如超过100层)而带来的准确率提升。

formulation 构想,规划;[数]公式化;简洁陈述

3.Deep Residual Learning

深度残差学习

3.1.Residual Learning

残差学习

Let us consider H(x) as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with x denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions, then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., H(x) − x (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate H(x), we explicitly let these layers approximate a residual function F(x) := H(x) − x. The original function thus becomes F(x)+x. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different.

让我们把 $H(x)$ 看作由若干堆叠层(不一定是整个网络)来拟合的潜在映射,其中 $x$ 表示这些层中第一层的输入。如果假设多个非线性层可以渐近逼近复杂函数,那么这等价于假设它们可以渐近逼近残差函数,即 $H(x) - x$(这里假设输入和输出维度相同)。因此,我们显式地让这些层逼近残差函数 $F(x) := H(x) - x$,而不是期望堆叠层去逼近 $H(x)$,于是原函数就变为 $F(x) + x$。尽管两种形式都应该能够渐近逼近所期望的函数(如假设的那样),但学习的难易程度可能有所不同。

multiple nonlinear layers 多层非线性层次

asymptotically approximate 逐渐逼近

dimensions 维度

​ This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.

​ 这种重构的动因是由于退化问题中所表现出的反直觉现象(图1(左图))。正如我们在引言里对这个问题所作出的说明那样,如果能够以恒等映射的方式来构建所增加的层,一个加深模型的训练误差就不会大于它所基于的较浅模型。退化问题表明通过多个非线性网络层对恒等映射作逼近可能会存在求解上的困难。通过残差学习进行重构,当恒等映射达到最优,则求解可能仅仅是使多个非线性层的权重趋于零来接近恒等映射。

​ In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning.
  在实际情形中,恒等映射不太可能恰好是最优的,但我们的重构可能有助于对问题进行预处理。如果最优函数更接近恒等映射而不是零映射,那么求解器参照恒等映射去寻找扰动,要比把它当作一个全新的函数来学习更容易。我们通过实验(图7)表明,学习到的残差函数通常具有较小的响应,这说明恒等映射提供了合理的预处理。

perturbations 扰动

3.2. Identity Mapping by Shortcuts

利用捷径作恒等映射

​ We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as:
$$y = F(x, \{W_i\}) + x \qquad (1)$$
Here x and y are the input and output vectors of the layers considered. The function $F(x, \{W_i\})$ represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, $F = W_2\,\sigma(W_1 x)$, in which $\sigma$ denotes ReLU [29] and the biases are omitted for simplifying notations. The operation $F + x$ is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., $\sigma(y)$, see Fig. 2).

我们对每几个堆叠层采用残差学习。一个构造块如图2所示。形式上,本文将构造块定义为:
$$y = F(x, \{W_i\}) + x \qquad (1)$$

其中,x和y分别表示所考虑层的输入向量和输出向量。函数 $F(x, \{W_i\})$ 表示待学习的残差映射。以图2中含有两层的例子来说,$F = W_2\,\sigma(W_1 x)$,其中 $\sigma$ 表示ReLU[29],为了简化符号省略了偏置项。运算 $F + x$ 通过捷径连接和逐元素相加来完成。在相加之后我们再采用第二个非线性(即 $\sigma(y)$,见图2)。

stacked layers 堆叠层

simplifying notations 简化符号

Formally 正式地;形式上

​ The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).

​ 公式(1)中的捷径连接并没有引入新的参数或计算复杂度。这不仅便于应用,而且对我们比较普通网络和残差网络也尤为重要,我们可以公平地对普通网络和残差网络进行比较(除了几乎可以忽略不计的元素加法):即在参数数量、深度、宽度和计算成本完全相同的情况下。

simultaneously 同时地

The dimensions of x and F must be equal in Eqn.(1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection $W_s$ by the shortcut connections to match the dimensions:
$$y = F(x, \{W_i\}) + W_s x \qquad (2)$$
We can also use a square matrix $W_s$ in Eqn.(1). But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus $W_s$ is only used when matching dimensions.

公式(1)中x和F的维度必须相同。如果不是这样(例如当输入/输出的通道数发生改变时),我们可以在捷径连接上作一个线性投影 $W_s$ 来匹配维度,即:
$$y = F(x, \{W_i\}) + W_s x \qquad (2)$$

我们也可以在公式(1)中使用一个方阵 $W_s$。然而我们将通过实验表明,恒等映射已足以解决退化问题而且更经济,因此 $W_s$ 仅在匹配维度时使用。

dimensions 维度

square matrix 方阵
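
下面是公式(1)与公式(2)的一个示意实现:当输入/输出维度一致时使用无参数的恒等捷径,维度不一致时用1×1卷积充当线性投影 $W_s$。仍然假设使用PyTorch,具体的类名与参数均为示意:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """残差块示意:维度匹配时走公式(1)的恒等捷径,否则走公式(2)的投影捷径 Ws。"""

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, stride=stride, padding=1, bias=False)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, padding=1, bias=False)
        if stride != 1 or in_channels != out_channels:
            # 公式(2):用1x1卷积实现线性投影 Ws,仅用于匹配维度
            self.shortcut = nn.Conv2d(in_channels, out_channels, 1, stride=stride, bias=False)
        else:
            # 公式(1):恒等捷径,不引入参数
            self.shortcut = nn.Identity()

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.conv2(out)
        return F.relu(out + self.shortcut(x))   # y = F(x, {Wi}) + Ws·x

# 用法示意:输入64通道、输出128通道且特征图减半时,自动启用投影捷径
block = ResidualBlock(64, 128, stride=2)
y = block(torch.randn(1, 64, 56, 56))   # 输出形状为 (1, 128, 28, 28)
```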

​ The form of the residual function F is flexible. Experiments in this paper involve a function F that has two or three layers (Fig. 5), while more layers are possible. But if F has only a single layer, Eqn.(1) is similar to a linear layer: y = W1x+x, for which we have not observed advantages.

残差函数F的形式是灵活的。本文的实验涉及的函数F含有两层或三层(图5),更多的层也是可以的。但如果F只有一层,公式(1)就类似于一个线性层:y = W1x + x,对此我们没有观察到优势。

​ We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function F(x,{Wi}) can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel.

我们还注意到,尽管上述符号为简单起见是针对全连接层的,但它们同样适用于卷积层。此时,函数F(x, {Wi})可以表示多个卷积层,而逐元素相加则是在两个特征图上逐通道执行的。

convolutional layers 卷积层

element-wise addition 逐元素加法

plain/residual nets 平凡/残差网络

3.3. Network Architectures

网络结构

​ We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows.

我们对多种普通/残差网络进行了测试,并观察到了一致的现象。为了提供可供讨论的实例,我们在下面描述两个用于ImageNet的模型。

Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets [40] (Fig. 3, left). The convolutional layers mostly have 3×3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle).
普通网络。我们的普通网络基线(图3,中)主要受VGG网络[40](图3,左)设计理念的启发。卷积层大多采用3×3的过滤器,并遵循两条简单的设计规则:(i)对于相同的输出特征图尺寸,各层使用相同数量的过滤器;(ii)若特征图尺寸减半,则过滤器数量加倍,以保持每层的时间复杂度不变。我们直接用步长为2的卷积层进行降采样。网络以一个全局平均池化层和一个带softmax的1000路全连接层结束。图3(中)中含权重的层总数为34。

plain baselines 普通网络基线

downsampling 降采样

softmax激活函数
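
下面的示意代码按照上述两条设计规则构造普通网络中的一个"阶段":同一特征图尺寸使用相同数量的3×3过滤器,进入新阶段时用步长为2的卷积降采样并把过滤器数量加倍。函数名与具体层数均为示意(层数参考34层网络的配置),并非论文的官方实现:

```python
import torch.nn as nn

def make_plain_stage(in_channels, out_channels, num_layers, downsample):
    """构造普通网络的一个阶段:num_layers 个 3x3 卷积层(含BN与ReLU)。"""
    layers = []
    channels = in_channels
    for i in range(num_layers):
        # 仅阶段的第一层用步长2降采样(若需要),其余层保持特征图尺寸不变
        stride = 2 if (downsample and i == 0) else 1
        layers += [
            nn.Conv2d(channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        ]
        channels = out_channels
    return nn.Sequential(*layers)

# 示意:特征图从56x56减半到28x28时,过滤器数量从64加倍到128(对应34层网络的第二阶段)
stage2 = make_plain_stage(64, 128, num_layers=8, downsample=True)
```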

​ It is worth noticing that our model has fewer filters and lower complexity than VGG nets [40] (Fig. 3, left). Our 34 layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18% of VGG-19 (19.6 billion FLOPs).

值得注意的是,我们的模型比VGG网络[40](图3,左)拥有更少的过滤器和更低的复杂度。我们34层基线网络的计算量为36亿FLOPs(乘加运算),仅约为VGG-19(196亿FLOPs)的18%。

图3 针对ImageNet的网络架构样例
左:作为对比的VGG-19模型[40](196亿FLOPs)
中:含有权重参数的层数为34的平凡网络(36亿FLOPs)
右:含有权重参数的层数为34的残差网络(36亿FLOPs)
点画线标记的捷径作了升维操作。表1展示了更多细节和其他变体

Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.

残差网络。基于上述平凡网络,我们插入捷径连接(图3,右),将网络转化为对应的残差版本。当输入和输出维度相同时,可以直接使用恒等捷径(公式(1),图3中的实线捷径)。当维度增加时(图3中的虚线捷径),我们考虑两种选项:(A)捷径仍然执行恒等映射,对增加的维度用零填充,该选项不引入额外参数;(B)使用公式(2)中的投影捷径来匹配维度(通过1×1卷积实现)。对这两种选项,当捷径跨越两种尺寸的特征图时,均以步长2执行。
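
选项(A)的"零填充恒等捷径"可以按下面的思路示意实现:跨越两种特征图尺寸时以步长2取子采样,升维部分直接补零,因此不引入任何参数。这只是基于正文描述的一个示意,具体写法(如用切片做子采样)属于实现上的假设:

```python
import torch
import torch.nn.functional as F

def shortcut_option_a(x, out_channels):
    """选项(A)示意:步长为2的恒等捷径 + 通道维补零,不引入额外参数。"""
    x = x[:, :, ::2, ::2]                    # 空间上每隔一个像素取样,相当于步长2
    pad_channels = out_channels - x.size(1)  # 需要补零的通道数
    # F.pad 的参数从最后一维开始:(W左, W右, H上, H下, C前, C后)
    return F.pad(x, (0, 0, 0, 0, 0, pad_channels))

# 用法示意:把64通道、56x56的特征图映射为128通道、28x28
y = shortcut_option_a(torch.randn(1, 64, 56, 56), out_channels=128)
print(y.shape)   # torch.Size([1, 128, 28, 28])
```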

3.4. Implementation

实现

Our implementation for ImageNet follows the practice in [21, 40]. The image is resized with its shorter side randomly sampled in [256, 480] for scale augmentation [40]. A 224×224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted [21]. The standard color augmentation in [21] is used. We adopt batch normalization (BN) [16] right after each convolution and before activation, following [16]. We initialize the weights as in [12] and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to 60×10⁴ iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout [13], following the practice in [16].

我们在ImageNet上的实现遵循了[21, 40]中的做法。图像的短边在[256, 480]内随机采样并据此缩放,以进行尺度增强[40]。从图像或其水平翻转中随机裁剪224×224的区域,并减去每像素均值[21],同时使用[21]中的标准色彩增强。按照[16],我们在每次卷积之后、激活之前采用批量归一化(Batch Normalization, BN)[16]。我们按照[12]初始化权重,并从头开始训练所有的平凡/残差网络。我们使用最小批量(mini-batch)大小为256的SGD。学习率从0.1开始,当错误率达到平台期时除以10,模型最多训练60×10⁴次迭代。我们使用0.0001的权重衰减和0.9的冲量。按照[16]的做法,我们没有使用dropout[13]。

224×224 crop 224×224的裁剪图像

horizontal(水平) flip(翻转) 镜像

per-pixel(像素) mean(平均) subtracted

batch(一批;一炉) normalization (BN) 批量正规化

mini-batch size 最小批量大小

weight decay of 0.0001 and a momentum of 0.9 权重衰减为0.0001,冲量为0.9
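
按照本节给出的超参数,训练配置大致可以写成下面的样子。其中 model 只是一个占位;原文在错误率进入平台期时将学习率除以10,这里用 ReduceLROnPlateau 来近似这一策略,属于实现上的假设:

```python
import torch

model = torch.nn.Linear(10, 10)   # 占位模型,实际应替换为平凡/残差网络(假设)

# SGD:最小批量256(由数据加载器决定),初始学习率0.1,冲量0.9,权重衰减0.0001
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)

# 错误率进入平台期时将学习率除以10
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1)

# 训练循环中,每次评估后用验证错误率驱动调度器:
# scheduler.step(val_error)
```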

​ In testing, for comparison studies we adopt the standard 10-crop testing [21]. For best results, we adopt the fully convolutional form as in [40, 12], and average the scores at multiple scales (images are resized such that the shorter side is in {224,256,384,480,640}).

在测试中,为了进行对比研究,我们采用标准的10-crop测试[21]。为了获得最佳结果,我们采用[40, 12]中的全卷积形式,并对多个尺度的得分取平均(图像被缩放至短边属于{224, 256, 384, 480, 640})。
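
标准的10-crop测试可以用下面的示意代码实现:取四角和中心共5个224×224裁剪及其水平翻转,对10个得分取平均。这里借助torchvision的TenCrop变换,model与image均为假设的占位:

```python
import torch
from torchvision import transforms

# 10-crop:4个角 + 中心,再加上各自的水平翻转,共10个裁剪
ten_crop = transforms.Compose([
    transforms.Resize(256),
    transforms.TenCrop(224),
    transforms.Lambda(lambda crops: torch.stack([transforms.ToTensor()(c) for c in crops])),
])

# 用法示意(model 为已训练网络,image 为一张PIL图像,均为假设):
# crops = ten_crop(image)            # 形状 (10, 3, 224, 224)
# scores = model(crops)              # 每个裁剪各得到一组类别得分
# avg_scores = scores.mean(dim=0)    # 对10个裁剪的得分取平均
```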

4.Experiments

实验

4.1. ImageNet Classification

ImageNet数据集分类

​ We evaluate our method on the ImageNet 2012 classification dataset [35] that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates.

我们在包含1000个类别的ImageNet 2012分类数据集[35]上评估了我们的方法。模型在128万张训练图像上训练,并在5万张验证图像上评估。我们还获得了由测试服务器报告的在10万张测试图像上的最终结果。我们同时评估top-1和top-5错误率。

validation(确认;批准;) images 验证图像

Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures.

平凡网络。我们首先评估了18层和34层的平凡网络。34层的平凡网络如图3(中)所示,18层的平凡网络则形式与其类似。网络架构细节参见表1。

​ The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem - the 34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one.

​ 表2的结果表明,较深的34层平凡网络的验证错误率要高于较浅的18层平凡网络的验证错误率。为了揭示原因,在图4(左)中,我们比较了它们在训练过程中的训练/验证误差。我们观察到了退化问题—尽管18层平凡网络的解空间是34层平凡网络解空间的子空间,但在整个训练过程中,34层平凡网络的训练误差较高。

表1 针对ImageNet的架构
构造块的配置见方括号中(另见图5),若干块堆叠构成网络架构。降采样由步长为2的conv3_1、conv4_1和conv5_1执行

表2 ImageNet验证集上的top-1错误率(%,10-crop测试)
此处的残差网络与其对应的平凡网络相比没有额外参数。图4展示了训练过程

图4 用ImageNet数据集进行训练
细线表示训练误差,粗线表示验证错误率
左图:18层和34层平凡网络的情况
右图:18层和34层残差网络的情况
在本图中,残差网络与其对应的平凡网络相比没有额外的参数

​ We argue that this optimization difficulty is unlikely to be caused by vanishing gradients. These plain networks are trained with BN [16], which ensures forward propagated signals to have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN. So neither forward nor backward signals vanish. In fact, the 34-layer plain net is still able to achieve competitive accuracy (Table 3), suggesting that the solver works to some extent. We conjecture that the deep plain nets may have exponentially low convergence rates, which impact the reducing of the training error3. The reason for such optimization difficulties will be studied in the future.

我们认为这种优化困难不太可能是由梯度消失引起的。这些平凡网络使用BN[16]训练,保证了前向传播的信号具有非零方差。我们还验证了在BN下反向传播的梯度表现出健康的范数。因此前向和反向信号都没有消失。事实上,34层平凡网络仍然能够达到有竞争力的准确率(表3),这说明求解器在一定程度上是有效的。我们推测深层平凡网络的收敛速度可能呈指数级降低,从而影响训练误差的下降。造成这种优化困难的原因将在未来研究。

表3 ImageNet验证集上的错误率(%,10-crop测试)
VGG-16基于我们的测试。ResNet-50/101/152采用方案B,即仅用投影来增加维度

表4 单模型在ImageNet验证集上的错误率(%)(除带有标记的结果为测试集上的)

表5 集成模型的错误率(%)
top-5错误率是由测试服务器报告的在ImageNet测试集上的结果

Residual Networks. Next we evaluate 18-layer and 34-layer residual nets (ResNets). The baseline architectures are the same as the above plain nets, except that a shortcut connection is added to each pair of 3×3 filters as in Fig. 3 (right). In the first comparison (Table 2 and Fig. 4 right), we use identity mapping for all shortcuts and zero-padding for increasing dimensions (option A). So they have no extra parameter compared to the plain counterparts.

残差网络。接下来我们评估18层和34层的残差网络(ResNets)。其基线结构与上述平凡网络相同,只是如图3(右)所示,为每对3×3过滤器添加了一个捷径连接。在第一次对比中(表2和图4右),我们对所有捷径使用恒等映射,并对增加的维度用零填充(选项A)。因此,与对应的平凡网络相比,它们没有额外的参数。

​ We have three major observations from Table 2 and Fig. 4. First, the situation is reversed with residual learning – the 34-layer ResNet is better than the 18-layer ResNet (by 2.8%). More importantly, the 34-layer ResNet exhibits considerably lower training error and is generalizable to the validation data. This indicates that the degradation problem is well addressed in this setting and we manage to obtain accuracy gains from increased depth.

从表2和图4中,我们有三个主要的观察。首先,使用残差学习后情况发生了反转——34层ResNet优于18层ResNet(top-1错误率低2.8%)。更重要的是,34层ResNet表现出明显更低的训练误差,并且能够泛化到验证数据。这表明退化问题在这种设置下得到了很好的解决,并且我们能够从增加的深度中获得准确率的提升。

​ Second, compared to its plain counterpart, the 34-layer ResNet reduces the top-1 error by 3.5% (Table 2), resulting from the successfully reduced training error (Fig. 4 right vs. left). This comparison verifies the effectiveness of residual learning on extremely deep systems

其次,与对应的平凡网络相比,34层ResNet的top-1错误率降低了3.5%(表2),这得益于训练误差的成功降低(图4右 vs. 左)。这一对比验证了残差学习在极深系统上的有效性。

​ Last, we also note that the 18-layer plain/residual nets are comparably accurate (Table 2), but the 18-layer ResNet converges faster (Fig. 4 right vs. left). When the net is “not overly deep” (18 layers here), the current SGD solver is still able to find good solutions to the plain net. In this case, the ResNet eases the optimization by providing faster convergence at the early stage.

最后,我们还注意到18层的平凡/残差网络精度相当(表2),但18层ResNet收敛得更快(图4右 vs. 左)。当网络"不太深"(这里是18层)时,现有的SGD求解器仍然能够为平凡网络找到好的解。在这种情况下,ResNet通过在早期阶段提供更快的收敛来简化优化。

Identity vs. Projection Shortcuts. We have shown that parameter-free, identity shortcuts help with training. Next we investigate projection shortcuts (Eqn.(2)). In Table 3 we compare three options: (A) zero-padding shortcuts are used for increasing dimensions, and all shortcuts are parameter-free (the same as Table 2 and Fig. 4 right); (B) projection shortcuts are used for increasing dimensions, and other shortcuts are identity; and (C) all shortcuts are projections.

恒等捷径(identity shortcuts) vs. 投影捷径(projection shortcuts)。我们已经证明了无参数的恒等捷径有助于训练。

接下来我们研究投影捷径(公式(2))。在表3中我们比较了三种选项:(A)用零填充捷径来增加维度,并且所有捷径都是无参数的(与表2和图4右相同);(B)投影捷径用于增加维度,其他捷径为恒等映射;(C)所有捷径都是投影。

​ Table 3 shows that all three options are considerably better than the plain counterpart. B is slightly better than A. We argue that this is because the zero-padded dimensions in A indeed have no residual learning. C is marginally better than B, and we attribute this to the extra parameters introduced by many (thirteen) projection shortcuts. But the small differences among A/B/C indicate that projection shortcuts are not essential for addressing the degradation problem. So we do not use option C in the rest of this paper, to reduce memory/time complexity and model sizes. Identity shortcuts are particularly important for not increasing the complexity of the bottleneck architectures that are introduced below.

表3表明这三种选项都明显优于对应的平凡网络。B略优于A,我们认为这是因为A中补零的维度实际上没有进行残差学习。C比B略好一点,我们将其归因于许多(13个)投影捷径引入的额外参数。但A/B/C之间的微小差异表明,投影捷径对于解决退化问题并不是必需的。因此,为了降低内存/时间复杂度和模型大小,我们在本文其余部分不使用选项C。恒等捷径对于不增加下面将要介绍的瓶颈架构的复杂度尤为重要。

Deeper Bottleneck Architectures. Next we describe our deeper nets for ImageNet. Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design. For each residual function F, we use a stack of 3 layers instead of 2 (Fig. 5). The three layers are 1×1, 3×3, and 1×1 convolutions, where the 1×1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3×3 layer a bottleneck with smaller input/output dimensions. Fig. 5 shows an example, where both designs have similar time complexity.

更深的瓶颈架构。接下来我们描述用于ImageNet的更深的网络。出于对所能负担的训练时间的考虑,我们将构造块修改为瓶颈式设计。对于每个残差函数F,我们使用3层而不是2层的堆叠(图5)。这三层分别是1×1、3×3和1×1的卷积,其中1×1层负责先降低再增加(恢复)维度,使3×3层成为输入/输出维度较小的瓶颈。图5给出了一个例子,两种设计具有相近的时间复杂度。

​ The parameter-free identity shortcuts are particularly important for the bottleneck architectures. If the identity shortcut in Fig. 5 (right) is replaced with projection, one can show that the time complexity and model size are doubled, as the shortcut is connected to the two high-dimensional ends. So identity shortcuts lead to more efficient models for the bottleneck designs.

​ 无参恒等快捷连接对于瓶颈架构非常重要。如果用投影代替图5(右)中的恒等捷径,可以看出,由于捷径连接到两个高维端点,时间复杂度和模型大小都增加了一倍。因此,恒等捷径使瓶颈式设计变的更有效。
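
下面给出图5(右)瓶颈构造块的示意实现:1×1卷积先降维,3×3卷积在低维"瓶颈"上计算,再由1×1卷积恢复维度,最后与无参数的恒等捷径相加。仍假设使用PyTorch,BN的位置按3.4节"卷积之后、激活之前"放置,具体类名为示意:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Bottleneck(nn.Module):
    """瓶颈构造块示意:1x1降维 -> 3x3 -> 1x1升维(恢复),配合恒等捷径。"""

    def __init__(self, channels, bottleneck_channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, bottleneck_channels, 1, bias=False)   # 1x1 降维
        self.bn1 = nn.BatchNorm2d(bottleneck_channels)
        self.conv2 = nn.Conv2d(bottleneck_channels, bottleneck_channels, 3, padding=1, bias=False)  # 3x3 瓶颈
        self.bn2 = nn.BatchNorm2d(bottleneck_channels)
        self.conv3 = nn.Conv2d(bottleneck_channels, channels, 1, bias=False)   # 1x1 恢复维度
        self.bn3 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        return F.relu(out + x)   # 恒等捷径连接两个高维端点,不增加参数

# 图5中的例子:256维的输入先降到64维,在64维上做3x3卷积,再恢复到256维
block = Bottleneck(channels=256, bottleneck_channels=64)
y = block(torch.randn(1, 256, 56, 56))
```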

50-layer ResNet: We replace each 2-layer block in the 34-layer net with this 3-layer bottleneck block, resulting in a 50-layer ResNet (Table 1). We use option B for increasing dimensions. This model has 3.8 billion FLOPs.

50层的残差网络:我们将34层网络中的每2层一组构成的块替换为这种3层的瓶颈块,得到了一个50层的残差网络(表1)。我们使用方案B来升维。这个模型的基础计算量为38亿FLOPs。

101-layer and 152-layer ResNets: We construct 101-layer and 152-layer ResNets by using more 3-layer blocks (Table 1). Remarkably, although the depth is significantly increased, the 152-layer ResNet (11.3 billion FLOPs) still has lower complexity than VGG-16/19 nets (15.3/19.6 billion FLOPs).

101层和152层的残差网络:我们使用更多的3层瓶颈块(表1)构建了101层和152层残差网络。值得注意的是,尽管深度显著增加,152层残差网络(113亿FLOPs)的复杂度仍然低于VGG-16/19网络(153/196亿FLOPs)。

​ The 50/101/152-layer ResNets are more accurate than the 34-layer ones by considerable margins (Table 3 and 4). We do not observe the degradation problem and thus enjoy significant accuracy gains from considerably increased depth. The benefits of depth are witnessed for all evaluation metrics (Table 3 and 4).

​ 50/101/152层的残差网络要比34层的残差网络更加精确(表3和表4)。我们没有观察到退化问题,因此,深度的大幅增加可以显著的提高精度。对所有的评价指标(表3和表4)来说,深度的好处都是显而易见的。  
considerable margins 相当大的差距

metrics度量

Comparisons with State-of-the-art Methods. In Table 4 we compare with the previous best single-model results. Our baseline 34-layer ResNets have achieved very competitive accuracy. Our 152-layer ResNet has a single-model top-5 validation error of 4.49%. This single-model result outperforms all previous ensemble results (Table 5). We combine six models of different depth to form an ensemble (only with two 152-layer ones at the time of submitting). This leads to 3.57% top-5 error on the test set (Table 5). This entry won the 1st place in ILSVRC 2015.

与最先进方法的比较。在表4中,我们与之前最好的单模型结果进行了比较。我们的基线34层ResNet已经达到了非常有竞争力的准确率。我们的152层ResNet的单模型top-5验证错误率为4.49%。这个单模型结果优于之前所有的集成模型结果(表5)。我们将6个不同深度的模型组合成一个集成模型(提交时只包含两个152层的模型),在测试集上取得了3.57%的top-5错误率(表5)。该结果获得了2015年ILSVRC的第一名。

4.2. CIFAR-10 and Analysis

CIFAR-10数据集结果和分析

​ We conducted more studies on the CIFAR-10 dataset [20], which consists of 50k training images and 10k testing images in 10 classes. We present experiments trained on the training set and evaluated on the test set. Our focus is on the behaviors of extremely deep networks, but not on pushing the state-of-the-art results, so we intentionally use simple architectures as follows.

​ 我们对CIFAR-10数据集[20]做了进一步的研究,该数据集包含5万张训练图和分为10类的1万张测试图。我们展示了用训练集进行训练和用测试集进行验证的实验。我们着重关注极深网络的表现,而不是推动最先进的结果,因此我们主要使用下述简单网络架构。

state-of-the-art 最先进的

The plain/residual architectures follow the form in Fig. 3 (middle/right). The network inputs are 32×32 images, with the per-pixel mean subtracted. The first layer is 3×3 convolutions. Then we use a stack of 6n layers with 3×3 convolutions on the feature maps of sizes {32, 16, 8} respectively, with 2n layers for each feature map size. The numbers of filters are {16, 32, 64} respectively. The subsampling is performed by convolutions with a stride of 2. The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. There are totally 6n+2 stacked weighted layers. The following table summarizes the architecture:

| output map size | 32×32 | 16×16 | 8×8 |
| --- | --- | --- | --- |
| # layers | 1+2n | 2n | 2n |
| # filters | 16 | 32 | 64 |

When shortcut connections are used, they are connected to the pairs of 3×3 layers (totally 3n shortcuts). On this dataset we use identity shortcuts in all cases (i.e., option A), so our residual models have exactly the same depth, width, and number of parameters as the plain counterparts.

平凡/残差架构遵循图3(中/右)的形式。网络输入为32×32的图像,并减去每像素均值。第一层是3×3卷积。随后我们在尺寸分别为{32, 16, 8}的特征图上使用共6n个3×3卷积层的堆叠,每种特征图尺寸对应2n层,过滤器数量分别为{16, 32, 64}。降采样通过步长为2的卷积完成。网络以一个全局平均池化、一个10路全连接层和softmax结束。总共有6n+2个含权重的堆叠层。下表总结了该架构:

| 输出特征图尺寸 | 32×32 | 16×16 | 8×8 |
| --- | --- | --- | --- |
| 层数 | 1+2n | 2n | 2n |
| 过滤器数 | 16 | 32 | 64 |

在使用捷径连接时,它们被连接到成对的3×3层上(共计3n个捷径)。在该数据集上,我们在所有情形下都使用恒等捷径(即选项A),因此我们的残差模型与对应的平凡网络具有完全相同的深度、宽度和参数数量。
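
作为对上述6n+2结构的核对,下面的示意代码构造了对应的平凡网络(未含捷径连接,残差版本只需再为每对3×3层加上捷径)。层的组织方式依据正文描述,具体实现细节为假设;softmax由交叉熵损失隐式完成:

```python
import torch.nn as nn

def cifar_plain_net(n, num_classes=10):
    """6n+2层平凡网络示意:1个3x3首层 + 三个阶段各2n个3x3卷积层 + 全局平均池化 + 10路全连接。"""
    layers = [nn.Conv2d(3, 16, 3, padding=1, bias=False), nn.BatchNorm2d(16), nn.ReLU(inplace=True)]
    in_c = 16
    for out_c in (16, 32, 64):          # 特征图尺寸依次为32、16、8,过滤器数依次为16、32、64
        for i in range(2 * n):
            stride = 2 if (out_c != 16 and i == 0) else 1   # 进入新阶段时用步长2降采样
            layers += [nn.Conv2d(in_c, out_c, 3, stride=stride, padding=1, bias=False),
                       nn.BatchNorm2d(out_c), nn.ReLU(inplace=True)]
            in_c = out_c
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)]
    return nn.Sequential(*layers)

net20 = cifar_plain_net(n=3)    # 6*3+2 = 20 个含权重的层
net110 = cifar_plain_net(n=18)  # 6*18+2 = 110 层
```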

表6 对于CIFAR-10测试集的分类错误
表中方法均使用了数据增强。对于ResNet-110,我们按照[43]的做法运行了5次,并展示"最优(均值±标准差)"。

We use a weight decay of 0.0001 and momentum of 0.9, and adopt the weight initialization in [12] and BN [16] but with no dropout. These models are trained with a mini-batch size of 128 on two GPUs. We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split. We follow the simple data augmentation in [24] for training: 4 pixels are padded on each side, and a 32×32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32×32 image.

我们使用0.0001的权重衰减和0.9的冲量,并采用[12]中的权重初始化和BN[16],但不使用dropout。这些模型在两块GPU上以128的最小批量进行训练。我们以0.1的学习率开始,在第32k和48k次迭代时将其除以10,并在64k次迭代时终止训练,这一方案是在45k/5k的训练/验证划分上确定的。训练时我们采用[24]中的简单数据增强:每边填充4个像素,并从填充后的图像或其水平翻转中随机裁剪32×32的区域。测试时,我们只评估原始32×32图像的单一视图。
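
上述数据增强与学习率安排可以用下面的示意代码表达(基于torchvision与PyTorch;按迭代次数设置里程碑属于实现上的假设):

```python
import torch
from torchvision import transforms

# 简单数据增强:每边填充4个像素,再从填充图或其水平翻转中随机裁剪32x32
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
test_transform = transforms.ToTensor()   # 测试时只评估原始32x32图像的单一视图

# 学习率0.1起步,在第32k和48k次迭代时除以10,64k次迭代后终止
model = torch.nn.Linear(10, 10)          # 占位模型(假设)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[32000, 48000], gamma=0.1)
# 训练循环中每个迭代调用一次 scheduler.step()
```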

We compare n = {3,5,7,9}, leading to 20, 32, 44, and 56-layer networks. Fig. 6 (left) shows the behaviors of the plain nets. The deep plain nets suffer from increased depth, and exhibit higher training error when going deeper. This phenomenon is similar to that on ImageNet (Fig. 4, left) and on MNIST (see [41]), suggesting that such an optimization difficulty is a fundamental problem.

我们比较了n = {3, 5, 7, 9},分别对应20层、32层、44层和56层的网络。图6(左)显示了平凡网络的表现。深层平凡网络受到深度增加的影响,在层数加深时表现出更高的训练误差。这种现象与ImageNet(图4,左)和MNIST(见[41])上的现象类似,表明这种优化困难是一个根本性的问题。

Fig. 6 (middle) shows the behaviors of ResNets. Also similar to the ImageNet cases (Fig. 4, right), our ResNets manage to overcome the optimization difficulty and demonstrate accuracy gains when the depth increases
  图6(中)显示了残差网络的表现。与ImageNet上的情形(图4,右)类似,我们的残差网络成功克服了优化困难,并展示出随深度增加而提升的准确率。

We further explore n = 18 that leads to a 110-layer ResNet. In this case, we find that the initial learning rate of 0.1 is slightly too large to start converging. So we use 0.01 to warm up the training until the training error is below 80% (about 400 iterations), and then go back to 0.1 and continue training. The rest of the learning schedule is as done previously. This 110-layer network converges well (Fig. 6, middle). It has fewer parameters than other deep and thin networks such as FitNet [34] and Highway [41] (Table 6), yet is among the state-of-the-art results (6.43%, Table 6).
  我们进一步探究了n = 18所对应的110层残差网络。在这种情况下,我们发现0.1的初始学习率对于开始收敛来说略大。因此我们先用0.01的学习率进行热身训练,直到训练误差低于80%(约400次迭代),然后回到0.1继续训练。其余的训练方案与之前相同。这个110层网络收敛得很好(图6,中)。它的参数比FitNet[34]和Highway[41]等其他又深又窄的网络更少(表6),但其结果跻身最先进结果之列(6.43%,表6)。
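
110层网络的"热身"策略可以示意如下:先用0.01的学习率训练,待训练误差降到80%以下(约400次迭代)后再切回0.1。函数与占位的优化器均为假设:

```python
import torch

model = torch.nn.Linear(10, 10)   # 占位模型(假设)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)

def warmup_lr(optimizer, iteration, train_error):
    """热身示意:前约400次迭代且训练误差仍高于80%时用0.01,否则回到0.1。"""
    lr = 0.01 if (iteration < 400 and train_error > 0.80) else 0.1
    for group in optimizer.param_groups:
        group['lr'] = lr

warmup_lr(optimizer, iteration=100, train_error=0.95)   # 仍处于热身阶段,lr = 0.01
```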

图6 对CIFAR-10数据集进行训练
点画线表示训练误差,粗线表示测试误差
左图:平凡网络。110层的平凡网络的错误率超过60%因此并未展示
中图:残差网络
右图:具有110层和1202层的残差网络

图7 对CIFAR-10的层响应标准差(std)
这些响应是每个3×3层的输出,取自BN之后、非线性之前
上图:各层按照原始次序展示
下图:响应按照降序排列

Analysis of Layer Responses. Fig. 7 shows the standard deviations (std) of the layer responses. The responses are the outputs of each 3×3 layer, after BN and before other nonlinearity (ReLU/addition). For ResNets, this analysis reveals the response strength of the residual functions. Fig. 7 shows that ResNets have generally smaller responses than their plain counterparts. These results support our basic motivation (Sec.3.1) that the residual functions might be generally closer to zero than the non-residual functions. We also notice that the deeper ResNet has smaller magnitudes of responses, as evidenced by the comparisons among ResNet-20, 56, and 110 in Fig. 7. When there are more layers, an individual layer of ResNets tends to modify the signal less.

网络层响应分析。图7展示了层响应的标准差(std)。响应是指每个3×3层的输出值,在BN之后且在其他非线性层(ReLU/addition)之前。对于残差网络,这个分析结果表示了残差函数的响应强度。图7展示了残差网络通常比其对应的平凡网络响应较小。这些结果支持了我们的初始动机(本文3.1节),即残差函数可能一般比非残差函数更加接近零值。我们也通过对图7中20层、56层和110层残差网络的比较,注意到层次较深的残差网络响应值较小。即层次越多,残差网络中的每个层次对于信号的改变就越小。

standard deviations (std) 标准差

magnitudes 大小,量级
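
图7的层响应分析可以用前向钩子(forward hook)来复现:正文把响应定义为每个3×3层的输出在BN之后、非线性之前的取值,因此这里把钩子挂在各BN层上并统计其输出的标准差。代码仅为示意,假设网络中每个3×3卷积后都紧跟一个BN层:

```python
import torch
import torch.nn as nn

def layer_response_std(model, inputs):
    """收集每个BN层输出(即BN之后、非线性之前的响应)的标准差,用于复现图7的分析。"""
    stds, handles = [], []

    def hook(module, inp, out):
        stds.append(out.std().item())   # 该层响应的标准差

    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            handles.append(m.register_forward_hook(hook))

    model.eval()
    with torch.no_grad():
        model(inputs)
    for h in handles:
        h.remove()
    return stds

# 用法示意(net 为任一含BN的残差/平凡网络,属于假设):
# stds = layer_response_std(net, torch.randn(1, 3, 32, 32))
```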

Exploring Over 1000 layers. We explore an aggressively deep model of over 1000 layers. We set n = 200 that leads to a 1202-layer network, which is trained as described above. Our method shows no optimization difficulty, and this 10³-layer network is able to achieve training error <0.1% (Fig. 6, right). Its test error is still fairly good (7.93%, Table 6).

探索超过1000层的网络。我们探索了一个超过1000层的极深模型。我们设置n = 200,得到一个1202层的网络,并按上述方式训练。我们的方法没有表现出优化困难,这个10³层的网络能够达到小于0.1%的训练误差(图6,右),其测试误差也相当不错(7.93%,表6)。

​ But there are still open problems on such aggressively deep models. The testing result of this 1202-layer network is worse than that of our 110-layer network, although both have similar training error. We argue that this is because of overfitting. The 1202-layer network may be unnecessarily large (19.4M) for this small dataset. Strong regularization such as maxout [9] or dropout [13] is applied to obtain the best results ([9, 25, 24, 34]) on this dataset. In this paper, we use no maxout/dropout and just simply impose regularization via deep and thin architectures by design, without distracting from the focus on the difficulties of optimization. But combining with stronger regularization may improve results, which we will study in the future.
  但是,这种极深的模型仍然存在一些悬而未决的问题。1202层网络的测试结果比110层网络的差,尽管两者有相似的训练误差。我们认为这是过拟合造成的。对于这个小数据集而言,1202层网络可能大得没有必要(19.4M参数)。在该数据集上,为了获得最优结果([9, 25, 24, 34]),通常会使用maxout[9]或dropout[13]这类强正则化。在本文中,我们没有使用maxout/dropout,只是简单地通过设计又深又窄的架构来施加正则化,以免分散对优化困难这一核心问题的关注。但结合更强的正则化可能会改善结果,这将在未来研究。

open problems 未知的问题

distracting 分心

表7 利用基线方法Faster R-CNN对于PASCAL VOC物品检测数据集的mAP(%)
更好的结果见表10和表11

表8 利用基线方法Faster R-CNN对于COCO验证数据集的mAP(%)
更好的结果见表9

4.3. Object Detection on PASCAL and MS COCO

基于PASCAL和MS COCO数据集的目标检测

​ Our method has good generalization performance on other recognition tasks. Table 7 and 8 show the object detection baseline results on PASCAL VOC 2007 and 2012 [5] and COCO [26]. We adopt Faster R-CNN [32] as the detection method. Here we are interested in the improvements of replacing VGG-16 [40] with ResNet-101. The detection implementation (see appendix) of using both models is the same, so the gains can only be attributed to better networks. Most remarkably, on the challenging COCO dataset we obtain a 6.0% increase in COCO’s standard metric (mAP@[.5, .95]), which is a 28% relative improvement. This gain is solely due to the learned representations.

我们的方法在其他识别任务上具有良好的泛化性能。表7和表8展示了在PASCAL VOC 2007和2012[5]以及COCO[26]上的目标检测基线结果。我们采用Faster R-CNN[32]作为检测方法。在这里,我们关注的是用ResNet-101替换VGG-16[40]所带来的改进。使用两种模型的检测实现(见附录)是相同的,因此增益只能归因于更好的网络。最值得注意的是,在具有挑战性的COCO数据集上,我们在COCO标准指标(mAP@[.5, .95])上获得了6.0%的提升,即28%的相对提升。这一增益完全归功于学习到的表示。

​ Based on deep residual nets, we won the 1st places in several tracks in ILSVRC & COCO 2015 competitions: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. The details are in the appendix.

​ 基于深度残差网络,我们在2015年ILSVRC和COCO比赛中获得了ImageNet detection, ImageNet localization, COCO detection, COCO segmentation几个项目的第一名。细节见附录。

myBlog

residual n,残余,剩余;

ensemble n,组合,全体

convolutional 卷积

nontrivial 非平凡的;重要的

stochastic gradient descent 随机梯度下降

backpropagation 反向传播

degradation 退化

thoroughly verified 充分证明

denoting 指示

recast 重铸;彻底改动

identity mapping 恒等映射

feedforward neural networks 前馈神经网络

shortcut connections 捷径连接

stacked layers 堆叠层

computational complexity 计算复杂度

implemented

"plain" nets 平凡网络

exhibit

akin to 类似于

ensemble 全体;总效果;

generalization performance 泛化能力

