
Semantic Segmentation -- (DeepLabv2) Semantic Image Segmentation ... Fully Connected CRFs: Paper Notes


DeepLabv2

DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

Original paper: DeepLabv2

Published in: TPAMI 2017 (IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017)

Code:

TensorFlow

bitbucket-Caffe

DeepLabv2 can be seen as a strengthened version of DeepLabv1; its use of atrous convolution and the fully connected CRF is similar to DeepLabv1.

Abstract

This paper makes three main contributions to semantic segmentation with deep learning:

First, it highlights atrous (dilated) convolution as a powerful tool for dense prediction tasks. Atrous convolution gives explicit control over the resolution at which feature responses are computed inside a DCNN; it effectively enlarges the receptive field and captures more context without increasing the number of parameters or the amount of computation.

Second, it proposes atrous spatial pyramid pooling (ASPP) to obtain more robust segmentation using multi-scale information. ASPP probes the incoming features with several parallel atrous convolution layers at different sampling rates, capturing objects and image context at multiple scales.

Third, it improves segmentation boundaries by combining the DCNN with a probabilistic graphical model. The combination of max pooling and downsampling in a DCNN yields translation invariance, but at a cost in localization accuracy. This is overcome by coupling the responses of the final DCNN layer with a fully connected CRF.

The proposed DeepLabv2 performs very well on PASCAL VOC 2012 and also gives strong results on PASCAL-Context, PASCAL-Person-Part, and Cityscapes.

Introduction

DCNNs (Deep Convolutional Neural Networks) have pushed the performance of computer vision systems to a new level. A key reason for their success is the built-in invariance of DCNNs to local image transformations, which allows the models to learn increasingly abstract representations. This invariance, however, can hinder dense prediction tasks such as semantic segmentation, where precise spatial information matters.

When applying DCNNs to semantic segmentation, we focus on the following three problems:

reduced feature resolution;

objects existing at multiple scales;

poor localization accuracy caused by the built-in invariance of DCNNs.

We discuss and address these problems below.

The first challenge arises because the repeated combination of max pooling and downsampling in a DCNN severely reduces spatial resolution. To address it, DeepLabv2 removes the downsampling in the last few max pooling layers and instead uses atrous convolution, computing feature maps at a higher sampling density.

The second challenge is that objects exist at multiple scales. A standard way to handle this is to feed rescaled versions of the image to the network and aggregate the features or the final predictions; experiments show this improves performance, but it requires computing feature responses at every scale and therefore a lot of computation and memory. Inspired by spatial pyramid pooling (SPP), we propose a similar structure that resamples the given input with parallel atrous convolutions at different rates, which is equivalent to probing the image context at multiple scales; the module is called ASPP (atrous spatial pyramid pooling).

The third challenge relates to the fact that object classification demands invariance to spatial transformations, which limits the spatial localization accuracy of a DCNN. One way to address this is to use skip layers that fuse features from earlier layers when computing the final classification result. DeepLabv2 instead uses a fully connected CRF to strengthen the model's ability to capture fine details.

Here is an example of the DeepLab pipeline:



The overall steps are as follows:

The input is passed through the modified DCNN (with atrous convolution and the ASPP module) to obtain a coarse prediction, i.e. the Aeroplane Coarse Score map.

The coarse score map is enlarged back to the original image size by Bi-linear Interpolation.

A fully connected CRF then refines the prediction to give the Final Output.


To summarize, the main advantages of DeepLabv2 are:

Speed: the DCNN runs at 8 FPS on a modern GPU, and the fully connected CRF takes about 0.5 s on a CPU.

Accuracy: excellent results on PASCAL VOC 2012, PASCAL-Context, PASCAL-Person-Part, and Cityscapes.

Simplicity: the system is a cascade of two well-established modules, DCNNs and CRFs.

DeepLabv2 improves on DeepLabv1: the backbone is upgraded from VGG16 to the more advanced ResNet, and multi-scale inputs and the ASPP module are added, yielding better segmentation results.

Related Work

Applying DCNNs to semantic segmentation involves both classification and localization refinement; the core of the work is combining the two tasks.

DCNN-based semantic segmentation systems fall into three broad categories:

The first applies a cascade of bottom-up image segmentation followed by DCNN-based classification of the resulting regions. Shape information is incorporated into the classification process, and these methods benefit from the sharp boundaries the segmentation conveys, so they can segment well; however, they cannot recover from errors made by the front end (an early mistake propagates all the way through).

The second relies on densely computed DCNN predictions and couples several independently obtained results. One variant applies the DCNN at multiple resolutions and smooths the predictions with a segmentation tree; more recent work uses skip layers to concatenate features computed inside the network for classification.

The third uses the DCNN directly for dense pixel-level classification. The network is applied fully convolutionally to the whole image, converting the final FC layers of the DCNN into convolutional layers; to handle spatial localization, the results are refined by upsampling and by concatenating features from intermediate layers.

Our work builds on these approaches. Since the first version, DeepLabv1, was published, many works have adopted one or both of its key ingredients: refining the DCNN output with a fully connected CRF, and using atrous convolution for dense feature extraction. Several works have also explored end-to-end training, learning the DCNN and the CRF jointly.

Atrous convolution enlarges the receptive field while keeping computation and the number of parameters unchanged, and combined with a pyramid pooling scheme it aggregates multi-scale context. By controlling feature resolution with atrous convolution, using a more advanced DCNN backbone, applying multi-scale fusion, and integrating a fully connected CRF on top of the DCNN, better segmentation results can be obtained.

Combining DCNNs and CRFs is not a new topic; earlier works applied local CRFs, which ignore long-range dependencies between pixels. DeepLab instead uses a fully connected CRF whose Gaussian kernels can capture long-range dependencies, giving better segmentation results.

Method

Atrous convolution for dense feature extraction and enlarging the receptive field

The repeated combination of max pooling and downsampling layers in a DCNN greatly reduces the spatial resolution of the final feature map. One remedy is the deconvolutional layer (transposed convolution, used to enlarge the feature-map resolution), but this costs extra memory and computation. We advocate atrous convolution instead, which allows the feature map of any layer to be computed at any desired response resolution.

How atrous convolution works, in detail:

Consider a one-dimensional signal first: let y[i] be the output of the atrous convolution, x[i] the input, and ω[k] a filter of length K. The operation is defined as y[i] = ∑_{k=1}^{K} x[i + r·k]·ω[k], where the rate r is the stride with which the input signal is sampled; standard convolution corresponds to r = 1, as shown in figure (a):



Figure (b) shows the sampling with rate r = 2.
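To make the definition concrete, here is a minimal NumPy sketch of the 1D atrous convolution above (not from the paper's code); the toy signal, filter, and rates are illustrative only.

import numpy as np

def atrous_conv1d(x, w, r):
    """y[i] = sum_k x[i + r*k] * w[k], computed only where the dilated filter fits."""
    K = len(w)
    span = r * (K - 1)                      # distance spanned by the dilated filter
    return np.array([sum(x[i + r * k] * w[k] for k in range(K))
                     for i in range(len(x) - span)])

x = np.arange(10, dtype=float)              # toy input signal
w = np.array([1.0, 0.0, -1.0])              # toy filter of length K = 3
print(atrous_conv1d(x, w, r=1))             # standard convolution (rate r = 1)
print(atrous_conv1d(x, w, r=2))             # atrous convolution with rate r = 2, as in figure (b)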

Now look at how atrous convolution behaves on a two-dimensional signal (an image). Given an image:



Top branch: first downsample to half resolution, convolve, then upsample to get the result. In essence the convolution only responds to 1/4 of the original image content.

Bottom branch: apply atrous convolution directly to the full-resolution image (rate 2, same kernel size as above) and obtain the result directly. This computes responses over the whole image and, as the figure shows, works better.

Atrous convolution enlarges the filter's receptive field: a rate r inserts r−1 zeros between filter values, effectively enlarging a k×k kernel to k_e = k + (k−1)(r−1) without adding parameters or computation. In DCNNs it is common to mix in atrous convolution so that the final network responses are computed at a higher resolution (i.e. a higher sampling density). DeepLabv2 uses atrous convolution to increase the feature density by a factor of 4, and the output feature responses are then upsampled 8× by bilinear interpolation to recover the original resolution.
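As a quick check of the receptive-field formula, and to show how an atrous convolution is invoked in practice, here is a hedged TensorFlow 1.x sketch; the tensor shapes are made up for illustration and are not the network's real dimensions.

import tensorflow as tf

k, r = 3, 2
k_eff = k + (k - 1) * (r - 1)          # 3 + 2*1 = 5: a rate-2 3x3 kernel covers a 5x5 window
print('effective kernel size:', k_eff)

# Rate-2 atrous convolution on a dummy NHWC feature map; spatial size is preserved.
x = tf.placeholder(tf.float32, [1, 64, 64, 256])
w = tf.get_variable('w', [3, 3, 256, 256])
y = tf.nn.atrous_conv2d(x, w, rate=2, padding='SAME')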

Multi-scale image representation with the ASPP module

Many works have shown that multi-scale image information improves a DCNN's accuracy at segmenting objects of different sizes. We tried two methods for handling scale variation in semantic segmentation.

The first is standard multi-scale processing: rescaled versions of the input are fed to the DCNN separately and the resulting score maps are fused to produce the prediction. This noticeably improves the results, but it also costs a lot of computation and memory.

The second is inspired by the SPP module in SPPNet.



The figure above shows the SPP module of SPPNet.

DeepLabv2 does something similar to SPPNet: features are extracted with several parallel atrous convolutions at different sampling rates and then fused, resembling a spatial pyramid, hence the name Atrous Spatial Pyramid Pooling (ASPP). The schematic is shown below:



On the same Input Feature Map, four atrous convolutions are applied in parallel with rates r = {6, 12, 18, 24} and kernel size 3×3. The results from the different branches are finally fused by element-wise addition.
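The following is a compact conceptual sketch of that fusion in TensorFlow 1.x (the repo's chained-API version appears later in the Code Analysis section); the input shape and the num_classes value are illustrative.

import tensorflow as tf

def aspp(features, num_classes, rates=(6, 12, 18, 24)):
    """Parallel 3x3 atrous convolutions over the same feature map, fused by element-wise addition."""
    branches = []
    for i, r in enumerate(rates):
        w = tf.get_variable('aspp_w%d' % i,
                            [3, 3, features.shape[-1].value, num_classes])
        b = tf.get_variable('aspp_b%d' % i, [num_classes],
                            initializer=tf.zeros_initializer())
        branches.append(tf.nn.atrous_conv2d(features, w, rate=r, padding='SAME') + b)
    return tf.add_n(branches)              # pixel-wise sum of the four branches

feat = tf.placeholder(tf.float32, [1, 128, 128, 2048])   # e.g. the backbone output
logits = aspp(feat, num_classes=21)                      # (1, 128, 128, 21)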

Structured prediction with a fully connected CRF to recover boundary accuracy

Because of the repeated max pooling and downsampling, the high-level features of a DCNN are inherently invariant (a point made many times already). There appears to be an intrinsic trade-off between classification performance and localization accuracy. As the figure below shows, the DCNN can predict the presence and rough position of an object, but cannot delineate its boundary precisely:



We therefore combine the DCNN with a fully connected CRF. This was explained in detail in the earlier DeepLabv1-CRF notes on semantic segmentation, so it is skipped here.
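The CRF is not part of the TensorFlow code analyzed later, so as a hedged illustration here is a minimal sketch of dense-CRF post-processing using the third-party pydensecrf package, which implements the fully connected CRF of Krähenbühl and Koltun used by DeepLab; the kernel weights and widths below are illustrative, not the values cross-validated in the paper.

import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(softmax_probs, image, n_iters=10):
    """softmax_probs: (C, H, W) class probabilities; image: (H, W, 3) uint8 RGB."""
    C, H, W = softmax_probs.shape
    d = dcrf.DenseCRF2D(W, H, C)
    d.setUnaryEnergy(unary_from_softmax(softmax_probs))              # -log(p) unary potentials
    d.addPairwiseGaussian(sxy=3, compat=3)                           # smoothness (position-only) kernel
    d.addPairwiseBilateral(sxy=80, srgb=13, rgbim=image, compat=10)  # appearance (position + color) kernel
    q = d.inference(n_iters)                                         # mean-field inference
    return np.argmax(np.array(q).reshape((C, H, W)), axis=0)         # refined label map (H, W)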

Experiment

DeepLabv2 is evaluated on four datasets: PASCAL VOC 2012, PASCAL-Context, PASCAL-Person-Part, and Cityscapes.

Training details:

Setup:
Model weights: initialized from pre-trained VGG16 / ResNet101
Loss: pixel-wise cross entropy between the DCNN output and the ground truth downsampled 8×
Optimizer: SGD, batch size 20
Learning rate: 0.001 initially (0.01 for the final classifier layer), multiplied by 0.1 every 2000 iterations
Momentum: 0.9, weight decay: 0.0005
The pre-trained VGG16 and ResNet101 models are fine-tuned. During training the DCNN and the CRF are decoupled, i.e. trained separately: when training the CRF, the DCNN output used as its unary potentials is kept fixed. A sketch of the loss and optimizer follows.
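A minimal sketch of the loss and optimizer described above, in TensorFlow 1.x; it assumes logits at 1/8 of the input resolution and dense integer labels at full resolution, the concrete shape values are illustrative, and the doubled learning rate for the classifier layer is omitted for brevity.

import tensorflow as tf

# logits: (N, H/8, W/8, 21) from the DCNN; labels: (N, H, W) integer class ids
logits = tf.placeholder(tf.float32, [None, 40, 40, 21])
labels = tf.placeholder(tf.int32, [None, 320, 320])

# Downsample the ground truth 8x (nearest neighbour keeps the labels valid) to match the logits.
labels_small = tf.image.resize_nearest_neighbor(tf.expand_dims(labels, -1), [40, 40])
labels_small = tf.squeeze(labels_small, -1)

loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels_small, logits=logits))
train_op = tf.train.MomentumOptimizer(learning_rate=0.001, momentum=0.9).minimize(loss)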

The CRF hyper-parameters are chosen by cross-validation: with ω2 = 3 and σγ = 3 fixed, the best ω1, σα and σβ are searched for on a small validation set, using a coarse-to-fine strategy.

Models with different combinations of kernel size and atrous rate:



DeepLab-LargeFOV (kernel size 3×3, r = 12) strikes a good balance. A small kernel combined with a high atrous rate keeps the receptive field while significantly reducing the number of parameters and speeding up computation, and it still segments well. CRF post-processing consistently adds roughly 3~5% to performance.
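As a back-of-the-envelope check of the parameter saving, assuming a VGG16 fc6-style layer with 512 input channels and 1024 filters (illustrative numbers, not taken from the table):

c_in, c_out = 512, 1024                       # illustrative channel counts
params_7x7 = 7 * 7 * c_in * c_out             # large dense kernel: ~25.7M weights
params_3x3 = 3 * 3 * c_in * c_out             # 3x3 kernel at rate 12 (similar field of view): ~4.7M weights
print(params_7x7, params_3x3, round(params_7x7 / float(params_3x3), 1))   # ~5.4x fewer parameters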

PASCAL VOC 2012

The DeepLab-CRF-LargeFOV model is evaluated on PASCAL VOC 2012 with three main improvements:

1. a different learning rate policy during training;

2. the ASPP module;

3. a deeper network and multi-scale processing.

Learning rate policy experiments

The poly policy computes the learning rate as lr_base · (1 − iter/max_iter)^power, with power = 0.9.
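A small sketch of the poly policy in plain Python; base_lr and power follow the values above, and max_iter is illustrative.

def poly_lr(base_lr, iteration, max_iter, power=0.9):
    """Poly learning-rate policy: lr = base_lr * (1 - iter / max_iter) ** power."""
    return base_lr * (1.0 - float(iteration) / max_iter) ** power

for it in (0, 5000, 10000, 19999):
    print(it, poly_lr(0.001, it, max_iter=20000))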



The results above show that the poly policy works better than a fixed step policy. With a small batch size of 10 (larger batches are very heavy on GPU memory and hardware), training for 20K iterations gives the best result.

ASPP module experiments

The ASPP structure is shown below;



parallel atrous convolutions with different rates capture context at different scales.

The table below reports results for different ASPP configurations:



Baseline LargeFOV: the structure of Fig. 7(a), a single branch with r = 12 on FC6.

ASPP-S: four parallel branches with small atrous rates, r = 2, 4, 8, 12.

ASPP-L: four parallel branches with large atrous rates, r = 6, 12, 18, 24.

The ASPP module with large rates clearly performs better.

Visualization of results with the ASPP module:



ASPP with high rates captures more global context, and the resulting segmentations are correspondingly more plausible.

Experiments with deeper networks and multi-scale processing

The DeepLabv2 experiments are mainly run on ResNet, comparing several techniques:

multi-scale inputs: feed the input at scales {0.5, 0.75, 1} to the DCNN and fuse the results (a sketch follows this list);

pre-training on MS-COCO;

data augmentation by randomly rescaling the input (from 0.5 to 1.5) during training.
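A hedged sketch of the multi-scale inference from the first item; model_fn is a hypothetical function mapping an image batch to a score map, and the per-pixel maximum is used here as one common way to fuse the scales.

import tensorflow as tf

def multi_scale_scores(image, model_fn, scales=(0.5, 0.75, 1.0)):
    """image: (N, H, W, 3). model_fn returns a score map (N, h, w, C) for a given input."""
    h, w = tf.shape(image)[1], tf.shape(image)[2]
    fused = None
    for s in scales:
        size = [tf.to_int32(tf.to_float(h) * s), tf.to_int32(tf.to_float(w) * s)]
        scores = model_fn(tf.image.resize_bilinear(image, size))        # run the DCNN at this scale
        scores = tf.image.resize_bilinear(scores, [h, w])               # back to a common resolution
        fused = scores if fused is None else tf.maximum(fused, scores)  # per-pixel max fusion
    return fused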

The impact of these techniques:



Multi-scale inputs bring a 2.55% improvement, and combining all the techniques reaches 77.69%.

Visualization of CRF post-processing:



The CRF refines the segmentation, recovers some misclassified pixels, and sharpens parts of the object boundaries.

DeepLabv2 compared with other state-of-the-art models:



The results are, as expected, very good.

Comparison of ResNet101 versus VGG16 as the backbone:



Using ResNet101 as the backbone significantly improves performance; ResNet101-based DeepLab follows object boundaries better than the VGG16 version.

PASCAL-Context

Results on PASCAL-Context for VGG16- and ResNet101-based variants compared with other state-of-the-art models:



Combining the various techniques lifts the final result to 45.7%.

Visualizations:



Object boundaries are followed more closely.

PASCAL-Person-Part

On the PASCAL-Person-Part dataset the focus is on the ResNet101 model; comparison with other models:



Visualizations:



Cityscapes

Because Cityscapes images have a large resolution, they are first downsampled by a factor of 2; with the same set of techniques the model obtains good results:



Visualizations:



Failure cases

The paper also shows some failure cases, as below:



The model misses many fine details, and the loss of detail becomes even more pronounced after the CRF.

Conclusion

DeepLabv2 applies atrous convolution to dense feature extraction, further proposes the atrous spatial pyramid pooling structure, and combines the DCNN with a CRF to refine segmentation results. Experiments show that DeepLabv2 performs very well across several datasets, with solid segmentation performance.

Code Analysis

The code discussed here is the github-TensorFlow version; note that it does not implement the CRF part.

The model uses the same code framework as the earlier ICNet notes; see the earlier walkthrough of the NetWork setup for details.

DeepLab_ResNet structure

The decorators and related plumbing are defined in NetWork.py and are not repeated here; we focus on the DeepLab_ResNet model definition.

The first part defines the ResNet101-variant structure:

The front part of ResNet

The ResNet backbone uses two common variants of the residual module:



On the left is the ordinary residual unit: the shortcut branch is a direct identity mapping, while in the main branch the first two convolutions reduce the channel count and the third convolution expands it back to the original number. This keeps accuracy while greatly reducing computation.

On the right is the special residual unit, which comes in three roles: channel increase, where both the shortcut and the main branch increase the channel count; channel increase plus downsampling, where the shortcut convolution and the first main-branch convolution use stride 2; and atrous convolution, where the ordinary 3×3 convolution is replaced by an atrous convolution with the desired rate. A compact sketch of these variants follows.
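Before reading the chained-API definition below, it may help to see those variants as one compact function. This is only a rough sketch in plain TensorFlow 1.x, with a hypothetical helper name bottleneck, and the batch normalization that follows every convolution in the repo is omitted for brevity.

import tensorflow as tf

def bottleneck(x, mid_ch, out_ch, stride=1, rate=1, scope='block'):
    """Ordinary unit: identity shortcut. Special unit: projection shortcut for a channel
    increase and/or stride-2 downsampling; rate > 1 swaps the 3x3 conv for an atrous one."""
    with tf.variable_scope(scope):
        if x.shape[-1].value != out_ch or stride != 1:       # special unit: projection shortcut
            shortcut = tf.layers.conv2d(x, out_ch, 1, strides=stride, use_bias=False)
        else:                                                # ordinary unit: identity shortcut
            shortcut = x
        y = tf.layers.conv2d(x, mid_ch, 1, strides=stride, activation=tf.nn.relu, use_bias=False)
        y = tf.layers.conv2d(y, mid_ch, 3, padding='same', dilation_rate=rate,
                             activation=tf.nn.relu, use_bias=False)
        y = tf.layers.conv2d(y, out_ch, 1, use_bias=False)
        return tf.nn.relu(y + shortcut)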

The ResNet part of the repo's code is as follows:

from kaffe.tensorflow import Network
import tensorflow as tf

class DeepLabResNetModel(Network):
def setup(self, is_training, num_classes):
'''Network definition.

Args:
is_training: whether to update the running mean and variance of the batch normalisation layer.
If the batch size is small, it is better to keep the running mean and variance of
the-pretrained model frozen.
num_classes: number of classes to predict (including background).
'''
(self.feed('data')
.conv(7, 7, 64, 2, 2, biased=False, relu=False, name='conv1')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn_conv1')
.max_pool(3, 3, 2, 2, name='pool1')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res2a_branch1')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn2a_branch1'))

(self.feed('pool1')
.conv(1, 1, 64, 1, 1, biased=False, relu=False, name='res2a_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn2a_branch2a')
.conv(3, 3, 64, 1, 1, biased=False, relu=False, name='res2a_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn2a_branch2b')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res2a_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn2a_branch2c'))

(self.feed('bn2a_branch1',
'bn2a_branch2c')
.add(name='res2a')
.relu(name='res2a_relu')
.conv(1, 1, 64, 1, 1, biased=False, relu=False, name='res2b_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn2b_branch2a')
.conv(3, 3, 64, 1, 1, biased=False, relu=False, name='res2b_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn2b_branch2b')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res2b_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn2b_branch2c'))

(self.feed('res2a_relu',
'bn2b_branch2c')
.add(name='res2b')
.relu(name='res2b_relu')
.conv(1, 1, 64, 1, 1, biased=False, relu=False, name='res2c_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn2c_branch2a')
.conv(3, 3, 64, 1, 1, biased=False, relu=False, name='res2c_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn2c_branch2b')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res2c_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn2c_branch2c'))

(self.feed('res2b_relu',
'bn2c_branch2c')
.add(name='res2c')
.relu(name='res2c_relu')
.conv(1, 1, 512, 2, 2, biased=False, relu=False, name='res3a_branch1')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn3a_branch1'))

(self.feed('res2c_relu')
.conv(1, 1, 128, 2, 2, biased=False, relu=False, name='res3a_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn3a_branch2a')
.conv(3, 3, 128, 1, 1, biased=False, relu=False, name='res3a_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn3a_branch2b')
.conv(1, 1, 512, 1, 1, biased=False, relu=False, name='res3a_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn3a_branch2c'))

(self.feed('bn3a_branch1',
'bn3a_branch2c')
.add(name='res3a')
.relu(name='res3a_relu')
.conv(1, 1, 128, 1, 1, biased=False, relu=False, name='res3b1_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn3b1_branch2a')
.conv(3, 3, 128, 1, 1, biased=False, relu=False, name='res3b1_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn3b1_branch2b')
.conv(1, 1, 512, 1, 1, biased=False, relu=False, name='res3b1_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn3b1_branch2c'))

(self.feed('res3a_relu',
'bn3b1_branch2c')
.add(name='res3b1')
.relu(name='res3b1_relu')
.conv(1, 1, 128, 1, 1, biased=False, relu=False, name='res3b2_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn3b2_branch2a')
.conv(3, 3, 128, 1, 1, biased=False, relu=False, name='res3b2_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn3b2_branch2b')
.conv(1, 1, 512, 1, 1, biased=False, relu=False, name='res3b2_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn3b2_branch2c'))

(self.feed('res3b1_relu',
'bn3b2_branch2c')
.add(name='res3b2')
.relu(name='res3b2_relu')
.conv(1, 1, 128, 1, 1, biased=False, relu=False, name='res3b3_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn3b3_branch2a')
.conv(3, 3, 128, 1, 1, biased=False, relu=False, name='res3b3_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn3b3_branch2b')
.conv(1, 1, 512, 1, 1, biased=False, relu=False, name='res3b3_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn3b3_branch2c'))

(self.feed('res3b2_relu',
'bn3b3_branch2c')
.add(name='res3b3')
.relu(name='res3b3_relu')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4a_branch1')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4a_branch1'))


A schematic of the code above:



To summarize, suppose the original input is (1024, 1024, 3):

A convolution followed by pooling extracts the feature map pool1 of shape (256, 256, 64).

A channel-increasing residual module then gives res2a_relu of shape (256, 256, 256), followed by two ordinary residual modules giving res2c_relu.


Next, a channel-increasing, downsampling residual module gives res3a_relu of shape (128, 128, 512), followed by three ordinary residual modules giving res3b3_relu.


This is the early part of the ResNet variant. At res3b3_relu the output stride of the feature map is already 1024/128 = 8; from here on, residual modules with atrous convolution take over.

The modified part of ResNet

In the ResNet variant used by DeepLabv2, the front part is essentially identical to the original ResNet (the code above); the latter part contains the main modifications:

'''(overlaps with the end of the code above)'''
(self.feed('res3b2_relu',
'bn3b3_branch2c')
.add(name='res3b3')
.relu(name='res3b3_relu')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4a_branch1')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4a_branch1'))

(self.feed('res3b3_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4a_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4a_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4a_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4a_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4a_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4a_branch2c'))

(self.feed('bn4a_branch1',
'bn4a_branch2c')
.add(name='res4a')
.relu(name='res4a_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b1_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b1_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b1_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b1_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b1_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b1_branch2c'))

(self.feed('res4a_relu',
'bn4b1_branch2c')
.add(name='res4b1')
.relu(name='res4b1_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b2_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b2_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b2_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b2_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b2_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b2_branch2c'))

(self.feed('res4b1_relu',
'bn4b2_branch2c')
.add(name='res4b2')
.relu(name='res4b2_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b3_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b3_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b3_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b3_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b3_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b3_branch2c'))

(self.feed('res4b2_relu',
'bn4b3_branch2c')
.add(name='res4b3')
.relu(name='res4b3_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b4_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b4_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b4_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b4_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b4_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b4_branch2c'))

(self.feed('res4b3_relu',
'bn4b4_branch2c')
.add(name='res4b4')
.relu(name='res4b4_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b5_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b5_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b5_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b5_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b5_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b5_branch2c'))

(self.feed('res4b4_relu',
'bn4b5_branch2c')
.add(name='res4b5')
.relu(name='res4b5_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b6_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b6_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b6_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b6_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b6_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b6_branch2c'))

(self.feed('res4b5_relu',
'bn4b6_branch2c')
.add(name='res4b6')
.relu(name='res4b6_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b7_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b7_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b7_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b7_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b7_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b7_branch2c'))

(self.feed('res4b6_relu',
'bn4b7_branch2c')
.add(name='res4b7')
.relu(name='res4b7_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b8_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b8_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b8_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b8_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b8_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b8_branch2c'))

(self.feed('res4b7_relu',
'bn4b8_branch2c')
.add(name='res4b8')
.relu(name='res4b8_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b9_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b9_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b9_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b9_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b9_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b9_branch2c'))

(self.feed('res4b8_relu',
'bn4b9_branch2c')
.add(name='res4b9')
.relu(name='res4b9_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b10_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b10_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b10_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b10_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b10_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b10_branch2c'))

(self.feed('res4b9_relu',
'bn4b10_branch2c')
.add(name='res4b10')
.relu(name='res4b10_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b11_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b11_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b11_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b11_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b11_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b11_branch2c'))

(self.feed('res4b10_relu',
'bn4b11_branch2c')
.add(name='res4b11')
.relu(name='res4b11_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b12_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b12_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b12_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b12_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b12_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b12_branch2c'))

(self.feed('res4b11_relu',
'bn4b12_branch2c')
.add(name='res4b12')
.relu(name='res4b12_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b13_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b13_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b13_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b13_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b13_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b13_branch2c'))

(self.feed('res4b12_relu',
'bn4b13_branch2c')
.add(name='res4b13')
.relu(name='res4b13_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b14_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b14_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b14_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b14_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b14_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b14_branch2c'))

(self.feed('res4b13_relu',
'bn4b14_branch2c')
.add(name='res4b14')
.relu(name='res4b14_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b15_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b15_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b15_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b15_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b15_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b15_branch2c'))

(self.feed('res4b14_relu',
'bn4b15_branch2c')
.add(name='res4b15')
.relu(name='res4b15_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b16_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b16_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b16_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b16_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b16_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b16_branch2c'))

(self.feed('res4b15_relu',
'bn4b16_branch2c')
.add(name='res4b16')
.relu(name='res4b16_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b17_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b17_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b17_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b17_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b17_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b17_branch2c'))

(self.feed('res4b16_relu',
'bn4b17_branch2c')
.add(name='res4b17')
.relu(name='res4b17_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b18_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b18_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b18_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b18_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b18_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b18_branch2c'))

(self.feed('res4b17_relu',
'bn4b18_branch2c')
.add(name='res4b18')
.relu(name='res4b18_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b19_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b19_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b19_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b19_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b19_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b19_branch2c'))

(self.feed('res4b18_relu',
'bn4b19_branch2c')
.add(name='res4b19')
.relu(name='res4b19_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b20_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b20_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b20_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b20_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b20_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b20_branch2c'))

(self.feed('res4b19_relu',
'bn4b20_branch2c')
.add(name='res4b20')
.relu(name='res4b20_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b21_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b21_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b21_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b21_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b21_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b21_branch2c'))

(self.feed('res4b20_relu',
'bn4b21_branch2c')
.add(name='res4b21')
.relu(name='res4b21_relu')
.conv(1, 1, 256, 1, 1, biased=False, relu=False, name='res4b22_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b22_branch2a')
.atrous_conv(3, 3, 256, 2, padding='SAME', biased=False, relu=False, name='res4b22_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn4b22_branch2b')
.conv(1, 1, 1024, 1, 1, biased=False, relu=False, name='res4b22_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn4b22_branch2c'))

(self.feed('res4b21_relu',
'bn4b22_branch2c')
.add(name='res4b22')
.relu(name='res4b22_relu')
.conv(1, 1, 2048, 1, 1, biased=False, relu=False, name='res5a_branch1')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn5a_branch1'))

(self.feed('res4b22_relu')
.conv(1, 1, 512, 1, 1, biased=False, relu=False, name='res5a_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn5a_branch2a')
.atrous_conv(3, 3, 512, 4, padding='SAME', biased=False, relu=False, name='res5a_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn5a_branch2b')
.conv(1, 1, 2048, 1, 1, biased=False, relu=False, name='res5a_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn5a_branch2c'))

(self.feed('bn5a_branch1',
'bn5a_branch2c')
.add(name='res5a')
.relu(name='res5a_relu')
.conv(1, 1, 512, 1, 1, biased=False, relu=False, name='res5b_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn5b_branch2a')
.atrous_conv(3, 3, 512, 4, padding='SAME', biased=False, relu=False, name='res5b_branch2b')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn5b_branch2b')
.conv(1, 1, 2048, 1, 1, biased=False, relu=False, name='res5b_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn5b_branch2c'))

(self.feed('res5a_relu',
'bn5b_branch2c')
.add(name='res5b')
.relu(name='res5b_relu')
.conv(1, 1, 512, 1, 1, biased=False, relu=False, name='res5c_branch2a')
.batch_normalization(is_training=is_training, activation_fn=tf.nn.relu, name='bn5c_branch2a')
.atrous_conv(3, 3, 512, 4, padding='SAME', biased=False, relu=False, name='res5c_branch2b')
.batch_normalization(activation_fn=tf.nn.relu, name='bn5c_branch2b', is_training=is_training)
.conv(1, 1, 2048, 1, 1, biased=False, relu=False, name='res5c_branch2c')
.batch_normalization(is_training=is_training, activation_fn=None, name='bn5c_branch2c'))

(self.feed('res5b_relu',
'bn5c_branch2c')
.add(name='res5c')
.relu(name='res5c_relu')
.atrous_conv(3, 3, num_classes, 6, padding='SAME', relu=False, name='fc1_voc12_c0'))


A schematic of the code above:



To summarize, the input res3b3_relu is (128, 128, 512).

A channel-increasing residual module with atrous convolution (r = 2) gives res4a_relu of shape (128, 128, 1024).

Then 22 residual modules with atrous convolution (r = 2) give res4b22_relu of shape (128, 128, 1024).

Next, a channel-increasing residual module with atrous convolution (r = 4) gives res5a_relu of shape (128, 128, 2048).

It is followed by two residual modules with atrous convolution (r = 4), giving res5c_relu of shape (128, 128, 2048).

This is the latter part of the ResNet variant. At res5c_relu the output stride of the feature map is still 1024/128 = 8, but thanks to the atrous convolutions the receptive field is much larger. This concludes the ResNet part; next comes the ASPP module.

The ASPP module

DeepLabv2's ASPP is very similar to the SPP module: atrous convolutions with different rates are applied to the same input feature map and the results are fused.

(self.feed('res5b_relu',
'bn5c_branch2c')
.add(name='res5c')
.relu(name='res5c_relu')
.atrous_conv(3, 3, num_classes, 6, padding='SAME', relu=False, name='fc1_voc12_c0'))

(self.feed('res5c_relu')
.atrous_conv(3, 3, num_classes, 12, padding='SAME', relu=False, name='fc1_voc12_c1'))

(self.feed('res5c_relu')
.atrous_conv(3, 3, num_classes, 18, padding='SAME', relu=False, name='fc1_voc12_c2'))

(self.feed('res5c_relu')
.atrous_conv(3, 3, num_classes, 24, padding='SAME', relu=False, name='fc1_voc12_c3'))

(self.feed('fc1_voc12_c0',
'fc1_voc12_c1',
'fc1_voc12_c2',
'fc1_voc12_c3')
.add(name='fc1_voc12'))


A schematic of the code above:



To summarize, the input res5c_relu is (128, 128, 2048).

It is passed through parallel atrous convolution layers with kernels of shape (3, 3, num_classes), where num_classes = 21.

The four parallel atrous convolutions use rates r = 6, 12, 18, 24.

Their outputs are added element-wise to give the final output fc1_voc12 of shape (128, 128, 21).

That completes the ASPP module.
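To turn the fused score map into a full-resolution prediction (done in the repo's inference script, not in the model definition above), the logits are upsampled 8× by bilinear interpolation back to the input size and the highest-scoring class is taken per pixel; the shapes here are the illustrative ones used throughout this walkthrough.

import tensorflow as tf

logits = tf.placeholder(tf.float32, [1, 128, 128, 21])        # fc1_voc12 from the ASPP module
full_logits = tf.image.resize_bilinear(logits, [1024, 1024])  # 8x bilinear upsampling to input size
prediction = tf.argmax(full_logits, axis=3)                   # (1, 1024, 1024) per-pixel label map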

Overall, the DCNN part of DeepLabv2 is fairly easy to follow (at least it looks much simpler than the earlier ICNet). As for the CRF part, TensorFlow 1.4 provides a CRF in contrib (note that it is a linear-chain CRF rather than the dense CRF used in the paper); interested readers can experiment with it.