
Notes on implementing batch normalization (batch-normalization) with Caffe's BatchNorm and Scale layers

2017-12-22 09:41
Implementing batch normalization (batch-normalization) in Caffe takes two layers working together: BatchNorm and Scale.

BatchNorm performs the normalization itself.

Scale performs the subsequent scaling and shifting.

One point to watch when wiring this up: because Scale has to provide the shift, its bias_term must be set to true.
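Written out, the standard batch-normalization transform splits across the two layers as

\hat{x} = \frac{x - \mu}{\sqrt{\sigma^2 + \varepsilon}}  (BatchNorm)

y = \gamma\,\hat{x} + \beta  (Scale)

and the shift term \beta is exactly why bias_term: true is required.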

The other thing to watch is BatchNorm's use_global_stats parameter: set it to false during training and to true during testing.

With use_global_stats = false, the layer computes the mean and variance from the current mini-batch and folds them into its stored moving averages.

With use_global_stats = true, the layer is forced to use the mean and variance stored in the model's BatchNorm parameters.
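For reference, the remaining batch_norm_param fields and their caffe.proto defaults look like this (the values in the comments are the library defaults):

batch_norm_param {
  use_global_stats: false          # the training-time setting from this article
  moving_average_fraction: 0.999   # decay of the moving averages (proto default)
  eps: 1e-5                        # added to the variance for numerical stability (proto default)
}

With λ = moving_average_fraction, each training-time forward pass updates the stored statistics roughly as S ← λS + 1 and μ_acc ← λμ_acc + μ_batch (and likewise for the variance); at test time the layer divides the accumulators by S before using them.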

A concrete training-time definition (conv-batchnorm-scale-relu):

layer {
  bottom: "data"
  top: "conv1_1"
  name: "conv1_1"
  type: "Convolution"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
    num_output: 64
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "bn_conv1_1"
  type: "BatchNorm"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "scale_conv1_1"
  type: "Scale"
  param {
    lr_mult: 0.1
    decay_mult: 0
  }
  param {
    lr_mult: 0.1
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "relu1_1"
  type: "ReLU"
}


The corresponding test-time definition (conv-batchnorm-scale-relu) is identical except that use_global_stats changes from false to true:

layer {
  bottom: "data"
  top: "conv1_1"
  name: "conv1_1"
  type: "Convolution"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
    num_output: 64
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "bn_conv1_1"
  type: "BatchNorm"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "scale_conv1_1"
  type: "Scale"
  param {
    lr_mult: 0.1
    decay_mult: 0
  }
  param {
    lr_mult: 0.1
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "relu1_1"
  type: "ReLU"
}


Actually, none of this duplication is necessary: the BatchNorm source code arranges for use_global_stats, when omitted, to default to false during training and true during testing. From caffe/src/caffe/layers/batch_norm_layer.cpp, line 14:

use_global_stats_ = this->phase_ == TEST;
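The lines immediately after this one (in the same LayerSetUp) let an explicit prototxt value override the phase-based default, which is why spelling out use_global_stats, as in the earlier definitions, also works. As of the BVLC sources they read:

if (param.has_use_global_stats())
  use_global_stats_ = param.use_global_stats();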


So the flag comes out true (1) at test time and false (0) at training time on its own, and we do not need to set use_global_stats in the prototxt at all. The definitions above then simplify to a single version, used unchanged for both training and testing:

layer {
  bottom: "data"
  top: "conv1_1"
  name: "conv1_1"
  type: "Convolution"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
    num_output: 64
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "bn_conv1_1"
  type: "BatchNorm"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
}
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "scale_conv1_1"
  type: "Scale"
  param {
    lr_mult: 0.1
    decay_mult: 0
  }
  param {
    lr_mult: 0.1
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "relu1_1"
  type: "ReLU"
}
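As an aside: if you keep training and testing in a single prototxt (a combined train_val file) and do want phase-specific values after all, Caffe's include rules can express that. A minimal sketch for the BatchNorm layer only, omitting the param blocks shown earlier:

layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "bn_conv1_1"
  type: "BatchNorm"
  batch_norm_param {
    use_global_stats: false
  }
  include {
    phase: TRAIN
  }
}
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "bn_conv1_1"
  type: "BatchNorm"
  batch_norm_param {
    use_global_stats: true
  }
  include {
    phase: TEST
  }
}

Only one of the two copies exists in any instantiated net, and since weights are saved and loaded by layer name, both phases map to the same stored blobs.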


One more observation: every lr_mult in the BatchNorm layer is set to 0. The reason is that the layer's three parameter blobs hold the accumulated mean, the accumulated variance, and the moving-average scale factor; these are statistics maintained by the moving-average update during the forward pass, not weights learned by back-propagation, so lr_mult: 0 (together with decay_mult: 0) keeps the solver from modifying them. The Scale layer's γ and β, by contrast, are genuinely learned, which is why its lr_mult values are nonzero.