
Water Meter Training

2015-10-14 10:30
http://caffe.berkeleyvision.org/gathered/examples/mnist.html

Problem: the probabilities LeNet outputs always contain a 1, i.e. the softmax saturates and puts essentially all of the mass on a single class.

Attempted fix: take the layer just before the softmax and normalize it to 0–1. It seems this still does not solve the problem.

In fact we need to solve two problems:

A. Outputting a probability.

B. Dropping the scanned-out crops that are clearly not digits, so they are not displayed.

Using the layer before the softmax does solve the probability-output problem, but because every input image produces an output in 0–1, an image that is not a digit will still yield a fairly large value, so these images cannot be thrown away this way.

Ask GQ about this. (Reply: for positive samples the maximum activation can be above 6000, while for negative samples it has so far not exceeded 3000, so for now we apply a threshold and drop the small ones. Admittedly this is not a good way to handle it.)
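Below is a minimal pycaffe sketch of this stopgap: it reads the pre-softmax blob ip2, rejects crops whose maximum activation falls below the threshold from the reply above, and min-max normalizes the rest to 0–1. The model file names, the preprocessing, and the default threshold value are assumptions, not part of the original setup.

import numpy as np
import caffe

# Placeholder paths for the trained water-meter LeNet (assumption).
net = caffe.Net('lenet_deploy.prototxt', 'lenet_iter_10000.caffemodel', caffe.TEST)

def score_crop(img28, threshold=3000.0):
    # img28: 28x28 float array, preprocessed the same way as in training (assumption).
    net.blobs['data'].data[...] = img28.reshape(1, 1, 28, 28)
    net.forward()
    act = net.blobs['ip2'].data[0].copy()  # pre-softmax activations, shape (20,)
    if act.max() < threshold:
        return None  # reject: probably not a digit crop
    # Min-max normalize to 0-1 as a crude per-class confidence.
    return (act - act.min()) / (act.max() - act.min() + 1e-12)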

We mainly trained three networks, all with outputs in the 0–1 range:

1. The mnist example under examples/ in the latest caffe-master; its LeNet prototxt is as follows.

name: "LeNet"
input: "data"
input_shape {
  dim: 1
  dim: 1
  dim: 28
  dim: 28
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 20
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip2"
  top: "prob"
}
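For reference, the spatial sizes through this network follow (W - K)/S + 1: 28×28 → conv1 (5×5) → 24×24 → pool1 (2×2, stride 2) → 12×12 → conv2 (5×5) → 8×8 → pool2 → 4×4, so ip1 sees 50 × 4 × 4 = 800 inputs. Note that ip2 has num_output: 20 here rather than the stock mnist example's 10, i.e. this network distinguishes 20 classes.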


2. An mnist configuration file found online; the input image size is 32×32.



name: "LeNet"
input: "data"
input_dim: 1
input_dim: 1
input_dim: 32
input_dim: 32
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 6
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 16
    kernel_size: 10
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "conv2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 120
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 84
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "ip2"
  top: "ip2"
}
layer {
  name: "ip3"
  type: "InnerProduct"
  bottom: "ip2"
  top: "ip3"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 20
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip3"
  top: "prob"
}
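The same arithmetic for this network: 32×32 → conv1 (5×5) → 28×28 → pool1 (2×2, stride 2) → 14×14 → conv2 (10×10, with no second pooling layer) → 5×5, so ip1 sees 16 × 5 × 5 = 400 inputs before the 120 → 84 → 20 fully connected stack. Apart from the 10×10 conv2 kernel standing in for LeNet-5's conv + pool pair, the filter counts (6, 16) and the 120/84 layers follow the classic LeNet-5 layout.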


3. The mnist network with mean subtraction added, which we call lenet2.
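A sketch of what using the mean typically looks like on the deployment side with pycaffe; the binaryproto file would come from Caffe's compute_image_mean tool, and its path here is a placeholder.

import numpy as np
import caffe
from caffe.proto import caffe_pb2

# Load the mean image produced by compute_image_mean (path is a placeholder).
blob = caffe_pb2.BlobProto()
with open('mean.binaryproto', 'rb') as f:
    blob.ParseFromString(f.read())
mean = caffe.io.blobproto_to_array(blob)[0]  # (1, 28, 28) for grayscale mnist

def preprocess(img28):
    # Subtract the training mean; assumes the same raw 0-255 scale as training.
    return img28.astype(np.float32) - mean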

Points to note:

A. When extracting features with MATLAB, we have to make sure the input_dim numbers in the prototxt match the network's input size, e.g. for the 32×32 network:

input_dim: 1
input_dim: 1
input_dim: 32
input_dim: 32
B. When extracting features with MATLAB, to get the features of a particular layer, delete all the layers after it from the prototxt. For example, if we want the features of the layer before the softmax, we delete the softmax layer entirely (see the sketch after this note).
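As an aside, with the Python interface this truncation is not necessary, since every intermediate blob stays accessible after a forward pass. A minimal sketch against the 32×32 network, with placeholder file names:

import numpy as np
import caffe

net = caffe.Net('lenet_deploy.prototxt', 'lenet_iter_10000.caffemodel', caffe.TEST)

# Equivalent of note A: set the input shape programmatically instead of
# editing input_dim in the prototxt by hand.
net.blobs['data'].reshape(1, 1, 32, 32)
net.reshape()

img = np.zeros((32, 32), dtype=np.float32)  # stand-in for a real preprocessed crop
net.blobs['data'].data[...] = img.reshape(1, 1, 32, 32)
net.forward()
features = net.blobs['ip3'].data[0].copy()  # pre-softmax features, no layer deletion needed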