
Study Notes 04: Predicting on the Test Dataset, LeNet…

2017-04-01 19:05
(1) Use the trained LeNet-5 model to make predictions on the test data. In Git Bash, run:

./Build/x64/Release/caffe.exe test -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel -iterations 100

(This is our first use of the caffe tool's test command.) It performs prediction only (the forward pass) and does no parameter updates (no backward pass).
-model specifies the model definition file, analyzed in detail in the previous post;
-weights specifies the pre-trained weight file;
-iterations specifies the number of test iterations.
The resulting output log is in Appendix I at the end of this post. Compared with the earlier training log, this test log looks like the latter half of a train log, but with the training replaced by testing: gone are the lr = ..., loss = ... lines; instead every batch reports lines like accuracy = 0.99, loss = 0.0303407.
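The overall accuracy = 0.9914 and loss = 0.0299083 printed at the very end of the log are simply the averages of the 100 per-batch values (100 test images per batch, 10,000 in total). A toy sketch of that averaging, with made-up batch accuracies:

```python
# caffe test averages the per-batch metrics over all iterations;
# these five batch accuracies are hypothetical stand-ins.
batch_accuracies = [0.99, 1.0, 0.99, 0.99, 0.98]

overall = sum(batch_accuracies) / len(batch_accuracies)
print(round(overall, 4))  # 0.99
```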
(2) A quick recap of what we have done so far:
1. Downloaded four files: train-images, train-labels, t10k-images, t10k-labels — the training dataset, training labels, test dataset, and test labels, respectively.
2. Ran the script create_mnist.sh to convert those four files into LMDB format, which the caffe tool can read.
3. Used the caffe tool's train command to train on the dataset.
(3) An attempt to convert my own handwritten image to lmdb or leveldb. (This attempt failed; you can skip straight to step (5).) First, of course, draw a handwritten digit in a paint program. To be safe, I set the image size to 28*28 and saved it as jpg. (This image actually has a problem: the MNIST training images are white digits on a black background.)



Then some research (http://www.cnblogs.com/denny402/p/5082341.html) on how to convert. Caffe provides a file for this: convert_imageset.cpp, located in D:\Caffe\caffe-master\tools; the built binary convert_imageset.exe is in D:\Caffe\caffe-master\Build\x64\Release. Its job is to convert image files into a db file that the Caffe framework can use directly. Its usage format is as follows:



A closer look at the FLAGS options:
-gray: whether to open images as grayscale. The program uses OpenCV's imread() to open images; default false.
-shuffle: whether to randomly shuffle image order; default false.
-backend: the db format to produce, leveldb or lmdb; default lmdb.
-resize_width/-resize_height: resize the images. At run time all images must have the same size, so resizing may be needed. The program uses OpenCV's resize() to scale images; default 0 (no resizing).
-check_size: check that all data have the same size; default false (no check).
-encoded: whether to store the encoded original image in the output; default false.
-encode_type: used with the previous flag; which format to encode images as: 'png', 'jpg', ...
Next, prepare the required files. In D:\Caffe\caffe-master\examples\mnist\my_test I created two files: one is the image drawn earlier (which, as noted, has a problem; more on that later), the other is the file list required by the LISTFILE argument:
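For reference, the LISTFILE is just a plain-text list with one `relative_image_path label` pair per line. A minimal sketch of generating it, where the filenames and the convention that the filename encodes the digit label are my own assumptions, not from the original post:

```python
import os

# Hypothetical images named after their digit label, e.g. "5.jpg" holds a 5.
images = ["5.jpg", "6.jpg", "9.jpg"]

lines = []
for name in images:
    label = os.path.splitext(name)[0]  # assumed: label is the filename stem
    lines.append(f"{name} {label}")

listfile = "\n".join(lines)
print(listfile)
# 5.jpg 5
# 6.jpg 6
# 9.jpg 9
```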





Finally, run the command. Open Git Bash, cd to /d/Caffe/caffe-master, and execute:

./Build/x64/Release/convert_imageset.exe --resize_height=28 --resize_width=28 examples/mnist/my_test/ examples/mnist/my_test/test.txt examples/mnist/my_test_lmdb


As before, a my_test_lmdb folder was created automatically under D:\Caffe\caffe-master\examples\mnist\, containing two files:


These should be the lmdb files we need.
(4) Modify lenet_train_test.prototxt to use this dataset. To be safe, back the file up first, then change its data path.

Then run:

$ ./Build/x64/Release/caffe.exe test -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel -iterations 100

Of course, things never go that smoothly; an error appeared:

Cannot copy param 0 weights from layer 'conv1'; shape mismatch. Source param shape is 20 1 5 5 (500); target param shape is 20 3 5 5 (1500). To learn this layer's parameters from scratch rather than copying from a saved net, rename the layer.




More research (http://blog.csdn.net/u010417185/article/details/52649178): "conv1"; shape mismatch already states the cause clearly — the original shapes differ — and pinpoints the conv1 layer, so I searched directly for the conv1 shape. In that author's case, some images in the training set had one channel and some had three, so three channels were used by default, while the shape in the model definition file declared one channel, hence the error. As I understand it: the MNIST images are black and white, one channel per pixel, while my image, though also black and white, is actually in RGB format. There are many ways to convert it — MATLAB can do the job, for instance — but after converting and repeating the steps above, the problem remained.
(5) Second attempt. This time I studied the material properly first:
http://blog.csdn.net/zb1165048017/article/details/52217772
http://www.cnblogs.com/yixuan-xu/p/5862657.html
These use Caffe's classification.exe tool. (If that file is missing from the build directory, see my Study Notes 02 for the batch-build procedure.) classification.cpp takes 5 arguments:
1. the LeNet model definition file;
2. the trained model weights;
3. the mean file;
4. the label file;
5. the image to test.
Besides the image drawn earlier (now converted to a single channel), two more files need to be prepared: the mean file and the label file.
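As a MATLAB-free alternative, collapsing RGB to one channel and inverting it (MNIST digits are white on black) is simple arithmetic. This sketch uses the standard BT.601 luminance weights on a toy 2x2 "image"; the pixel values are invented for illustration:

```python
# Toy 2x2 RGB image: white background with black digit-stroke pixels.
rgb = [
    [(255, 255, 255), (0, 0, 0)],
    [(0, 0, 0), (255, 255, 255)],
]

def to_mnist_style(pixels):
    """Grayscale via BT.601 luminance, then invert so the digit is white on black."""
    out = []
    for row in pixels:
        gray_row = []
        for r, g, b in row:
            gray = round(0.299 * r + 0.587 * g + 0.114 * b)  # single channel
            gray_row.append(255 - gray)                       # invert for MNIST style
        out.append(gray_row)
    return out

print(to_mnist_style(rgb))  # [[0, 255], [255, 0]]
```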
The label file is easy: create a new txt file and enter the following:

The mean file mean.binaryproto is produced with compute_image_mean.exe from the build. Open Git Bash, cd to the caffe folder, and run:

$ ./Build/x64/Release/compute_image_mean.exe examples/mnist/mnist_train_lmdb mean.binaryproto --backend=lmdb

This produces a mean file in the caffe root directory.
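compute_image_mean does nothing more than average each pixel position across every image in the LMDB and serialize the result as a binaryproto. The arithmetic, shown on two made-up 2x2 single-channel images:

```python
# Two hypothetical 2x2 grayscale images; the mean image averages per pixel.
images = [
    [[0, 100], [200, 50]],
    [[100, 100], [0, 150]],
]

h, w = len(images[0]), len(images[0][0])
mean = [[sum(img[i][j] for img in images) / len(images) for j in range(w)]
        for i in range(h)]
print(mean)  # [[50.0, 100.0], [100.0, 100.0]]
```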


Now all the files are ready. For convenience, I put everything in one directory:


In Git Bash, run:

$ ./Build/x64/Release/classification.exe examples/mnist/lenet.prototxt examples/mnist/lenet_iter_10000.caffemodel examples/mnist/my_test/mean.binaryproto examples/mnist/my_test/test.txt examples/mnist/my_test/5.bmp

The output:
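Under the hood, classification.exe turns the 10 raw ip2 scores into class probabilities with a softmax and reports the highest-scoring labels. A sketch of that step; the score vector below is invented for illustration:

```python
import math

# Hypothetical raw ip2 outputs, one score per digit 0-9.
scores = [1.2, 0.3, 0.1, 0.4, 0.2, 8.5, 0.6, 0.0, 0.9, 0.5]

# Numerically stable softmax: shift by the max before exponentiating.
m = max(scores)
exps = [math.exp(s - m) for s in scores]
total = sum(exps)
probs = [e / total for e in exps]

best = max(range(len(probs)), key=lambda i: probs[i])
print(best)  # index of the most probable digit: 5
```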


Careful readers may have spotted the catch in the screenshot above: two digits were misclassified — 6 was recognized as 5, and 9 as 4. That's pretty bad. Trying every digit in turn, three were wrong: 0 recognized as 6, 6 as 5, and 9 as 4, consistently every time. A model this classic can't have such a problem; I must have overlooked something. I noticed that the two posts cited above add one line to the lenet_train_test.prototxt file:


I had not done that step. So I added the line, retrained (Notes 03), and tested again. This time the results were better: 0 and 9 were recognized correctly, but 6 was still missed, coming out as 5. Frustrating. Why? I searched for the original MNIST training set online and downloaded it for a look:

I picked a few "6" samples from the set to replace my own ten — no problem at all, they were recognized correctly. So why do the ones I draw myself fail? After much trial and error, the following 6s I drew were finally recognized correctly:

It seems 6 just looks too much like 5 and 8: leave the lower loop open and it reads as 5, close up the upper part and it reads as 8, which drags the recognition rate down. In short, the loop of the 6 must not be too large. Clearly this model, trained for only 10,000 iterations, still has its limits.
(6) A few more posts on recognizing one's own handwritten digits:
A post with a visual interface built with opencv + caffe + vs2013:
http://blog.csdn.net/qq_14845119/article/details/54910358
Converting the raw MNIST data to images:
http://blog.csdn.net/qq_14845119/article/details/54895200
Converting the MNIST dataset to images:
http://blog.csdn.net/u012507022/article/details/51376626
Appendix I:
$ ./Build/x64/Release/caffe.exe test -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel -iterations 100
I0330 20:28:10.143836  9924 caffe.cpp:280] Use CPU.
I0330 20:28:10.148835  9924 net.cpp:332] The
NetState phase (1) differed from the phase (0) specified by a rule
in layer mnist
I0330 20:28:10.148835  9924 net.cpp:58]
Initializing net from parameters:
name: "LeNet"
state {
  phase: TEST
  level: 0
  stage: ""
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source:
"examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type:
"xavier"
    }
    bias_filler {
      type:
"constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type:
"xavier"
    }
    bias_filler {
      type:
"constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type:
"xavier"
    }
    bias_filler {
      type:
"constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type:
"xavier"
    }
    bias_filler {
      type:
"constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
I0330 20:28:10.149835  9924
layer_factory.hpp:77] Creating layer mnist
I0330 20:28:10.149835  9924 common.cpp:36]
System entropy source not available, using fallback algorithm to
generate seed instead.
I0330 20:28:10.149835  9924 net.cpp:100]
Creating Layer mnist
I0330 20:28:10.149835  9924 net.cpp:418]
mnist -> data
I0330 20:28:10.149835  9924 net.cpp:418]
mnist -> label
I0330 20:28:10.150835  8228 db_lmdb.cpp:40]
Opened lmdb examples/mnist/mnist_test_lmdb
I0330 20:28:10.150835  9924
data_layer.cpp:41] output data size: 100,1,28,28
I0330 20:28:10.151835  9924 net.cpp:150]
Setting up mnist
I0330 20:28:10.151835  9924 net.cpp:157] Top
shape: 100 1 28 28 (78400)
I0330 20:28:10.151835  9924 net.cpp:157] Top
shape: 100 (100)
I0330 20:28:10.151835  9924 net.cpp:165]
Memory required for data: 314000
I0330 20:28:10.151835  9924
layer_factory.hpp:77] Creating layer label_mnist_1_split
I0330 20:28:10.151835  9924 net.cpp:100]
Creating Layer label_mnist_1_split
I0330 20:28:10.151835  9924 net.cpp:444]
label_mnist_1_split <- label
I0330 20:28:10.151835  9924 net.cpp:418]
label_mnist_1_split -> label_mnist_1_split_0
I0330 20:28:10.151835  9924 net.cpp:418]
label_mnist_1_split -> label_mnist_1_split_1
I0330 20:28:10.151835  9924 net.cpp:150]
Setting up label_mnist_1_split
I0330 20:28:10.151835  9924 net.cpp:157] Top
shape: 100 (100)
I0330 20:28:10.151835  9924 net.cpp:157] Top
shape: 100 (100)
I0330 20:28:10.151835  9924 net.cpp:165]
Memory required for data: 314800
I0330 20:28:10.151835  9924
layer_factory.hpp:77] Creating layer conv1
I0330 20:28:10.151835  9924 net.cpp:100]
Creating Layer conv1
I0330 20:28:10.151835  9924 net.cpp:444]
conv1 <- data
I0330 20:28:10.151835  9924 net.cpp:418]
conv1 -> conv1
I0330 20:28:10.151835  9924 net.cpp:150]
Setting up conv1
I0330 20:28:10.151835  9924 net.cpp:157] Top
shape: 100 20 24 24 (1152000)
I0330 20:28:10.151835  9924 net.cpp:165]
Memory required for data: 4922800
I0330 20:28:10.151835  9924
layer_factory.hpp:77] Creating layer pool1
I0330 20:28:10.151835  9924 net.cpp:100]
Creating Layer pool1
I0330 20:28:10.151835  9924 net.cpp:444]
pool1 <- conv1
I0330 20:28:10.152835  9924 net.cpp:418]
pool1 -> pool1
I0330 20:28:10.152835  9924 net.cpp:150]
Setting up pool1
I0330 20:28:10.152835  9924 net.cpp:157] Top
shape: 100 20 12 12 (288000)
I0330 20:28:10.152835  9924 net.cpp:165]
Memory required for data: 6074800
I0330 20:28:10.152835  9924
layer_factory.hpp:77] Creating layer conv2
I0330 20:28:10.152835  9924 net.cpp:100]
Creating Layer conv2
I0330 20:28:10.152835  9924 net.cpp:444]
conv2 <- pool1
I0330 20:28:10.152835  9924 net.cpp:418]
conv2 -> conv2
I0330 20:28:10.152835  9924 net.cpp:150]
Setting up conv2
I0330 20:28:10.152835  9924 net.cpp:157] Top
shape: 100 50 8 8 (320000)
I0330 20:28:10.152835  9924 net.cpp:165]
Memory required for data: 7354800
I0330 20:28:10.152835  9924
layer_factory.hpp:77] Creating layer pool2
I0330 20:28:10.152835  9924 net.cpp:100]
Creating Layer pool2
I0330 20:28:10.152835  9924 net.cpp:444]
pool2 <- conv2
I0330 20:28:10.152835  9924 net.cpp:418]
pool2 -> pool2
I0330 20:28:10.152835  9924 net.cpp:150]
Setting up pool2
I0330 20:28:10.152835  9924 net.cpp:157] Top
shape: 100 50 4 4 (80000)
I0330 20:28:10.152835  9924 net.cpp:165]
Memory required for data: 7674800
I0330 20:28:10.152835  9924
layer_factory.hpp:77] Creating layer ip1
I0330 20:28:10.152835  9924 net.cpp:100]
Creating Layer ip1
I0330 20:28:10.152835  9924 net.cpp:444] ip1
<- pool2
I0330 20:28:10.152835  9924 net.cpp:418] ip1
-> ip1
I0330 20:28:10.160835  9924 net.cpp:150]
Setting up ip1
I0330 20:28:10.161835  9924 net.cpp:157] Top
shape: 100 500 (50000)
I0330 20:28:10.161835  9924 net.cpp:165]
Memory required for data: 7874800
I0330 20:28:10.161835  9924
layer_factory.hpp:77] Creating layer relu1
I0330 20:28:10.161835  9924 net.cpp:100]
Creating Layer relu1
I0330 20:28:10.161835  9924 net.cpp:444]
relu1 <- ip1
I0330 20:28:10.161835  9924 net.cpp:405]
relu1 -> ip1 (in-place)
I0330 20:28:10.161835  9924 net.cpp:150]
Setting up relu1
I0330 20:28:10.161835  9924 net.cpp:157] Top
shape: 100 500 (50000)
I0330 20:28:10.161835  9924 net.cpp:165]
Memory required for data: 8074800
I0330 20:28:10.161835  9924
layer_factory.hpp:77] Creating layer ip2
I0330 20:28:10.161835  9924 net.cpp:100]
Creating Layer ip2
I0330 20:28:10.161835  9924 net.cpp:444] ip2
<- ip1
I0330 20:28:10.161835  9924 net.cpp:418] ip2
-> ip2
I0330 20:28:10.161835  9924 net.cpp:150]
Setting up ip2
I0330 20:28:10.161835  9924 net.cpp:157] Top
shape: 100 10 (1000)
I0330 20:28:10.161835  9924 net.cpp:165]
Memory required for data: 8078800
I0330 20:28:10.161835  9924
layer_factory.hpp:77] Creating layer ip2_ip2_0_split
I0330 20:28:10.161835  9924 net.cpp:100]
Creating Layer ip2_ip2_0_split
I0330 20:28:10.161835  9924 net.cpp:444]
ip2_ip2_0_split <- ip2
I0330 20:28:10.161835  9924 net.cpp:418]
ip2_ip2_0_split -> ip2_ip2_0_split_0
I0330 20:28:10.161835  9924 net.cpp:418]
ip2_ip2_0_split -> ip2_ip2_0_split_1
I0330 20:28:10.161835  9924 net.cpp:150]
Setting up ip2_ip2_0_split
I0330 20:28:10.161835  9924 net.cpp:157] Top
shape: 100 10 (1000)
I0330 20:28:10.161835  9924 net.cpp:157] Top
shape: 100 10 (1000)
I0330 20:28:10.161835  9924 net.cpp:165]
Memory required for data: 8086800
I0330 20:28:10.161835  9924
layer_factory.hpp:77] Creating layer accuracy
I0330 20:28:10.161835  9924 net.cpp:100]
Creating Layer accuracy
I0330 20:28:10.161835  9924 net.cpp:444]
accuracy <- ip2_ip2_0_split_0
I0330 20:28:10.161835  9924 net.cpp:444]
accuracy <- label_mnist_1_split_0
I0330 20:28:10.161835  9924 net.cpp:418]
accuracy -> accuracy
I0330 20:28:10.161835  9924 net.cpp:150]
Setting up accuracy
I0330 20:28:10.161835  9924 net.cpp:157] Top
shape: (1)
I0330 20:28:10.161835  9924 net.cpp:165]
Memory required for data: 8086804
I0330 20:28:10.161835  9924
layer_factory.hpp:77] Creating layer loss
I0330 20:28:10.161835  9924 net.cpp:100]
Creating Layer loss
I0330 20:28:10.162835  9924 net.cpp:444] loss
<- ip2_ip2_0_split_1
I0330 20:28:10.162835  9924 net.cpp:444] loss
<- label_mnist_1_split_1
I0330 20:28:10.162835  9924 net.cpp:418] loss
-> loss
I0330 20:28:10.162835  9924
layer_factory.hpp:77] Creating layer loss
I0330 20:28:10.162835  9924 net.cpp:150]
Setting up loss
I0330 20:28:10.162835  9924 net.cpp:157] Top
shape: (1)
I0330 20:28:10.162835  9924 net.cpp:160]
    with loss weight 1
I0330 20:28:10.162835  9924 net.cpp:165]
Memory required for data: 8086808
I0330 20:28:10.162835  9924 net.cpp:226] loss
needs backward computation.
I0330 20:28:10.162835  9924 net.cpp:228]
accuracy does not need backward computation.
I0330 20:28:10.162835  9924 net.cpp:226]
ip2_ip2_0_split needs backward computation.
I0330 20:28:10.162835  9924 net.cpp:226] ip2
needs backward computation.
I0330 20:28:10.162835  9924 net.cpp:226]
relu1 needs backward computation.
I0330 20:28:10.162835  9924 net.cpp:226] ip1
needs backward computation.
I0330 20:28:10.162835  9924 net.cpp:226]
pool2 needs backward computation.
I0330 20:28:10.162835  9924 net.cpp:226]
conv2 needs backward computation.
I0330 20:28:10.162835  9924 net.cpp:226]
pool1 needs backward computation.
I0330 20:28:10.162835  9924 net.cpp:226]
conv1 needs backward computation.
I0330 20:28:10.162835  9924 net.cpp:228]
label_mnist_1_split does not need backward computation.
I0330 20:28:10.162835  9924 net.cpp:228]
mnist does not need backward computation.
I0330 20:28:10.162835  9924 net.cpp:270] This
network produces output accuracy
I0330 20:28:10.162835  9924 net.cpp:270] This
network produces output loss
I0330 20:28:10.162835  9924 net.cpp:283]
Network initialization done.
I0330 20:28:10.241835  9924 net.cpp:774]
Copying source layer mnist
I0330 20:28:10.241835  9924 net.cpp:774]
Copying source layer conv1
I0330 20:28:10.241835  9924 net.cpp:774]
Copying source layer pool1
I0330 20:28:10.241835  9924 net.cpp:774]
Copying source layer conv2
I0330 20:28:10.241835  9924 net.cpp:774]
Copying source layer pool2
I0330 20:28:10.241835  9924 net.cpp:774]
Copying source layer ip1
I0330 20:28:10.242835  9924 net.cpp:774]
Copying source layer relu1
I0330 20:28:10.242835  9924 net.cpp:774]
Copying source layer ip2
I0330 20:28:10.242835  9924 net.cpp:774]
Copying source layer loss
I0330 20:28:10.242835  9924 caffe.cpp:286]
Running for 100 iterations.
I0330 20:28:10.288836  9924 caffe.cpp:309]
Batch 0, accuracy = 0.99
I0330 20:28:10.288836  9924 caffe.cpp:309]
Batch 0, loss = 0.014888
I0330 20:28:10.365835  9924 caffe.cpp:309]
Batch 1, accuracy = 1
I0330 20:28:10.365835  9924 caffe.cpp:309]
Batch 1, loss = 0.00373568
I0330 20:28:10.407835  9924 caffe.cpp:309]
Batch 2, accuracy = 0.99
I0330 20:28:10.407835  9924 caffe.cpp:309]
Batch 2, loss = 0.0172707
I0330 20:28:10.451835  9924 caffe.cpp:309]
Batch 3, accuracy = 0.99
I0330 20:28:10.451835  9924 caffe.cpp:309]
Batch 3, loss = 0.0245162
I0330 20:28:10.492835  9924 caffe.cpp:309]
Batch 4, accuracy = 0.98
I0330 20:28:10.492835  9924 caffe.cpp:309]
Batch 4, loss = 0.0515675
I0330 20:28:10.534835  9924 caffe.cpp:309]
Batch 5, accuracy = 0.99
I0330 20:28:10.534835  9924 caffe.cpp:309]
Batch 5, loss = 0.0303407
I0330 20:28:10.575835  9924 caffe.cpp:309]
Batch 6, accuracy = 0.98
I0330 20:28:10.575835  9924 caffe.cpp:309]
Batch 6, loss = 0.0637734
I0330 20:28:10.617835  9924 caffe.cpp:309]
Batch 7, accuracy = 0.99
I0330 20:28:10.617835  9924 caffe.cpp:309]
Batch 7, loss = 0.0252728
I0330 20:28:10.662835  9924 caffe.cpp:309]
Batch 8, accuracy = 1
I0330 20:28:10.663836  9924 caffe.cpp:309]
Batch 8, loss = 0.00653941
I0330 20:28:10.707835  9924 caffe.cpp:309]
Batch 9, accuracy = 0.99
I0330 20:28:10.707835  9924 caffe.cpp:309]
Batch 9, loss = 0.0362632
I0330 20:28:10.745836  9924 caffe.cpp:309]
Batch 10, accuracy = 0.98
I0330 20:28:10.745836  9924 caffe.cpp:309]
Batch 10, loss = 0.0587546
I0330 20:28:10.791836  9924 caffe.cpp:309]
Batch 11, accuracy = 0.98
I0330 20:28:10.791836  9924 caffe.cpp:309]
Batch 11, loss = 0.0552613
I0330 20:28:10.833835  9924 caffe.cpp:309]
Batch 12, accuracy = 0.97
I0330 20:28:10.833835  9924 caffe.cpp:309]
Batch 12, loss = 0.146759
I0330 20:28:10.885835  9924 caffe.cpp:309]
Batch 13, accuracy = 0.98
I0330 20:28:10.885835  9924 caffe.cpp:309]
Batch 13, loss = 0.065812
I0330 20:28:10.921835  9924 caffe.cpp:309]
Batch 14, accuracy = 0.99
I0330 20:28:10.921835  9924 caffe.cpp:309]
Batch 14, loss = 0.022056
I0330 20:28:10.953835  9924 caffe.cpp:309]
Batch 15, accuracy = 0.98
I0330 20:28:10.953835  9924 caffe.cpp:309]
Batch 15, loss = 0.053514
I0330 20:28:10.987835  9924 caffe.cpp:309]
Batch 16, accuracy = 0.98
I0330 20:28:10.987835  9924 caffe.cpp:309]
Batch 16, loss = 0.0314983
I0330 20:28:11.016835  9924 caffe.cpp:309]
Batch 17, accuracy = 0.99
I0330 20:28:11.016835  9924 caffe.cpp:309]
Batch 17, loss = 0.0257772
I0330 20:28:11.050835  9924 caffe.cpp:309]
Batch 18, accuracy = 1
I0330 20:28:11.050835  9924 caffe.cpp:309]
Batch 18, loss = 0.00717542
I0330 20:28:11.083835  9924 caffe.cpp:309]
Batch 19, accuracy = 0.98
I0330 20:28:11.083835  9924 caffe.cpp:309]
Batch 19, loss = 0.0606562
I0330 20:28:11.113836  9924 caffe.cpp:309]
Batch 20, accuracy = 0.98
I0330 20:28:11.113836  9924 caffe.cpp:309]
Batch 20, loss = 0.0582503
I0330 20:28:11.150835  9924 caffe.cpp:309]
Batch 21, accuracy = 0.97
I0330 20:28:11.150835  9924 caffe.cpp:309]
Batch 21, loss = 0.0547471
I0330 20:28:11.180835  9924 caffe.cpp:309]
Batch 22, accuracy = 0.99
I0330 20:28:11.180835  9924 caffe.cpp:309]
Batch 22, loss = 0.0240171
I0330 20:28:11.217835  9924 caffe.cpp:309]
Batch 23, accuracy = 1
I0330 20:28:11.217835  9924 caffe.cpp:309]
Batch 23, loss = 0.0153001
I0330 20:28:11.251835  9924 caffe.cpp:309]
Batch 24, accuracy = 0.98
I0330 20:28:11.251835  9924 caffe.cpp:309]
Batch 24, loss = 0.047301
I0330 20:28:11.283835  9924 caffe.cpp:309]
Batch 25, accuracy = 0.99
I0330 20:28:11.283835  9924 caffe.cpp:309]
Batch 25, loss = 0.108868
I0330 20:28:11.313835  9924 caffe.cpp:309]
Batch 26, accuracy = 0.99
I0330 20:28:11.313835  9924 caffe.cpp:309]
Batch 26, loss = 0.0981717
I0330 20:28:11.343835  9924 caffe.cpp:309]
Batch 27, accuracy = 1
I0330 20:28:11.343835  9924 caffe.cpp:309]
Batch 27, loss = 0.0144831
I0330 20:28:11.380836  9924 caffe.cpp:309]
Batch 28, accuracy = 0.99
I0330 20:28:11.381835  9924 caffe.cpp:309]
Batch 28, loss = 0.0717378
I0330 20:28:11.411835  9924 caffe.cpp:309]
Batch 29, accuracy = 0.97
I0330 20:28:11.411835  9924 caffe.cpp:309]
Batch 29, loss = 0.102606
I0330 20:28:11.444835  9924 caffe.cpp:309]
Batch 30, accuracy = 1
I0330 20:28:11.444835  9924 caffe.cpp:309]
Batch 30, loss = 0.0186492
I0330 20:28:11.479835  9924 caffe.cpp:309]
Batch 31, accuracy = 1
I0330 20:28:11.479835  9924 caffe.cpp:309]
Batch 31, loss = 0.00344187
I0330 20:28:11.511835  9924 caffe.cpp:309]
Batch 32, accuracy = 1
I0330 20:28:11.511835  9924 caffe.cpp:309]
Batch 32, loss = 0.0121422
I0330 20:28:11.544836  9924 caffe.cpp:309]
Batch 33, accuracy = 1
I0330 20:28:11.544836  9924 caffe.cpp:309]
Batch 33, loss = 0.00813815
I0330 20:28:11.583835  9924 caffe.cpp:309]
Batch 34, accuracy = 0.98
I0330 20:28:11.583835  9924 caffe.cpp:309]
Batch 34, loss = 0.0598602
I0330 20:28:11.614835  9924 caffe.cpp:309]
Batch 35, accuracy = 0.96
I0330 20:28:11.614835  9924 caffe.cpp:309]
Batch 35, loss = 0.157269
I0330 20:28:11.645835  9924 caffe.cpp:309]
Batch 36, accuracy = 1
I0330 20:28:11.645835  9924 caffe.cpp:309]
Batch 36, loss = 0.00249111
I0330 20:28:11.681835  9924 caffe.cpp:309]
Batch 37, accuracy = 0.99
I0330 20:28:11.681835  9924 caffe.cpp:309]
Batch 37, loss = 0.0311585
I0330 20:28:11.710835  9924 caffe.cpp:309]
Batch 38, accuracy = 1
I0330 20:28:11.710835  9924 caffe.cpp:309]
Batch 38, loss = 0.0219262
I0330 20:28:11.751835  9924 caffe.cpp:309]
Batch 39, accuracy = 0.99
I0330 20:28:11.751835  9924 caffe.cpp:309]
Batch 39, loss = 0.0261821
I0330 20:28:11.786835  9924 caffe.cpp:309]
Batch 40, accuracy = 0.99
I0330 20:28:11.786835  9924 caffe.cpp:309]
Batch 40, loss = 0.0460841
I0330 20:28:11.816835  9924 caffe.cpp:309]
Batch 41, accuracy = 0.99
I0330 20:28:11.816835  9924 caffe.cpp:309]
Batch 41, loss = 0.0534093
I0330 20:28:11.855835  9924 caffe.cpp:309]
Batch 42, accuracy = 0.99
I0330 20:28:11.855835  9924 caffe.cpp:309]
Batch 42, loss = 0.0286041
I0330 20:28:11.888835  9924 caffe.cpp:309]
Batch 43, accuracy = 0.99
I0330 20:28:11.888835  9924 caffe.cpp:309]
Batch 43, loss = 0.0217
I0330 20:28:11.917835  9924 caffe.cpp:309]
Batch 44, accuracy = 0.99
I0330 20:28:11.917835  9924 caffe.cpp:309]
Batch 44, loss = 0.0504712
I0330 20:28:11.950835  9924 caffe.cpp:309]
Batch 45, accuracy = 0.98
I0330 20:28:11.950835  9924 caffe.cpp:309]
Batch 45, loss = 0.0415009
I0330 20:28:11.982836  9924 caffe.cpp:309]
Batch 46, accuracy = 1
I0330 20:28:11.982836  9924 caffe.cpp:309]
Batch 46, loss = 0.0114107
I0330 20:28:12.017835  9924 caffe.cpp:309]
Batch 47, accuracy = 0.99
I0330 20:28:12.017835  9924 caffe.cpp:309]
Batch 47, loss = 0.0129762
I0330 20:28:12.054836  9924 caffe.cpp:309]
Batch 48, accuracy = 0.95
I0330 20:28:12.054836  9924 caffe.cpp:309]
Batch 48, loss = 0.0870433
I0330 20:28:12.090836  9924 caffe.cpp:309]
Batch 49, accuracy = 1
I0330 20:28:12.090836  9924 caffe.cpp:309]
Batch 49, loss = 0.00386103
I0330 20:28:12.120836  9924 caffe.cpp:309]
Batch 50, accuracy = 1
I0330 20:28:12.120836  9924 caffe.cpp:309]
Batch 50, loss = 0.000257335
I0330 20:28:12.151835  9924 caffe.cpp:309]
Batch 51, accuracy = 0.99
I0330 20:28:12.151835  9924 caffe.cpp:309]
Batch 51, loss = 0.0129624
I0330 20:28:12.182835  9924 caffe.cpp:309]
Batch 52, accuracy = 1
I0330 20:28:12.183835  9924 caffe.cpp:309]
Batch 52, loss = 0.00886623
I0330 20:28:12.220835  9924 caffe.cpp:309]
Batch 53, accuracy = 1
I0330 20:28:12.220835  9924 caffe.cpp:309]
Batch 53, loss = 0.00294416
I0330 20:28:12.251835  9924 caffe.cpp:309]
Batch 54, accuracy = 1
I0330 20:28:12.251835  9924 caffe.cpp:309]
Batch 54, loss = 0.00508683
I0330 20:28:12.284835  9924 caffe.cpp:309]
Batch 55, accuracy = 1
I0330 20:28:12.284835  9924 caffe.cpp:309]
Batch 55, loss = 0.000315504
I0330 20:28:12.317836  9924 caffe.cpp:309]
Batch 56, accuracy = 1
I0330 20:28:12.317836  9924 caffe.cpp:309]
Batch 56, loss = 0.0103186
I0330 20:28:12.351835  9924 caffe.cpp:309]
Batch 57, accuracy = 1
I0330 20:28:12.351835  9924 caffe.cpp:309]
Batch 57, loss = 0.00344411
I0330 20:28:12.383836  9924 caffe.cpp:309]
Batch 58, accuracy = 1
I0330 20:28:12.383836  9924 caffe.cpp:309]
Batch 58, loss = 0.00490505
I0330 20:28:12.414835  9924 caffe.cpp:309]
Batch 59, accuracy = 0.96
I0330 20:28:12.414835  9924 caffe.cpp:309]
Batch 59, loss = 0.142526
I0330 20:28:12.449836  9924 caffe.cpp:309]
Batch 60, accuracy = 1
I0330 20:28:12.449836  9924 caffe.cpp:309]
Batch 60, loss = 0.00786717
I0330 20:28:12.482836  9924 caffe.cpp:309]
Batch 61, accuracy = 1
I0330 20:28:12.482836  9924 caffe.cpp:309]
Batch 61, loss = 0.00399454
I0330 20:28:12.512835  9924 caffe.cpp:309]
Batch 62, accuracy = 1
I0330 20:28:12.512835  9924 caffe.cpp:309]
Batch 62, loss = 4.03852e-005
I0330 20:28:12.546835  9924 caffe.cpp:309]
Batch 63, accuracy = 1
I0330 20:28:12.546835  9924 caffe.cpp:309]
Batch 63, loss = 0.000279823
I0330 20:28:12.579835  9924 caffe.cpp:309]
Batch 64, accuracy = 1
I0330 20:28:12.579835  9924 caffe.cpp:309]
Batch 64, loss = 0.000175293
I0330 20:28:12.612835  9924 caffe.cpp:309]
Batch 65, accuracy = 0.94
I0330 20:28:12.612835  9924 caffe.cpp:309]
Batch 65, loss = 0.17379
I0330 20:28:12.649835  9924 caffe.cpp:309]
Batch 66, accuracy = 0.98
I0330 20:28:12.649835  9924 caffe.cpp:309]
Batch 66, loss = 0.0773466
I0330 20:28:12.682835  9924 caffe.cpp:309]
Batch 67, accuracy = 0.99
I0330 20:28:12.682835  9924 caffe.cpp:309]
Batch 67, loss = 0.0225831
I0330 20:28:12.713835  9924 caffe.cpp:309]
Batch 68, accuracy = 1
I0330 20:28:12.713835  9924 caffe.cpp:309]
Batch 68, loss = 0.00724088
I0330 20:28:12.745836  9924 caffe.cpp:309]
Batch 69, accuracy = 1
I0330 20:28:12.745836  9924 caffe.cpp:309]
Batch 69, loss = 0.00234018
I0330 20:28:12.780835  9924 caffe.cpp:309]
Batch 70, accuracy = 1
I0330 20:28:12.780835  9924 caffe.cpp:309]
Batch 70, loss = 0.000897924
I0330 20:28:12.813835  9924 caffe.cpp:309]
Batch 71, accuracy = 1
I0330 20:28:12.813835  9924 caffe.cpp:309]
Batch 71, loss = 0.00025317
I0330 20:28:12.853835  9924 caffe.cpp:309]
Batch 72, accuracy = 1
I0330 20:28:12.853835  9924 caffe.cpp:309]
Batch 72, loss = 0.00741225
I0330 20:28:12.898835  9924 caffe.cpp:309]
Batch 73, accuracy = 1
I0330 20:28:12.898835  9924 caffe.cpp:309]
Batch 73, loss = 0.000170891
I0330 20:28:12.932835  9924 caffe.cpp:309]
Batch 74, accuracy = 1
I0330 20:28:12.932835  9924 caffe.cpp:309]
Batch 74, loss = 0.00269235
I0330 20:28:12.962836  9924 caffe.cpp:309]
Batch 75, accuracy = 1
I0330 20:28:12.963835  9924 caffe.cpp:309]
Batch 75, loss = 0.00363306
I0330 20:28:12.993835  9924 caffe.cpp:309]
Batch 76, accuracy = 1
I0330 20:28:12.993835  9924 caffe.cpp:309]
Batch 76, loss = 0.000149582
I0330 20:28:13.024835  9924 caffe.cpp:309]
Batch 77, accuracy = 1
I0330 20:28:13.024835  9924 caffe.cpp:309]
Batch 77, loss = 0.000174257
I0330 20:28:13.057835  9924 caffe.cpp:309]
Batch 78, accuracy = 1
I0330 20:28:13.057835  9924 caffe.cpp:309]
Batch 78, loss = 0.000875194
I0330 20:28:13.092835  9924 caffe.cpp:309]
Batch 79, accuracy = 1
I0330 20:28:13.092835  9924 caffe.cpp:309]
Batch 79, loss = 0.00250681
I0330 20:28:13.125835  9924 caffe.cpp:309]
Batch 80, accuracy = 0.99
I0330 20:28:13.125835  9924 caffe.cpp:309]
Batch 80, loss = 0.0296276
I0330 20:28:13.158835  9924 caffe.cpp:309]
Batch 81, accuracy = 1
I0330 20:28:13.158835  9924 caffe.cpp:309]
Batch 81, loss = 0.000951817
I0330 20:28:13.189836  9924 caffe.cpp:309]
Batch 82, accuracy = 1
I0330 20:28:13.189836  9924 caffe.cpp:309]
Batch 82, loss = 0.00533407
I0330 20:28:13.229835  9924 caffe.cpp:309]
Batch 83, accuracy = 1
I0330 20:28:13.229835  9924 caffe.cpp:309]
Batch 83, loss = 0.0167267
I0330 20:28:13.264835  9924 caffe.cpp:309]
Batch 84, accuracy = 0.99
I0330 20:28:13.264835  9924 caffe.cpp:309]
Batch 84, loss = 0.0299575
I0330 20:28:13.295835  9924 caffe.cpp:309]
Batch 85, accuracy = 0.99
I0330 20:28:13.295835  9924 caffe.cpp:309]
Batch 85, loss = 0.030692
I0330 20:28:13.327836  9924 caffe.cpp:309]
Batch 86, accuracy = 1
I0330 20:28:13.327836  9924 caffe.cpp:309]
Batch 86, loss = 0.000126641
I0330 20:28:13.362835  9924 caffe.cpp:309]
Batch 87, accuracy = 1
I0330 20:28:13.362835  9924 caffe.cpp:309]
Batch 87, loss = 8.27255e-005
I0330 20:28:13.403836  9924 caffe.cpp:309]
Batch 88, accuracy = 1
I0330 20:28:13.403836  9924 caffe.cpp:309]
Batch 88, loss = 1.96798e-005
I0330 20:28:13.433835  9924 caffe.cpp:309]
Batch 89, accuracy = 1
I0330 20:28:13.433835  9924 caffe.cpp:309]
Batch 89, loss = 2.51798e-005
I0330 20:28:13.467835  9924 caffe.cpp:309]
Batch 90, accuracy = 0.96
I0330 20:28:13.467835  9924 caffe.cpp:309]
Batch 90, loss = 0.147785
I0330 20:28:13.497835  9924 caffe.cpp:309]
Batch 91, accuracy = 1
I0330 20:28:13.497835  9924 caffe.cpp:309]
Batch 91, loss = 2.32204e-005
I0330 20:28:13.533835  9924 caffe.cpp:309]
Batch 92, accuracy = 1
I0330 20:28:13.533835  9924 caffe.cpp:309]
Batch 92, loss = 0.00022956
I0330 20:28:13.570835  9924 caffe.cpp:309]
Batch 93, accuracy = 1
I0330 20:28:13.570835  9924 caffe.cpp:309]
Batch 93, loss = 0.00210275
I0330 20:28:13.602835  9924 caffe.cpp:309]
Batch 94, accuracy = 1
I0330 20:28:13.602835  9924 caffe.cpp:309]
Batch 94, loss = 0.000285737
I0330 20:28:13.631835  9924 caffe.cpp:309]
Batch 95, accuracy = 1
I0330 20:28:13.631835  9924 caffe.cpp:309]
Batch 95, loss = 0.0026881
I0330 20:28:13.666836  9924 caffe.cpp:309]
Batch 96, accuracy = 0.99
I0330 20:28:13.666836  9924 caffe.cpp:309]
Batch 96, loss = 0.0349408
I0330 20:28:13.697835  9924 caffe.cpp:309]
Batch 97, accuracy = 0.98
I0330 20:28:13.697835  9924 caffe.cpp:309]
Batch 97, loss = 0.0829142
I0330 20:28:13.726835  9924 caffe.cpp:309]
Batch 98, accuracy = 1
I0330 20:28:13.726835  9924 caffe.cpp:309]
Batch 98, loss = 0.00180657
I0330 20:28:13.759835  9924 caffe.cpp:309]
Batch 99, accuracy = 1
I0330 20:28:13.759835  9924 caffe.cpp:309]
Batch 99, loss = 0.0032278
I0330 20:28:13.759835  9924 caffe.cpp:314]
Loss: 0.0299083
I0330 20:28:13.759835  9924 caffe.cpp:326]
accuracy = 0.9914
I0330 20:28:13.759835  9924 caffe.cpp:326]
loss = 0.0299083 (* 1 = 0.0299083 loss)