Difference between lenet_train_test.prototxt and lenet.prototxt (the deploy file) in Caffe's MNIST example
2017-06-07 18:01
Reference: http://blog.csdn.net/cham_3/article/details/52682479
http://blog.csdn.net/l18930738887/article/details/54898016
I ran the MNIST training and then used the trained model for prediction.
Both files live in examples/mnist: lenet_train_test.prototxt defines the network for the TRAIN and TEST phases, while the deploy file is what the classification program loads.
The broad difference is that the training network definition is more detailed; the deploy file only tells the classification program (mnist_classification.bin) what the network structure looks like, since classification needs no backward pass and no loss computation.
Here are the two files:
cat lenet_train_test.prototxt
name: "LeNet"
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param { scale: 0.00390625 }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TEST }
  transform_param { scale: 0.00390625 }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param { pool: MAX kernel_size: 2 stride: 2 }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param { pool: MAX kernel_size: 2 stride: 2 }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 500
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 10
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include { phase: TEST }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
cat lenet.prototxt
name: "LeNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 64 dim: 1 dim: 28 dim: 28 } }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param { pool: MAX kernel_size: 2 stride: 2 }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param { pool: MAX kernel_size: 2 stride: 2 }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 500
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 10
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip2"
  top: "prob"
}
Differences:
1. First, delete the parts used only in the TEST phase, starting with the test data input layer:
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TEST }
  transform_param { scale: 0.00390625 }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
and the accuracy layer at the end:
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include { phase: TEST }
}
2. Modify the TRAIN data input layer; the deploy file only needs to declare the input dimensions. The original layer
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param { scale: 0.00390625 }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
becomes
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 64 dim: 1 dim: 28 dim: 28 } }
}
In shape: { dim: 64 dim: 1 dim: 28 dim: 28 }, the first dim is the batch size, the second is the number of channels (1 here; an RGB image would have 3), and the third and fourth are the image height and width.
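As a quick illustration of the NCHW convention, here is a minimal plain-Python sketch (not Caffe code) of what the four dims mean for the "data" blob:

```python
# NCHW shape declared by the Input layer in the deploy file:
# (batch, channels, height, width)
shape = (64, 1, 28, 28)

batch, channels, height, width = shape
print(batch)           # images processed per forward pass
print(channels)        # 1 for grayscale MNIST; 3 for RGB
print(height, width)   # spatial size of each image

# total number of float values held by the "data" blob
num_values = batch * channels * height * width
print(num_values)  # → 50176
```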
3. Change the original final loss layer into a prob layer. The training version is
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
A: Replace the SoftmaxWithLoss type with Softmax.
B: Delete the bottom: "label" line, because at test time the label is what we want to predict, not a given input.
C: Rename the layer from "loss" to "prob".
After these changes, the final layer looks like this:
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip2"
  top: "prob"
}
The name: "prob" here is the layer name you will look up when making predictions, so it must match exactly.
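At prediction time, reading the blob named "prob" yields one probability vector per image, and the predicted digit is simply its argmax. A minimal sketch with a made-up probability vector (in pycaffe this vector would come from the blob that the "prob" layer produces, which is why the name must match):

```python
# made-up contents of the "prob" blob for a single MNIST image:
# ten probabilities, one per digit 0-9
prob = [0.01, 0.02, 0.01, 0.05, 0.01, 0.01, 0.02, 0.85, 0.01, 0.01]

# the predicted digit is the index of the largest probability
predicted_digit = max(range(10), key=lambda i: prob[i])
print(predicted_digit)  # → 7
```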