Understanding the Caffe LeNet Network Model
2018-02-02 20:52
A Caffe model is described by two key parameter files: the network definition (*.prototxt) and the solver configuration (*_solver.prototxt).
The layer-by-layer network definition (from examples/mnist/lenet_train_test.prototxt) is annotated below:
# Data layer: training input
layer {
  name: "mnist"
  type: "Data"
  # tops (outputs): the image batch and its labels
  top: "data"
  top: "label"
  # only included in the TRAIN phase
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625  # 1/256, normalizes pixel values to [0, 1)
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"  # data path
    batch_size: 64  # mini-batch size
    backend: LMDB
  }
}
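The `scale` value 0.00390625 is exactly 1/256, so raw 8-bit pixel intensities in [0, 255] are mapped into [0, 1) before reaching the first layer. A quick sanity check in plain Python (no Caffe required):

```python
# transform_param's scale = 0.00390625 is 1/256: it rescales
# 8-bit pixel values into [0, 1).
scale = 0.00390625

assert scale == 1 / 256

# A white pixel (255) becomes just under 1.0; a black pixel stays 0.
print(255 * scale)  # 0.99609375
print(0 * scale)    # 0.0
```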
# Data layer: test input (this is not an output layer; it feeds the TEST phase)
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
# Convolution layer
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1  # learning-rate multiplier for the weights
  }
  param {
    lr_mult: 2  # learning-rate multiplier for the bias, conventionally twice the weights'
  }
  convolution_param {
    num_output: 20  # number of filters (output feature maps)
    kernel_size: 5  # filter size
    stride: 1       # step size
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
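With a 28×28 MNIST input, a 5×5 kernel, and stride 1, conv1 produces 20 feature maps of size 24×24. The standard output-size formula, sketched in plain Python (an illustration, not Caffe code):

```python
def conv_output_size(input_size, kernel_size, stride, pad=0):
    """Spatial output size of a convolution (floor convention)."""
    return (input_size + 2 * pad - kernel_size) // stride + 1

# conv1: 28x28 input, 5x5 kernel, stride 1 -> 24x24 feature maps
print(conv_output_size(28, 5, 1))  # 24
```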
# Pooling layer (2x2 max pooling, stride 2)
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
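This pooling halves each spatial dimension: the 24×24 conv1 maps become 12×12. The same size arithmetic, again as an illustrative sketch:

```python
def pool_output_size(input_size, kernel_size, stride):
    # Caffe pooling actually uses a ceiling convention, but for a
    # 24x24 input with kernel 2 and stride 2 the result is the same.
    return (input_size - kernel_size) // stride + 1

# pool1: 24x24 input -> 12x12 output
print(pool_output_size(24, 2, 2))  # 12
```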
# Fully connected (inner product) layer.
# Note: its bottom is "pool2"; the conv2/pool2 layers are omitted from this excerpt.
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1  # learning-rate multiplier for the weights
  }
  param {
    lr_mult: 2  # learning-rate multiplier for the bias, conventionally twice the weights'
  }
  inner_product_param {
    num_output: 500  # 500 units, as in the standard lenet_train_test.prototxt
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
# ReLU activation: the element-wise nonlinearity max(0, x),
# usually paired with a convolution or fully connected layer.
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"  # same as bottom: applied in place
}
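Because bottom and top are both "ip1", Caffe applies the ReLU in place, saving memory. The function itself is trivial; a minimal Python illustration:

```python
def relu(x):
    """Element-wise ReLU: negative values are clamped to zero."""
    return [max(0.0, v) for v in x]

print(relu([-2.0, -0.5, 0.0, 1.5, 3.0]))  # [0.0, 0.0, 0.0, 1.5, 3.0]
```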
# LeNet's softmax loss layer (ip2, the 10-way classifier, is omitted from this excerpt)
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
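SoftmaxWithLoss fuses a softmax over the class scores with the multinomial logistic (cross-entropy) loss. A hypothetical sketch of the math in plain Python (not Caffe's actual implementation):

```python
import math

def softmax_with_loss(scores, label):
    """Cross-entropy loss of a softmax over raw class scores."""
    # Subtract the max for numerical stability before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -math.log(probs[label])

# A confident, correct prediction yields a small loss.
loss = softmax_with_loss([0.1, 5.0, 0.2], label=1)
print(round(loss, 4))
```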
LeNet's solver configuration file (lenet_solver.prototxt):
# The train/test net protocol buffer definition
net: "examples/mnist/lenet_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and weight decay of the network.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
# The learning rate policy
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 10000
# snapshot intermediate results
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
# solver mode: CPU or GPU
solver_mode: CPU
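Two details of this solver are worth unpacking. First, test_iter × test batch_size = 100 × 100 = 10,000, exactly covering the MNIST test set. Second, the "inv" policy decays the learning rate as base_lr × (1 + gamma × iter)^(−power). A small Python illustration of the schedule:

```python
def inv_lr(iteration, base_lr=0.01, gamma=0.0001, power=0.75):
    """Caffe's "inv" learning-rate policy."""
    return base_lr * (1 + gamma * iteration) ** (-power)

# The rate starts at base_lr and decays smoothly over training.
print(inv_lr(0))      # 0.01
print(inv_lr(10000))  # ~0.0059, i.e. base_lr / 2**0.75
```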