
TensorFlow Learning: Implementing Convolutional Neural Networks (3)

2017-08-06 01:23
This time we use the ImageNet dataset. ImageNet contains 15 million labeled high-resolution images in roughly 22,000 categories, of which about 1 million also carry bounding-box annotations locating the main object in the image.
The dataset for the annual ILSVRC competition is a subset of the full ImageNet data, with about 1.2 million images spanning 1,000 labeled classes. The competition generally uses the top-5 and top-1 classification error rates as the evaluation metrics for model performance.
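As an aside, top-k error simply measures how often the true label fails to appear among the model's k highest-scoring classes. Below is a minimal sketch of computing it in TensorFlow, assuming hypothetical logits and labels tensors (these are illustrative placeholders, not part of the benchmark code later in this post):

import tensorflow as tf

# Hypothetical scores for a batch of 4 examples over 1000 classes, plus true labels.
logits = tf.random_normal([4, 1000])
labels = tf.constant([3, 17, 42, 999])
# tf.nn.in_top_k yields one bool per example: is the true label among the top k scores?
top1 = tf.nn.in_top_k(logits, labels, 1)
top5 = tf.nn.in_top_k(logits, labels, 5)
# Error rate = 1 - accuracy.
top1_error = 1.0 - tf.reduce_mean(tf.cast(top1, tf.float32))
top5_error = 1.0 - tf.reduce_mean(tf.cast(top5, tf.float32))
with tf.Session() as sess:
    print(sess.run([top1_error, top5_error]))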
The deep convolutional network chosen as the model here is AlexNet (see the earlier post for details on the architecture). AlexNet contains 630 million connections, 60 million parameters, and 650,000 neurons. It has 5 convolutional layers, 3 of which are followed by max-pooling layers, plus 3 fully connected layers at the end.
The main new techniques AlexNet introduced:
(1) Successfully used ReLU as the CNN activation function and verified that its performance exceeds Sigmoid in deeper networks, solving the vanishing-gradient problem that Sigmoid suffers from as networks get deep.
(2) Used Dropout during training to randomly ignore a fraction of the neurons, avoiding overfitting.
(3) Used overlapping max pooling in the CNN.
(4) Proposed the LRN (local response normalization) layer, which creates a competition mechanism among the activities of neighboring neurons: relatively large responses become relatively larger while neurons with smaller responses are suppressed, improving the model's generalization.
(5) Used CUDA to accelerate the training of the deep convolutional network, exploiting the GPU's powerful parallel computing capability for the massive matrix operations involved in training.
(6) Data augmentation. Randomly cropping 224x224 regions from the 256x256 source images (plus their horizontal mirror images) multiplies the amount of data by a factor of (256-224)^2 x 2 = 2048, which prevents overfitting and improves generalization. At prediction time, crops are taken at the four corners plus the center (5 positions) and each is also flipped left-right, giving 10 images in total; predictions are made on all 10 and the results averaged. AlexNet also applies PCA to the RGB values of the images and perturbs the principal components with Gaussian noise with a standard deviation of 0.1. (A minimal crop-and-flip sketch follows this list.)
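A minimal sketch of the training-time crop-and-flip augmentation, assuming a hypothetical 256x256 RGB image tensor (illustration only, separate from the benchmark code below):

import tensorflow as tf

# Hypothetical 256x256 RGB source image.
image = tf.random_uniform([256, 256, 3])
# Randomly crop a 224x224 region, then randomly mirror it left-right;
# over many epochs this covers the (256-224)^2 x 2 = 2048 distinct variants.
cropped = tf.random_crop(image, [224, 224, 3])
augmented = tf.image.random_flip_left_right(cropped)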


# Training AlexNet on the full ImageNet dataset would take too long (and the hardware
# here is limited), so we only build the complete AlexNet network and benchmark the
# speed of its per-batch forward and backward computations, using random image data
# to measure the average time of each forward and backward pass.
from datetime import datetime
import math
import time
import tensorflow as tf

batch_size = 32
num_batches = 100

# Helper that displays the structure of each layer: the name and output tensor
# shape of every convolution or pooling layer.
def print_activations(t):
    print(t.op.name, '', t.get_shape().as_list())
# Next, define the AlexNet network structure.
def inference(images):
    parameters = []
    with tf.name_scope('conv1') as scope:
        kernel = tf.Variable(tf.truncated_normal([11, 11, 3, 64],  # 11x11, 3 channels, 64 kernels
                                                 dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(images, kernel, [1, 4, 4, 1], padding='SAME')  # stride 4 both horizontally and vertically
        biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv1 = tf.nn.relu(bias, name=scope)
        print_activations(conv1)
        parameters += [kernel, biases]
    # Local response normalization layer: each activation is divided by
    # (bias + alpha * sum of squares of its depth_radius=4 neighbors) ** beta.
    lrn1 = tf.nn.lrn(conv1, 4, bias=1.0, alpha=0.001 / 9, beta=0.75, name='lrn1')
    # VALID padding keeps the pooling window entirely inside the input, unlike SAME,
    # which can pad points beyond the border: output size = floor((56 - 3) / 2) + 1 = 27.
    pool1 = tf.nn.max_pool(lrn1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                           padding='VALID', name='pool1')
    print_activations(pool1)
    with tf.name_scope('conv2') as scope:
        kernel = tf.Variable(tf.truncated_normal([5, 5, 64, 192],
                                                 dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(pool1, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[192],
                                         dtype=tf.float32), trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv2 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv2)
    lrn2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001 / 9, beta=0.75, name='lrn2')
    pool2 = tf.nn.max_pool(lrn2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                           padding='VALID', name='pool2')
    print_activations(pool2)
    with tf.name_scope('conv3') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 192, 384],
                                                 dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[384],
                                         dtype=tf.float32), trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv3 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv3)
    with tf.name_scope('conv4') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 384, 256],
                                                 dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[256],
                                         dtype=tf.float32), trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv4 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv4)
    with tf.name_scope('conv5') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256],
                                                 dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[256],
                                         dtype=tf.float32), trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv5 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv5)
    pool5 = tf.nn.max_pool(conv5, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                           padding='VALID', name='pool5')
    print_activations(pool5)
    return pool5, parameters
def time_tensorflow_run(session, target, info_string):
    num_steps_burn_in = 10  # warm-up rounds: the first few iterations pay one-off costs
                            # (memory loading, cache warm-up, etc.), so skip them and
                            # only time the iterations after the first 10
    total_duration = 0.0          # total elapsed time
    total_duration_squared = 0.0  # sum of squared durations, used to compute the variance
    for i in range(num_batches + num_steps_burn_in):
        start_time = time.time()  # record the start time
        _ = session.run(target)   # run one iteration
        duration = time.time() - start_time
        if i >= num_steps_burn_in:
            if not i % 10:
                print('%s:step %d,duration = %.3f' %
                      (datetime.now(), i - num_steps_burn_in, duration))
            total_duration += duration
            total_duration_squared += duration * duration  # for the mean and standard deviation below
    mn = total_duration / num_batches                    # mean time per batch
    vr = total_duration_squared / num_batches - mn * mn  # Var(X) = E[X^2] - (E[X])^2
    sd = math.sqrt(vr)                                   # standard deviation
    print('%s:%s across %d steps,%.3f +/- %.3f sec /batch' % (datetime.now(), info_string, num_batches, mn, sd))
# Main routine.
def run_benchmark():
    with tf.Graph().as_default():
        image_size = 224
        # Random image data standing in for real ImageNet batches.
        images = tf.Variable(tf.random_normal([batch_size,      # samples per iteration
                                               image_size,      # image height: 224
                                               image_size, 3],  # image width: 224, 3 color channels
                                              dtype=tf.float32,
                                              stddev=1e-1))
        pool5, parameters = inference(images)
        init = tf.global_variables_initializer()
        sess = tf.Session()
        sess.run(init)
        # Time the forward pass.
        time_tensorflow_run(sess, pool5, "Forward")
        # Time the forward + backward pass: use an L2 loss on pool5 as a dummy
        # objective and compute the gradients w.r.t. all parameters.
        objective = tf.nn.l2_loss(pool5)
        grad = tf.gradients(objective, parameters)
        time_tensorflow_run(sess, grad, "Forward-backward")

run_benchmark()


Results:

(Note on the log below: in this particular run the summary line appears after every step, which indicates the mean/standard-deviation block was indented inside the timing loop in the code that produced this log; with the indentation shown above, the summary prints once at the end of each run.)

/usr/local/Cellar/anaconda/bin/python /Users/new/Documents/JLIFE/procedure/python_tr/py_train/train1.py
conv1  [32, 56, 56, 64]
pool1  [32, 27, 27, 64]
conv2  [32, 27, 27, 192]
pool2  [32, 13, 13, 192]
conv3  [32, 13, 13, 384]
conv4  [32, 13, 13, 256]
conv5  [32, 13, 13, 256]
pool5  [32, 6, 6, 256]
2017-08-06 01:12:25.325914: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-06 01:12:25.325930: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-08-06 01:12:25.325934: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-06 01:12:25.325938: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-08-06 01:12:40.241412:step 0,duration = 1.298
2017-08-06 01:12:40.243005:Forward across 100 steps,0.013 +/- 0.129 sec /batch
2017-08-06 01:12:41.582878:Forward across 100 steps,0.026 +/- 0.185 sec /batch
2017-08-06 01:12:42.933289:Forward across 100 steps,0.040 +/- 0.227 sec /batch
2017-08-06 01:12:44.320056:Forward across 100 steps,0.054 +/- 0.263 sec /batch
2017-08-06 01:12:45.657048:Forward across 100 steps,0.067 +/- 0.293 sec /batch
2017-08-06 01:12:47.026871:Forward across 100 steps,0.081 +/- 0.320 sec /batch
2017-08-06 01:12:48.359411:Forward across 100 steps,0.094 +/- 0.343 sec /batch
2017-08-06 01:12:49.676116:Forward across 100 steps,0.107 +/- 0.364 sec /batch
2017-08-06 01:12:51.020488:Forward across 100 steps,0.121 +/- 0.384 sec /batch
2017-08-06 01:12:52.423273:Forward across 100 steps,0.135 +/- 0.404 sec /batch
2017-08-06 01:12:53.844251:step 10,duration = 1.421
2017-08-06 01:12:53.844291:Forward across 100 steps,0.149 +/- 0.424 sec /batch
2017-08-06 01:12:55.182100:Forward across 100 steps,0.162 +/- 0.440 sec /batch
2017-08-06 01:12:56.503463:Forward across 100 steps,0.176 +/- 0.454 sec /batch
2017-08-06 01:12:57.833390:Forward across 100 steps,0.189 +/- 0.468 sec /batch
2017-08-06 01:12:59.162155:Forward across 100 steps,0.202 +/- 0.481 sec /batch
2017-08-06 01:13:00.515397:Forward across 100 steps,0.216 +/- 0.494 sec /batch
2017-08-06 01:13:01.859769:Forward across 100 steps,0.229 +/- 0.506 sec /batch
2017-08-06 01:13:03.237667:Forward across 100 steps,0.243 +/- 0.519 sec /batch
2017-08-06 01:13:04.590199:Forward across 100 steps,0.256 +/- 0.530 sec /batch
2017-08-06 01:13:05.940574:Forward across 100 steps,0.270 +/- 0.540 sec /batch
2017-08-06 01:13:07.284970:step 20,duration = 1.344
2017-08-06 01:13:07.285013:Forward across 100 steps,0.283 +/- 0.550 sec /batch
2017-08-06 01:13:08.606363:Forward across 100 steps,0.297 +/- 0.559 sec /batch
2017-08-06 01:13:09.928051:Forward across 100 steps,0.310 +/- 0.567 sec /batch
2017-08-06 01:13:11.307156:Forward across 100 steps,0.324 +/- 0.576 sec /batch
2017-08-06 01:13:12.678802:Forward across 100 steps,0.337 +/- 0.584 sec /batch
2017-08-06 01:13:14.039941:Forward across 100 steps,0.351 +/- 0.592 sec /batch
2017-08-06 01:13:15.351828:Forward across 100 steps,0.364 +/- 0.599 sec /batch
2017-08-06 01:13:16.660364:Forward across 100 steps,0.377 +/- 0.605 sec /batch
2017-08-06 01:13:17.965074:Forward across 100 steps,0.390 +/- 0.611 sec /batch
2017-08-06 01:13:19.291704:Forward across 100 steps,0.403 +/- 0.616 sec /batch
2017-08-06 01:13:20.661148:step 30,duration = 1.369
2017-08-06 01:13:20.661189:Forward across 100 steps,0.417 +/- 0.623 sec /batch
2017-08-06 01:13:22.102687:Forward across 100 steps,0.432 +/- 0.629 sec /batch
2017-08-06 01:13:23.601565:Forward across 100 steps,0.447 +/- 0.637 sec /batch
2017-08-06 01:13:24.914424:Forward across 100 steps,0.460 +/- 0.641 sec /batch
2017-08-06 01:13:26.191727:Forward across 100 steps,0.472 +/- 0.644 sec /batch
2017-08-06 01:13:27.469132:Forward across 100 steps,0.485 +/- 0.647 sec /batch
2017-08-06 01:13:28.728601:Forward across 100 steps,0.498 +/- 0.650 sec /batch
2017-08-06 01:13:30.015289:Forward across 100 steps,0.511 +/- 0.653 sec /batch
2017-08-06 01:13:31.295402:Forward across 100 steps,0.523 +/- 0.655 sec /batch
2017-08-06 01:13:32.577971:Forward across 100 steps,0.536 +/- 0.658 sec /batch
2017-08-06 01:13:33.860286:step 40,duration = 1.282
2017-08-06 01:13:33.860325:Forward across 100 steps,0.549 +/- 0.659 sec /batch
2017-08-06 01:13:35.155010:Forward across 100 steps,0.562 +/- 0.661 sec /batch
2017-08-06 01:13:36.442941:Forward across 100 steps,0.575 +/- 0.663 sec /batch
2017-08-06 01:13:37.735706:Forward across 100 steps,0.588 +/- 0.664 sec /batch
2017-08-06 01:13:39.028046:Forward across 100 steps,0.601 +/- 0.665 sec /batch
2017-08-06 01:13:40.300739:Forward across 100 steps,0.614 +/- 0.666 sec /batch
2017-08-06 01:13:41.575732:Forward across 100 steps,0.626 +/- 0.666 sec /batch
2017-08-06 01:13:42.835769:Forward across 100 steps,0.639 +/- 0.666 sec /batch
2017-08-06 01:13:44.135364:Forward across 100 steps,0.652 +/- 0.666 sec /batch
2017-08-06 01:13:45.494353:Forward across 100 steps,0.665 +/- 0.666 sec /batch
2017-08-06 01:13:46.851114:step 50,duration = 1.357
2017-08-06 01:13:46.851155:Forward across 100 steps,0.679 +/- 0.666 sec /batch
2017-08-06 01:13:48.167853:Forward across 100 steps,0.692 +/- 0.666 sec /batch
2017-08-06 01:13:49.462130:Forward across 100 steps,0.705 +/- 0.665 sec /batch
2017-08-06 01:13:50.754816:Forward across 100 steps,0.718 +/- 0.664 sec /batch
2017-08-06 01:13:52.056450:Forward across 100 steps,0.731 +/- 0.662 sec /batch
2017-08-06 01:13:53.378377:Forward across 100 steps,0.744 +/- 0.661 sec /batch
2017-08-06 01:13:54.951660:Forward across 100 steps,0.760 +/- 0.661 sec /batch
2017-08-06 01:13:56.590266:Forward across 100 steps,0.776 +/- 0.663 sec /batch
2017-08-06 01:13:58.152294:Forward across 100 steps,0.792 +/- 0.663 sec /batch
2017-08-06 01:13:59.555143:Forward across 100 steps,0.806 +/- 0.661 sec /batch
2017-08-06 01:14:01.028118:step 60,duration = 1.473
2017-08-06 01:14:01.028157:Forward across 100 steps,0.821 +/- 0.659 sec /batch
2017-08-06 01:14:02.487687:Forward across 100 steps,0.835 +/- 0.657 sec /batch
2017-08-06 01:14:03.931508:Forward across 100 steps,0.850 +/- 0.654 sec /batch
2017-08-06 01:14:05.373336:Forward across 100 steps,0.864 +/- 0.651 sec /batch
2017-08-06 01:14:06.935691:Forward across 100 steps,0.880 +/- 0.649 sec /batch
2017-08-06 01:14:08.323316:Forward across 100 steps,0.894 +/- 0.645 sec /batch
2017-08-06 01:14:09.647393:Forward across 100 steps,0.907 +/- 0.640 sec /batch
2017-08-06 01:14:10.964926:Forward across 100 steps,0.920 +/- 0.634 sec /batch
2017-08-06 01:14:12.294994:Forward across 100 steps,0.933 +/- 0.629 sec /batch
2017-08-06 01:14:13.627031:Forward across 100 steps,0.947 +/- 0.623 sec /batch
2017-08-06 01:14:14.990109:step 70,duration = 1.363
2017-08-06 01:14:14.990147:Forward across 100 steps,0.960 +/- 0.617 sec /batch
2017-08-06 01:14:16.325999:Forward across 100 steps,0.974 +/- 0.611 sec /batch
2017-08-06 01:14:17.661570:Forward across 100 steps,0.987 +/- 0.604 sec /batch
2017-08-06 01:14:18.996872:Forward across 100 steps,1.000 +/- 0.597 sec /batch
2017-08-06 01:14:20.320956:Forward across 100 steps,1.014 +/- 0.589 sec /batch
2017-08-06 01:14:21.648584:Forward across 100 steps,1.027 +/- 0.581 sec /batch
2017-08-06 01:14:22.991597:Forward across 100 steps,1.040 +/- 0.572 sec /batch
2017-08-06 01:14:24.382550:Forward across 100 steps,1.054 +/- 0.564 sec /batch
2017-08-06 01:14:25.761860:Forward across 100 steps,1.068 +/- 0.555 sec /batch
2017-08-06 01:14:27.139187:Forward across 100 steps,1.082 +/- 0.545 sec /batch
2017-08-06 01:14:28.472139:step 80,duration = 1.333
2017-08-06 01:14:28.472190:Forward across 100 steps,1.095 +/- 0.534 sec /batch
2017-08-06 01:14:29.802358:Forward across 100 steps,1.109 +/- 0.523 sec /batch
2017-08-06 01:14:31.117610:Forward across 100 steps,1.122 +/- 0.512 sec /batch
2017-08-06 01:14:32.463578:Forward across 100 steps,1.135 +/- 0.500 sec /batch
2017-08-06 01:14:33.789118:Forward across 100 steps,1.148 +/- 0.487 sec /batch
2017-08-06 01:14:35.115080:Forward across 100 steps,1.162 +/- 0.473 sec /batch
2017-08-06 01:14:36.504551:Forward across 100 steps,1.176 +/- 0.459 sec /batch
2017-08-06 01:14:37.877295:Forward across 100 steps,1.189 +/- 0.444 sec /batch
2017-08-06 01:14:39.295771:Forward across 100 steps,1.203 +/- 0.428 sec /batch
2017-08-06 01:14:40.637999:Forward across 100 steps,1.217 +/- 0.411 sec /batch
2017-08-06 01:14:41.960935:step 90,duration = 1.323
2017-08-06 01:14:41.960976:Forward across 100 steps,1.230 +/- 0.392 sec /batch
2017-08-06 01:14:43.283187:Forward across 100 steps,1.243 +/- 0.372 sec /batch
2017-08-06 01:14:44.608790:Forward across 100 steps,1.257 +/- 0.351 sec /batch
2017-08-06 01:14:45.914260:Forward across 100 steps,1.270 +/- 0.327 sec /batch
2017-08-06 01:14:47.228980:Forward across 100 steps,1.283 +/- 0.302 sec /batch
2017-08-06 01:14:48.530195:Forward across 100 steps,1.296 +/- 0.273 sec /batch
2017-08-06 01:14:49.843505:Forward across 100 steps,1.309 +/- 0.240 sec /batch
2017-08-06 01:14:51.142827:Forward across 100 steps,1.322 +/- 0.200 sec /batch
2017-08-06 01:14:52.450932:Forward across 100 steps,1.335 +/- 0.150 sec /batch
2017-08-06 01:14:53.797271:Forward across 100 steps,1.348 +/- 0.067 sec /batch
2017-08-06 01:15:38.045801:step 0,duration = 3.732
2017-08-06 01:15:38.045846:Forward-backward across 100 steps,0.037 +/- 0.371 sec /batch
2017-08-06 01:15:41.812337:Forward-backward across 100 steps,0.075 +/- 0.525 sec /batch
2017-08-06 01:15:45.648909:Forward-backward across 100 steps,0.113 +/- 0.645 sec /batch
2017-08-06 01:15:49.456738:Forward-backward across 100 steps,0.151 +/- 0.742 sec /batch
2017-08-06 01:15:53.292977:Forward-backward across 100 steps,0.190 +/- 0.827 sec /batch
2017-08-06 01:15:57.140900:Forward-backward across 100 steps,0.228 +/- 0.904 sec /batch
2017-08-06 01:16:00.978075:Forward-backward across 100 steps,0.267 +/- 0.972 sec /batch
2017-08-06 01:16:04.862419:Forward-backward across 100 steps,0.305 +/- 1.036 sec /batch
2017-08-06 01:16:08.658287:Forward-backward across 100 steps,0.343 +/- 1.092 sec /batch
2017-08-06 01:16:12.422242:Forward-backward across 100 steps,0.381 +/- 1.143 sec /batch
2017-08-06 01:16:16.249695:step 10,duration = 3.827
2017-08-06 01:16:16.249739:Forward-backward across 100 steps,0.419 +/- 1.193 sec /batch
2017-08-06 01:16:19.933728:Forward-backward across 100 steps,0.456 +/- 1.236 sec /batch
2017-08-06 01:16:23.747764:Forward-backward across 100 steps,0.494 +/- 1.279 sec /batch
2017-08-06 01:16:27.687442:Forward-backward across 100 steps,0.534 +/- 1.323 sec /batch
2017-08-06 01:16:31.993489:Forward-backward across 100 steps,0.577 +/- 1.374 sec /batch
2017-08-06 01:16:35.948148:Forward-backward across 100 steps,0.616 +/- 1.413 sec /batch
2017-08-06 01:16:39.938683:Forward-backward across 100 steps,0.656 +/- 1.451 sec /batch
2017-08-06 01:16:44.049817:Forward-backward across 100 steps,0.697 +/- 1.490 sec /batch
2017-08-06 01:16:47.905824:Forward-backward across 100 steps,0.736 +/- 1.521 sec /batch
2017-08-06 01:16:51.673787:Forward-backward across 100 steps,0.774 +/- 1.548 sec /batch
2017-08-06 01:16:56.242751:step 20,duration = 4.569
2017-08-06 01:16:56.242794:Forward-backward across 100 steps,0.819 +/- 1.592 sec /batch
2017-08-06 01:17:00.220075:Forward-backward across 100 steps,0.859 +/- 1.620 sec /batch
2017-08-06 01:17:04.357774:Forward-backward across 100 steps,0.900 +/- 1.650 sec /batch
2017-08-06 01:17:08.640638:Forward-backward across 100 steps,0.943 +/- 1.682 sec /batch
2017-08-06 01:17:12.692641:Forward-backward across 100 steps,0.984 +/- 1.707 sec /batch
2017-08-06 01:17:16.896026:Forward-backward across 100 steps,1.026 +/- 1.734 sec /batch
2017-08-06 01:17:21.007618:Forward-backward across 100 steps,1.067 +/- 1.758 sec /batch
2017-08-06 01:17:25.629080:Forward-backward across 100 steps,1.113 +/- 1.789 sec /batch
2017-08-06 01:17:29.840949:Forward-backward across 100 steps,1.155 +/- 1.812 sec /batch
2017-08-06 01:17:33.741259:Forward-backward across 100 steps,1.194 +/- 1.829 sec /batch
2017-08-06 01:17:37.972717:step 30,duration = 4.231
2017-08-06 01:17:37.972772:Forward-backward across 100 steps,1.237 +/- 1.849 sec /batch
2017-08-06 01:17:42.514734:Forward-backward across 100 steps,1.282 +/- 1.874 sec /batch
2017-08-06 01:17:47.209448:Forward-backward across 100 steps,1.329 +/- 1.900 sec /batch
2017-08-06 01:17:52.767983:Forward-backward across 100 steps,1.385 +/- 1.941 sec /batch
2017-08-06 01:17:57.984473:Forward-backward across 100 steps,1.437 +/- 1.973 sec /batch
2017-08-06 01:18:02.401553:Forward-backward across 100 steps,1.481 +/- 1.990 sec /batch
2017-08-06 01:18:06.649972:Forward-backward across 100 steps,1.523 +/- 2.003 sec /batch
2017-08-06 01:18:10.585082:Forward-backward across 100 steps,1.563 +/- 2.011 sec /batch
2017-08-06 01:18:14.803730:Forward-backward across 100 steps,1.605 +/- 2.022 sec /batch
2017-08-06 01:18:19.765404:Forward-backward across 100 steps,1.654 +/- 2.043 sec /batch
2017-08-06 01:18:24.005912:step 40,duration = 4.240
2017-08-06 01:18:24.005955:Forward-backward across 100 steps,1.697 +/- 2.052 sec /batch
2017-08-06 01:18:28.643485:Forward-backward across 100 steps,1.743 +/- 2.066 sec /batch
2017-08-06 01:18:34.236841:Forward-backward across 100 steps,1.799 +/- 2.093 sec /batch
2017-08-06 01:18:39.470719:Forward-backward across 100 steps,1.852 +/- 2.113 sec /batch
2017-08-06 01:18:44.684566:Forward-backward across 100 steps,1.904 +/- 2.131 sec /batch
2017-08-06 01:18:51.728156:Forward-backward across 100 steps,1.974 +/- 2.183 sec /batch
2017-08-06 01:19:00.759770:Forward-backward across 100 steps,2.064 +/- 2.284 sec /batch
2017-08-06 01:19:10.966021:Forward-backward across 100 steps,2.166 +/- 2.414 sec /batch
2017-08-06 01:19:18.466335:Forward-backward across 100 steps,2.241 +/- 2.461 sec /batch
2017-08-06 01:19:24.013366:Forward-backward across 100 steps,2.297 +/- 2.472 sec /batch
2017-08-06 01:19:29.227375:step 50,duration = 5.214
2017-08-06 01:19:29.227587:Forward-backward across 100 steps,2.349 +/- 2.478 sec /batch
2017-08-06 01:19:34.328564:Forward-backward across 100 steps,2.400 +/- 2.482 sec /batch
2017-08-06 01:19:40.058337:Forward-backward across 100 steps,2.457 +/- 2.492 sec /batch
2017-08-06 01:19:45.799728:Forward-backward across 100 steps,2.515 +/- 2.501 sec /batch
2017-08-06 01:19:52.319529:Forward-backward across 100 steps,2.580 +/- 2.519 sec /batch
2017-08-06 01:19:59.778427:Forward-backward across 100 steps,2.655 +/- 2.552 sec /batch
2017-08-06 01:20:05.938803:Forward-backward across 100 steps,2.716 +/- 2.562 sec /batch
2017-08-06 01:20:11.473960:Forward-backward across 100 steps,2.772 +/- 2.562 sec /batch
2017-08-06 01:20:17.064365:Forward-backward across 100 steps,2.827 +/- 2.562 sec /batch
2017-08-06 01:20:22.085099:Forward-backward across 100 steps,2.878 +/- 2.555 sec /batch
2017-08-06 01:20:27.389769:step 60,duration = 5.305
2017-08-06 01:20:27.389814:Forward-backward across 100 steps,2.931 +/- 2.550 sec /batch
2017-08-06 01:20:32.018179:Forward-backward across 100 steps,2.977 +/- 2.539 sec /batch
2017-08-06 01:20:36.978072:Forward-backward across 100 steps,3.027 +/- 2.528 sec /batch
2017-08-06 01:20:41.675430:Forward-backward across 100 steps,3.074 +/- 2.515 sec /batch
2017-08-06 01:20:46.141910:Forward-backward across 100 steps,3.118 +/- 2.500 sec /batch
2017-08-06 01:20:50.472857:Forward-backward across 100 steps,3.162 +/- 2.483 sec /batch
2017-08-06 01:20:55.116984:Forward-backward across 100 steps,3.208 +/- 2.467 sec /batch
2017-08-06 01:20:59.851943:Forward-backward across 100 steps,3.255 +/- 2.450 sec /batch
2017-08-06 01:21:04.717680:Forward-backward across 100 steps,3.304 +/- 2.433 sec /batch
2017-08-06 01:21:09.099063:Forward-backward across 100 steps,3.348 +/- 2.413 sec /batch
2017-08-06 01:21:13.479957:step 70,duration = 4.381
2017-08-06 01:21:13.480012:Forward-backward across 100 steps,3.392 +/- 2.391 sec /batch
2017-08-06 01:21:17.767600:Forward-backward across 100 steps,3.434 +/- 2.368 sec /batch
2017-08-06 01:21:22.245013:Forward-backward across 100 steps,3.479 +/- 2.345 sec /batch
2017-08-06 01:21:26.993238:Forward-backward across 100 steps,3.527 +/- 2.322 sec /batch
2017-08-06 01:21:31.539883:Forward-backward across 100 steps,3.572 +/- 2.297 sec /batch
2017-08-06 01:21:36.070793:Forward-backward across 100 steps,3.618 +/- 2.271 sec /batch
2017-08-06 01:21:40.257783:Forward-backward across 100 steps,3.659 +/- 2.242 sec /batch
2017-08-06 01:21:44.695194:Forward-backward across 100 steps,3.704 +/- 2.213 sec /batch
2017-08-06 01:21:48.938067:Forward-backward across 100 steps,3.746 +/- 2.182 sec /batch
2017-08-06 01:21:53.629006:Forward-backward across 100 steps,3.793 +/- 2.151 sec /batch
2017-08-06 01:21:58.054776:step 80,duration = 4.426
2017-08-06 01:21:58.054863:Forward-backward across 100 steps,3.837 +/- 2.118 sec /batch
2017-08-06 01:22:02.537076:Forward-backward across 100 steps,3.882 +/- 2.083 sec /batch
2017-08-06 01:22:06.807872:Forward-backward across 100 steps,3.925 +/- 2.047 sec /batch
2017-08-06 01:22:11.985111:Forward-backward across 100 steps,3.977 +/- 2.012 sec /batch
2017-08-06 01:22:16.474595:Forward-backward across 100 steps,4.022 +/- 1.972 sec /batch
2017-08-06 01:22:20.975545:Forward-backward across 100 steps,4.067 +/- 1.931 sec /batch
2017-08-06 01:22:25.437281:Forward-backward across 100 steps,4.111 +/- 1.888 sec /batch
2017-08-06 01:22:30.132119:Forward-backward across 100 steps,4.158 +/- 1.843 sec /batch
2017-08-06 01:22:34.790891:Forward-backward across 100 steps,4.205 +/- 1.795 sec /batch
2017-08-06 01:22:39.592522:Forward-backward across 100 steps,4.253 +/- 1.746 sec /batch
2017-08-06 01:22:43.622219:step 90,duration = 4.030
2017-08-06 01:22:43.622264:Forward-backward across 100 steps,4.293 +/- 1.693 sec /batch
2017-08-06 01:22:47.845012:Forward-backward across 100 steps,4.335 +/- 1.637 sec /batch
2017-08-06 01:22:51.992878:Forward-backward across 100 steps,4.377 +/- 1.578 sec /batch
2017-08-06 01:22:56.050374:Forward-backward across 100 steps,4.417 +/- 1.516 sec /batch
2017-08-06 01:23:00.111645:Forward-backward across 100 steps,4.458 +/- 1.450 sec /batch
2017-08-06 01:23:04.152818:Forward-backward across 100 steps,4.498 +/- 1.380 sec /batch
2017-08-06 01:23:08.202057:Forward-backward across 100 steps,4.539 +/- 1.305 sec /batch
2017-08-06 01:23:12.848637:Forward-backward across 100 steps,4.585 +/- 1.222 sec /batch
2017-08-06 01:23:17.380369:Forward-backward across 100 steps,4.631 +/- 1.132 sec /batch
2017-08-06 01:23:21.606323:Forward-backward across 100 steps,4.673 +/- 1.033 sec /batch

Process finished with exit code 0


Training a CNN (the backward computation) is usually much more time-consuming than prediction (the forward computation), because training has to pass over the data many times through a large number of iterations.
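To measure something closer to a full training iteration, one could also time a complete parameter update rather than just the gradient computation. A minimal sketch, assuming the objective and parameters defined in run_benchmark above (the optimizer and learning rate are arbitrary illustrative choices):

# Inside run_benchmark(), a full training op could replace `grad`:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(objective, var_list=parameters)
# This measures forward + backward + weight update per batch:
# time_tensorflow_run(sess, train_op, "Training step")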
Tags: Deep Learning