tensorflow56 《TensorFlow技术解析与实战》06 The Evolution of Neural Networks and Its TensorFlow Implementation
2017-06-19 11:47
```python
# 《TensorFlow技术解析与实战》06: The evolution of neural networks and their TensorFlow implementation
# win10 TensorFlow1.2.0-RC0 python3.5.3
# CUDA v8.0 cudnn-8.0-windows10-x64-v5.1
# filename: nntf06.01.py  AlexNet on MNIST
# References:
# https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/convolutional_network.py
# https://github.com/tensorflow/models/blob/master/tutorials/image/alexnet/alexnet_benchmark.py
import tensorflow as tf

# Input data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Training hyperparameters
learning_rate = 0.001
training_iters = 200000
batch_size = 128
display_step = 10

# Network parameters
n_input = 784    # input dimension (img shape: 28x28)
n_classes = 10   # label dimension (0-9 digits)
dropout = 0.75   # dropout keep probability

# Input placeholders
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
keep_prob = tf.placeholder(tf.float32)  # dropout keep probability

# Convolution operation
def conv2d(name, x, W, b, strides=1):
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x, name=name)  # ReLU activation

# Max-pooling operation
def maxpool2d(name, x, k=2):
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                          padding='SAME', name=name)

# Local response normalization
def norm(name, l_input, lsize=4):
    return tf.nn.lrn(l_input, lsize, bias=1.0, alpha=0.001 / 9.0,
                     beta=0.75, name=name)

# Network weights.
# Four 2x2 poolings shrink the 28x28 input to 2x2 (28 -> 14 -> 7 -> 4 -> 2),
# so the first fully connected layer sees 2*2*256 features.
weights = {
    'wc1': tf.Variable(tf.random_normal([11, 11, 1, 96])),
    'wc2': tf.Variable(tf.random_normal([5, 5, 96, 256])),
    'wc3': tf.Variable(tf.random_normal([3, 3, 256, 384])),
    'wc4': tf.Variable(tf.random_normal([3, 3, 384, 384])),
    'wc5': tf.Variable(tf.random_normal([3, 3, 384, 256])),
    'wd1': tf.Variable(tf.random_normal([2 * 2 * 256, 4096])),
    'wd2': tf.Variable(tf.random_normal([4096, 4096])),
    'out': tf.Variable(tf.random_normal([4096, n_classes]))
}
biases = {
    'bc1': tf.Variable(tf.random_normal([96])),
    'bc2': tf.Variable(tf.random_normal([256])),
    'bc3': tf.Variable(tf.random_normal([384])),
    'bc4': tf.Variable(tf.random_normal([384])),
    'bc5': tf.Variable(tf.random_normal([256])),
    'bd1': tf.Variable(tf.random_normal([4096])),
    'bd2': tf.Variable(tf.random_normal([4096])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Network definition
def alex_net(x, weights, biases, dropout):
    x = tf.reshape(x, shape=[-1, 28, 28, 1])
    conv1 = conv2d('conv1', x, weights['wc1'], biases['bc1'])
    pool1 = maxpool2d('pool1', conv1, k=2)
    norm1 = norm('norm1', pool1, lsize=4)
    conv2 = conv2d('conv2', norm1, weights['wc2'], biases['bc2'])  # feed norm1, not conv1
    pool2 = maxpool2d('pool2', conv2, k=2)
    norm2 = norm('norm2', pool2, lsize=4)
    conv3 = conv2d('conv3', norm2, weights['wc3'], biases['bc3'])
    pool3 = maxpool2d('pool3', conv3, k=2)
    norm3 = norm('norm3', pool3, lsize=4)
    conv4 = conv2d('conv4', norm3, weights['wc4'], biases['bc4'])
    conv5 = conv2d('conv5', conv4, weights['wc5'], biases['bc5'])  # feed conv4, not norm3
    pool5 = maxpool2d('pool5', conv5, k=2)
    norm5 = norm('norm5', pool5, lsize=4)
    # Fully connected layer 1
    fc1 = tf.reshape(norm5, [-1, weights['wd1'].get_shape().as_list()[0]])
    fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    fc1 = tf.nn.relu(fc1)
    fc1 = tf.nn.dropout(fc1, dropout)
    # Fully connected layer 2 (fc1 is already [batch, 4096]; no reshape needed)
    fc2 = tf.add(tf.matmul(fc1, weights['wd2']), biases['bd2'])
    fc2 = tf.nn.relu(fc2)
    fc2 = tf.nn.dropout(fc2, dropout)
    # Output layer (logits)
    out = tf.add(tf.matmul(fc2, weights['out']), biases['out'])
    return out

# Build the model
predict_model = alex_net(x, weights, biases, keep_prob)

# Loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=predict_model))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluation ops
correct_pred = tf.equal(tf.argmax(predict_model, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Train and evaluate the model
# Initialize variables
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    step = 1
    # Training loop
    while step * batch_size < training_iters:
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y, keep_prob: dropout})
        if step % display_step == 0:
            # Compute and print loss and accuracy on the current batch
            loss, acc = sess.run([cost, accuracy],
                                 feed_dict={x: batch_x, y: batch_y, keep_prob: 1.})
            print("Iter " + str(step * batch_size) + ", Minibatch Loss= " +
                  "{:.6f}".format(loss) + ", Training Accuracy= " +
                  "{:.5f}".format(acc))
        step += 1
    print("Optimizer Finished!")
    # Accuracy on the first 256 test images
    print("Testing Accuracy: ",
          sess.run(accuracy, feed_dict={x: mnist.test.images[:256],
                                        y: mnist.test.labels[:256],
                                        keep_prob: 1.}))
'''
Iter 1280, Minibatch Loss= 460011.468750, Training Accuracy= 0.36719
Iter 2560, Minibatch Loss= 303076.562500, Training Accuracy= 0.62500
...
Iter 198400, Minibatch Loss= 4899.899414, Training Accuracy= 0.97656
Iter 199680, Minibatch Loss= 447.203613, Training Accuracy= 0.99219
Optimizer Finished!
Testing Accuracy: 0.992188
'''
```
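The flattened feature size fed into `wd1` follows from the pooling schedule: with SAME padding, each 2x2 max-pool rounds the spatial size up to ceil(n/2). A minimal sketch to confirm the 2*2*256 figure (the loop below is illustrative, not part of the book's code):

```python
import math

# Trace the spatial size through the four 2x2 SAME-padded poolings.
size = 28
for pool_name in ['pool1', 'pool2', 'pool3', 'pool5']:
    size = math.ceil(size / 2)  # SAME padding rounds up: 28 -> 14 -> 7 -> 4 -> 2
    print(pool_name, '->', size)
print('flattened features:', size * size * 256)  # 2 * 2 * 256 = 1024
```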
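Note that the final accuracy above is measured on only the first 256 test images. A hedged sketch for scoring the full 10,000-image test set in batches might look like the following (run inside the same `tf.Session`; `test_batches`, `total_acc`, `tx`, and `ty` are names introduced here for illustration):

```python
# Hypothetical batched evaluation over the whole test set, to avoid
# feeding all 10,000 images through the graph at once.
test_batches = mnist.test.num_examples // batch_size
total_acc = 0.0
for _ in range(test_batches):
    tx, ty = mnist.test.next_batch(batch_size)
    total_acc += sess.run(accuracy, feed_dict={x: tx, y: ty, keep_prob: 1.})
print("Full Testing Accuracy:", total_acc / test_batches)
```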