TensorFlow CNN
2019-07-03 16:25
A simple classification task
```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('data/', one_hot=True)

# Reset the graph so the model can be rerun without overwriting tf variables
tf.reset_default_graph()

num_classes = 10
batch_size = 64
num_train = 10000

X = tf.placeholder(tf.float32, [None, 28, 28, 1])
y = tf.placeholder(tf.float32, [None, num_classes])

# CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
W1 = tf.Variable(tf.random_normal(shape=[5, 5, 1, 32], stddev=0.1))
b1 = tf.Variable(tf.constant(0.1, shape=[32]))
W2 = tf.Variable(tf.random_normal(shape=[5, 5, 32, 64], stddev=0.1))
b2 = tf.Variable(tf.constant(0.1, shape=[64]))

# Convolutional layers
conv_1 = tf.nn.relu(tf.nn.conv2d(X, filter=W1, strides=[1, 1, 1, 1], padding="SAME") + b1)
pool_1 = tf.nn.max_pool(conv_1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
conv_2 = tf.nn.relu(tf.nn.conv2d(pool_1, filter=W2, strides=[1, 1, 1, 1], padding="SAME") + b2)
pool_2 = tf.nn.max_pool(conv_2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")

# Fully connected layers
W_fc1 = tf.Variable(tf.random_normal(shape=[7 * 7 * 64, 1024], stddev=0.1))
b_fc1 = tf.Variable(tf.constant(0.1, shape=[1024]))
W_fc2 = tf.Variable(tf.random_normal(shape=[1024, num_classes], stddev=0.1))
b_fc2 = tf.Variable(tf.constant(0.1, shape=[num_classes]))

pool_2 = tf.reshape(pool_2, [-1, 7 * 7 * 64])
fc1 = tf.nn.relu(tf.matmul(pool_2, W_fc1) + b_fc1)
fc2 = tf.matmul(fc1, W_fc2) + b_fc2  # the output of the last LINEAR unit (logits)

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=fc2))
train_op = tf.train.AdamOptimizer().minimize(loss)

correct_prediction = tf.equal(tf.argmax(fc2, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(num_train):
        mini_batch = mnist.train.next_batch(batch_size)
        X_temp = mini_batch[0].reshape([batch_size, 28, 28, 1])
        y_temp = mini_batch[1]
        sess.run(train_op, feed_dict={X: X_temp, y: y_temp})
        if step % 1000 == 0:
            loss_var, accuracy_var = sess.run([loss, accuracy],
                                              feed_dict={X: X_temp, y: y_temp})
            print("loss:", loss_var, "accuracy:", accuracy_var)
```
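A detail worth checking is the `7 * 7 * 64` flatten size before the first fully connected layer. With `SAME` padding the 5x5 convolutions keep the spatial dimensions, so only the two 2x2/stride-2 max pools shrink the 28x28 input. A minimal sketch of that arithmetic (the `pooled_size` helper is illustrative, not part of TensorFlow; for `SAME` padding the output size is `ceil(size / stride)`):

```python
import math

def pooled_size(size, stride=2):
    # Output spatial size of a SAME-padded pooling layer: ceil(size / stride)
    return math.ceil(size / stride)

side = 28
for _ in range(2):  # two max-pool layers, each halving height and width
    side = pooled_size(side)

print(side)                 # 28 -> 14 -> 7
print(side * side * 64)     # 7 * 7 * 64 = 3136 features into W_fc1
```

If the convolutions used `VALID` padding instead, the sides would shrink by 4 at each conv layer and the flatten size (and `W_fc1` shape) would have to change to match.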