
TensorFlow Handwritten Digit Recognition

2018-04-12 20:28
1. Introduction to MNIST
Handwritten digit recognition needs images of handwritten digits, and the images used here come from MNIST. MNIST (Modified National Institute of Standards and Technology) is a large database of handwritten digits that is commonly used to train various image processing systems. The dataset is split into two parts, a training set and a test set: the training set contains 60,000 examples and the test set contains 10,000 examples, where each example consists of one handwritten digit image and its corresponding label. Each image is 28*28 pixels (height and width), i.e. 784 pixels in total.
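As a quick sanity check of these sizes, here is a minimal sketch (not from the original post) that loads the dataset with TensorFlow's bundled reader and prints the shapes; the "./data/" download directory is just an assumed path, and note that the reader holds out 5,000 of the 60,000 training images as a validation set:

#coding:utf-8
# Minimal sketch: load MNIST and inspect the array shapes.
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("./data/", one_hot=True)   # downloads the files to ./data/ if missing
print(mnist.train.images.shape)       # (55000, 784): 5000 of the 60000 training images go to validation
print(mnist.validation.images.shape)  # (5000, 784)
print(mnist.test.images.shape)        # (10000, 784)
print(mnist.train.labels.shape)       # (55000, 10): one-hot labels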
2. Data Preprocessing
Data preprocessing plays a very important role when training a machine learning model; it refers to the processing applied to the data right after it is obtained from its source. In general, the raw data has to be transformed in some way before it is fed to the model. In the handwritten digit recognition model the input is an image, and to simplify matters each 28*28 image is converted into a 784-dimensional vector. This conversion obviously discards the two-dimensional structure of the image; a convolutional network could keep that structure, but this example deliberately keeps things simple. The goal of the model is to map each image to one digit (0-9), so the task is treated as a multi-class classification problem, and in machine learning multi-class problems can be handled with softmax regression.

In the MNIST data as loaded here, mnist.train.images is a tensor of shape [N, 784], where N is the number of training images: the first dimension indexes the images (0 for the first image up to N-1, i.e. 0 to 59,999 for the full 60,000-image training set), and the second dimension holds the intensities of the image's 784 pixels, each a value between 0 and 1, for example [0, 0, ..., 0.342, 0.4232, ...]. Similarly, mnist.train.labels is a tensor of shape [N, 10]: the first dimension is again the image index, and the second dimension is a 10-dimensional one-hot vector encoding the digit, so 9, for example, is represented as [0,0,0,0,0,0,0,0,0,1].
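To make the flattening and the one-hot encoding concrete, here is a small NumPy sketch; the 28*28 array and the label 9 below are made-up illustration values, not data from the post:

import numpy as np

# A made-up 28*28 grayscale image with pixel intensities in [0, 1].
img = np.random.rand(28, 28)

# Flatten the two-dimensional image into a 784-dimensional vector;
# the row/column structure is discarded, as described above.
x = img.reshape(784)
print(x.shape)        # (784,)

# One-hot encode a label: the digit 9 becomes [0,0,0,0,0,0,0,0,0,1].
label = 9
y = np.zeros(10)
y[label] = 1.0
print(y)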
3. The Softmax Function
softmax is the multi-class generalization of logistic regression: logistic regression handles binary classification (an output y above 0.5 is assigned to class 1, below 0.5 to class 0), whereas softmax handles multi-class problems. The softmax function generalizes the logistic function: it "squashes" a K-dimensional vector z of arbitrary real values into another K-dimensional vector whose entries all lie in the range (0, 1) and sum to 1. The softmax function has the form

    σ(z)_j = e^{z_j} / Σ_{k=1}^{K} e^{z_k},   for j = 1, ..., K
Here is an example from Wikipedia that illustrates softmax concretely: the input vector [1, 2, 3, 4, 1, 2, 3] maps to the softmax output [0.024, 0.064, 0.175, 0.475, 0.024, 0.064, 0.175]. The largest entry of the output corresponds to the largest entry "4" of the input, which shows the usual purpose of the function: it normalizes the vector, emphasizing the largest value while suppressing components that are far below it.
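That example is easy to reproduce; here is a short NumPy sketch (not from the original post) that computes the same values:

import numpy as np

def softmax(z):
    # Subtracting the max improves numerical stability without changing the result.
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0])
print(np.round(softmax(z), 3))
# [0.024 0.064 0.175 0.475 0.024 0.064 0.175]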
Below is example code that implements this model in TensorFlow, split into forward propagation, training (backward propagation), and testing:
Forward propagation: mnist_forward.py

#coding:utf-8
import tensorflow as tf

INPUT_NODE = 784     # each input image is a flattened 28*28 = 784-dimensional vector
OUTPUT_NODE = 10     # one output per digit class 0-9
LAYER1_NODE = 500    # width of the single hidden layer

def get_weight(shape, regularizer):
    # Weights initialized from a truncated normal; optionally add an L2 penalty
    # on the weights to the 'losses' collection.
    w = tf.Variable(tf.truncated_normal(shape, stddev=0.1))
    if regularizer is not None:
        tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(regularizer)(w))
    return w

def get_bias(shape):
    b = tf.Variable(tf.zeros(shape))
    return b

def forward(x, regularizer):
    # Hidden layer with ReLU activation.
    w1 = get_weight([INPUT_NODE, LAYER1_NODE], regularizer)
    b1 = get_bias([LAYER1_NODE])
    y1 = tf.nn.relu(tf.matmul(x, w1) + b1)

    # Output layer returns raw logits; softmax is applied inside the loss.
    w2 = get_weight([LAYER1_NODE, OUTPUT_NODE], regularizer)
    b2 = get_bias([OUTPUT_NODE])
    y = tf.matmul(y1, w2) + b2
    return y

Backward propagation (training): mnist_backward.py

#coding:utf-8
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import mnist_forward
import os

BATCH_SIZE = 200
LEARNING_RATE_BASE = 0.1
LEARNING_RATE_DECAY = 0.99
REGULARIZER = 0.0001
STEPS = 500000
MOVING_AVERAGE_DECAY = 0.99
MODEL_SAVE_PATH="./model/"
MODEL_NAME="mnist_model"

def backward(mnist):

    x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE])
    y_ = tf.placeholder(tf.float32, [None, mnist_forward.OUTPUT_NODE])
    y = mnist_forward.forward(x, REGULARIZER)
    global_step = tf.Variable(0, trainable=False)

    # Cross-entropy on the logits plus the L2 terms collected in mnist_forward.
    ce = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
    cem = tf.reduce_mean(ce)
    loss = cem + tf.add_n(tf.get_collection('losses'))

    # Exponentially decaying learning rate.
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE,
        global_step,
        mnist.train.num_examples / BATCH_SIZE,
        LEARNING_RATE_DECAY,
        staircase=True)

    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

    # Maintain an exponential moving average of all trainable variables;
    # the averaged (shadow) values are what the test script restores.
    ema = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    ema_op = ema.apply(tf.trainable_variables())
    with tf.control_dependencies([train_step, ema_op]):
        train_op = tf.no_op(name='train')

    saver = tf.train.Saver()

    with tf.Session() as sess:
        init_op = tf.global_variables_initializer()
        sess.run(init_op)

        for i in range(STEPS):
            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            _, loss_value, step = sess.run([train_op, loss, global_step], feed_dict={x: xs, y_: ys})
            if i % 1000 == 0:
                print("After %d training step(s), loss on training batch is %g." % (step, loss_value))
                saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME), global_step=global_step)

def main():
    mnist = input_data.read_data_sets("./data/", one_hot=True)
    backward(mnist)

if __name__ == '__main__':
    main()

Test: mnist_test.py

#coding:utf-8
import time
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import mnist_forward
import mnist_backward
TEST_INTERVAL_SECS = 5

def test(mnist):
    with tf.Graph().as_default() as g:
        x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE])
        y_ = tf.placeholder(tf.float32, [None, mnist_forward.OUTPUT_NODE])
        y = mnist_forward.forward(x, None)

        # Restore the moving-average (shadow) values of the weights saved during training.
        ema = tf.train.ExponentialMovingAverage(mnist_backward.MOVING_AVERAGE_DECAY)
        ema_restore = ema.variables_to_restore()
        saver = tf.train.Saver(ema_restore)

        correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

        # Poll the checkpoint directory and evaluate the latest checkpoint on the test set.
        while True:
            with tf.Session() as sess:
                ckpt = tf.train.get_checkpoint_state(mnist_backward.MODEL_SAVE_PATH)
                if ckpt and ckpt.model_checkpoint_path:
                    saver.restore(sess, ckpt.model_checkpoint_path)
                    global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
                    accuracy_score = sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})
                    print("After %s training step(s), test accuracy = %g" % (global_step, accuracy_score))
                else:
                    print('No checkpoint file found')
                    return
            time.sleep(TEST_INTERVAL_SECS)

def main():
    mnist = input_data.read_data_sets("./data/", one_hot=True)
    test(mnist)

if __name__ == '__main__':
    main()

Results:
After 1001 training step(s), loss on training batch is 0.269449.
After 2001 training step(s), loss on training batch is 0.265405.
After 3001 training step(s), loss on training batch is 0.287048.
After 4001 training step(s), loss on training batch is 0.204382.
After 5001 training step(s), loss on training batch is 0.186171.
After 6001 training step(s), loss on training batch is 0.183109.
After 7001 training step(s), loss on training batch is 0.211252.
After 8001 training step(s), loss on training batch is 0.177219.
After 9001 training step(s), loss on training batch is 0.207285.
After 10001 training step(s), loss on training batch is 0.181238.
After 11001 training step(s), loss on training batch is 0.172206.
After 12001 training step(s), loss on training batch is 0.213331.
After 13001 training step(s), loss on training batch is 0.167271.
After 14001 training step(s), loss on training batch is 0.167317.
After 15001 training step(s), loss on training batch is 0.180954.
After 16001 training step(s), loss on training batch is 0.183364.
After 17001 training step(s), loss on training batch is 0.15222.
After 18001 training step(s), loss on training batch is 0.152043.
After 19001 training step(s), loss on training batch is 0.141971.
After 20001 training step(s), loss on training batch is 0.157286.

After 1 training step(s), test accuracy = 0.1048
After 1001 training step(s), test accuracy = 0.9475
After 2001 training step(s), test accuracy = 0.96
After 3001 training step(s), test accuracy = 0.9668
After 3001 training step(s), test accuracy = 0.9668
After 4001 training step(s), test accuracy = 0.9709
After 5001 training step(s), test accuracy = 0.9723
After 6001 training step(s), test accuracy = 0.9744
After 7001 training step(s), test accuracy = 0.975
After 7001 training step(s), test accuracy = 0.975
After 8001 training step(s), test accuracy = 0.9755
After 9001 training step(s), test accuracy = 0.9769
After 10001 training step(s), test accuracy = 0.9767
After 11001 training step(s), test accuracy = 0.9775
After 11001 training step(s), test accuracy = 0.9775
After 12001 training step(s), test accuracy = 0.9786
After 13001 training step(s), test accuracy = 0.9782
After 14001 training step(s), test accuracy = 0.9792
After 15001 training step(s), test accuracy = 0.9792
After 15001 training step(s), test accuracy = 0.9792
After 16001 training step(s), test accuracy = 0.9801
After 17001 training step(s), test accuracy = 0.98
After 18001 training step(s), test accuracy = 0.9803
After 19001 training step(s), test accuracy = 0.9805
After 19001 training step(s), test accuracy = 0.9805
After 20001 training step(s), test accuracy = 0.9805
Tags: TensorFlow