
TensorFlow Study Notes (1): MNIST for Beginners

2016-10-06 20:34
The first part of the tutorial walks through the code in mnist_softmax.py.

First, download and read the MNIST dataset; two lines of code do it:

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
A one-hot vector is a vector which is 0 in most dimensions and 1 in a single dimension.
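
For example, here is a minimal sketch of the one-hot encoding for the digit 3, assuming the 10 classes 0-9:

import numpy as np

# One-hot encoding of the digit 3 among classes 0-9:
# a single 1 in dimension 3, zeros everywhere else.
label = np.zeros(10)
label[3] = 1.0
print(label)  # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]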

The first line imports the input_data module, whose code is:

import tensorflow as tf
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets


It turns out input_data does not define a read_data_sets() function itself; as the import above shows, it simply re-exports the read_data_sets function from another module. Opening that file, the code is:

def read_data_sets(train_dir,
                   fake_data=False,
                   one_hot=False,
                   dtype=dtypes.float32,
                   reshape=True,
                   validation_size=5000):
  if fake_data:

    def fake():
      return DataSet([], [], fake_data=True, one_hot=one_hot, dtype=dtype)

    train = fake()
    validation = fake()
    test = fake()
    return base.Datasets(train=train, validation=validation, test=test)

  TRAIN_IMAGES = 'train-images-idx3-ubyte.gz'
  TRAIN_LABELS = 'train-labels-idx1-ubyte.gz'
  TEST_IMAGES = 't10k-images-idx3-ubyte.gz'
  TEST_LABELS = 't10k-labels-idx1-ubyte.gz'

  local_file = base.maybe_download(TRAIN_IMAGES, train_dir,
                                   SOURCE_URL + TRAIN_IMAGES)
  with open(local_file, 'rb') as f:
    train_images = extract_images(f)

  local_file = base.maybe_download(TRAIN_LABELS, train_dir,
                                   SOURCE_URL + TRAIN_LABELS)
  with open(local_file, 'rb') as f:
    train_labels = extract_labels(f, one_hot=one_hot)

  local_file = base.maybe_download(TEST_IMAGES, train_dir,
                                   SOURCE_URL + TEST_IMAGES)
  with open(local_file, 'rb') as f:
    test_images = extract_images(f)

  local_file = base.maybe_download(TEST_LABELS, train_dir,
                                   SOURCE_URL + TEST_LABELS)
  with open(local_file, 'rb') as f:
    test_labels = extract_labels(f, one_hot=one_hot)

  if not 0 <= validation_size <= len(train_images):
    raise ValueError(
        'Validation size should be between 0 and {}. Received: {}.'
        .format(len(train_images), validation_size))

  validation_images = train_images[:validation_size]
  validation_labels = train_labels[:validation_size]
  train_images = train_images[validation_size:]
  train_labels = train_labels[validation_size:]

  train = DataSet(train_images, train_labels, dtype=dtype, reshape=reshape)
  validation = DataSet(validation_images,
                       validation_labels,
                       dtype=dtype,
                       reshape=reshape)
  test = DataSet(test_images, test_labels, dtype=dtype, reshape=reshape)

  return base.Datasets(train=train, validation=validation, test=test)
This code prepares the training, test, and validation sets. Note how the downloaded training set is split: the first validation_size images (5000 by default) become the validation set, and the remaining images become the final training set.

Each 28×28 image is then flattened into a 1×784 vector.

The end result is that the 55,000 training images become a tensor of shape [55000, 784]. Each image has a 1×10 one-hot label, so the labels become a tensor of shape [55000, 10].
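
As a quick sanity check, these shapes can be printed directly after loading; a small sketch, with values following from the 5000-image validation split described above:

# Shapes of the loaded datasets (55000 train / 5000 validation / 10000 test).
print(mnist.train.images.shape)       # (55000, 784)
print(mnist.train.labels.shape)       # (55000, 10)
print(mnist.validation.images.shape)  # (5000, 784)
print(mnist.test.images.shape)        # (10000, 784)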

Next, an introduction to softmax regression:

What we are really solving is a 10-class classification problem, in two steps: first compute the evidence that the input belongs to each class, then convert that evidence into probabilities.

The evidence here is essentially a weighted sum of the pixel intensities:

evidence_i = Σ_j W_{i,j} · x_j + b_i

where W_{i,j} is the weight connecting pixel j to class i, and b_i is the bias for class i.

Then the probability vector y is obtained from the evidence:

y = softmax(evidence)
The softmax() function itself turns the evidence values into a probability distribution:

softmax(evidence)_i = exp(evidence_i) / Σ_j exp(evidence_j)
In matrix form, the whole model can be written compactly as:

y = softmax(Wx+b)
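
To make the two steps concrete, here is a minimal NumPy sketch of evidence and softmax. This is only an illustration under assumed shapes, not the TensorFlow implementation:

import numpy as np

def softmax(evidence):
    # Subtracting the max is a standard numerical-stability trick;
    # it does not change the resulting probabilities.
    e = np.exp(evidence - np.max(evidence))
    return e / e.sum()

# Assumed shapes for illustration: W is 10x784, x is a 784-vector, b is a 10-vector.
W = np.random.randn(10, 784) * 0.01
x = np.random.rand(784)
b = np.zeros(10)

evidence = W.dot(x) + b  # step 1: per-class weighted sum (the "evidence")
y = softmax(evidence)    # step 2: normalize evidence into probabilities
print(y.sum())           # 1.0 -- a valid probability distribution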

Implementing the regression in TensorFlow

First, import TensorFlow:

import tensorflow as tf


Create a placeholder x to hold the image data; None means the first dimension (the batch size) can be of any length:

x = tf.placeholder(tf.float32, [None, 784])


Then create two Variables to hold the weight and bias parameters; W has shape [784, 10] so that a 784-pixel image maps to 10 class scores, and b has shape [10]:

W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))


Then a single line of code implements the model:

y = tf.nn.softmax(tf.matmul(x, W) + b)

Training:

Training is essentially the process of making the loss smaller; the closer the loss gets to 0, the better the model fits the training data.

Cross-entropy is one common way to define this loss.
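
As defined in the tutorial, the cross-entropy between the true (one-hot) distribution y' and the predicted distribution y is:

H_{y'}(y) = -Σ_i y'_i · log(y_i)

The lower this value, the closer the prediction is to the truth.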

Before computing the cross-entropy, we need a placeholder for the ground-truth (gold-standard) labels:

y_ = tf.placeholder(tf.float32, [None, 10])
So the loss is (tf.reduce_sum with reduction_indices=[1] sums over the 10 classes of each example; tf.reduce_mean then averages over the batch):

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

Once the loss is defined, it is minimized by backpropagation; here the method is gradient descent (TensorFlow provides many other optimization algorithms as well).

train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
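
For example, swapping in a different optimizer is a one-line change. A sketch using Adam; the 1e-3 learning rate is just an example value, not from the tutorial:

# Alternative: Adam instead of plain gradient descent.
# The learning rate 1e-3 is an example value, not tuned for this task.
train_step = tf.train.AdamOptimizer(1e-3).minimize(cross_entropy)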


Before training, all variables must be initialized (tf.initialize_all_variables() is the r0.11-era API; newer TensorFlow uses tf.global_variables_initializer()):

init = tf.initialize_all_variables()


Now launch a Session and run the initialization:

sess = tf.Session()
sess.run(init)
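
In a notebook, tf.InteractiveSession() is a convenient alternative: it installs itself as the default session, so tensors can later be evaluated with .eval() without passing the session around explicitly:

# Notebook-friendly alternative: an InteractiveSession becomes the default session.
sess = tf.InteractiveSession()
sess.run(init)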


The Python code for 1000 training steps follows; each step draws a random batch of 100 examples, so this is stochastic gradient descent:

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

Finally, evaluate on the test set and print the accuracy:

correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
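
To see what these three lines compute, here is the same logic in plain NumPy with made-up predictions for three examples (purely illustrative):

import numpy as np

# Hypothetical predicted probabilities and one-hot labels: 3 examples, 3 classes.
y_pred = np.array([[0.1, 0.8, 0.1],
                   [0.3, 0.3, 0.4],
                   [0.9, 0.05, 0.05]])
y_true = np.array([[0, 1, 0],
                   [1, 0, 0],
                   [1, 0, 0]])

# argmax picks the most likely class; equality gives one boolean per example.
correct = np.argmax(y_pred, 1) == np.argmax(y_true, 1)  # [True, False, True]
# Casting booleans to floats and averaging yields the accuracy.
print(correct.astype(np.float32).mean())                # 0.6666667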


Task complete. I ran the example in an IPython notebook; here are some of the problems I ran into along the way.

1. At first I used the code from the mnist_softmax.py shipped with the TensorFlow download, which turned out to be a trap: it does not define a Session, so it raised an error. Following the official tutorial, I added the Session.

2. On re-running there were no errors, but also no output, which was puzzling. Then, in a terminal, I ran python <path to mnist_softmax.py> and it actually printed a result.

3. It turned out the code wraps everything in a main() function, so in the notebook the function was only defined, never called.

4. I then called the function from another cell and got an error about input_data: it could not find the data, so I gave it an explicit path, and it ran successfully. The full code and results are attached below. (Lesson learned: when running the examples, write the code step by step from the official tutorial rather than running the downloaded file.)

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data


def main(_):
    print("test")
    mnist = input_data.read_data_sets("../../MNIST_data/", one_hot=True)

    x = tf.placeholder(tf.float32, [None, 784])
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    y = tf.matmul(x, W) + b  # raw logits; the softmax is applied inside the loss

    y_ = tf.placeholder(tf.float32, [None, 10])
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=y, labels=y_))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

    init = tf.initialize_all_variables()
    sess = tf.Session()
    sess.run(init)
    for i in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(sess.run(accuracy,
                   feed_dict={x: mnist.test.images, y_: mnist.test.labels}))


main(None)  # the argument is unused; passing None avoids relying on IPython's _
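
Note that, unlike the step-by-step version above, this script defines y = tf.matmul(x, W) + b without applying tf.nn.softmax: tf.nn.softmax_cross_entropy_with_logits expects raw logits and applies the softmax internally, which is also more numerically stable than taking the log of an explicit softmax.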


Test results:

test
Extracting ../../MNIST_data/train-images-idx3-ubyte.gz
Extracting ../../MNIST_data/train-labels-idx1-ubyte.gz
Extracting ../../MNIST_data/t10k-images-idx3-ubyte.gz
Extracting ../../MNIST_data/t10k-labels-idx1-ubyte.gz
0.9142


The accuracy is about 91%.

Note: environment: Ubuntu 14.04 + Python 2.7 + IPython Notebook.

Reference: https://www.tensorflow.org/versions/r0.11/tutorials/mnist/beginners/index.html