
Implementing MNIST Classification with RNN and Bidirectional RNN in TensorFlow


1. Using a unidirectional RNN

Build an input layer, an RNN layer, and an output layer.

n_steps * n_inputs = 28 * 28: each time step reads one row of pixels from the image.
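To make this concrete, here is a minimal sketch (a hypothetical snippet, not from the original post) that reshapes one flattened 784-pixel MNIST image into a 28-step sequence of 28-pixel rows:

import numpy as np

n_steps, n_inputs = 28, 28
flat_image = np.zeros(784, dtype=np.float32)       # one flattened MNIST image
sequence = flat_image.reshape(n_steps, n_inputs)   # 28 time steps, one pixel row each
print(sequence.shape)                              # (28, 28)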

Input data: x = [batch_size, n_steps, n_inputs]

Output data: y = [batch_size, n_classes]

Input layer:

Input data: x = [batch_size * n_steps, n_inputs]

Weights w = [n_inputs, n_hidden], biases b = [n_hidden]

Output data: x = [batch_size * n_steps, n_hidden]
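As a quick shape trace of this projection (a standalone sketch using the hyperparameters from later in this post; the same three lines appear inside RNN() in the full listing below):

import tensorflow as tf

n_steps, n_inputs, n_hidden = 28, 28, 256
x = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
w_in = tf.Variable(tf.random_normal([n_inputs, n_hidden]))
b_in = tf.Variable(tf.random_normal([n_hidden]))

x_in = tf.reshape(x, [-1, n_inputs])              # [batch_size * n_steps, n_inputs]
x_in = tf.matmul(x_in, w_in) + b_in               # [batch_size * n_steps, n_hidden]
x_in = tf.reshape(x_in, [-1, n_steps, n_hidden])  # [batch_size, n_steps, n_hidden]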

RNN layer:

The layer is fed n_steps tensors of shape [batch_size, n_hidden]; that is, each cell of the unrolled LSTM receives one batch of data for its time step.

The following call returns the per-step outputs and the final state of the unidirectional LSTM:

output, final_state = tf.nn.dynamic_rnn(lstm_cell, x_in, initial_state=init_state, time_major=False)

x_in = [batch_size, n_steps, n_hidden]; for the dynamic RNN, the input must be a single tensor rather than a list of per-step tensors.

time_major refers to the layout of x_in: with time_major=False, the dimension in the n_steps position is the number of unrolled steps; with time_major=True, the dimension in the batch_size position is the number of unrolled steps (i.e. the input is [n_steps, batch_size, n_hidden]).
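To illustrate (a hypothetical standalone snippet, not part of the original code), converting between the two layouts is a single transpose:

import tensorflow as tf

# Batch-major layout (time_major=False): [batch_size, n_steps, n_hidden]
x_batch_major = tf.placeholder(tf.float32, [None, 28, 256])

# Time-major layout (time_major=True): [n_steps, batch_size, n_hidden]
x_time_major = tf.transpose(x_batch_major, [1, 0, 2])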

Retrieving the output:

output = [batch_size, n_steps, n_hidden], laid out like the input; it can be unpacked into n_steps tensors of shape [batch_size, n_hidden], the last of which is the output of interest.

final_state = (c_state, h_state), where h_state is the final output state.
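Both ways of reading off the final output can be written as follows; this is a minimal sketch assuming the lstm_cell, init_state, and x_in defined in the full listing below:

output, final_state = tf.nn.dynamic_rnn(lstm_cell, x_in, initial_state=init_state, time_major=False)

# Option 1: slice the last time step out of the stacked per-step outputs.
last_output = output[:, -1, :]        # [batch_size, n_hidden]

# Option 2: take the h part of the LSTM state tuple; for fixed-length
# sequences like these, h_state equals the last per-step output.
c_state, h_state = final_state        # final_state is an LSTMStateTuple (c, h)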

Output layer:

Weights w = [n_hidden, n_classes], biases b = [n_classes]

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

learning_rate = 0.01
max_samples = 40000
batch_size = 128

n_steps = 28    # number of time steps (image rows)
n_inputs = 28   # input size per step (pixels per row)
n_hidden = 256  # LSTM hidden size
n_classes = 10  # number of digit classes

weights = {
    "weight_in": tf.Variable(tf.random_normal([n_inputs, n_hidden])),
    "weight_out": tf.Variable(tf.random_normal([n_hidden, n_classes]))
}
biases = {
    "biases_in": tf.Variable(tf.random_normal([n_hidden])),
    "biases_out": tf.Variable(tf.random_normal([n_classes]))
}

x = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_classes])

def RNN(x, weights, biases):
    # Input layer: project each row of pixels into the hidden dimension.
    x_in = tf.reshape(x, [-1, n_inputs])
    x_in = tf.matmul(x_in, weights["weight_in"]) + biases["biases_in"]
    x_in = tf.reshape(x_in, [-1, n_steps, n_hidden])

    # RNN layer: a single LSTM unrolled over n_steps.
    lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    init_state = lstm_cell.zero_state(batch_size, tf.float32)
    output, final_state = tf.nn.dynamic_rnn(lstm_cell, x_in, initial_state=init_state, time_major=False)

    # Output layer: classify from the final hidden state (final_state[1] is h_state).
    y_ = tf.matmul(final_state[1], weights["weight_out"]) + biases["biases_out"]
    return y_

prediction = RNN(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)

accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1)), tf.float32))

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    step = 1
    while step * batch_size < max_samples:
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        batch_x = batch_x.reshape((batch_size, n_steps, n_inputs))
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        if step % 10 == 0:
            accuracy_ = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
            print(accuracy_)
        step += 1


2. Using a bidirectional RNN

The model consists of a bidirectional RNN layer and an output layer.

Bidirectional RNN layer:

The bidirectional RNN computation uses outputs, _, _ = tf.contrib.rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x, dtype=tf.float32).
The input x must be a list of length n_steps whose elements are tensors of shape [batch_size, n_inputs].
The returned outputs is likewise a list with one result per time step; because the forward and backward outputs are concatenated, each element has shape [batch_size, 2 * n_hidden] (see the shape check below), which is why the output weights are [2 * n_hidden, n_classes].
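A minimal shape check (a hypothetical standalone snippet, same TF 1.x API as the rest of the post) confirming the concatenated width:

import tensorflow as tf

n_steps, n_inputs, n_hidden = 28, 28, 256
x = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
# Turn [batch_size, n_steps, n_inputs] into a list of n_steps tensors
# of shape [batch_size, n_inputs].
x_list = tf.unstack(tf.transpose(x, [1, 0, 2]))

lstm_fw_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
lstm_bw_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
outputs, _, _ = tf.contrib.rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x_list, dtype=tf.float32)

print(len(outputs))             # 28
print(outputs[-1].get_shape())  # (?, 512), i.e. [batch_size, 2 * n_hidden]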

Output layer:

Weights w = [2 * n_hidden, n_classes], biases b = [n_classes]

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

learning_rate = 0.01
max_samples = 40000
batch_size = 128

n_steps = 28
n_inputs = 28
n_hidden = 256
n_classes = 10

x = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_classes])

# The forward and backward outputs are concatenated, hence 2 * n_hidden.
weights = tf.Variable(tf.random_normal([2 * n_hidden, n_classes]))
biases = tf.Variable(tf.random_normal([n_classes]))

def BiRNN(x, weights, biases):
    # Convert [batch_size, n_steps, n_inputs] into a list of n_steps
    # tensors of shape [batch_size, n_inputs], as static_bidirectional_rnn expects.
    x = tf.transpose(x, [1, 0, 2])
    x = tf.reshape(x, [-1, n_inputs])
    x = tf.split(x, n_steps)

    lstm_fw_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    lstm_bw_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    outputs, _, _ = tf.contrib.rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x, dtype=tf.float32)

    # Classify from the last time step; each element of outputs is [batch_size, 2 * n_hidden].
    y_ = tf.matmul(outputs[-1], weights) + biases
    return y_

prediction = BiRNN(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)

accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1)), tf.float32))

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    step = 1
    while step * batch_size < max_samples:
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        batch_x = batch_x.reshape((batch_size, n_steps, n_inputs))
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        if step % 10 == 0:
            # Do not overwrite the accuracy tensor; store the value separately.
            acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
            print(acc)
        step += 1

    x_batch = mnist.test.images[:1000].reshape((-1, n_steps, n_inputs))
    y_batch = mnist.test.labels[:1000]
    print("Testing Accuracy:", sess.run(accuracy, feed_dict={x: x_batch, y: y_batch}))