
Implementing it with TensorLayer: understanding LSTM recurrent neural networks through a Keras example

2018-02-24 19:17
Original guide + example + Keras implementation: http://blog.csdn.net/ma416539432/article/details/53509607

Test environment: tensorflow-gpu 1.5.0 + tensorlayer 1.7.4

That article was a very convenient entry point for me into LSTM cells, but I mainly use TensorLayer rather than Keras, so I converted the code into TensorLayer form.

I'm still a beginner with neural networks myself, so please bear with the rough code.

One complaint: when I ran the Keras version, it somehow ate 2.4 GB of GPU memory... and I only have 3 GB.

The TensorLayer version used only about 100 MB.
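
A likely reason (an assumption on my part, not something verified against the original post): a TensorFlow 1.x session reserves most of the GPU's memory up front by default, and the default session Keras creates does exactly that. A minimal sketch of asking TensorFlow to allocate on demand instead:

import tensorflow as tf

# Let TensorFlow grow GPU memory usage on demand instead of
# reserving (almost) the whole card up front (TF 1.x API)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)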

Below is just the final code. It should be easy to follow, and it can be compared directly against the final code in the original article.

# Naive LSTM to learn three-char time steps to one-char mapping
import numpy
import tensorflow as tf
import tensorlayer as tl
numpy.random.seed(7)
# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
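# e.g. char_to_int['A'] == 0, char_to_int['Z'] == 25, and int_to_char inverts that mapping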
# prepare the dataset of input to output pairs encoded as integers
seq_length = 3
dataX = []
dataY = []
for i in range(0, len(alphabet) - seq_length, 1):
    seq_in = alphabet[i:i + seq_length]
    seq_out = alphabet[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
    print(seq_in, '->', seq_out)
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))
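# here X has shape (23, 3, 1): 23 windows, 3 time steps, 1 feature per step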
# normalize
X = X / float(len(alphabet))
# tl.utils.fit only accepts numpy.array for its input data (not numpy.matrix or plain lists), so wrap both as numpy.array
X = numpy.asarray(X)
Y = numpy.asarray(dataY)
# unlike the Keras version, no one-hot encoding of the output is needed:
# tl.cost.cross_entropy below takes integer class labels directly
# create and fit the model
x = tf.placeholder(tf.float32, [None, X.shape[1], X.shape[2]], 'x')
y_ = tf.placeholder(tf.int64, [None, ])
network = tl.layers.InputLayer(x, 'input_layer')
network = tl.layers.RNNLayer(network, tf.nn.rnn_cell.LSTMCell, n_hidden=32, n_steps=X.shape[1], return_last=True, name='lstm1')
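# return_last=True makes the layer output only the final step's hidden state,
# shape [batch_size, n_hidden], which then feeds the dense output layer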
network = tl.layers.DenseLayer(network, len(alphabet), name='output_layer')
y = network.outputs
cost = tl.cost.cross_entropy(y, y_, name='cost')
correct_prediction = tf.equal(tf.argmax(y, 1), y_)
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
y_op = tf.argmax(tf.nn.softmax(y), 1)
train_params = network.all_params
train_op = tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999,
                                  epsilon=1e-08, use_locking=False).minimize(cost, var_list=train_params)
sess = tf.InteractiveSession()
tl.layers.initialize_global_variables(sess)
network.print_params()
network.print_layers()
tl.utils.fit(sess, network, train_op, cost, X, Y, x, y_,
             acc=acc, batch_size=1, n_epoch=500, print_freq=1,
             X_val=X, y_val=Y, eval_train=False)
# summarize performance of the model
tl.utils.test(sess, network, acc, X, Y, x, y_, None, cost)
# demonstrate some model predictions
for pattern in dataX:
    X = numpy.reshape(pattern, (1, len(pattern), 1))
    X = X / float(len(alphabet))
    X = numpy.asarray(X)
    prediction = tl.utils.predict(sess, network, X, x, y_op, 1)
    # y_op already applies argmax, so prediction holds class indices directly
    index = prediction[0]
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print(seq_in, "->", result)
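
If you want to keep the trained weights, TensorLayer 1.x can dump and restore the parameter list as an .npz file. A minimal sketch (the file name model.npz is my own choice, and the restoring session must hold an identically built network):

# save the trained parameters to disk
tl.files.save_npz(network.all_params, name='model.npz', sess=sess)
# later, restore them into the same architecture
tl.files.load_and_assign_npz(sess=sess, name='model.npz', network=network)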