
Getting Started with TensorFlow: Training a Simple Neural Network

2018-03-27 15:41
import tensorflow as tf
import os
import numpy as np
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

BATCH_SIZE = 8
SEED = 23455
# Seed the random number generator for reproducibility
rdm = np.random.RandomState(SEED)
# Draw a 32 x 2 matrix of random samples as the input
X = rdm.rand(32,2)

# Generate the labels: y = 1 if x0 + x1 < 1, else y = 0
Y_ = [[int(x0 + x1 < 1)] for (x0, x1) in X]
print("X:\n",X)
print("Y_:\n",Y_)

x = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))

w1 = tf.Variable(tf.random_normal([2,3], stddev=1, seed=1))   # 2 * 3
w2 = tf.Variable(tf.random_normal([3,1], stddev=1, seed=1))   # 3 * 1

a = tf.matmul(x, w1)
y = tf.matmul(a, w2)

# Define the loss function and the back-propagation (training) step
loss_mse = tf.reduce_mean(tf.square(y-y_))

train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss_mse)

# Start training
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    # Print the parameter values before training.
    print("w1:\n", sess.run(w1))
    print("w2:\n", sess.run(w2))
    print("\n")

    # Train the model.
    STEPS = 3000
    for i in range(STEPS):
        start = (i * BATCH_SIZE) % 32   # 32 samples in total
        end = start + BATCH_SIZE        # train on a batch of 8 each step
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y_[start:end]})  # minimize the loss
        if i % 500 == 0:
            # Every 500 steps, report the current loss over the whole dataset
            total_loss = sess.run(loss_mse, feed_dict={x: X, y_: Y_})
            print("After %d training step(s), loss_mse on all data is %g" % (i, total_loss))

    # Print the parameter values after training.
    print("\n")
    print("w1:\n", sess.run(w1))
    print("w2:\n", sess.run(w2))
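As a sanity check, the network's forward pass (y = x · w1 · w2, no activations) and the MSE loss can be reproduced in plain NumPy. This is a sketch, not part of the original script: the weights `w1`/`w2` below are arbitrary stand-ins of the right shapes, since the actual `tf.random_normal` initial values depend on TensorFlow's graph-level seeding.

```python
import numpy as np

# Regenerate the same dataset as the script (seed 23455).
rdm = np.random.RandomState(23455)
X = rdm.rand(32, 2)
Y_ = np.array([[int(x0 + x1 < 1)] for (x0, x1) in X], dtype=np.float32)

# Stand-in weights with the same shapes as w1 (2x3) and w2 (3x1).
w1 = np.full((2, 3), 0.1, dtype=np.float32)
w2 = np.full((3, 1), 0.1, dtype=np.float32)

# Forward pass: a = x @ w1, y = a @ w2, then mean squared error.
y = (X @ w1) @ w2
loss_mse = np.mean((y - Y_) ** 2)
print(y.shape, loss_mse)
```

This makes explicit that the model is purely linear, so gradient descent on the MSE is fitting a linear regressor to the 0/1 labels.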