Building a Network Model in TensorFlow
2016-11-18 15:34
from __future__ import print_function

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt


def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

# Make up some real data
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise
##plt.scatter(x_data, y_data)
##plt.show()

# define placeholder for inputs to network
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])
# add hidden layer
l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
# add output layer
prediction = add_layer(l1, 10, 1, activation_function=None)

# the error between prediction and real data
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# important step (tf.initialize_all_variables is deprecated in later
# TF 1.x releases; tf.global_variables_initializer is the replacement)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

for i in range(1000):
    # training
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        # to see the step improvement
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
Notes:
1. The function for adding a network layer:
def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs
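What `add_layer` computes is just an affine transform followed by an optional activation. A minimal pure-Python sketch of that computation (not the TensorFlow implementation; the name `dense_forward` is hypothetical):

```python
def dense_forward(inputs, weights, biases, activation=None):
    # inputs: list of rows (batch); weights: in_size x out_size; biases: length out_size
    outputs = []
    for row in inputs:
        # one output per weight column: dot(row, column) + bias
        out = [sum(x * w for x, w in zip(row, col)) + b
               for col, b in zip(zip(*weights), biases)]
        if activation is not None:
            out = [activation(v) for v in out]
        outputs.append(out)
    return outputs

def relu(v):
    return max(0.0, v)

# one sample with 2 features through a 2 -> 2 layer
y = dense_forward([[1.0, 2.0]], [[1.0, -1.0], [0.5, 2.0]], [0.1, 0.1], relu)
# y ≈ [[2.1, 3.1]]: 1*1 + 2*0.5 + 0.1 and max(0, 1*(-1) + 2*2 + 0.1)
```

This mirrors `tf.matmul(inputs, Weights) + biases` followed by `activation_function(Wx_plus_b)` when an activation is given.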
2. The model's inputs and outputs are represented with placeholders:
# define placeholder for inputs to network
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])
3. How layers are stacked:
# add hidden layer
l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
# add output layer
prediction = add_layer(l1, 10, 1, activation_function=None)
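Stacking simply means the `out_size` of one layer becomes the `in_size` of the next. A tiny shape trace of the 1 → 10 → 1 stack above (helper name hypothetical):

```python
def matmul_shape(a, b):
    # (m, k) x (k, n) -> (m, n); inner dimensions must agree
    assert a[1] == b[0], "inner dimensions must match"
    return (a[0], b[1])

batch = (300, 1)                        # x_data: 300 samples, 1 feature
hidden = matmul_shape(batch, (1, 10))   # hidden-layer Weights: [1, 10]
output = matmul_shape(hidden, (10, 1))  # output-layer Weights: [10, 1]
# hidden == (300, 10), output == (300, 1)
```

The `None` in the placeholder shape `[None, 1]` stands for the batch dimension (300 here), which is why any number of samples can be fed in.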
4. Define the loss and the optimizer:
# the error between prediction and real data
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys-prediction), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
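The loss is the mean over samples of the summed squared error per sample (with a 1-D output this reduces to plain mean squared error), and the optimizer repeatedly steps against the gradient. A pure-Python sketch of that loop on a simpler one-parameter model y = w*x (data and names hypothetical, same learning rate 0.1):

```python
xs_v = [0.0, 1.0, 2.0]
ys_v = [0.0, 2.0, 4.0]  # generated by y = 2*x, so w should approach 2
w = 0.0
lr = 0.1

def mse(w):
    # mean over samples of squared error, matching reduce_mean(square(ys - prediction))
    return sum((y - w * x) ** 2 for x, y in zip(xs_v, ys_v)) / len(xs_v)

for _ in range(100):
    # analytic gradient: d/dw mean((y - w*x)^2) = mean(-2 * x * (y - w*x))
    grad = sum(-2 * x * (y - w * x) for x, y in zip(xs_v, ys_v)) / len(xs_v)
    w -= lr * grad  # the gradient-descent update GradientDescentOptimizer performs
# w converges toward 2.0
```

`GradientDescentOptimizer(0.1).minimize(loss)` builds the same update for every `tf.Variable` in the graph, with TensorFlow computing the gradients automatically.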
5. Computing the loss requires the placeholder values, so they must be passed in through a dict; here we use feed_dict={xs: x_data, ys: y_data}, which feeds in the entire training set. With a batch size, only a subset of the samples would be fed in each step, which is exactly where placeholders show their value:
for i in range(1000):
    # training
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        # to see the step improvement
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
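To train with mini-batches instead of the full set, each step would sample a subset of rows and feed just those through feed_dict. A minimal pure-Python sketch of the sampling (stand-in data; variable names hypothetical):

```python
import random

data_x = list(range(300))         # stand-ins for the 300 rows of x_data
data_y = [x * x for x in data_x]  # stand-ins for the matching rows of y_data
batch_size = 32

random.seed(0)  # for reproducibility of this sketch
idx = random.sample(range(len(data_x)), batch_size)  # indices without replacement
batch_x = [data_x[i] for i in idx]
batch_y = [data_y[i] for i in idx]  # x/y rows stay paired via shared indices
# In the TF script this batch would be fed as:
# sess.run(train_step, feed_dict={xs: batch_x, ys: batch_y})
```

Because the placeholders were declared with shape [None, 1], the same graph accepts the 300-row full set or a 32-row batch without change.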