
Installing Docker and setting up the TensorFlow framework for future TensorFlow testing and learning

2017-07-26 21:22
Since I want to learn TensorFlow, I looked over the TensorFlow ecosystem and decided that Python is the most convenient way in. Running the whole TensorFlow stack under Docker is also quite convenient, so the first step is to set up a Docker environment.

 

I mainly followed these three blog posts for the Docker installation and testing:

 

http://blog.csdn.net/fu_shuwu/article/details/75947602

http://blog.csdn.net/fu_shuwu/article/details/75946886

http://blog.csdn.net/dream_an/article/details/51985170
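
For completeness, a rough sketch of the install steps on Ubuntu (the exact procedure is in the posts above; this assumes either the distribution package or Docker's official convenience script):

# Option 1: the Ubuntu-packaged docker.io
sudo apt-get update
sudo apt-get install -y docker.io

# Option 2: Docker's official convenience script
curl -fsSL https://get.docker.com | sh

# Verify the installation
sudo docker version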

 

After installing Docker, pulling ubuntu:14.04 and tensorflow/tensorflow, and running hello-world, the image list looks like this:

sudo docker images

 

REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
<none>                  <none>              000e5908dd9f        About an hour ago   188MB
ubuntu                  14.04               54333f1de4ed        5 days ago          188MB
tensorflow/tensorflow   latest              02f42dc11beb        3 weeks ago         1.17GB
hello-world             latest              1815c82652c0        5 weeks ago         1.84kB
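
An image list like the one above would be produced by roughly these commands:

sudo docker pull ubuntu:14.04
sudo docker pull tensorflow/tensorflow
sudo docker run hello-world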

 

 

Run the examples that ship inside the tensorflow image directly:

http://localhost:8888/notebooks/2_getting_started.ipynb

There are samples below it that you can test right away; modifying some of the source code is a good way to get familiar with the basics of how TensorFlow works.
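
To reach that notebook at localhost:8888, the container has to publish the Jupyter port; the standard invocation for the tensorflow/tensorflow image is along these lines:

sudo docker run -it -p 8888:8888 tensorflow/tensorflow

The container then prints the notebook URL (possibly including a login token) in the terminal, which you open in the browser.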

 

#@test {"output":"ignore"}
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Set up the data with a noisy linear relationship between X and Y.
num_examples = 50
X = np.array([np.linspace(-2, 4, num_examples), np.linspace(-6, 6, num_examples)])
# Add random noise (gaussian, mean 0, stdev 1)
X += np.random.randn(2, num_examples)
# Split into x and y
x, y = X
# Add the bias node which always has a value of 1
x_with_bias = np.array([(1., a) for a in x]).astype(np.float32)

# Keep track of the loss at each iteration so we can chart it later
losses = []
# How many iterations to run our training
training_steps = 50
# The learning rate. Also known as the step size. This changes how far
# we move down the gradient toward lower error at each step. Too large
# jumps risk inaccuracy, too small slows the learning.
learning_rate = 0.002

# In TensorFlow, we need to run everything in the context of a session.
with tf.Session() as sess:
    # Set up all the tensors.
    # Our input layer is the x value and the bias node.
    input = tf.constant(x_with_bias)
    # Our target is the y values. They need to be massaged to the right shape.
    target = tf.constant(np.transpose([y]).astype(np.float32))
    # Weights are a variable. They change every time through the loop.
    # Weights are initialized to random values (gaussian, mean 0, stdev 0.1)
    weights = tf.Variable(tf.random_normal([2, 1], 0, 0.1))

    # Initialize all the variables defined above.
    tf.global_variables_initializer().run()

    # Set up all operations that will run in the loop.
    # For all x values, generate our estimate on all y given our current
    # weights. So, this is computing y = w2 * x + w1 * bias
    yhat = tf.matmul(input, weights)
    # Compute the error, which is just the difference between our
    # estimate of y and what y actually is.
    yerror = tf.subtract(yhat, target)
    # We are going to minimize the L2 loss. The L2 loss is the sum of the
    # squared error for all our estimates of y. This penalizes large errors
    # a lot, but small errors only a little.
    loss = tf.nn.l2_loss(yerror)

    # Perform gradient descent.
    # This essentially just updates weights, like weights += grads * learning_rate
    # using the partial derivative of the loss with respect to the
    # weights. It's the direction we want to go to move toward lower error.
    update_weights = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

    # At this point, we've defined all our tensors and run our initialization
    # operations. We've also set up the operations that will repeatedly be run
    # inside the training loop. All the training loop is going to do is
    # repeatedly call run, inducing the gradient descent operation, which has
    # the effect of repeatedly changing weights by a small amount in the
    # direction (the partial derivative or gradient) that will reduce the
    # error (the L2 loss).
    for _ in range(training_steps):
        # Repeatedly run the operations, updating the TensorFlow variable.
        sess.run(update_weights)

        # Here, we're keeping a history of the losses to plot later
        # so we can see the change in loss as training progresses.
        losses.append(loss.eval())

    # Training is done, get the final values for the charts
    betas = weights.eval()
    yhat = yhat.eval()

# Show the results.
fig, (ax1, ax2) = plt.subplots(1, 2)
plt.subplots_adjust(wspace=.3)
fig.set_size_inches(10, 4)
ax1.scatter(x, y, alpha=.7)
ax1.scatter(x, np.transpose(yhat)[0], c="g", alpha=.6)
line_x_range = (-4, 6)
ax1.plot(line_x_range, [betas[0] + a * betas[1] for a in line_x_range], "g", alpha=0.6)
ax2.plot(range(0, training_steps), losses)
ax2.set_ylabel("Loss")
ax2.set_xlabel("Training steps")
plt.show()
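
As a quick sanity check when experimenting, the learned weights can be compared against the closed-form least-squares solution. A minimal sketch using numpy, run after the training block above and reusing its x_with_bias, y, and betas (the rcond=None argument assumes numpy >= 1.14):

# Closed-form least-squares fit for the same data: solves
# argmin_w ||x_with_bias @ w - y||^2 directly.
w_exact, _, _, _ = np.linalg.lstsq(x_with_bias, y, rcond=None)
print("gradient descent betas:", betas.flatten())
print("closed-form solution:  ", w_exact)

After 50 steps of gradient descent the two should be close but not identical; increasing training_steps moves the learned betas toward the exact solution.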