Stage One - Detailed Illustrated Guide to TensorFlow 1.4 - (9) TensorBoard: Visualizing Learning
2017-12-20 16:24
The computations you’ll use TensorFlow for - like training a massive deep neural network - can be complex and confusing. To make it easier to understand, debug, and optimize TensorFlow programs, we’ve included a suite of visualization tools called TensorBoard. You can use TensorBoard to visualize your TensorFlow graph, plot quantitative metrics about the execution of your graph, and show additional data like images that pass through it. When TensorBoard is fully configured, it looks like this:
The computations involved in training a large deep neural network can be complex and confusing. To make TensorFlow programs easier to understand, debug, and optimize, a suite of visualization tools called TensorBoard is provided. With TensorBoard you can display your TensorFlow graph, plot quantitative metrics about its execution, and show additional data such as images passing through it. Once TensorBoard is set up, it looks like this:
This tutorial is intended to get you started with simple TensorBoard usage. There are other resources available as well! The TensorBoard’s GitHub has a lot more information on TensorBoard usage, including tips & tricks, and debugging information.
This tutorial covers only basic TensorBoard usage. The TensorBoard GitHub repository has much more detail, including usage tips & tricks and debugging information.
Step one: serializing the data
TensorBoard operates by reading TensorFlow events files, which contain summary data that you can generate when running TensorFlow. Here’s the general lifecycle for summary data within TensorBoard.
TensorBoard works by reading the events files that TensorFlow writes out, which contain the summary data. The lifecycle of that summary data is described below.
First, create the TensorFlow graph that you’d like to collect summary data from, and decide which nodes you would like to annotate with summary operations.
First, create the TensorFlow graph you want to collect summary data from, and decide which of its nodes should be annotated with summary operations.
For example, suppose you are training a convolutional neural network for recognizing MNIST digits. You’d like to record how the learning rate varies over time, and how the objective function is changing. Collect these by attaching tf.summary.scalar ops to the nodes that output the learning rate and loss respectively. Then, give each scalar_summary a meaningful tag, like ‘learning rate’ or ‘loss function’.
Recall the MNIST example: suppose we want to record how the learning rate and the objective function change over time. How? Attach tf.summary.scalar ops to the nodes that output the learning rate and the loss, and give each summary a meaningful tag such as 'learning rate' or 'loss function'.
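As a minimal sketch (the learning-rate schedule and loss tensor below are illustrative stand-ins, not part of the tutorial's actual model):

import tensorflow as tf

# Illustrative stand-ins for whatever your graph actually computes.
global_step = tf.Variable(0, trainable=False, name='global_step')
learning_rate = tf.train.exponential_decay(0.1, global_step, 1000, 0.96)
loss = tf.reduce_mean(tf.square(tf.random_normal([32])), name='loss')

# Each tag ('learning_rate', 'loss') becomes a chart in the Scalars dashboard.
tf.summary.scalar('learning_rate', learning_rate)
tf.summary.scalar('loss', loss)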
Perhaps you’d also like to visualize the distributions of activations coming off a particular layer, or the distribution of gradients or weights. Collect this data by attaching tf.summary.histogram ops to the gradient outputs and to the variable that holds your weights, respectively.
Want to see the distribution of activations coming off a particular layer, or the distribution of gradients or weights? Attach tf.summary.histogram ops to the gradient outputs and to the weight variables.
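Another small sketch, with hypothetical tensors standing in for a real layer:

import tensorflow as tf

# Hypothetical stand-ins for a layer's input, weights, and activations.
x = tf.placeholder(tf.float32, [None, 784], name='x')
weights = tf.Variable(tf.truncated_normal([784, 500], stddev=0.1), name='weights')
activations = tf.nn.relu(tf.matmul(x, weights))

# Histogram summaries record the full distribution of a tensor over time.
tf.summary.histogram('weights', weights)
tf.summary.histogram('activations', activations)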
For details on all of the summary operations available, check out the docs on summary operations.
For all available summary operations, see the summary-operations docs.
Operations in TensorFlow don’t do anything until you run them, or an op that depends on their output. And the summary nodes that we’ve just created are peripheral to your graph: none of the ops you are currently running depend on them. So, to generate summaries, we need to run all of these summary nodes. Managing them by hand would be tedious, so use tf.summary.merge_all to combine them into a single op that generates all the summary data.
Summary nodes are peripheral to the graph, so nothing we normally run depends on them; to generate summaries we would have to run every summary node ourselves, which is tedious to manage by hand. Instead, tf.summary.merge_all combines them into a single op that generates all of the summary data at once.
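For example (the two summary ops below are arbitrary placeholders):

import tensorflow as tf

# Two unrelated summary ops, as created in the previous steps.
tf.summary.scalar('loss', tf.constant(0.5, name='loss'))
tf.summary.histogram('noise', tf.random_normal([100]))

# merge_all gathers every summary op in the graph into a single op,
# so one run call generates all of the summary data at once.
merged = tf.summary.merge_all()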
Then, you can just run the merged summary op, which will generate a serialized Summary protobuf object with all of your summary data at a given step. Finally, to write this summary data to disk, pass the summary protobuf to a tf.summary.FileWriter.
Running that merged op produces a serialized Summary protobuf object containing all of the summary data for the given step. To write it to disk, pass the protobuf to a tf.summary.FileWriter.
The FileWriter takes a logdir in its constructor - this logdir is quite important, it's the directory where all of the events will be written out. Also, the FileWriter can optionally take a Graph in its constructor. If it receives a Graph object, then TensorBoard will visualize your graph along with tensor shape information. This will give you a much better sense of what flows through the graph: see Tensor shape information.
FileWriter(logdir, graph): logdir is the directory where the events files are stored; the graph argument is optional, and if you pass one, TensorBoard will also display the graph.
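Putting the last two steps together, a minimal sketch (/tmp/example_logs is an arbitrary directory, and the scalar being summarized is a placeholder):

import tensorflow as tf

tf.summary.scalar('loss', tf.constant(0.5, name='loss'))
merged = tf.summary.merge_all()

with tf.Session() as sess:
    # logdir is where the events files are written out; passing
    # sess.graph is optional and lets TensorBoard draw the graph itself.
    writer = tf.summary.FileWriter('/tmp/example_logs', sess.graph)
    summary = sess.run(merged)  # a serialized Summary protobuf
    writer.add_summary(summary, global_step=0)
    writer.close()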
Now that you’ve modified your graph and have a FileWriter, you’re ready to start running your network! If you want, you could run the merged summary op every single step, and record a ton of training data. That’s likely to be more data than you need, though. Instead, consider running the merged summary op every n steps.
Now we modify the MNIST code to add the operations above, running the merged summary op every ten steps.
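The shape of such a loop, as a sketch (the "training" op here is a trivial stand-in for a real train step):

import tensorflow as tf

loss = tf.Variable(1.0, name='loss')
train_step = loss.assign(loss * 0.99)  # trivial stand-in for a real train op
tf.summary.scalar('loss', loss)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('/tmp/example_logs', sess.graph)
    for step in range(1000):
        sess.run(train_step)
        if step % 10 == 0:  # write summaries every n (= 10) steps
            writer.add_summary(sess.run(merged), step)
    writer.close()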
The code example below is a modification of the simple MNIST tutorial, in which we have added some summary ops, and run them every ten steps. If you run this and then launch tensorboard --logdir=/tmp/tensorflow/mnist, you'll be able to visualize statistics, such as how the weights or accuracy varied during training. The code below is an excerpt; full source is here.
The code below is the modified MNIST example. Run it, then launch TensorBoard with tensorboard --logdir=/tmp/tensorflow/mnist, and you will be able to see statistics such as how the weights and the accuracy varied while the model trained.
# -*- coding: utf-8 -*-
"""A simple MNIST classifier which displays summaries in TensorBoard.

This is an unimpressive MNIST model, but it is a good example of using
tf.name_scope to make a graph legible in the TensorBoard graph explorer,
and of naming summary tags so that they are grouped meaningfully in
TensorBoard.

It demonstrates the functionality of every TensorBoard dashboard.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import os
import sys

import tensorflow as tf

import input_data

FLAGS = None


def train():
  # Import data
  mnist = input_data.read_data_sets(FLAGS.data_dir,
                                    one_hot=True,
                                    fake_data=FLAGS.fake_data)

  sess = tf.InteractiveSession()
  # Create a multilayer model.

  # Input placeholders
  with tf.name_scope('input'):
    x = tf.placeholder(tf.float32, [None, 784], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, 10], name='y-input')

  with tf.name_scope('input_reshape'):
    image_shaped_input = tf.reshape(x, [-1, 28, 28, 1])
    tf.summary.image('input', image_shaped_input, 10)

  # We can't initialize these variables to 0 - the network will get stuck.
  def weight_variable(shape):
    """Create a weight variable with appropriate initialization."""
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

  def bias_variable(shape):
    """Create a bias variable with appropriate initialization."""
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

  def variable_summaries(var):
    """Attach a lot of summaries to a Tensor (for TensorBoard visualization)."""
    with tf.name_scope('summaries'):
      mean = tf.reduce_mean(var)
      tf.summary.scalar('mean', mean)
      with tf.name_scope('stddev'):
        stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
      tf.summary.scalar('stddev', stddev)
      tf.summary.scalar('max', tf.reduce_max(var))
      tf.summary.scalar('min', tf.reduce_min(var))
      tf.summary.histogram('histogram', var)

  def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
    """Reusable code for making a simple neural net layer.

    It does a matrix multiply, bias add, and then uses ReLU to nonlinearize.
    It also sets up name scoping so that the resultant graph is easy to read,
    and adds a number of summary ops.
    """
    # Adding a name scope ensures logical grouping of the layers in the graph.
    with tf.name_scope(layer_name):
      # This Variable will hold the state of the weights for the layer
      with tf.name_scope('weights'):
        weights = weight_variable([input_dim, output_dim])
        variable_summaries(weights)
      with tf.name_scope('biases'):
        biases = bias_variable([output_dim])
        variable_summaries(biases)
      with tf.name_scope('Wx_plus_b'):
        preactivate = tf.matmul(input_tensor, weights) + biases
        tf.summary.histogram('pre_activations', preactivate)
      activations = act(preactivate, name='activation')
      tf.summary.histogram('activations', activations)
      return activations

  hidden1 = nn_layer(x, 784, 500, 'layer1')

  with tf.name_scope('dropout'):
    keep_prob = tf.placeholder(tf.float32)
    tf.summary.scalar('dropout_keep_probability', keep_prob)
    dropped = tf.nn.dropout(hidden1, keep_prob)

  # Do not apply softmax activation yet, see below.
  y = nn_layer(dropped, 500, 10, 'layer2', act=tf.identity)

  with tf.name_scope('cross_entropy'):
    # The raw formulation of cross-entropy,
    #
    # tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.softmax(y)),
    #                               reduction_indices=[1]))
    #
    # can be numerically unstable.
    #
    # So here we use tf.nn.softmax_cross_entropy_with_logits on the
    # raw outputs of the nn_layer above, and then average across
    # the batch.
    diff = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)
    with tf.name_scope('total'):
      cross_entropy = tf.reduce_mean(diff)
  tf.summary.scalar('cross_entropy', cross_entropy)

  with tf.name_scope('train'):
    train_step = tf.train.AdamOptimizer(FLAGS.learning_rate).minimize(
        cross_entropy)

  with tf.name_scope('accuracy'):
    with tf.name_scope('correct_prediction'):
      correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    with tf.name_scope('accuracy'):
      accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
  tf.summary.scalar('accuracy', accuracy)

  # Merge all the summaries and write them out to
  # /tmp/tensorflow/mnist/logs/mnist_with_summaries (by default)
  merged = tf.summary.merge_all()
  train_writer = tf.summary.FileWriter(FLAGS.log_dir + '/train', sess.graph)
  test_writer = tf.summary.FileWriter(FLAGS.log_dir + '/test')
  tf.global_variables_initializer().run()

  # Train the model, and also write summaries.
  # Every 10th step, measure test-set accuracy, and write test summaries
  # All other steps, run train_step on training data, & add training summaries

  def feed_dict(train):
    """Make a TensorFlow feed_dict: maps data onto Tensor placeholders."""
    if train or FLAGS.fake_data:
      xs, ys = mnist.train.next_batch(100, fake_data=FLAGS.fake_data)
      k = FLAGS.dropout
    else:
      xs, ys = mnist.test.images, mnist.test.labels
      k = 1.0
    return {x: xs, y_: ys, keep_prob: k}

  for i in range(FLAGS.max_steps):
    if i % 10 == 0:  # Record summaries and test-set accuracy
      summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))
      test_writer.add_summary(summary, i)
      print('Accuracy at step %s: %s' % (i, acc))
    else:  # Record train set summaries, and train
      if i % 100 == 99:  # Record execution stats
        run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
        run_metadata = tf.RunMetadata()
        summary, _ = sess.run([merged, train_step],
                              feed_dict=feed_dict(True),
                              options=run_options,
                              run_metadata=run_metadata)
        train_writer.add_run_metadata(run_metadata, 'step%03d' % i)
        train_writer.add_summary(summary, i)
        print('Adding run metadata for', i)
      else:  # Record a summary
        summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))
        train_writer.add_summary(summary, i)
  train_writer.close()
  test_writer.close()


def main(_):
  if tf.gfile.Exists(FLAGS.log_dir):
    tf.gfile.DeleteRecursively(FLAGS.log_dir)
  tf.gfile.MakeDirs(FLAGS.log_dir)
  train()


if __name__ == '__main__':
  parser = argparse.ArgumentParser()
  parser.add_argument('--fake_data', nargs='?', const=True, type=bool,
                      default=False,
                      help='If true, uses fake data for unit testing.')
  parser.add_argument('--max_steps', type=int, default=1000,
                      help='Number of steps to run trainer.')
  parser.add_argument('--learning_rate', type=float, default=0.001,
                      help='Initial learning rate')
  parser.add_argument('--dropout', type=float, default=0.9,
                      help='Keep probability for training dropout.')
  parser.add_argument(
      '--data_dir',
      type=str,
      default=os.path.join(os.getenv('TEST_TMPDIR', '/tmp'),
                           'tensorflow/mnist/input_data'),
      help='Directory for storing input data')
  parser.add_argument(
      '--log_dir',
      type=str,
      default=os.path.join(os.getenv('TEST_TMPDIR', '/tmp'),
                           'tensorflow/mnist/logs/mnist_with_summaries'),
      help='Summaries log directory')
  FLAGS, unparsed = parser.parse_known_args()
  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
Run:
tensorboard --logdir=D:\tmp\tensorflow\mnist\logs
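When it starts, TensorBoard prints the address it is serving on (http://localhost:6006 by default); open that in a browser to see the dashboards.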