How to install the right TensorBoard

Why can't TensorFlow be used on Windows? (Zhihu)

Highlights from the TensorFlow r0.12 release notes:

- An experimental Go API for creating and running graphs (godoc.org/github.com/tensorflow/tensorflow/tensorflow/go).
- The new checkpoint format is now the default in tf.train.Saver. Old V1 checkpoints remain readable. Controlled by the write_version argument, tf.train.Saver now writes the new V2 format by default; this significantly reduces the peak memory required during restore, and reduces latency as well.
- A new library for matrix-free (iterative) solvers in tensorflow/contrib/solvers, supporting linear equations, linear least squares, eigenvalues, and singular values. The initial version includes Lanczos bidiagonalization, conjugate gradient, and CGLS.
- Gradients added for matrix_solve_ls and self_adjoint_eig.
- A large cleanup of existing code: second-order gradients were added for ops with C++ gradients, and existing gradients were improved so that most ops can now be differentiated multiple times.
- A solver for ordinary differential equations: tf.contrib.integrate.odeint.
- A new contrib module for tensors with named axes: tf.contrib.labeled_tensor.
- Visualization of embeddings in TensorBoard.

Major API changes:

- The BusAdjacency enum was replaced by the DeviceLocality protocol buffer. Bus indices now start at 1 instead of 0, and bus_id==0 is used where BUS_ANY was used previously.
- Env::FileExists and FileSystem::FileExists now return tensorflow::Status instead of a bool. Any caller can convert the result back to a bool by appending .ok() to the call.
- C API: the TF_SessionWithGraph type was renamed to TF_Session, which is now preferred in TensorFlow's language bindings. The original TF_Session was renamed to TF_DeprecatedSession.
- C API: TF_Port was renamed to TF_Output.
- C API: callers retain ownership of TF_Tensor objects provided to TF_Run, TF_SessionRun, TF_SetAttrTensor, and similar calls.
- tf.image.per_image_whitening() was renamed to tf.image.per_image_standardization().
- The Summary protobuf constructors moved to the tf.summary submodule; histogram_summary, audio_summary, scalar_summary, image_summary, merge_summary, and merge_all_summaries are deprecated.
- The batch_* and regular versions of the linear-algebra and FFT ops were combined. The regular ops now handle batches as well, and all batch_* Python interfaces were removed.
- tf.all_variables, tf.VARIABLES, and tf.initialize_all_variables were renamed to tf.global_variables, tf.GLOBAL_VARIABLES, and tf.global_variables_initializer, respectively. (A short sketch of these renames follows at the end of this answer.)

Bug fixes and other changes:

- A thread-safe version of the lgamma function is now used.
- Fixed tf.sqrt for negative arguments.
- Fixed a bug that caused an incorrect number of threads to be used in multi-threaded benchmarks.
- Performance optimizations for batch_matmul on multi-core CPUs.
- Improved trace, matrix_set_diag, and matrix_diag_part, along with their gradients, for rectangular matrices.
- SVD now supports complex-valued matrices.

From the Google Developers Blog (/2016/11/tensorflow-0-12-adds-support-for-windows.html):

Posted by a software engineer on the TensorFlow team.

Today we are launching preliminary Windows support for TensorFlow. Native support for TensorFlow on Windows was one of the first requests we received after open-sourcing TensorFlow. Although some Windows users have managed to run TensorFlow in a Docker container, we wanted to provide a more complete experience including GPU support.

With the release of TensorFlow r0.12, we now provide a native TensorFlow package for Windows 7, 10, and Server 2016. This release enables you to speed up your TensorFlow training with any GPU that runs CUDA 8.

We have published the latest release as a pip package, so now you can install TensorFlow with a single command:
C:\> pip install tensorflow

And for GPU support:

C:\> pip install tensorflow-gpu

More details about Windows support and all of the other new features in r0.12 are included in the release notes. We're excited to offer more people the opportunity to use TF at maximum speed. Follow us on Twitter to be the first to hear about future releases.

Acknowledgements: many people have contributed to making this release possible. In particular, we'd like to thank Guenther Schmuelling and Vit Stepanovs from Microsoft for their significant contributions to Windows support.

Original answer:

There were two main reasons TensorFlow previously had no native Windows version:

1. TF's own C++ code could not be compiled with Visual Studio's MSVC toolchain.
2. Bazel, the build system TF uses on Linux and Mac, did not support Windows.

TF's code has since been successfully ported to Windows, and Bazel has supported Windows since version 0.3.2 (not yet stable, but basically usable). There are currently two ways to build TensorFlow on Windows: CMake and Bazel. TF used CMake when porting its C++ code to Windows, so apart from slow compilation this route should present no major problems, and it has begun to support GPUs (GPU support is the reason many people want TF on Windows in the first place). The advantage of building with Bazel is speed; incremental builds in particular are much faster than with CMake. Bazel 0.3.2 could only build TF's C++ example trainer, but the just-released 0.4.0 already supports building the Python PIP package; GPU builds with Bazel will take a while longer. The plan, of course, is to migrate the entire Windows build to Bazel once Bazel is stable on Windows. If you don't want to build it yourself, TF may later publish a Windows Python wheel file (PIP package) directly, which should install straight into WinPython; note that only Python 3.5 is currently supported on Windows.
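To make the r0.12 renames listed above concrete, here is a minimal sketch (my own illustration, not taken from the release notes) of moving summary and initialization code from the old names to the new ones:

import tensorflow as tf

loss = tf.constant(1.0)

# Before r0.12 this would have been tf.scalar_summary('loss', loss)
# and tf.merge_all_summaries(); the constructors now live in tf.summary.
tf.summary.scalar('loss', loss)
merged = tf.summary.merge_all()

# tf.initialize_all_variables() is now tf.global_variables_initializer().
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    summary = sess.run(merged)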
I have recently been learning TensorBoard visualization in TensorFlow, but at first I could not get it to work at all. After following the tutorial, searching online, and trying many approaches, I found the cause: the tutorial is written for Linux, and the file paths there differ from Windows. Below, using the official tutorial as the example, I point out what has to be changed on Windows:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import sys

import tensorflow as tf

from tensorflow.examples.tutorials.mnist import input_data

FLAGS = None


def train():
  # Import data
  mnist = input_data.read_data_sets(FLAGS.data_dir,
                                    one_hot=True,
                                    fake_data=FLAGS.fake_data)

  sess = tf.InteractiveSession()
  # Create a multilayer model.

  # Input placeholders
  with tf.name_scope('input'):
    x = tf.placeholder(tf.float32, [None, 784], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, 10], name='y-input')

  with tf.name_scope('input_reshape'):
    image_shaped_input = tf.reshape(x, [-1, 28, 28, 1])
    tf.summary.image('input', image_shaped_input, 10)

  # We can't initialize these variables to 0 - the network will get stuck.
  def weight_variable(shape):
    """Create a weight variable with appropriate initialization."""
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

  def bias_variable(shape):
    """Create a bias variable with appropriate initialization."""
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

  def variable_summaries(var):
    """Attach a lot of summaries to a Tensor (for TensorBoard visualization)."""
    with tf.name_scope('summaries'):
      mean = tf.reduce_mean(var)
      tf.summary.scalar('mean', mean)
      with tf.name_scope('stddev'):
        stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
      tf.summary.scalar('stddev', stddev)
      tf.summary.scalar('max', tf.reduce_max(var))
      tf.summary.scalar('min', tf.reduce_min(var))
      tf.summary.histogram('histogram', var)

  def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
    """Reusable code for making a simple neural net layer.

    It does a matrix multiply, bias add, and then uses relu to nonlinearize.
    It also sets up name scoping so that the resultant graph is easy to read,
    and adds a number of summary ops.
    """
    # Adding a name scope ensures logical grouping of the layers in the graph.
    with tf.name_scope(layer_name):
      # This Variable will hold the state of the weights for the layer
      with tf.name_scope('weights'):
        weights = weight_variable([input_dim, output_dim])
        variable_summaries(weights)
      with tf.name_scope('biases'):
        biases = bias_variable([output_dim])
        variable_summaries(biases)
      with tf.name_scope('Wx_plus_b'):
        preactivate = tf.matmul(input_tensor, weights) + biases
        tf.summary.histogram('pre_activations', preactivate)
      activations = act(preactivate, name='activation')
      tf.summary.histogram('activations', activations)
      return activations

  hidden1 = nn_layer(x, 784, 500, 'layer1')

  with tf.name_scope('dropout'):
    keep_prob = tf.placeholder(tf.float32)
    tf.summary.scalar('dropout_keep_probability', keep_prob)
    dropped = tf.nn.dropout(hidden1, keep_prob)

  # Do not apply softmax activation yet, see below.
  y = nn_layer(dropped, 500, 10, 'layer2', act=tf.identity)

  with tf.name_scope('cross_entropy'):
    # The raw formulation of cross-entropy,
    #
    # tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.softmax(y)),
    #                               reduction_indices=[1]))
    #
    # can be numerically unstable.
    #
    # So here we use tf.nn.softmax_cross_entropy_with_logits on the
    # raw outputs of the nn_layer above, and then average across
    # the batch.
    diff = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)
    with tf.name_scope('total'):
      cross_entropy = tf.reduce_mean(diff)
  tf.summary.scalar('cross_entropy', cross_entropy)

  with tf.name_scope('train'):
    train_step = tf.train.AdamOptimizer(FLAGS.learning_rate).minimize(
        cross_entropy)

  with tf.name_scope('accuracy'):
    with tf.name_scope('correct_prediction'):
      correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    with tf.name_scope('accuracy'):
      accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
  tf.summary.scalar('accuracy', accuracy)

  # Merge all the summaries and write them out to
  # /tmp/tensorflow/mnist/logs/mnist_with_summaries (by default)
  merged = tf.summary.merge_all()
  train_writer = tf.summary.FileWriter(FLAGS.log_dir + '/train', sess.graph)
  test_writer = tf.summary.FileWriter(FLAGS.log_dir + '/test')
  tf.global_variables_initializer().run()

  # Train the model, and also write summaries.
  # Every 10th step, measure test-set accuracy, and write test summaries
  # All other steps, run train_step on training data, & add training summaries

  def feed_dict(train):
    """Make a TensorFlow feed_dict: maps data onto Tensor placeholders."""
    if train or FLAGS.fake_data:
      xs, ys = mnist.train.next_batch(100, fake_data=FLAGS.fake_data)
      k = FLAGS.dropout
    else:
      xs, ys = mnist.test.images, mnist.test.labels
      k = 1.0
    return {x: xs, y_: ys, keep_prob: k}

  for i in range(FLAGS.max_steps):
    if i % 10 == 0:  # Record summaries and test-set accuracy
      summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))
      test_writer.add_summary(summary, i)
      print('Accuracy at step %s: %s' % (i, acc))
    else:  # Record train set summaries, and train
      if i % 100 == 99:  # Record execution stats
        run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
        run_metadata = tf.RunMetadata()
        summary, _ = sess.run([merged, train_step],
                              feed_dict=feed_dict(True),
                              options=run_options,
                              run_metadata=run_metadata)
        train_writer.add_run_metadata(run_metadata, 'step%03d' % i)
        train_writer.add_summary(summary, i)
        print('Adding run metadata for', i)
      else:  # Record a summary
        summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))
        train_writer.add_summary(summary, i)
  train_writer.close()
  test_writer.close()


def main(_):
  if tf.gfile.Exists(FLAGS.log_dir):
    tf.gfile.DeleteRecursively(FLAGS.log_dir)
  tf.gfile.MakeDirs(FLAGS.log_dir)
  train()


if __name__ == '__main__':
  parser = argparse.ArgumentParser()
  parser.add_argument('--fake_data', nargs='?', const=True, type=bool,
                      default=False,
                      help='If true, uses fake data for unit testing.')
  parser.add_argument('--max_steps', type=int, default=1000,
                      help='Number of steps to run trainer.')
  parser.add_argument('--learning_rate', type=float, default=0.001,
                      help='Initial learning rate')
  parser.add_argument('--dropout', type=float, default=0.9,
                      help='Keep probability for training dropout.')
  parser.add_argument('--data_dir', type=str,
                      default='/tmp/tensorflow/mnist/input_data',
                      help='Directory for storing input data')
  parser.add_argument('--log_dir', type=str,
                      default='/tmp/tensorflow/mnist/logs/mnist_with_summaries',
                      help='Summaries log directory')
  FLAGS, unparsed = parser.parse_known_args()
  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
The above is the official example implementation. Here is the part that needs to be changed:
parser.add_argument('--log_dir', type=str, default='/tmp/tensorflow/mnist/logs/mnist_with_summaries',
                    help='Summaries log directory')

The modified code:

parser.add_argument('--log_dir', type=str, default='C:/tmp/tensorflow/mnist/logs/mnist_with_summaries',
                    help='Summaries log directory')
The directory depends on which drive you put the logs on; here I used the C drive. Open cmd and run the following in the terminal (note there must be no space after --logdir=):
tensorboard --logdir=C:\tmp\tensorflow\mnist\logs\mnist_with_summaries
Then open Google Chrome and go to localhost:6006 to view the result.
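If the page comes up empty, a quick sanity check (my own addition, not part of the official tutorial; the path assumes the C:\tmp default used above) is to confirm that the FileWriter actually produced event files before blaming TensorBoard:

import glob
import os

# Adjust log_dir if you chose a different drive or directory above.
log_dir = r'C:\tmp\tensorflow\mnist\logs\mnist_with_summaries'
for sub in ('train', 'test'):
    # tf.summary.FileWriter writes files named events.out.tfevents.*
    matches = glob.glob(os.path.join(log_dir, sub, 'events.out.tfevents.*'))
    print('%s: found %d event file(s)' % (sub, len(matches)))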