OP: Lisrelchen

TensorFlow: open source software library for numerical computation



Hidden content in this post:

tensorflow-master.zip (8.56 MB)

TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them. This flexible architecture lets you deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device without rewriting code. TensorFlow also includes TensorBoard, a data visualization toolkit.
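To make the graph model concrete, here is a minimal sketch (not from the original README; it uses the same era's Python API as the code later in this thread). Building the graph only defines operations; nothing is computed until a session runs a node:

import tensorflow as tf

# Building the graph: no computation happens here, only graph construction.
a = tf.constant([[1.0, 2.0]])    # a 1x2 tensor: an edge in the graph
b = tf.constant([[3.0], [4.0]])  # a 2x1 tensor
product = tf.matmul(a, b)        # a node: the matmul operation

# Running the graph: the session executes the requested node.
sess = tf.Session()
print(sess.run(product))  # [[ 11.]]
sess.close()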

TensorFlow was originally developed by researchers and engineers working on the Google Brain team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research. The system is general enough to be applicable in a wide variety of other domains, as well.

If you'd like to contribute to TensorFlow, be sure to review the contribution guidelines.

We use GitHub issues for tracking requests and bugs, but please see Community for general questions and discussion.

Installation

See Download and Setup for instructions on how to install our release binaries or how to build from source.

People who are a little more adventurous can also try our nightly binaries.

Try your first TensorFlow program:

$ python
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> sess.run(hello)
Hello, TensorFlow!
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> sess.run(a + b)
42
>>>




Reply #1 · Lisrelchen · 2016-7-1 11:11:01
TensorFlow

TensorFlow is a library for machine learning and deep learning developed by Google. The project page is https://www.tensorflow.org/ and all the code is open to the public on GitHub at https://github.com/tensorflow/tensorflow. TensorFlow itself is written in C++, but it provides Python and C++ APIs. We focus on the Python implementation in this book. Installation can be done with pip, virtualenv, or Docker. The installation guide is available at https://www.tensorflow.org/versions/master/get_started/os_setup.html. After installation, you can import and use TensorFlow by writing the following code:

import tensorflow as tf

TensorFlow recommends implementing deep learning code with the following three parts:

- inference(): makes predictions on the given data; this defines the model structure
- loss(): returns the error value to be optimized
- training(): applies the actual training algorithm by computing gradients

We'll follow this guideline. A tutorial on MNIST classification for beginners is available at https://www.tensorflow.org/versions/master/tutorials/mnist/beginners/index.html and the code for this tutorial can be found in DLWJ/src/resources/tensorflow/1_1_mnist_simple.py. Here, we refine the code introduced in that tutorial; you can see all of the resulting code in DLWJ/src/resources/tensorflow/1_2_mnist.py.

First, we have to fetch the MNIST data. Thankfully, TensorFlow provides the code to fetch the data at https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/input_data.py; we put that file in the same directory as our code. Then the module can be imported as follows:

import input_data

MNIST data can then be loaded with the following code:

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
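As a quick sanity check (a sketch assuming the input_data module above; these shapes are the standard MNIST tutorial split), you can inspect the loaded arrays:

# 55,000 training images, each flattened to 784 pixels (28x28)
print(mnist.train.images.shape)  # (55000, 784)
print(mnist.train.labels.shape)  # (55000, 10), one-hot over the 10 digits
# a held-out test set, used later to report accuracy
print(mnist.test.images.shape)   # (10000, 784)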
Similar to Theano, we define variables with no actual values as placeholders:

x_placeholder = tf.placeholder("float", [None, 784])
label_placeholder = tf.placeholder("float", [None, 10])

Here, 784 is the number of units in the input layer and 10 is the number in the output layer. We use placeholders because the values they hold change with each mini-batch. Once the placeholders are defined, you can move on to model building and training. We set the non-linear activation to the softmax function in inference():

def inference(x_placeholder):
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))

    y = tf.nn.softmax(tf.matmul(x_placeholder, W) + b)

    return y
Here, W and b are the parameters of the model. The loss function, that is, the cross_entropy, is defined in loss() as follows:

def loss(y, label_placeholder):
    cross_entropy = - tf.reduce_sum(label_placeholder * tf.log(y))

    return cross_entropy
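One caveat the text does not mention: tf.log(y) yields -inf when a softmax output underflows to exactly zero, which turns the loss into NaN. A common workaround (an optional variant, not part of the book's code) is to clip y before taking the log:

def loss(y, label_placeholder):
    # Keep predictions strictly positive so tf.log never sees zero.
    y_clipped = tf.clip_by_value(y, 1e-10, 1.0)
    cross_entropy = - tf.reduce_sum(label_placeholder * tf.log(y_clipped))
    return cross_entropy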
With the definition of inference() and loss(), we can train the model by writing the following code:

def training(loss):
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

    return train_step
GradientDescentOptimizer() applies the gradient descent algorithm. But be careful: this only defines the training operation; the actual training has not yet been executed. TensorFlow also supports AdagradOptimizer(), MomentumOptimizer(), and other major optimization algorithms.
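Swapping optimizers only changes the line that builds train_step. For example, a momentum-based variant of training() might look like this (a sketch; the momentum value 0.9 is a common default and an assumption, not from the text):

def training(loss):
    # Same learning rate as the text (0.01), plus a momentum term of 0.9.
    train_step = tf.train.MomentumOptimizer(0.01, 0.9).minimize(loss)
    return train_step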

The code and methods explained so far define the model. To execute the actual training, you need to create a TensorFlow session and initialize the variables:

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
Then we train the model with mini-batches. All the data in a mini-batch is stored in feed_dict and then used in sess.run():

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    feed_dict = {x_placeholder: batch_xs, label_placeholder: batch_ys}

    sess.run(train_step, feed_dict=feed_dict)
That's it for the model training. It's very simple, isn't it? You can show the result by writing the following code:

def res(y, label_placeholder, feed_dict):
    correct_prediction = tf.equal(
        tf.argmax(y, 1), tf.argmax(label_placeholder, 1)
    )

    accuracy = tf.reduce_mean(
        tf.cast(correct_prediction, "float")
    )

    print(sess.run(accuracy, feed_dict=feed_dict))
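For example, to report accuracy on the held-out test set after the training loop (a sketch reusing the names defined earlier; this simple softmax model typically reaches an accuracy around 0.91):

test_feed = {
    x_placeholder: mnist.test.images,
    label_placeholder: mnist.test.labels,
}
res(y, label_placeholder, test_feed)  # prints the test-set accuracy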
TensorFlow makes it very easy to implement deep learning. Furthermore, TensorFlow has another powerful feature, TensorBoard, for visualizing deep learning. By adding a few lines of code to the previous snippet, we can use this feature.

Let's first see how the model is visualized. The code is in DLWJ/src/resources/tensorflow/1_3_mnist_TensorBoard.py, so simply run it. After you run the program, type the following command:

$ tensorboard --logdir=<ABSOLUTE_PATH>/data

Here, <ABSOLUTE_PATH> is the absolute path to the program's directory. Then, if you open http://localhost:6006/ in your browser, you can see the following page:

[Screenshot: TensorBoard EVENTS view]

This shows the progression of the cross_entropy value. Also, when you click GRAPH in the header menu, you see a visualization of the model:

[Screenshot: TensorBoard GRAPH view of the model]

When you click on inference on the page, you can see the model structure:

[Screenshot: the expanded inference node showing the model structure]
Now let's look inside the code. To enable visualization, you need to wrap the whole model definition in the scope with tf.Graph().as_default(). By adding this scope, all the variables declared within it will be displayed in the graph. The displayed name can be set with the name argument as follows:

x_placeholder = tf.placeholder("float", [None, 784], name="input")
label_placeholder = tf.placeholder("float", [None, 10], name="label")
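For context, here is roughly how that wrapping scope fits around the pieces defined in this post (a structural sketch, not the book's verbatim code):

with tf.Graph().as_default():
    # Everything declared inside this scope shows up in the TensorBoard graph.
    x_placeholder = tf.placeholder("float", [None, 784], name="input")
    label_placeholder = tf.placeholder("float", [None, 10], name="label")

    y = inference(x_placeholder)
    cross_entropy = loss(y, label_placeholder)
    train_step = training(cross_entropy)
    # ... session creation, summaries, and the training loop follow here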
Defining further scopes creates nodes in the graph, and this is where dividing the code into inference(), loss(), and training() pays off. You can define a scope for each of them without losing any readability:

def inference(x_placeholder):
    with tf.name_scope('inference') as scope:
        W = tf.Variable(tf.zeros([784, 10]), name="W")
        b = tf.Variable(tf.zeros([10]), name="b")

        y = tf.nn.softmax(tf.matmul(x_placeholder, W) + b)

    return y

def loss(y, label_placeholder):
    with tf.name_scope('loss') as scope:
        cross_entropy = - tf.reduce_sum(label_placeholder * tf.log(y))

        tf.scalar_summary("Cross Entropy", cross_entropy)

    return cross_entropy

def training(loss):
    with tf.name_scope('training') as scope:
        train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

    return train_step
tf.scalar_summary() in loss() makes the variable show up in the EVENTS menu. To enable visualization, we need the following code:

summary_step = tf.merge_all_summaries()
init = tf.initialize_all_variables()

summary_writer = tf.train.SummaryWriter('data', graph_def=sess.graph_def)
The progression of the variables can then be recorded with the following code:

summary = sess.run(summary_step, feed_dict=feed_dict)
summary_writer.add_summary(summary, i)
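Putting it together, the earlier training loop grows by two lines; this sketch reuses the names defined throughout the post and writes one summary per step (logging only every N steps is a common variation):

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    feed_dict = {x_placeholder: batch_xs, label_placeholder: batch_ys}

    sess.run(train_step, feed_dict=feed_dict)

    # Evaluate the merged summaries and record them under step i.
    summary = sess.run(summary_step, feed_dict=feed_dict)
    summary_writer.add_summary(summary, i)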
This visualization feature will be much more useful when we are working with more complicated models.


Reply #2 · jinyizhe282 · 2016-7-1 20:53:24

Has stuff ~~~~

