Original poster: ReneeBK

【Tutorial】Deep MNIST for Experts

11
ReneeBK posted on 2017-2-19 01:38:50
First Convolutional Layer

We can now implement our first layer. It will consist of convolution, followed by max pooling. The convolution will compute 32 features for each 5x5 patch. Its weight tensor will have a shape of [5, 5, 1, 32]. The first two dimensions are the patch size, the next is the number of input channels, and the last is the number of output channels. We will also have a bias vector with a component for each output channel.

W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])

To apply the layer, we first reshape x to a 4d tensor, with the second and third dimensions corresponding to image width and height, and the final dimension corresponding to the number of color channels.

x_image = tf.reshape(x, [-1, 28, 28, 1])

We then convolve x_image with the weight tensor, add the bias, apply the ReLU function, and finally max pool. The max_pool_2x2 method will reduce the image size to 14x14.

h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
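The snippets in this thread rely on the weight_variable, bias_variable, conv2d, and max_pool_2x2 helpers defined earlier in the tutorial but not quoted in these posts. A minimal sketch consistent with the TensorFlow 1.x tutorial (treat it as an assumption about the earlier setup, not part of this post):

import tensorflow as tf

def weight_variable(shape):
    # Small positive noise breaks symmetry and avoids dead ReLU units at the start.
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    # Slightly positive bias so ReLU units start active.
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    # Stride 1 with SAME padding keeps the spatial size unchanged.
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # 2x2 pooling with stride 2 halves the width and height.
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')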

12
ReneeBK posted on 2017-2-19 01:39:30
Second Convolutional Layer

In order to build a deep network, we stack several layers of this type. The second layer will have 64 features for each 5x5 patch.

W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])

h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
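As a quick sanity check on the shapes (an illustrative snippet, not part of the original post), the static shapes reported by TensorFlow should show the spatial size halving at each pooling step while the channel count grows:

# Illustrative shape check, assuming the graph built above.
print(h_pool1.get_shape())  # (?, 14, 14, 32) -- after the first conv + 2x2 max pool
print(h_pool2.get_shape())  # (?, 7, 7, 64)   -- after the second conv + 2x2 max pool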

13
ReneeBK posted on 2017-2-19 01:40:36
Densely Connected Layer

Now that the image size has been reduced to 7x7, we add a fully-connected layer with 1024 neurons to allow processing on the entire image. We reshape the tensor from the pooling layer into a batch of vectors, multiply by a weight matrix, add a bias, and apply a ReLU.

W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])

h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
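As a side note (illustrative arithmetic, not from the original post), this densely connected layer holds the vast majority of the model's weights, which is why the 7*7*64 flattened size matters:

# Rough weight counts per layer (biases omitted).
conv1_weights = 5 * 5 * 1 * 32        # 800
conv2_weights = 5 * 5 * 32 * 64       # 51,200
fc1_weights   = 7 * 7 * 64 * 1024     # 3,211,264 -- the dense layer dominates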

14
ReneeBK posted on 2017-2-19 01:41:37
Dropout

To reduce overfitting, we will apply dropout before the readout layer. We create a placeholder for the probability that a neuron's output is kept during dropout. This allows us to turn dropout on during training, and turn it off during testing. TensorFlow's tf.nn.dropout op automatically handles scaling neuron outputs in addition to masking them, so dropout just works without any additional scaling.

keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
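The scaling mentioned above is "inverted dropout": kept activations are divided by keep_prob so each unit's expected output is unchanged, which is why no rescaling is needed at test time. A minimal sketch of the idea in plain NumPy (illustrative only; the real op is tf.nn.dropout):

import numpy as np

def dropout_sketch(activations, keep_prob):
    # Zero out units with probability (1 - keep_prob), then scale the
    # survivors by 1 / keep_prob so the expected value stays the same.
    mask = (np.random.rand(*activations.shape) < keep_prob).astype(activations.dtype)
    return activations * mask / keep_prob

a = np.ones((4, 1024), dtype=np.float32)
print(dropout_sketch(a, 0.5).mean())  # close to 1.0 in expectation, matching the no-dropout output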

15
ReneeBK posted on 2017-2-19 01:42:00
Readout Layer

Finally, we add a readout layer, just like the one-layer softmax regression above.

W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])

y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
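Note that y_conv holds unnormalized logits rather than probabilities; the softmax is folded into the loss in the next post for numerical stability. If explicit probabilities or predicted labels are wanted, an illustrative addition (not part of the original post):

y_prob = tf.nn.softmax(y_conv)   # class probabilities, each row sums to 1
y_pred = tf.argmax(y_conv, 1)    # predicted digit; argmax of the logits equals argmax of the softmax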

16
ReneeBK posted on 2017-2-19 01:43:15
Train and Evaluate the Model

How well does this model do? To train and evaluate it we will use code that is nearly identical to that for the simple one-layer softmax network above.

The differences are that:

- We will replace the steepest gradient descent optimizer with the more sophisticated ADAM optimizer.
- We will include the additional parameter keep_prob in feed_dict to control the dropout rate.
- We will add logging to every 100th iteration in the training process.

Feel free to go ahead and run this code, but it does 20,000 training iterations and may take a while (possibly up to half an hour), depending on your processor.

cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.global_variables_initializer())
for i in range(20000):
  batch = mnist.train.next_batch(50)
  if i%100 == 0:
    train_accuracy = accuracy.eval(feed_dict={
        x: batch[0], y_: batch[1], keep_prob: 1.0})
    print("step %d, training accuracy %g" % (i, train_accuracy))
  train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

print("test accuracy %g" % accuracy.eval(feed_dict={
    x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))

The final test set accuracy after running this code should be approximately 99.2%.

We have learned how to quickly and easily build, train, and evaluate a fairly sophisticated deep learning model using TensorFlow.
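The training code above assumes the setup from the start of the tutorial (data loading, an interactive session, and the input placeholders x and y_), which is not quoted in this thread. A minimal sketch consistent with the TensorFlow 1.x tutorial:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Download/read MNIST with one-hot labels, as the tutorial does.
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# InteractiveSession lets Tensor.eval() and Operation.run() use this session implicitly.
sess = tf.InteractiveSession()

# Placeholders for flattened 28x28 images and one-hot digit labels.
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])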

17
钱学森64 posted on 2017-2-19 17:38:36
Thanks for sharing; it's a bit hard to follow.

18
h2h2 posted on 2017-2-19 17:53:38
Thanks for sharing.
