OP: Reader's

【GitHub】R Deep Learning Cookbook




R Deep Learning Cookbook

This is the code repository for R Deep Learning Cookbook, published by Packt. It contains all the supporting project files necessary to work through the book from start to finish.

About the Book

Deep learning is the next big thing. It is a subfield of machine learning, and its strong results on huge, complex datasets are remarkable. At the same time, the R programming language is very popular among data miners and statisticians.

Instructions and Navigation

All of the code is organized into folders. Each folder starts with a number followed by the application name. For example, Chapter02.

The code will look like the following:

[default]
exten => s,1,Dial(Zap/1|30)
exten => s,2,Voicemail(u100)
exten => s,102,Voicemail(b100)
exten => i,1,Voicemail(s0)

A lot of inquisitiveness, perseverance, and passion are required to build a strong background in data science. The scope of deep learning is quite broad, so the following background is required to use this cookbook effectively:

  • Basics of machine learning and data analysis
  • Proficiency in R programming
  • Basics of Python and Docker

Lastly, you need to appreciate deep learning algorithms and know how they solve complex problems in multiple domains. A minimal environment sketch follows this list.
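As a starting point, the R environment can be prepared as below. This is a minimal sketch under the assumption (inferred from the recipes excerpted in the replies, not stated in the README) that tensorflow, imager, caret, and ggplot2 are the packages required:

# Minimal environment setup (sketch; package set inferred from the recipes below)
install.packages(c("tensorflow", "imager", "caret", "ggplot2"))
library(tensorflow)
install_tensorflow()  # installs the Python TensorFlow backend used by the R package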
Suggestions and Feedback

Click here if you have any feedback or suggestions.

https://github.com/PacktPublishing/R-Deep-Learning-Cookbook




Reply #1 | Reader's | posted 2017-8-13 02:19:39
###########   COLLABORATIVE FILTERING WITH RBM   ###########
setwd("Set the working directory with movies.dat and ratings.dat files")

## Read the MovieLens data
txt <- readLines("movies.dat", encoding = "latin1")
txt_split <- lapply(strsplit(txt, "::"), function(x) as.data.frame(t(x), stringsAsFactors = FALSE))
movies_df <- do.call(rbind, txt_split)
names(movies_df) <- c("MovieID", "Title", "Genres")
movies_df$MovieID <- as.numeric(movies_df$MovieID)
movies_df$id_order <- 1:nrow(movies_df)

ratings_df <- read.table("ratings.dat", sep = ":", header = FALSE, stringsAsFactors = FALSE)
ratings_df <- ratings_df[, c(1, 3, 5, 7)]   # drop the empty columns created by the "::" separator
colnames(ratings_df) <- c("UserID", "MovieID", "Rating", "Timestamp")

# Merge user ratings and movies
merged_df <- merge(movies_df, ratings_df, by = "MovieID", all = FALSE)

# Remove unnecessary columns
merged_df[, c("Timestamp", "Title", "Genres")] <- NULL

# Create % rating (scale ratings to [0, 1])
merged_df$rating_per <- merged_df$Rating / 5

# Generate a users x movies matrix of ratings
num_of_users <- 1000
num_of_movies <- length(unique(movies_df$MovieID))
trX <- matrix(0, nrow = num_of_users, ncol = num_of_movies)
for (i in 1:num_of_users) {
  merged_df_user <- merged_df[merged_df$UserID %in% i, ]
  trX[i, merged_df_user$id_order] <- merged_df_user$rating_per
}

# Import TensorFlow (and ggplot2 for the plots below)
# Sys.setenv(TENSORFLOW_PYTHON="C:/PROGRA~1/Python35/python.exe")
# Sys.setenv(TENSORFLOW_PYTHON_VERSION = 3)
library(tensorflow)
library(ggplot2)
np <- import("numpy")

# Create TensorFlow session
# Reset the graph
tf$reset_default_graph()
# Start an interactive session
sess <- tf$InteractiveSession()

# Model parameters
num_hidden <- 20
num_input <- nrow(movies_df)
vb <- tf$placeholder(tf$float32, shape = shape(num_input))    # Number of unique movies
hb <- tf$placeholder(tf$float32, shape = shape(num_hidden))   # Number of features to learn
W  <- tf$placeholder(tf$float32, shape = shape(num_input, num_hidden))

# Phase 1: Input processing (sample hidden units given visible units)
v0 <- tf$placeholder(tf$float32, shape = shape(NULL, num_input))
prob_h0 <- tf$nn$sigmoid(tf$matmul(v0, W) + hb)
h0 <- tf$nn$relu(tf$sign(prob_h0 - tf$random_uniform(tf$shape(prob_h0))))
# Phase 2: Reconstruction (sample visible units given hidden units)
prob_v1 <- tf$nn$sigmoid(tf$matmul(h0, tf$transpose(W)) + vb)
v1 <- tf$nn$relu(tf$sign(prob_v1 - tf$random_uniform(tf$shape(prob_v1))))
h1 <- tf$nn$sigmoid(tf$matmul(v1, W) + hb)

# RBM parameters and update rules
# Learning rate
alpha <- 1.0
# Positive and negative gradients
w_pos_grad <- tf$matmul(tf$transpose(v0), h0)
w_neg_grad <- tf$matmul(tf$transpose(v1), h1)
# Contrastive divergence, averaged over the batch size (1-based tensor extraction)
CD <- (w_pos_grad - w_neg_grad) / tf$to_float(tf$shape(v0)[1])
# Update rules for the weights and biases
update_w  <- W + alpha * CD
update_vb <- vb + alpha * tf$reduce_mean(v0 - v1)
update_hb <- hb + alpha * tf$reduce_mean(h0 - h1)

# Mean squared reconstruction error
err <- v0 - v1
err_sum <- tf$reduce_mean(err * err)

# Initialise variables (current and previous)
cur_w  <- tf$Variable(tf$zeros(shape = shape(num_input, num_hidden), dtype = tf$float32))
cur_vb <- tf$Variable(tf$zeros(shape = shape(num_input), dtype = tf$float32))
cur_hb <- tf$Variable(tf$zeros(shape = shape(num_hidden), dtype = tf$float32))
prv_w  <- tf$Variable(tf$random_normal(shape = shape(num_input, num_hidden), stddev = 0.01, dtype = tf$float32))
prv_vb <- tf$Variable(tf$zeros(shape = shape(num_input), dtype = tf$float32))
prv_hb <- tf$Variable(tf$zeros(shape = shape(num_hidden), dtype = tf$float32))

# Initialize variables and run one full-batch update
sess$run(tf$global_variables_initializer())
output <- sess$run(list(update_w, update_vb, update_hb),
                   feed_dict = dict(v0 = trX,
                                    W  = prv_w$eval(),
                                    vb = prv_vb$eval(),
                                    hb = prv_hb$eval()))
prv_w  <- output[[1]]
prv_vb <- output[[2]]
prv_hb <- output[[3]]
sess$run(err_sum, feed_dict = dict(v0 = trX, W = prv_w, vb = prv_vb, hb = prv_hb))

# Train the RBM
epochs <- 500
errors <- list()
weights <- list()

for (ep in 1:epochs) {
  for (i in seq(0, (dim(trX)[1] - 100), 100)) {
    batchX <- trX[(i + 1):(i + 100), ]
    output <- sess$run(list(update_w, update_vb, update_hb),
                       feed_dict = dict(v0 = batchX,
                                        W  = prv_w,
                                        vb = prv_vb,
                                        hb = prv_hb))
    prv_w  <- output[[1]]
    prv_vb <- output[[2]]
    prv_hb <- output[[3]]
    if (i %% 1000 == 0) {
      errors[[length(errors) + 1]] <- sess$run(err_sum, feed_dict = dict(v0 = batchX, W = prv_w, vb = prv_vb, hb = prv_hb))
      weights[[length(weights) + 1]] <- output[[1]]
      cat(i, " : ")
    }
  }
  cat("epoch :", ep, " : reconstruction error : ", errors[[length(errors)]], "\n")
}

# Plot reconstruction error
error_vec <- unlist(errors)
plot(error_vec, xlab = "# of batches", ylab = "mean squared reconstruction error", main = "RBM-Reconstruction MSE plot")

# Recommendation
# Select the input user
inputUser <- as.matrix(t(trX[75, ]))
names(inputUser) <- movies_df$id_order

# Remove the movies not watched yet
inputUser <- inputUser[inputUser > 0]

# Plot the genres of the user's top-rated movies
top_rated_movies <- movies_df[as.numeric(names(inputUser)[order(inputUser, decreasing = TRUE)]), ]$Title
top_rated_genres <- movies_df[as.numeric(names(inputUser)[order(inputUser, decreasing = TRUE)]), ]$Genres
top_rated_genres <- as.data.frame(top_rated_genres, stringsAsFactors = FALSE)
top_rated_genres$count <- 1
top_rated_genres <- aggregate(count ~ top_rated_genres, FUN = sum, data = top_rated_genres)
top_rated_genres <- top_rated_genres[with(top_rated_genres, order(-count)), ]
top_rated_genres$top_rated_genres <- factor(top_rated_genres$top_rated_genres, levels = top_rated_genres$top_rated_genres)
ggplot(top_rated_genres[top_rated_genres$count > 1, ], aes(x = top_rated_genres, y = count)) +
  geom_bar(stat = "identity") +
  theme_bw() +
  theme(axis.text.x = element_text(angle = 90, hjust = 1)) +
  labs(x = "Genres", y = "count", title = "Top Rated Genres") +
  theme(plot.title = element_text(hjust = 0.5))

# Feed in the user and reconstruct the input
# (inputUser was filtered to watched movies above, so rebuild the full 1 x num_input row for the graph)
user_full <- as.matrix(t(trX[75, ]))
hh0 <- tf$nn$sigmoid(tf$matmul(v0, W) + hb)
vv1 <- tf$nn$sigmoid(tf$matmul(hh0, tf$transpose(W)) + vb)
feed <- sess$run(hh0, feed_dict = dict(v0 = user_full, W = prv_w, hb = prv_hb))
rec  <- sess$run(vv1, feed_dict = dict(hh0 = feed, W = prv_w, vb = prv_vb))
names(rec) <- movies_df$id_order

# Select all recommended movies
top_recom_movies <- movies_df[as.numeric(names(rec)[order(rec, decreasing = TRUE)]), ]$Title[1:10]
top_recom_genres <- movies_df[as.numeric(names(rec)[order(rec, decreasing = TRUE)]), ]$Genres
top_recom_genres <- as.data.frame(top_recom_genres, stringsAsFactors = FALSE)
top_recom_genres$count <- 1
top_recom_genres <- aggregate(count ~ top_recom_genres, FUN = sum, data = top_recom_genres)
top_recom_genres <- top_recom_genres[with(top_recom_genres, order(-count)), ]
top_recom_genres$top_recom_genres <- factor(top_recom_genres$top_recom_genres, levels = top_recom_genres$top_recom_genres)
ggplot(top_recom_genres[top_recom_genres$count > 20, ], aes(x = top_recom_genres, y = count)) +
  geom_bar(stat = "identity") +
  theme_bw() +
  theme(axis.text.x = element_text(angle = 90, hjust = 1)) +
  labs(x = "Genres", y = "count", title = "Top Recommended Genres") +
  theme(plot.title = element_text(hjust = 0.5))
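To sanity-check the trained model outside TensorFlow, the reconstruction can be reproduced in base R. This is a sketch, not from the book, assuming prv_w, prv_vb, and prv_hb hold the final trained parameters (plain R arrays after the sess$run calls above):

# Pure-R reconstruction check (sketch; assumes trained prv_w, prv_vb, prv_hb)
sigmoid <- function(z) 1 / (1 + exp(-z))
user_row <- trX[75, , drop = FALSE]                          # 1 x num_input ratings vector
h_prob <- sigmoid(user_row %*% prv_w + as.numeric(prv_hb))   # hidden-unit activations
v_prob <- sigmoid(h_prob %*% t(prv_w) + as.numeric(prv_vb))  # reconstructed rating scores
head(movies_df$Title[order(v_prob, decreasing = TRUE)], 10)  # top-10 titles by score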


Reply #2 | Reader's | posted 2017-8-13 02:20:41
######  DEEP BELIEF NETWORKS  ######
# Import TensorFlow library
# Sys.setenv(TENSORFLOW_PYTHON="C:/PROGRA~1/Python35/python.exe")
# Sys.setenv(TENSORFLOW_PYTHON_VERSION = 3)
library(tensorflow)
np <- import("numpy")

# Create TensorFlow session
# Reset the graph
tf$reset_default_graph()
# Start an interactive session
sess <- tf$InteractiveSession()

# Input data (MNIST)
mnist <- tf$examples$tutorials$mnist$input_data$read_data_sets("MNIST-data/", one_hot = TRUE)
trainX <- mnist$train$images
trainY <- mnist$train$labels
testX <- mnist$test$images
testY <- mnist$test$labels

# Hidden-layer sizes of the stacked RBMs forming the DBN
RBM_hidden_sizes <- c(900, 500, 300)

# Function to initialize and train one RBM
RBM <- function(input_data,
                num_input,
                num_output,
                epochs = 5,
                alpha = 0.1,
                batchsize = 100) {

  # Placeholder variables
  vb <- tf$placeholder(tf$float32, shape = shape(num_input))
  hb <- tf$placeholder(tf$float32, shape = shape(num_output))
  W  <- tf$placeholder(tf$float32, shape = shape(num_input, num_output))

  # Phase 1: Forward pass
  X <- tf$placeholder(tf$float32, shape = shape(NULL, num_input))
  prob_h0 <- tf$nn$sigmoid(tf$matmul(X, W) + hb)  # probabilities of the hidden units
  h0 <- tf$nn$relu(tf$sign(prob_h0 - tf$random_uniform(tf$shape(prob_h0))))  # sample h given X

  # Phase 2: Backward pass (reconstruction)
  prob_v1 <- tf$nn$sigmoid(tf$matmul(h0, tf$transpose(W)) + vb)
  v1 <- tf$nn$relu(tf$sign(prob_v1 - tf$random_uniform(tf$shape(prob_v1))))
  h1 <- tf$nn$sigmoid(tf$matmul(v1, W) + hb)

  # Calculate contrastive-divergence gradients, averaged over the batch size
  # (1-based tensor extraction, consistent with the RBM recipe above)
  w_pos_grad <- tf$matmul(tf$transpose(X), h0)
  w_neg_grad <- tf$matmul(tf$transpose(v1), h1)
  CD <- (w_pos_grad - w_neg_grad) / tf$to_float(tf$shape(X)[1])
  update_w  <- W + alpha * CD
  update_vb <- vb + alpha * tf$reduce_mean(X - v1)
  update_hb <- hb + alpha * tf$reduce_mean(h0 - h1)

  # Objective function: mean squared reconstruction error
  err <- tf$reduce_mean(tf$square(X - v1))

  # Initialise variables
  cur_w  <- tf$Variable(tf$zeros(shape = shape(num_input, num_output), dtype = tf$float32))
  cur_vb <- tf$Variable(tf$zeros(shape = shape(num_input), dtype = tf$float32))
  cur_hb <- tf$Variable(tf$zeros(shape = shape(num_output), dtype = tf$float32))
  prv_w  <- tf$Variable(tf$random_normal(shape = shape(num_input, num_output), stddev = 0.01, dtype = tf$float32))
  prv_vb <- tf$Variable(tf$zeros(shape = shape(num_input), dtype = tf$float32))
  prv_hb <- tf$Variable(tf$zeros(shape = shape(num_output), dtype = tf$float32))

  # Initialize variables and run one full-batch update
  sess$run(tf$global_variables_initializer())
  output <- sess$run(list(update_w, update_vb, update_hb),
                     feed_dict = dict(X = input_data,
                                      W  = prv_w$eval(),
                                      vb = prv_vb$eval(),
                                      hb = prv_hb$eval()))
  prv_w  <- output[[1]]
  prv_vb <- output[[2]]
  prv_hb <- output[[3]]
  sess$run(err, feed_dict = dict(X = input_data, W = prv_w, vb = prv_vb, hb = prv_hb))

  errors <- list()
  weights <- list()
  u <- 1
  for (ep in 1:epochs) {
    for (i in seq(0, (dim(input_data)[1] - batchsize), batchsize)) {
      batchX <- input_data[(i + 1):(i + batchsize), ]
      output <- sess$run(list(update_w, update_vb, update_hb),
                         feed_dict = dict(X = batchX,
                                          W  = prv_w,
                                          vb = prv_vb,
                                          hb = prv_hb))
      prv_w  <- output[[1]]
      prv_vb <- output[[2]]
      prv_hb <- output[[3]]
      if (i %% 10000 == 0) {
        errors[[u]] <- sess$run(err, feed_dict = dict(X = batchX, W = prv_w, vb = prv_vb, hb = prv_hb))
        weights[[u]] <- output[[1]]
        u <- u + 1
        cat(i, " : ")
      }
    }
    cat("epoch :", ep, " : reconstruction error : ", errors[[length(errors)]], "\n")
  }

  w <- prv_w
  vb <- prv_vb
  hb <- prv_hb

  # Get the output (hidden-layer activations for the full input)
  input_X <- tf$constant(input_data)
  ph_w  <- tf$constant(w)
  ph_hb <- tf$constant(hb)

  out <- tf$nn$sigmoid(tf$matmul(input_X, ph_w) + ph_hb)

  sess$run(tf$global_variables_initializer())
  return(list(output_data = sess$run(out),
              error_list = errors,
              weight_list = weights,
              weight_final = w,
              bias_final = hb))
}

# Since we are training, set input as training data
inpX <- trainX

# Size of input is the number of columns in the training set
num_input <- ncol(inpX)

# Train the stack of RBMs greedily, layer by layer
RBM_output <- list()
for (i in 1:length(RBM_hidden_sizes)) {
  size <- RBM_hidden_sizes[i]

  # Train the RBM
  RBM_output[[i]] <- RBM(input_data = inpX,
                         num_input = num_input,
                         num_output = size,
                         epochs = 5,
                         alpha = 0.1,
                         batchsize = 100)

  # Update the input data (the next layer trains on this layer's activations)
  inpX <- RBM_output[[i]]$output_data

  # Update the input size
  num_input <- size

  cat("completed size :", size, "\n")
}

# Plot reconstruction error
error_df <- data.frame("error" = c(unlist(RBM_output[[1]]$error_list), unlist(RBM_output[[2]]$error_list), unlist(RBM_output[[3]]$error_list)),
                       "batches" = c(rep(seq(1:length(unlist(RBM_output[[1]]$error_list))), times = 3)),
                       "hidden_layer" = c(rep(c(1, 2, 3), each = length(unlist(RBM_output[[1]]$error_list)))),
                       stringsAsFactors = FALSE)

plot(error ~ batches,
     xlab = "# of batches",
     ylab = "Reconstruction Error",
     pch = c(1, 7, 16)[hidden_layer],
     main = "Stacked RBM-Reconstruction MSE plot",
     data = error_df)

legend('topright',
       c("H1_900", "H2_500", "H3_300"),
       pch = c(1, 7, 16))
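The trained stack can also be applied to unseen data directly in R, since weight_final and bias_final are plain R arrays at this point. A sketch, not from the book, assuming RBM_output holds the three layers trained above:

# Propagate new data through the trained DBN stack (sketch)
propagate_dbn <- function(X, rbm_stack) {
  sigmoid <- function(z) 1 / (1 + exp(-z))
  for (layer in rbm_stack) {
    # Affine transform plus sigmoid, one hidden layer at a time
    X <- sigmoid(sweep(X %*% layer$weight_final, 2, as.numeric(layer$bias_final), "+"))
  }
  X
}
test_features <- propagate_dbn(testX, RBM_output)  # e.g. 10000 x 300 feature matrix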


Reply #3 | Reader's | posted 2017-8-13 02:23:39
Sys.setenv(TENSORFLOW_PYTHON="C:/PROGRA~3/ANACON~1/python.exe")
Sys.setenv(TENSORFLOW_PYTHON_VERSION = 3)
library(tensorflow)
require(imager)
require(caret)

# Load the MNIST dataset from the tensorflow library
datasets <- tf$contrib$learn$datasets
mnist <- datasets$mnist$read_data_sets("MNIST-data", one_hot = TRUE)

# Function to plot an MNIST digit
plot_mnist <- function(imageD, pixel.y = 16) {
  require(imager)
  actImage <- matrix(imageD, ncol = pixel.y, byrow = FALSE)
  img.col.mat <- imappend(list(as.cimg(actImage)), "c")
  plot(img.col.mat, axes = FALSE)
}

# Reduce image size
reduceImage <- function(actds, n.pixel.x = 16, n.pixel.y = 16) {
  actImage <- matrix(actds, ncol = 28, byrow = FALSE)
  img.col.mat <- imappend(list(as.cimg(actImage)), "c")
  thmb <- resize(img.col.mat, n.pixel.x, n.pixel.y)
  outputImage <- matrix(thmb[, , 1, 1], nrow = 1, byrow = FALSE)
  return(outputImage)
}

# Convert train and test data to 16 x 16 pixel images
trainData <- t(apply(mnist$train$images, 1, FUN = reduceImage))
validData <- t(apply(mnist$test$images, 1, FUN = reduceImage))
labels <- mnist$train$labels
labels_valid <- mnist$test$labels
rm(mnist)

# Reset the graph and set up an interactive session
tf$reset_default_graph()
sess <- tf$InteractiveSession()

# Define model parameters
n_input <- 16     # elements per time step (pixels per image row)
step_size <- 16   # number of time steps (image rows)
n.hidden <- 64
n.class <- 10

# Define training parameters
lr <- 0.01
batch <- 500
iteration <- 100

# Set up a basic RNN
rnn <- function(x, weights, bias) {
  # Unstack input into step_size tensors
  x <- tf$unstack(x, step_size, 1)

  # Define a basic RNN cell
  rnn_cell <- tf$contrib$rnn$BasicRNNCell(n.hidden)

  # Create the recurrent neural network
  cell_output <- tf$contrib$rnn$static_rnn(rnn_cell, x, dtype = tf$float32)

  # Linear activation on the output of the last time step
  last_vec <- tail(cell_output[[1]], n = 1)[[1]]
  return(tf$matmul(last_vec, weights) + bias)
}

# Function to evaluate mean accuracy
eval_acc <- function(yhat, y) {
  # Count correct predictions
  correct_Count <- tf$equal(tf$argmax(yhat, 1L), tf$argmax(y, 1L))

  # Mean accuracy
  mean_accuracy <- tf$reduce_mean(tf$cast(correct_Count, tf$float32))

  return(mean_accuracy)
}

with(tf$name_scope('input'), {
  # Define placeholders for input data
  x <- tf$placeholder(tf$float32, shape = shape(NULL, step_size, n_input), name = 'x')
  y <- tf$placeholder(tf$float32, shape(NULL, n.class), name = 'y')

  # Define weights and bias
  weights <- tf$Variable(tf$random_normal(shape(n.hidden, n.class)))
  bias <- tf$Variable(tf$random_normal(shape(n.class)))
})

# Evaluate the RNN cell output
yhat <- rnn(x, weights, bias)

# Define loss and optimizer
cost <- tf$reduce_mean(tf$nn$softmax_cross_entropy_with_logits(logits = yhat, labels = y))
optimizer <- tf$train$AdamOptimizer(learning_rate = lr)$minimize(cost)

# Initialize variables
sess$run(tf$global_variables_initializer())

# Run optimization
for (i in 1:iteration) {
  spls <- sample(1:dim(trainData)[1], batch)
  sample_data <- trainData[spls, ]
  sample_y <- labels[spls, ]

  # Reshape sample into 16 sequences of 16 elements each
  sample_data <- tf$reshape(sample_data, shape(batch, step_size, n_input))
  out <- optimizer$run(feed_dict = dict(x = sample_data$eval(), y = sample_y))

  if (i %% 1 == 0) {
    cat("iteration - ", i, "Training Loss - ", cost$eval(feed_dict = dict(x = sample_data$eval(), y = sample_y)), "\n")
  }
}

# Build the accuracy metric and predict classes for the validation images
accuracy <- eval_acc(yhat, y)
valid_data <- tf$reshape(validData, shape(-1, step_size, n_input))
yhat <- sess$run(tf$argmax(yhat, 1L), feed_dict = dict(x = valid_data$eval()))

# Inspect a sample validation and training image
image(t(matrix(validData[20, ], ncol = 16, nrow = 16, byrow = TRUE)), col = gray((0:32)/32))
image(t(matrix(trainData[20, ], ncol = 16, nrow = 16, byrow = TRUE)), col = gray((0:32)/32))

# Validation loss
cost$eval(feed_dict = dict(x = valid_data$eval(), y = labels_valid))
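The script prints the validation loss but never reports the accuracy that eval_acc was built for. A short follow-up sketch, using the 0-based class predictions stored in yhat above:

# Validation accuracy from the predicted classes (sketch; not in the original post)
true_class <- apply(labels_valid, 1, which.max) - 1  # one-hot labels -> 0-based classes
mean(yhat == true_class)                             # proportion of correct predictions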


Reply #4 | 军旗飞扬 | posted 2017-8-13 06:26:37

Thanks for sharing, OP!


Reply #5 | shgby | posted 2017-8-13 15:16:53 (from mobile)

Deep Learning Cookbook

