Thread starter: Lisrelchen

[Tutorials] Deep Learning Using Apache Spark's BigDL


#1 (OP) | Lisrelchen posted on 2017-9-18 09:27:13

Deep Learning Tutorials on Apache Spark using BigDL

Step-by-step deep learning tutorials on Apache Spark using BigDL. The tutorials are inspired by the Apache Spark examples, the Theano tutorials, and the TensorFlow tutorials.

Topics

1. RDD
2. DataFrame
3. SparkSQL
4. StructuredStreaming
5. Forward and backward
6. Linear Regression
7. Introduction to MNIST
8. Logistic Regression
9. Feedforward Neural Network
10. Convolutional Neural Network
11. Recurrent Neural Network
12. LSTM
13. Bi-directional RNN
14. Auto-encoder

Environment

1. Python 2.7
2. JDK 8
3. Apache Spark 2.1.0
4. Jupyter Notebook 4.1
5. BigDL 0.2.0
6. Set up the environment on Mac OS or on Linux (guides are in the repository).

Start the Jupyter server

1. Download BigDL 0.2.0 (linux64 or mac) and unzip the file.
2. Run: export BIGDL_HOME=<path to your unzipped BigDL folder>
3. Run: export SPARK_HOME=<path to your unpacked Spark folder>
4. Run: ./start_notebook.sh

Run the demo

1. Open a browser (Chrome, Firefox, or Safari is suggested).
2. Access the notebook client at http://localhost:8888, open the example .ipynb files, and execute them. A quick environment check follows below.
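Before running the notebooks, it can help to confirm the environment is wired up. A minimal smoke-test cell, assuming start_notebook.sh has created a SparkContext named sc with BigDL on the classpath (as these tutorials expect):

from bigdl.util.common import init_engine

print(sc.version)  # should print 2.1.0 for this setup
init_engine()      # initialize BigDL's execution engine before building any model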

Hidden content in this post:

https://github.com/intel-analytics/BigDL-Tutorials



#2 | Lisrelchen posted on 2017-9-18 09:33:50

Using BigDL to Train a Linear Regression Model
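This post starts at In [2]; the first cell is not shown. Judging from the later notebooks in this thread, it presumably looks like the following sketch (imports plus BigDL engine initialization):

In [1]:
%pylab inline

from bigdl.nn.layer import *
from bigdl.nn.criterion import *
from bigdl.optim.optimizer import *
from bigdl.util.common import *

init_engine()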

In [2]:
FEATURES_DIM = 2
data_len = 100

# Generate a random sample: features uniform in [0, 1), label = 2*x1 + 2*x2 + 0.4
def gen_rand_sample():
    features = np.random.uniform(0, 1, (FEATURES_DIM))
    label = (2 * features).sum() + 0.4
    return Sample.from_ndarray(features, label)

rdd_train = sc.parallelize(range(0, data_len)).map(lambda i: gen_rand_sample())
In [3]:
# Parameters
learning_rate = 0.2
training_epochs = 5
batch_size = 4
n_input = FEATURES_DIM
n_output = 1

def linear_regression(n_input, n_output):
    # Initialize a sequential container
    model = Sequential()
    # Add a linear layer
    model.add(Linear(n_input, n_output))
    return model

model = linear_regression(n_input, n_output)
In [4]:
# Create an Optimizer
optimizer = Optimizer(
    model=model,
    training_rdd=rdd_train,
    criterion=MSECriterion(),
    optim_method=SGD(learningrate=learning_rate),
    end_trigger=MaxEpoch(training_epochs),
    batch_size=batch_size)
In [5]:
# Start to train
trained_model = optimizer.optimize()
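Since the labels were generated as 2*x1 + 2*x2 + 0.4, the learned weights should approach [2, 2] and the bias 0.4. A hedged sketch for checking this with trained_model.parameters(), the same call the LeNet post below uses:

params = trained_model.parameters()
for name, p in params.iteritems():  # Python 2, matching this thread's environment
    print name, p['weight'], p['bias']
# Expect the Linear layer's weight to be close to [2, 2] and its bias close to 0.4.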
In [6]:
# Print the first five predictions on the training data.
predict_result = trained_model.predict(rdd_train)
p = predict_result.take(5)

print("Predicted results:\n")
for i in p:
    print(str(i) + "\n")
In [7]:
def test_predict(trained_model):
    np.random.seed(100)
    total_length = 10
    features = np.random.uniform(0, 1, (total_length, 2))
    # Per-sample labels, matching the training generator (the original cell
    # summed the whole matrix, which was a bug; labels do not affect predict).
    label = (2 * features).sum(axis=1) + 0.4
    predict_data = sc.parallelize(range(0, total_length)).map(
        lambda i: Sample.from_ndarray(features[i], label[i]))

    predict_result = trained_model.predict(predict_data)
    p = np.array(predict_result.take(6))
    ground_label = np.array([[-0.47596836], [-0.37598032], [-0.00492062],
                             [-0.5906958], [-0.12307882], [-0.77907401]], dtype="float32")
    mse = ((p - ground_label) ** 2).mean()
    print mse

test_predict(trained_model)

#3 | MouJack007 posted on 2017-9-18 09:39:58
Thanks for sharing, OP!

#4 | Lisrelchen posted on 2017-9-18 09:40:09

Digit Classification Using Logistic Regression
In [1]:
%pylab inline
import pandas
import datetime as dt

from bigdl.nn.layer import *
from bigdl.nn.criterion import *
from bigdl.optim.optimizer import *
from bigdl.util.common import *
from bigdl.dataset.transformer import *
from bigdl.dataset import mnist
from utils import get_mnist

init_engine()

In [2]:
# Load MNIST into an RDD of Sample; edit "mnist_path" to point at your copy.
mnist_path = "datasets/mnist"
(train_data, test_data) = get_mnist(sc, mnist_path)

print train_data.count()
print test_data.count()
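Note that get_mnist is not a BigDL API; it comes from the tutorial repository's utils.py. A hypothetical sketch of what it does, built on bigdl.dataset.mnist.read_data_sets (the details below are assumptions; see the repo for the real version):

from bigdl.dataset import mnist
from bigdl.util.common import Sample

def get_mnist_sketch(sc, mnist_path):
    # Load raw images/labels, then wrap each pair in a BigDL Sample.
    (train_images, train_labels) = mnist.read_data_sets(mnist_path, "train")
    (test_images, test_labels) = mnist.read_data_sets(mnist_path, "test")
    train = sc.parallelize(zip(train_images, train_labels)).map(
        lambda (im, lb): Sample.from_ndarray(im.flatten(), lb + 1))  # BigDL labels are 1-based
    test = sc.parallelize(zip(test_images, test_labels)).map(
        lambda (im, lb): Sample.from_ndarray(im.flatten(), lb + 1))
    return (train, test)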
In [3]:
# Parameters
learning_rate = 0.2
training_epochs = 15
batch_size = 2048

# Network Parameters
n_input = 784   # MNIST data input (img shape: 28*28)
n_classes = 10  # MNIST total classes (0-9 digits)
In [4]:
# Define the logistic regression model
def logistic_regression(n_input, n_classes):
    # Initialize a sequential container
    model = Sequential()
    model.add(Reshape([28*28]))
    model.add(Linear(n_input, n_classes))
    model.add(LogSoftMax())
    return model

model = logistic_regression(n_input, n_classes)
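A side note on this pairing: LogSoftMax outputs log-probabilities, and ClassNLLCriterion (used in the next cell) takes the negative log-probability of the target class, so together they implement multinomial cross-entropy. A quick numpy check of the identity:

x = np.array([1.0, 2.0, 3.0])            # raw scores for 3 classes
log_probs = x - np.log(np.exp(x).sum())  # log softmax
target = 2                               # 0-based here; BigDL's own labels are 1-based
print -log_probs[target]                 # NLL of the target class, i.e. the cross-entropy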
In [5]:
# Create an Optimizer
optimizer = Optimizer(
    model=model,
    training_rdd=train_data,
    criterion=ClassNLLCriterion(),
    optim_method=SGD(learningrate=learning_rate),
    end_trigger=MaxEpoch(training_epochs),
    batch_size=batch_size)
In [6]:
%%time
# Start to train
trained_model = optimizer.optimize()
print "Optimization Done."
In [7]:
def map_predict_label(l):
    return l.argmax()

def map_groundtruth_label(l):
    # BigDL class labels are 1-based; subtract 1 to recover the original digit.
    return l[0] - 1
In [8]:
# Prediction
predictions = trained_model.predict(test_data)
imshow(np.column_stack([np.array(s.features).reshape(28,28) for s in test_data.take(8)]), cmap='gray'); axis('off')
print 'Ground Truth labels:'
print ', '.join(str(map_groundtruth_label(s.label)) for s in test_data.take(8))
print 'Predicted labels:'
print ', '.join(str(map_predict_label(s)) for s in predictions.take(8))
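Beyond eyeballing eight digits, overall Top-1 accuracy can be estimated by comparing predictions with ground truth. A hedged sketch, assuming predict preserves the row order of test_data (the side-by-side take(8) comparison above already relies on this):

pred_labels = predictions.map(lambda l: np.array(l).argmax()).collect()
true_labels = test_data.map(lambda s: int(s.label[0] - 1)).collect()
accuracy = np.mean(np.array(pred_labels) == np.array(true_labels))
print "Top-1 accuracy: %.4f" % accuracy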





#5 | MouJack007 posted on 2017-9-18 09:40:27

#6 | Lisrelchen posted on 2017-9-18 09:40:57

Digit Classification Using a Deep Feedforward Neural Network

In [1]:
%pylab inline
import pandas
import datetime as dt

from bigdl.nn.layer import *
from bigdl.nn.criterion import *
from bigdl.optim.optimizer import *
from bigdl.util.common import *
from bigdl.dataset.transformer import *
from utils import get_mnist

init_engine()
In [2]:
# Get and store MNIST into an RDD of Sample; please edit "mnist_path" accordingly.
mnist_path = "datasets/mnist"
(train_data, test_data) = get_mnist(sc, mnist_path)

print train_data.count()
print test_data.count()
In [3]:
learning_rate = 0.2
training_epochs = 15
batch_size = 2048
display_step = 1

# Network Parameters
n_hidden_1 = 256  # 1st layer number of features
n_hidden_2 = 256  # 2nd layer number of features
n_input = 784     # MNIST data input (img shape: 28*28)
n_classes = 10    # MNIST total classes (0-9 digits)
In [4]:
# Create the model
def multilayer_perceptron(n_hidden_1, n_hidden_2, n_input, n_classes):
    # Initialize a sequential container
    model = Sequential()
    model.add(Reshape([28*28]))
    # Hidden layer with ReLU activation
    model.add(Linear(n_input, n_hidden_1).set_name('mlp_fc1'))
    model.add(ReLU())
    # Hidden layer with ReLU activation
    model.add(Linear(n_hidden_1, n_hidden_2).set_name('mlp_fc2'))
    model.add(ReLU())
    # Output layer
    model.add(Linear(n_hidden_2, n_classes).set_name('mlp_fc3'))
    model.add(LogSoftMax())
    return model

model = multilayer_perceptron(n_hidden_1, n_hidden_2, n_input, n_classes)
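Before handing the model to an Optimizer, a quick local sanity check can confirm the layer shapes line up. A sketch assuming BigDL's Layer.forward accepts a single numpy sample:

# Push one random 784-pixel "image" through the untrained MLP.
out = model.forward(np.random.rand(28 * 28).astype("float32"))
print out.shape  # expect (10,): one log-probability per digit class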
In [5]:
# Create an Optimizer
optimizer = Optimizer(
    model=model,
    training_rdd=train_data,
    criterion=ClassNLLCriterion(),
    optim_method=SGD(learningrate=learning_rate),
    end_trigger=MaxEpoch(training_epochs),
    batch_size=batch_size)

# Set the validation logic
optimizer.set_validation(
    batch_size=batch_size,
    val_rdd=test_data,
    trigger=EveryEpoch(),
    val_method=[Top1Accuracy()]
)

app_name = 'multilayer_perceptron-' + dt.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary = TrainSummary(log_dir='/tmp/bigdl_summaries',
                             app_name=app_name)
train_summary.set_summary_trigger("Parameters", SeveralIteration(50))
val_summary = ValidationSummary(log_dir='/tmp/bigdl_summaries',
                                app_name=app_name)
optimizer.set_train_summary(train_summary)
optimizer.set_val_summary(val_summary)
print "saving logs to", app_name
In [6]:
%%time
# Start the training process
trained_model = optimizer.optimize()
print "Optimization Done."
In [7]:
def map_predict_label(l):
    return np.array(l).argmax()

def map_groundtruth_label(l):
    return l[0] - 1
In [8]:
%%time
predictions = trained_model.predict(test_data)
imshow(np.column_stack([np.array(s.features).reshape(28,28) for s in test_data.take(8)]), cmap='gray'); axis('off')
print 'Ground Truth labels:'
print ', '.join(str(map_groundtruth_label(s.label)) for s in test_data.take(8))
print 'Predicted labels:'
print ', '.join(str(map_predict_label(s)) for s in predictions.take(8))

#7 | Lisrelchen posted on 2017-9-18 09:44:52

Digit Classification Using a Convolutional Neural Network

In [3]:
# Create a LeNet model
def build_model(class_num):
    model = Sequential()
    model.add(Reshape([1, 28, 28]))
    model.add(SpatialConvolution(1, 6, 5, 5).set_name('conv1'))
    model.add(Tanh())
    model.add(SpatialMaxPooling(2, 2, 2, 2).set_name('pool1'))
    model.add(Tanh())
    model.add(SpatialConvolution(6, 12, 5, 5).set_name('conv2'))
    model.add(SpatialMaxPooling(2, 2, 2, 2).set_name('pool2'))
    model.add(Reshape([12 * 4 * 4]))
    model.add(Linear(12 * 4 * 4, 100).set_name('fc1'))
    model.add(Tanh())
    model.add(Linear(100, class_num).set_name('score'))
    model.add(LogSoftMax())
    return model

lenet_model = build_model(10)
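The Reshape([12 * 4 * 4]) is plain feature-map arithmetic: each 5x5 valid convolution shrinks a side by 4, and each 2x2/stride-2 pooling halves it. A quick check:

conv = lambda s, k: s - k + 1  # valid convolution, stride 1
pool = lambda s: s // 2        # 2x2 max pooling, stride 2
s = pool(conv(28, 5))          # conv1: 28 -> 24, pool1: 24 -> 12
s = pool(conv(s, 5))           # conv2: 12 -> 8,  pool2: 8 -> 4
print s, 12 * s * s            # 4 and 192, i.e. 12 * 4 * 4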
In [4]:
# Create an Optimizer
optimizer = Optimizer(
    model=lenet_model,
    training_rdd=train_data,
    criterion=ClassNLLCriterion(),
    optim_method=SGD(learningrate=0.4, learningrate_decay=0.0002),
    end_trigger=MaxEpoch(20),
    batch_size=2048)

# Set the validation logic
optimizer.set_validation(
    batch_size=2048,
    val_rdd=test_data,
    trigger=EveryEpoch(),
    val_method=[Top1Accuracy()]
)

app_name = 'lenet-' + dt.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary = TrainSummary(log_dir='/tmp/bigdl_summaries',
                             app_name=app_name)
train_summary.set_summary_trigger("Parameters", SeveralIteration(50))
val_summary = ValidationSummary(log_dir='/tmp/bigdl_summaries',
                                app_name=app_name)
optimizer.set_train_summary(train_summary)
optimizer.set_val_summary(val_summary)
print "saving logs to", app_name
In [5]:
%%time
# Start the training process
trained_model = optimizer.optimize()
print "Optimization Done."
In [6]:
def map_predict_label(l):
    return np.array(l).argmax()

def map_groundtruth_label(l):
    return l[0] - 1
In [7]:
# Subtract 1 from each label to restore the original digit.
print "Ground Truth labels:"
print ', '.join([str(map_groundtruth_label(s.label)) for s in train_data.take(8)])
imshow(np.column_stack([np.array(s.features).reshape(28,28) for s in train_data.take(8)]), cmap='gray'); axis('off')
In [9]:
params = trained_model.parameters()

# Weight shapes are (batch num, output_dim, input_dim, spatial dims).
for layer_name, param in params.iteritems():
    print layer_name, param['weight'].shape, param['bias'].shape
In [10]:
# vis_square is borrowed from the Caffe examples.
def vis_square(data):
    """Take an array of shape (n, height, width) or (n, height, width, 3)
       and visualize each (height, width) slice in a grid of size approx. sqrt(n) by sqrt(n)."""
    # Normalize data for display
    data = (data - data.min()) / (data.max() - data.min())
    # Force the number of filters to be square
    n = int(np.ceil(np.sqrt(data.shape[0])))
    padding = (((0, n ** 2 - data.shape[0]),
                (0, 1), (0, 1))                 # add some space between filters
               + ((0, 0),) * (data.ndim - 3))   # don't pad the last dimension (if there is one)
    data = np.pad(data, padding, mode='constant', constant_values=1)  # pad with ones (white)

    # Tile the filters into an image
    data = data.reshape((n, n) + data.shape[1:]).transpose((0, 2, 1, 3) + tuple(range(4, data.ndim + 1)))
    data = data.reshape((n * data.shape[1], n * data.shape[3]) + data.shape[4:])

    plt.imshow(data, cmap='gray'); plt.axis('off')
In [11]:
filters_conv1 = params['conv1']['weight']

# Visualize the six 5x5 conv1 filters.
vis_square(np.squeeze(filters_conv1, axis=(0,)).reshape(1*6, 5, 5))
In [12]:
# The parameters are a list of [weights, biases].
filters_conv2 = params['conv2']['weight']

# Visualize the 72 (12x6) 5x5 conv2 filters.
vis_square(np.squeeze(filters_conv2, axis=(0,)).reshape(12*6, 5, 5))
In [13]:
loss = np.array(train_summary.read_scalar("Loss"))
top1 = np.array(val_summary.read_scalar("Top1Accuracy"))

plt.figure(figsize=(12, 12))
plt.subplot(2, 1, 1)
plt.plot(loss[:, 0], loss[:, 1], label='loss')
plt.xlim(0, loss.shape[0] + 10)
plt.grid(True)
plt.title("loss")
plt.subplot(2, 1, 2)
plt.plot(top1[:, 0], top1[:, 1], label='top1')
plt.xlim(0, loss.shape[0] + 10)
plt.title("top1 accuracy")
plt.grid(True)

#8 | Lisrelchen posted on 2017-9-18 09:54:19

Digit Classification Using a Recurrent Neural Network

In [2]:
# Get and store MNIST into an RDD of Sample; please edit "mnist_path" accordingly.
mnist_path = "datasets/mnist"
(train_data, test_data) = get_mnist(sc, mnist_path)

# Reshape each 784-pixel image into a 28x28 matrix so each row becomes one time step.
train_data = train_data.map(lambda s: Sample.from_ndarray(np.resize(s.features, (28, 28)), s.label))
test_data = test_data.map(lambda s: Sample.from_ndarray(np.resize(s.features, (28, 28)), s.label))
print train_data.count()
print test_data.count()
In [3]:
# Parameters
batch_size = 64

# Network Parameters
n_input = 28    # MNIST data input (img shape: 28*28)
n_hidden = 128  # hidden layer num of features
n_classes = 10  # MNIST total classes (0-9 digits)
In [4]:
def build_model(input_size, hidden_size, output_size):
    model = Sequential()
    recurrent = Recurrent()
    recurrent.add(RnnCell(input_size, hidden_size, Tanh()))
    model.add(InferReshape([-1, input_size], True))
    model.add(recurrent)
    model.add(Select(2, -1))
    model.add(Linear(hidden_size, output_size))
    return model

rnn_model = build_model(n_input, n_hidden, n_classes)
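Reading the model: each 28x28 image is treated as 28 time steps of 28 features; Recurrent unrolls RnnCell over those steps, and Select(2, -1) keeps only the last step's hidden state (dimension 2 is time in the batch x time x hidden output, and BigDL dimensions are 1-based). A hedged local shape check, assuming Layer.forward accepts a single (time x feature) sample; if it insists on a batch dimension, pass shape (1, 28, 28) instead:

out = rnn_model.forward(np.random.rand(28, 28).astype("float32"))
print out.shape  # expect (10,): one score per class for CrossEntropyCriterion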
In [5]:
# Create an Optimizer

# criterion = TimeDistributedCriterion(CrossEntropyCriterion())
criterion = CrossEntropyCriterion()
optimizer = Optimizer(
    model=rnn_model,
    training_rdd=train_data,
    criterion=criterion,
    optim_method=Adam(),
    end_trigger=MaxEpoch(5),
    batch_size=batch_size)

# Set the validation logic
optimizer.set_validation(
    batch_size=batch_size,
    val_rdd=test_data,
    trigger=EveryEpoch(),
    val_method=[Top1Accuracy()]
)

app_name = 'rnn-' + dt.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary = TrainSummary(log_dir='/tmp/bigdl_summaries',
                             app_name=app_name)
train_summary.set_summary_trigger("Parameters", SeveralIteration(50))
val_summary = ValidationSummary(log_dir='/tmp/bigdl_summaries',
                                app_name=app_name)
optimizer.set_train_summary(train_summary)
optimizer.set_val_summary(val_summary)
print "saving logs to", app_name
In [6]:
%%time
# Start the training process
trained_model = optimizer.optimize()
print "Optimization Done."
In [7]:
def map_predict_label(l):
    return np.array(l).argmax()

def map_groundtruth_label(l):
    return l[0] - 1
In [8]:
%%time
predictions = trained_model.predict(test_data)
imshow(np.column_stack([np.array(s.features).reshape(28,28) for s in test_data.take(8)]), cmap='gray'); axis('off')
print 'Ground Truth labels:'
print ', '.join(str(map_groundtruth_label(s.label)) for s in test_data.take(8))
print 'Predicted labels:'
print ', '.join(str(map_predict_label(s)) for s in predictions.take(8))
In [9]:
loss = np.array(train_summary.read_scalar("Loss"))
top1 = np.array(val_summary.read_scalar("Top1Accuracy"))

plt.figure(figsize=(12, 12))
plt.subplot(2, 1, 1)
plt.plot(loss[:, 0], loss[:, 1], label='loss')
plt.xlim(0, loss.shape[0] + 10)
plt.grid(True)
plt.title("loss")
plt.subplot(2, 1, 2)
plt.plot(top1[:, 0], top1[:, 1], label='top1')
plt.xlim(0, loss.shape[0] + 10)
plt.title("top1 accuracy")
plt.grid(True)

#9 | Lisrelchen posted on 2017-9-18 09:58:23

Digit Classification Using LSTM

In [2]:
# Get and store MNIST into an RDD of Sample; please edit "mnist_path" accordingly.
mnist_path = "datasets/mnist"
(train_data, test_data) = get_mnist(sc, mnist_path)

train_data = train_data.map(lambda s: Sample.from_ndarray(np.resize(s.features, (28, 28)), s.label))
test_data = test_data.map(lambda s: Sample.from_ndarray(np.resize(s.features, (28, 28)), s.label))
print train_data.count()
print test_data.count()
In [3]:
# Parameters
batch_size = 64

# Network Parameters
n_input = 28    # MNIST data input (img shape: 28*28)
n_hidden = 128  # hidden layer num of features
n_classes = 10  # MNIST total classes (0-9 digits)
In [4]:
def build_model(input_size, hidden_size, output_size):
    model = Sequential()
    recurrent = Recurrent()
    # The only change from the previous post: an LSTM cell instead of a plain RnnCell.
    recurrent.add(LSTM(input_size, hidden_size))
    model.add(InferReshape([-1, input_size], True))
    model.add(recurrent)
    model.add(Select(2, -1))
    model.add(Linear(hidden_size, output_size))
    return model

rnn_model = build_model(n_input, n_hidden, n_classes)
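Swapping RnnCell for LSTM inside the Recurrent container is the whole change. A back-of-the-envelope parameter comparison (ignoring implementation details such as bias conventions):

# Plain RNN cell: one input-to-hidden map, one hidden-to-hidden map, one bias.
n_rnn = n_input * n_hidden + n_hidden * n_hidden + n_hidden
# LSTM: four gates, each with the same structure, so roughly 4x the parameters.
n_lstm = 4 * (n_input * n_hidden + n_hidden * n_hidden + n_hidden)
print n_rnn, n_lstm  # roughly 20096 vs 80384 for n_input=28, n_hidden=128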
In [5]:
# Create an Optimizer

# criterion = TimeDistributedCriterion(CrossEntropyCriterion())
criterion = CrossEntropyCriterion()
optimizer = Optimizer(
    model=rnn_model,
    training_rdd=train_data,
    criterion=criterion,
    optim_method=Adam(),
    end_trigger=MaxEpoch(5),
    batch_size=batch_size)

# Set the validation logic
optimizer.set_validation(
    batch_size=batch_size,
    val_rdd=test_data,
    trigger=EveryEpoch(),
    val_method=[Top1Accuracy()]
)

app_name = 'rnn-' + dt.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary = TrainSummary(log_dir='/tmp/bigdl_summaries',
                             app_name=app_name)
train_summary.set_summary_trigger("Parameters", SeveralIteration(50))
val_summary = ValidationSummary(log_dir='/tmp/bigdl_summaries',
                                app_name=app_name)
optimizer.set_train_summary(train_summary)
optimizer.set_val_summary(val_summary)
print "saving logs to", app_name
In [6]:
%%time
# Start the training process
trained_model = optimizer.optimize()
print "Optimization Done."
In [7]:
def map_predict_label(l):
    return np.array(l).argmax()

def map_groundtruth_label(l):
    return l[0] - 1
In [8]:
%%time
predictions = trained_model.predict(test_data)
imshow(np.column_stack([np.array(s.features).reshape(28,28) for s in test_data.take(8)]), cmap='gray'); axis('off')
print 'Ground Truth labels:'
print ', '.join(str(map_groundtruth_label(s.label)) for s in test_data.take(8))
print 'Predicted labels:'
print ', '.join(str(map_predict_label(s)) for s in predictions.take(8))
In [9]:
loss = np.array(train_summary.read_scalar("Loss"))
top1 = np.array(val_summary.read_scalar("Top1Accuracy"))

plt.figure(figsize=(12, 12))
plt.subplot(2, 1, 1)
plt.plot(loss[:, 0], loss[:, 1], label='loss')
plt.xlim(0, loss.shape[0] + 10)
plt.grid(True)
plt.title("loss")
plt.subplot(2, 1, 2)
plt.plot(top1[:, 0], top1[:, 1], label='top1')
plt.xlim(0, loss.shape[0] + 10)
plt.title("top1 accuracy")
plt.grid(True)

#10 | 钱学森64 posted on 2017-9-18 10:06:34
Thanks for sharing.
