Thread starter: Nicolle

Introduction to Artificial Neural Networks and Deep Learning

ReneeBK posted on 2017-9-8 04:10:28:
Train the perceptron for 2 epochs

In [9]:
model_params = perceptron_train(X_train, y_train,
                                mparams=None, zero_weights=True)

for _ in range(2):
    _ = perceptron_train(X_train, y_train, mparams=model_params)
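
perceptron_train itself is defined in an earlier cell of the notebook that is not quoted in this thread. Purely as a reading aid, here is a minimal NumPy sketch that matches the call signature above and the update rule used in the TensorFlow cells further down (weights += (target - prediction) * x, bias += (target - prediction)); it is an illustration, not the author's exact implementation:

import numpy as np

def perceptron_train(features, targets, mparams=None, zero_weights=True):
    """One pass of the classic perceptron learning rule (illustrative sketch)."""
    n_features = features.shape[1]
    if mparams is None:
        # start from zero weights, or small random weights if requested
        weights = (np.zeros(n_features) if zero_weights
                   else np.random.uniform(-0.1, 0.1, size=n_features))
        mparams = {'weights': weights, 'bias': np.zeros(1)}
    for x, target in zip(features, targets):
        linear = np.dot(x, mparams['weights']) + mparams['bias']
        prediction = np.where(linear > 0., 1, 0)
        error = target - prediction
        mparams['weights'] += error * x
        mparams['bias'] += error
    return mparams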

ReneeBK posted on 2017-9-8 04:11:25:
Implement a function for perceptron predictions in NumPy

In [10]:
import numpy as np


def perceptron_predict(features, mparams):
    """Perceptron prediction function for binary class labels

    Parameters
    ----------
    features : numpy.ndarray, shape=(n_samples, m_features)
        A 2D NumPy array containing the training examples

    mparams : dict
        The model parameters of the perceptron in the form:
        {'weights': np.array([weight_1, weight_2, ... , weight_m]),
         'bias': np.array([bias])}

    Returns
    -------
    predicted_labels : np.ndarray, shape=(n_samples)
        NumPy array containing the predicted class labels.

    """
    linear = np.dot(features, mparams['weights']) + mparams['bias']
    predicted_labels = np.where(linear.reshape(-1) > 0., 1, 0)
    return predicted_labels
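
As a quick sanity check, perceptron_predict can be called on a hand-made toy input; the weights, bias and feature values below are made up purely for illustration:

toy_params = {'weights': np.array([0.5, -1.0]),
              'bias': np.array([0.25])}
toy_features = np.array([[1.0, 0.2],    # 0.5 - 0.2 + 0.25 = 0.55 > 0  -> 1
                         [0.0, 1.0]])   # 0.0 - 1.0 + 0.25 = -0.75 <= 0 -> 0

print(perceptron_predict(toy_features, toy_params))   # [1 0]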

ReneeBK posted on 2017-9-8 04:11:57:
Compute training and test error

In [11]:
train_errors = np.sum(perceptron_predict(X_train, model_params) != y_train)
test_errors = np.sum(perceptron_predict(X_test, model_params) != y_test)

print('Number of training errors', train_errors)
print('Number of test errors', test_errors)
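
If the raw counts are hard to compare across differently sized splits, they can be turned into error rates; a small follow-up using the same variables:

print('Training error rate: %.3f' % (train_errors / X_train.shape[0]))
print('Test error rate: %.3f' % (test_errors / X_test.shape[0]))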

ReneeBK posted on 2017-9-8 04:12:49:
Setting up the perceptron graph

In [16]:
import tensorflow as tf   # TensorFlow 1.x graph/session API


g = tf.Graph()

n_features = X_train.shape[1]

with g.as_default() as g:

    # initialize model parameters
    features = tf.placeholder(dtype=tf.float32,
                              shape=[None, n_features], name='features')
    targets = tf.placeholder(dtype=tf.float32,
                             shape=[None, 1], name='targets')
    params = {
        'weights': tf.Variable(tf.zeros(shape=[n_features, 1],
                                        dtype=tf.float32), name='weights'),
        'bias': tf.Variable([[0.]], dtype=tf.float32, name='bias')}

    # forward pass
    linear = tf.matmul(features, params['weights']) + params['bias']
    ones = tf.ones(shape=tf.shape(linear))
    zeros = tf.zeros(shape=tf.shape(linear))
    prediction = tf.where(tf.less(linear, 0.), zeros, ones, name='prediction')

    # weight update
    diff = targets - prediction
    weight_update = tf.assign_add(params['weights'],
                                  tf.reshape(diff * features, (n_features, 1)))
    bias_update = tf.assign_add(params['bias'], diff)

    saver = tf.train.Saver()
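
One detail worth noting: the threshold tf.where(tf.less(linear, 0.), zeros, ones) mirrors np.where(linear > 0., 1, 0) from perceptron_predict, but the two differ when linear == 0 (the NumPy version predicts 0 there, the TensorFlow version predicts 1). A tiny NumPy check with made-up values:

import numpy as np

linear = np.array([-1.0, 0.0, 2.0])
print(np.where(linear > 0., 1, 0))     # NumPy rule:      [0 0 1]
print(np.where(linear < 0., 0., 1.))   # TensorFlow rule: [0. 1. 1.]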

ReneeBK posted on 2017-9-8 04:13:22:
Training the perceptron for 5 training samples for illustration purposes

In [17]:
with tf.Session(graph=g) as sess:

    sess.run(tf.global_variables_initializer())

    i = 0
    for example, target in zip(X_train, y_train):
        feed_dict = {features: example.reshape(-1, n_features),
                     targets: target.reshape(-1, 1)}
        _, _ = sess.run([weight_update, bias_update], feed_dict=feed_dict)

        i += 1
        if i >= 5:   # stop after the first 5 training samples
            break

    modelparams = sess.run(params)
    print('Model parameters:\n', modelparams)

    saver.save(sess, save_path='perceptron')

    pred = sess.run(prediction, feed_dict={features: X_train})
    errors = np.sum(pred.reshape(-1) != y_train)
    print('Number of training errors:', errors)
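
A brief note on what saver.save writes to disk (the file names below follow the usual TensorFlow 1.x Saver convention and may vary slightly by version): the call above produces perceptron.data-*, perceptron.index and perceptron.meta next to the notebook, plus a checkpoint bookkeeping file; the .meta file is what the meta-graph restore in a later cell relies on. A quick way to list them:

import glob
print(sorted(glob.glob('perceptron*') + glob.glob('checkpoint')))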

ReneeBK posted on 2017-9-8 04:14:00:
Continue training the graph after restoring the session from a local checkpoint (this can be useful if we have to interrupt our computational session)

Now train a complete epoch

In [18]:
import os


with tf.Session(graph=g) as sess:
    saver.restore(sess, os.path.abspath('perceptron'))

    for epoch in range(1):
        for example, target in zip(X_train, y_train):
            feed_dict = {features: example.reshape(-1, n_features),
                         targets: target.reshape(-1, 1)}
            _, _ = sess.run([weight_update, bias_update], feed_dict=feed_dict)
            modelparams = sess.run(params)

    saver.save(sess, save_path='perceptron')

    pred = sess.run(prediction, feed_dict={features: X_train})
    train_errors = np.sum(pred.reshape(-1) != y_train)
    pred = sess.run(prediction, feed_dict={features: X_test})
    test_errors = np.sum(pred.reshape(-1) != y_test)

    print('Number of training errors', train_errors)
    print('Number of test errors', test_errors)

ReneeBK posted on 2017-9-8 04:15:14:
Plot the decision boundary for this TensorFlow perceptron. Why do you think the TensorFlow implementation performs better than our NumPy implementation on the test set?

Hint 1: you can re-use the code that we used in the NumPy section
Hint 2: since the bias is a 2D array, you need to access the float value via modelparams['bias'][0]

In [19]:
# %load solutions/03_tensorflow-boundary.py
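
The solution file itself is not reproduced in this thread. For readers without it, here is a minimal sketch of such a boundary plot, assuming the dataset has exactly two features and reusing the modelparams returned by the session above; the variable names are ours, not the original solution's:

import matplotlib.pyplot as plt
import numpy as np

w = modelparams['weights'].reshape(-1)    # shape (2,)
b = float(modelparams['bias'][0])         # the bias is stored as a 2D array

# decision boundary: w[0]*x0 + w[1]*x1 + b = 0  ->  x1 = -(w[0]*x0 + b) / w[1]
x0 = np.array([X_train[:, 0].min() - 1, X_train[:, 0].max() + 1])
x1 = -(w[0] * x0 + b) / w[1]

plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train)
plt.plot(x0, x1, linestyle='--', color='k')
plt.xlabel('feature 1')
plt.ylabel('feature 2')
plt.show()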
Theoretically, we could restart the Jupyter notebook now (we would just have to prepare the dataset again then, though)

We are going to restore the session from a meta graph (notice "tf.Session()")

First, we have to load the datasets again

In [20]:
with tf.Session() as sess:

    saver = tf.train.import_meta_graph(os.path.abspath('perceptron.meta'))
    saver.restore(sess, os.path.abspath('perceptron'))

    pred = sess.run('prediction:0', feed_dict={'features:0': X_train})
    train_errors = np.sum(pred.reshape(-1) != y_train)
    pred = sess.run('prediction:0', feed_dict={'features:0': X_test})
    test_errors = np.sum(pred.reshape(-1) != y_test)

    print('Number of training errors', train_errors)
    print('Number of test errors', test_errors)
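
The ':0' suffix in 'prediction:0' and 'features:0' refers to the first output tensor of the op with that name. An equivalent, slightly more explicit variant fetches the tensors from the restored graph before running them; this is only an illustrative sketch and assumes the default graph is clean (e.g. after a kernel restart or tf.reset_default_graph()):

tf.reset_default_graph()

with tf.Session() as sess:
    saver = tf.train.import_meta_graph(os.path.abspath('perceptron.meta'))
    saver.restore(sess, os.path.abspath('perceptron'))

    graph = tf.get_default_graph()
    prediction_t = graph.get_tensor_by_name('prediction:0')
    features_t = graph.get_tensor_by_name('features:0')

    pred = sess.run(prediction_t, feed_dict={features_t: X_test})
    print('Number of test errors', np.sum(pred.reshape(-1) != y_test))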

eeabcde posted on 2017-9-21 10:12:37:
Is there a link to the book? The original post doesn't seem to have one!!
