OP: Lisrelchen
1662 views · 4 replies

[Blog Highlights] Neural Network using Python


OP
Lisrelchen posted on 2016-8-19 01:41:32


A neural network trained with backpropagation is attempting to use input to predict output.

Inputs     Output
0 0 1      0
1 1 1      1
1 0 1      1
0 1 1      0

Consider trying to predict the output column given the three input columns. We could solve this problem by simply measuring statistics between the input values and the output values. If we did so, we would see that the leftmost input column is perfectly correlated with the output. Backpropagation, in its simplest form, measures statistics like this to make a model. Let's jump right in and use it to do this.
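The correlation described above can be checked directly before bringing in backpropagation at all. A quick sketch (my own illustration, not part of the original post) that counts how often each input column agrees with the output:

```python
import numpy as np

# the four training examples from the table above
X = np.array([[0, 0, 1],
              [1, 1, 1],
              [1, 0, 1],
              [0, 1, 1]])
y = np.array([0, 1, 1, 0])

# fraction of rows where each input column equals the output
for col in range(3):
    match = np.mean(X[:, col] == y)
    print("column", col, "matches output in", match * 100, "% of rows")
```

Column 0 matches the output in 100% of rows, while columns 1 and 2 match only 50% of the time — exactly the "perfect correlation" the paragraph refers to.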


Hidden content in this post:

8.pdf (6.82 MB)




1st reply
Lisrelchen posted on 2016-8-19 01:42:03
import numpy as np

# sigmoid function (returns the derivative when deriv=True,
# assuming x is already a sigmoid output)
def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

# input dataset
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])

# output dataset
y = np.array([[0, 0, 1, 1]]).T

# seed random numbers to make the calculation
# deterministic (just a good practice)
np.random.seed(1)

# initialize weights randomly with mean 0
syn0 = 2 * np.random.random((3, 1)) - 1

for it in range(10000):
    # forward propagation
    l0 = X
    l1 = nonlin(np.dot(l0, syn0))

    # how much did we miss?
    l1_error = y - l1

    # multiply how much we missed by the
    # slope of the sigmoid at the values in l1
    l1_delta = l1_error * nonlin(l1, True)

    # update weights
    syn0 += np.dot(l0.T, l1_delta)

print("Output After Training:")
print(l1)
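One detail in the code above that trips people up: nonlin(x, deriv=True) returns x*(1-x), which equals the sigmoid's derivative only when x is itself a sigmoid output, since σ'(z) = σ(z)(1 − σ(z)). A quick numerical check of that identity (my own sketch, not from the original post):

```python
import numpy as np

def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

z = np.linspace(-5, 5, 11)
s = nonlin(z)  # sigmoid outputs

# central-difference estimate of the sigmoid's derivative at z
h = 1e-6
numeric = (nonlin(z + h) - nonlin(z - h)) / (2 * h)

# the shortcut, applied to the *outputs* s, matches the true derivative
print(np.allclose(nonlin(s, deriv=True), numeric, atol=1e-6))  # prints True
```

This is why the training loop can pass l1 (a layer's activations) straight into nonlin(..., deriv=True) without recomputing the pre-activation values.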

2nd reply
Lisrelchen posted on 2016-8-19 01:47:04
import numpy as np

def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])

y = np.array([[0],
              [1],
              [1],
              [0]])

np.random.seed(1)

# randomly initialize our weights with mean 0
syn0 = 2 * np.random.random((3, 4)) - 1
syn1 = 2 * np.random.random((4, 1)) - 1

for j in range(60000):
    # feed forward through layers 0, 1, and 2
    l0 = X
    l1 = nonlin(np.dot(l0, syn0))
    l2 = nonlin(np.dot(l1, syn1))

    # how much did we miss the target value?
    l2_error = y - l2

    if (j % 10000) == 0:
        print("Error:" + str(np.mean(np.abs(l2_error))))

    # in what direction is the target value?
    # were we really sure? if so, don't change too much.
    l2_delta = l2_error * nonlin(l2, deriv=True)

    # how much did each l1 value contribute to the l2 error
    # (according to the weights)?
    l1_error = l2_delta.dot(syn1.T)

    # in what direction is the target l1?
    # were we really sure? if so, don't change too much.
    l1_delta = l1_error * nonlin(l1, deriv=True)

    # update weights
    syn1 += l1.T.dot(l2_delta)
    syn0 += l0.T.dot(l1_delta)
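Note that the target here is XOR of the first two inputs, which no single input column correlates with — that is why this version needs the hidden layer syn0/syn1. Once the loop finishes, predictions for the training inputs are just one more forward pass, thresholded at 0.5. A condensed re-run showing this (my own sketch, same data and seed as above):

```python
import numpy as np

def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0], [1], [1], [0]])

np.random.seed(1)
syn0 = 2 * np.random.random((3, 4)) - 1
syn1 = 2 * np.random.random((4, 1)) - 1

for j in range(60000):
    l1 = nonlin(X.dot(syn0))
    l2 = nonlin(l1.dot(syn1))
    l2_delta = (y - l2) * nonlin(l2, deriv=True)
    l1_delta = l2_delta.dot(syn1.T) * nonlin(l1, deriv=True)
    syn1 += l1.T.dot(l2_delta)
    syn0 += X.T.dot(l1_delta)

# after training, the rounded outputs recover the XOR targets
print((l2.round() == y).all())
```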

3rd reply
fengyg (verified enterprise account) posted on 2016-8-19 07:42:39
kankan ["just taking a look"]

4th reply
ekscheng posted on 2016-8-19 09:07:19
