# Solving the Multilayer Perceptron Problem in Python


Nicolle, posted 2018-2-13 10:36:43

1. The brain contains billions of neurons with tens of thousands of connections between them. Deep learning algorithms resemble the brain in many respects: both the brain and deep learning models involve a vast number of computation units (neurons) that are not especially intelligent in isolation but become intelligent when they interact with each other.
2. "I think people need to understand that deep learning is making a lot of things, behind-the-scenes, much better. Deep learning is already working in Google search, and in image search; it allows you to image search a term like 'hug.'" — Geoffrey Hinton


Nicolle, posted 2018-2-13 10:37:48
```python
import numpy as np

print("Enter the two values for the input layer")
print('a = ')
a = int(input())  # e.g. 2
print('b = ')
b = int(input())  # e.g. 3
input_data = np.array([a, b])

weights = {
    'node_0': np.array([1, 1]),
    'node_1': np.array([-1, 1]),
    'output_node': np.array([2, -1])
}

node_0_value = (input_data * weights['node_0']).sum()  # 2 * 1 + 3 * 1 = 5
print('node_0_hidden: {}'.format(node_0_value))
node_1_value = (input_data * weights['node_1']).sum()  # 2 * -1 + 3 * 1 = 1
print('node_1_hidden: {}'.format(node_1_value))

hidden_layer_values = np.array([node_0_value, node_1_value])
output_layer = (hidden_layer_values * weights['output_node']).sum()
print("output layer: {}".format(output_layer))
```
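The two per-node computations above can also be collapsed into matrix-vector products, which is how larger networks are usually written. A minimal sketch using the same weights as the post (the stacked matrix name `W_hidden` is my own, not from the original):

```python
import numpy as np

# Hidden-layer weights stacked row-wise: row 0 is node_0, row 1 is node_1
W_hidden = np.array([[1, 1],
                     [-1, 1]])
w_output = np.array([2, -1])

x = np.array([2, 3])          # the example inputs a = 2, b = 3
hidden = W_hidden @ x         # both hidden nodes in one product: [5, 1]
output = w_output @ hidden    # 2 * 5 + (-1) * 1 = 9
print(output)
```

With `a = 2` and `b = 3` this prints the same output layer value, 9, as the step-by-step version above.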

```python
import numpy as np

print("Enter the two values for the input layer")
print('a = ')
a = int(input())
print('b = ')
b = int(input())

weights = {
    'node_0': np.array([2, 4]),
    'node_1': np.array([4, -5]),  # was np.array([[4, -5]]); a 1-D array is intended
    'output_node': np.array([2, 7])
}
input_data = np.array([a, b])

def relu(x):
    # Rectified Linear Activation: pass positive values through, clamp negatives to 0
    return max(x, 0)

node_0_input = (input_data * weights['node_0']).sum()
node_0_output = relu(node_0_input)
node_1_input = (input_data * weights['node_1']).sum()
node_1_output = relu(node_1_input)

hidden_layer_outputs = np.array([node_0_output, node_1_output])
model_output = (hidden_layer_outputs * weights['output_node']).sum()
print(model_output)
```

Nicolle, posted 2018-2-13 10:40:36
```python
# Importing the Keras Sequential model
from keras.models import Sequential
from keras.layers import Dense
import numpy

# Fixing the random seed for reproducibility
seed = 7
numpy.random.seed(seed)

# Loading the PIMA Indians Diabetes dataset
dataset = numpy.loadtxt('datasets/pima-indians-diabetes.csv', delimiter=",")

# Slicing the 8 input features into X and the label column into Y
X = dataset[:, 0:8]
Y = dataset[:, 8]

# Initializing the Sequential model
model = Sequential()
# A 16-neuron hidden layer with rectified linear activation
# (kernel_initializer= and epochs= replace the deprecated init= and nb_epoch= arguments)
model.add(Dense(16, input_dim=8, kernel_initializer='uniform', activation='relu'))
# An 8-neuron hidden layer
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
# A sigmoid output layer for binary classification
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))

# Compiling the model
model.compile(loss='binary_crossentropy',
              optimizer='adam', metrics=['accuracy'])

# Fitting the model
model.fit(X, Y, epochs=150, batch_size=10)
scores = model.evaluate(X, Y)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1] * 100))
```
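The model above pairs a sigmoid output with the `binary_crossentropy` loss. A minimal NumPy sketch of what that loss computes, using hypothetical logits and labels rather than values from the PIMA dataset:

```python
import numpy as np

def sigmoid(z):
    # Squash raw scores (logits) into probabilities in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Average negative log-likelihood of the true labels;
    # clipping avoids log(0) for saturated predictions
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

logits = np.array([2.0, -1.0, 0.0])   # hypothetical raw outputs
probs = sigmoid(logits)
y = np.array([1, 0, 1])               # hypothetical true labels
print(round(binary_crossentropy(y, probs), 4))
```

Confident correct predictions contribute little to the loss; confident wrong ones dominate it, which is what pushes the network's weights in the right direction during training.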
