Thread starter: ReneeBK

LSTM Implementation in Python Keras


#1 (OP)
ReneeBK posted on 2016-7-25 00:20:41

LSTM Implementation in Keras

David Hill

Hidden content in this post:

LSTM Implementation in Keras.pdf (827.06 KB)






#2
ReneeBK posted on 2016-7-25 00:23:01
from keras.models import Sequential
from keras.layers.core import TimeDistributedDense, Activation
from keras.layers.recurrent import LSTM
from keras.optimizers import RMSprop
import numpy as np
import random
#if you need help installing keras, cuda or running this code let me know:
#djhill715@gmail.com

# Hope is the thing with feathers, Emily Dickinson
print 'Processing Poem'
poem = 'Hope is the thing with feathers // That perches in the soul, // And sings the tune without the words, // And never stops at all, // // And sweetest in the gale is heard; // And sore must be the storm // That could abash the little bird // That kept so many warm. // // I\'ve heard it in the chillest land, // And on the strangest sea; // Yet, never, in extremity, // It asked a crumb of me.'
#break poem into words
poem = poem.split(' ')

#build a dictionary
wordList = []
for word in poem:
  wordList.append(word)
#list(set()) is slow for large datasets, use bloom filters for those
wordList = list(set(wordList))
#insert system start token into dictionary
wordList.insert(0, '#START#')

#convert words into one-hot tokens
wordtoix = {}
ixtoword = {}
for i, word in enumerate(wordList):
  wordtoix[word] = i
  ixtoword[i] = word
#construct the input numpy arrays for the model
print 'Building Training Data'
#array has shape of num_seq X num_time_steps X dimensionality of features
ins = np.zeros( (1, len(poem)+2, len(wordList)) )
gts = np.zeros_like(ins)

#encode the poem into the training sequences
#ins begins with start
ins[0, 0, wordtoix['#START#']] = 1
for t, word in enumerate(poem):
  #the ground truth at time t is the next word
  gts[0, t, wordtoix[word]] = 1
  #the input at time t+1 is the previous ground truth
  ins[0, t+1, wordtoix[word]] = 1
#ground truth ends with the end token (or reuse start token)
gts[:,-1,wordtoix['#START#']] = 1
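#for example, with a hypothetical three-word poem ['a', 'b', 'c'] the arrays built above contain
#  ins: t0=#START#, t1=a, t2=b, t3=c, t4=(all zeros)
#  gts: t0=a,       t1=b, t2=c, t3=(all zeros), t4=#START#
#i.e. the input at each timestep is the target from the previous timestep (teacher forcing)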

#the complete model definition
#We include the first dense layer because we have one-hot tokens
#this improves the performance by allowing the model to project the one-hot tokens into a dense encoding
#where distances between encodings matter
print 'Building Model'
model = Sequential()
#time distributed dense is a fully-connected layer with weights shared across time. updates are averaged across time
model.add(TimeDistributedDense(input_dim = len(wordList), output_dim = 64))
#LSTM is a recurrent keras layer that unrolls itself to the number of timesteps
model.add(LSTM(input_dim = 64, output_dim = 64, return_sequences = True, forget_bias_init='one'))
#another time distributed dense to map the projection space to the output space
model.add(TimeDistributedDense(input_dim = 64, output_dim = len(wordList)))
#softmax to produce probabilities across our dictionary at each timestep
model.add(Activation('softmax'))

#keras models must be compiled into cuda C code
print 'Compiling Model'
#rms prop provides many benefits to LSTMs, including making the gradient norm more robust through time
rms = RMSprop(lr=.003)
model.compile(loss='categorical_crossentropy', optimizer=rms)

#the default training function is sufficient here
#keras also has a train on batch function for more control
model.fit(ins, gts, batch_size=1, nb_epoch=150, verbose=1)

#print the input for validation
line = []
print ''
print ' =======INPUT======='
print ''
for x in xrange(ins.shape[1]):
  #get the word using the dictionary
  word = ixtoword[np.argmax(ins[0,x])]
  if word == '//':
    print ' '.join(line)
    line = []
  else:
    line.append(word)
print ' '.join(line)
print ' '

#prepare a new array to store predictions, initialize with start token
ins = np.zeros( (1, len(poem)+2, len(wordList)) )
ins[:, 0, wordtoix['#START#']] = 1
#list to hold the output, also initialized with start token
line = ['#START#']
print ''
print '=====OUTPUT====='
print ''
for x in xrange(ins.shape[1]-1):
  #dereference the first and only batch of the predictions
  predictions = model.predict(ins, batch_size=1, verbose=0)[0]
  #compute the predicted dictionary index
  pred_ix = np.argmax(predictions[x])
  #get the word using the dictionary
  word = ixtoword[pred_ix]
  #also set the input for the next timestep to the predicted index
  ins[:, x+1, pred_ix] = 1
  #the remaining lines are just for pretty printing
  if word == '//':
    print ' '.join(line)
    line = []
  else:
    line.append(word)
print ' '.join(line)
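Note that the listing above targets the Keras 0.x/1.x API current in 2016: the Python 2 print statements, keras.layers.core.TimeDistributedDense, forget_bias_init='one' and nb_epoch have all since been removed or renamed. As a rough sketch of the same architecture on a present-day install (assuming TensorFlow 2.x with its bundled Keras; num_words and num_steps are placeholder sizes standing in for len(wordList) and len(poem) + 2), the model section could look like this:

from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import Dense, LSTM, TimeDistributed, Activation
from tensorflow.keras.optimizers import RMSprop

num_words = 100   # placeholder for len(wordList)
num_steps = 50    # placeholder for len(poem) + 2

model = Sequential([
    Input(shape=(num_steps, num_words)),
    # TimeDistributed(Dense(...)) is the modern spelling of TimeDistributedDense
    TimeDistributed(Dense(64)),
    # unit_forget_bias=True (the default) plays the role of forget_bias_init='one'
    LSTM(64, return_sequences=True, unit_forget_bias=True),
    TimeDistributed(Dense(num_words)),
    Activation('softmax'),
])

model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(learning_rate=0.003))
model.summary()

# nb_epoch was renamed to epochs in Keras 2:
# model.fit(ins, gts, batch_size=1, epochs=150, verbose=1)

In current Keras a Dense layer applied to a 3-D (batch, time, features) input already acts independently on each timestep, so the TimeDistributed wrappers are kept only to mirror the original structure; for larger vocabularies, sparse_categorical_crossentropy with integer targets would also avoid materialising the one-hot gts array.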

#3
晓七 posted on 2016-7-25 01:11:10
Thanks for sharing.

#4
william9225 posted on 2016-7-25 01:41:52 (via mobile)
Good material.

#5
斜白 posted on 2016-7-25 11:20:56
Taking a look.
