Original poster: Lisrelchen

[GitHub] tensorflow-lstm-regression


#1 Lisrelchen posted on 2017-4-26 10:23:51

tensorflow-lstm-regression

This is an example of a regressor based on recurrent neural networks.

The objective is to predict continuous values (the sin and cos functions in this example) from previous observations, using the LSTM architecture.
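The windowing idea behind this setup can be sketched in plain Python (the names `make_windows` and `series` below are illustrative, not from the repository): each training input is a window of `time_steps` consecutive observations, and the target is the value that immediately follows the window.

```python
import math

def make_windows(series, time_steps):
    """Slide a window of length `time_steps` over the series;
    the label for each window is the next observation."""
    inputs, labels = [], []
    for i in range(len(series) - time_steps):
        inputs.append(series[i:i + time_steps])   # past observations
        labels.append(series[i + time_steps])     # value to predict
    return inputs, labels

series = [math.sin(0.1 * t) for t in range(100)]
X, y = make_windows(series, time_steps=5)
# 100 points with time_steps=5 -> 95 (window, label) pairs
```

The repository's `rnn_data` helper (shown in the code reply below, in the original thread) implements this same sliding-window construction on pandas objects.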

Install and Run

Create a Virtual Environment

It is recommended that you create a virtualenv for the setup, since this example is highly dependent on the versions pinned in the requirements file.

$ virtualenv ~/python/ltsm
$ source ~/python/ltsm/bin/activate
(ltsm) $

Install Requirements

This example depends on tensorflow-0.11.0 to work. You will first need to install the requirements, using the appropriate version of TensorFlow for your platform; the command below is for Mac. For more details, go to the tensorflow-0.11.0 tag setup.

(ltsm) $ pip install -U https://storage.googleapis.com/t ... .0-py3-none-any.whl
(ltsm) $ pip install -r ./requirements.txt

Running on Jupyter

Three Jupyter notebooks are provided as examples of how to use the LSTM for predicting shapes. They will be available when you start up Jupyter in the project directory.

(ltsm) $ jupyter notebook

Hidden content in this post:

tensorflow-lstm-regression-master.zip (257.06 KB)




#2 Lisrelchen posted on 2017-4-26 10:24:20
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.python.framework import dtypes
from tensorflow.contrib import learn as tflearn
from tensorflow.contrib import layers as tflayers


def x_sin(x):
    return x * np.sin(x)


def sin_cos(x):
    return pd.DataFrame(dict(a=np.sin(x), b=np.cos(x)), index=x)


def rnn_data(data, time_steps, labels=False):
    """
    creates a new data frame based on previous observations
      * example:
        l = [1, 2, 3, 4, 5]
        time_steps = 2
        -> labels == False [[1, 2], [2, 3], [3, 4]]
        -> labels == True [3, 4, 5]
    """
    rnn_df = []
    for i in range(len(data) - time_steps):
        if labels:
            try:
                rnn_df.append(data.iloc[i + time_steps].as_matrix())
            except AttributeError:
                rnn_df.append(data.iloc[i + time_steps])
        else:
            data_ = data.iloc[i: i + time_steps].as_matrix()
            rnn_df.append(data_ if len(data_.shape) > 1 else [[i] for i in data_])

    return np.array(rnn_df, dtype=np.float32)


def split_data(data, val_size=0.1, test_size=0.1):
    """
    splits data into training, validation and testing parts
    """
    ntest = int(round(len(data) * (1 - test_size)))
    nval = int(round(len(data.iloc[:ntest]) * (1 - val_size)))

    df_train, df_val, df_test = data.iloc[:nval], data.iloc[nval:ntest], data.iloc[ntest:]

    return df_train, df_val, df_test


def prepare_data(data, time_steps, labels=False, val_size=0.1, test_size=0.1):
    """
    Given the number of `time_steps` and some data,
    prepares training, validation and test data for an lstm cell.
    """
    df_train, df_val, df_test = split_data(data, val_size, test_size)
    return (rnn_data(df_train, time_steps, labels=labels),
            rnn_data(df_val, time_steps, labels=labels),
            rnn_data(df_test, time_steps, labels=labels))


def load_csvdata(rawdata, time_steps, seperate=False):
    data = rawdata
    if not isinstance(data, pd.DataFrame):
        data = pd.DataFrame(data)

    train_x, val_x, test_x = prepare_data(data['a'] if seperate else data, time_steps)
    train_y, val_y, test_y = prepare_data(data['b'] if seperate else data, time_steps, labels=True)
    return dict(train=train_x, val=val_x, test=test_x), dict(train=train_y, val=val_y, test=test_y)


def generate_data(fct, x, time_steps, seperate=False):
    """generates data based on a function fct"""
    data = fct(x)
    if not isinstance(data, pd.DataFrame):
        data = pd.DataFrame(data)
    train_x, val_x, test_x = prepare_data(data['a'] if seperate else data, time_steps)
    train_y, val_y, test_y = prepare_data(data['b'] if seperate else data, time_steps, labels=True)
    return dict(train=train_x, val=val_x, test=test_x), dict(train=train_y, val=val_y, test=test_y)


def lstm_model(num_units, rnn_layers, dense_layers=None, learning_rate=0.1, optimizer='Adagrad'):
    """
    Creates a deep model based on:
        * stacked lstm cells
        * optional dense layers
    :param num_units: the size of the cells.
    :param rnn_layers: list of int or list of dict
                         * list of int: the sizes used to instantiate each `BasicLSTMCell`
                         * list of dict: [{'num_units': int, 'keep_prob': float}, ...]
    :param dense_layers: list of nodes for each layer
    :return: the model definition
    """

    def lstm_cells(layers):
        if isinstance(layers[0], dict):
            return [tf.nn.rnn_cell.DropoutWrapper(tf.nn.rnn_cell.BasicLSTMCell(layer['num_units'],
                                                                               state_is_tuple=True),
                                                  layer['keep_prob'])
                    if layer.get('keep_prob') else tf.nn.rnn_cell.BasicLSTMCell(layer['num_units'],
                                                                                state_is_tuple=True)
                    for layer in layers]
        return [tf.nn.rnn_cell.BasicLSTMCell(steps, state_is_tuple=True) for steps in layers]

    def dnn_layers(input_layers, layers):
        if layers and isinstance(layers, dict):
            return tflayers.stack(input_layers, tflayers.fully_connected,
                                  layers['layers'],
                                  activation=layers.get('activation'),
                                  dropout=layers.get('dropout'))
        elif layers:
            return tflayers.stack(input_layers, tflayers.fully_connected, layers)
        else:
            return input_layers

    def _lstm_model(X, y):
        stacked_lstm = tf.nn.rnn_cell.MultiRNNCell(lstm_cells(rnn_layers), state_is_tuple=True)
        x_ = tf.unpack(X, axis=1, num=num_units)
        output, layers = tf.nn.rnn(stacked_lstm, x_, dtype=dtypes.float32)
        output = dnn_layers(output[-1], dense_layers)
        prediction, loss = tflearn.models.linear_regression(output, y)
        train_op = tf.contrib.layers.optimize_loss(
            loss, tf.contrib.framework.get_global_step(), optimizer=optimizer,
            learning_rate=learning_rate)
        return prediction, loss, train_op

    return _lstm_model
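As a quick sanity check of the index arithmetic in `split_data` above, the same computation can be mirrored in plain Python without pandas (the helper name `split_indices` is mine, not from the repository): the test slice is taken from the end first, then validation is carved out of the remaining rows.

```python
def split_indices(n, val_size=0.1, test_size=0.1):
    """Mirror of split_data's boundaries for a frame of n rows:
    train = [0, nval), validation = [nval, ntest), test = [ntest, n)."""
    ntest = int(round(n * (1 - test_size)))
    nval = int(round(ntest * (1 - val_size)))
    return nval, ntest

nval, ntest = split_indices(100)
# 100 rows -> 81 train, 9 validation, 10 test
```

Note that because validation is computed from the rows left after removing the test slice, the default 10%/10% split of 100 rows gives 81/9/10, not 80/10/10.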

#3 MouJack007 posted on 2017-4-26 12:19:57
Thanks for sharing, OP!

#4 MouJack007 posted on 2017-4-26 12:20:24

#5 bbyyss007 posted on 2017-5-2 13:00:38
Many thanks, OP.
