Parallelizing Legendre Memory Unit Training
Narsimha Chilkuri¹  Chris Eliasmith¹ ²
Abstract

Recently, a new recurrent neural network (RNN) named the Legendre Memory Unit (LMU) was proposed and shown to achieve state-of-the-art ...

... make it possible for us to exploit resources such as the internet,¹ which produces 20TB of text data each month. A feat such as this, from the training perspective, would be unimaginable ...









