Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for
Deep ReLU Networks
Quynh Nguyen¹  Marco Mondelli²  Guido Montúfar¹ ³
Abstract

A recent line of work has analyzed the theoretical properties of deep neural network ...

We assume that the network has a single output, namely n_L = 1 and W_L ∈ R^{n_{L−1}}. For consistency, let n_0 = d. Let g_l : R^d → R^{n_l} be the pre-activation feature map
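The pre-activation feature maps just introduced can be sketched concretely. The snippet below assumes the standard recursion g_1(x) = W_1 x and g_l(x) = W_l σ(g_{l−1}(x)) for a ReLU network with a single output (n_L = 1); the specific widths and random weights are illustrative choices, not taken from the paper.

```python
import numpy as np

def relu(z):
    # ReLU activation applied entrywise
    return np.maximum(z, 0.0)

def preactivations(x, weights):
    """Return the list [g_1(x), ..., g_L(x)] for weight matrices in `weights`.

    Assumed recursion: g_1(x) = W_1 x and g_l(x) = W_l relu(g_{l-1}(x)).
    """
    g = weights[0] @ x          # g_1(x) = W_1 x, with n_0 = d
    feats = [g]
    for W in weights[1:]:
        g = W @ relu(g)         # g_l(x) = W_l relu(g_{l-1}(x))
        feats.append(g)
    return feats

rng = np.random.default_rng(0)
d, n1, n2 = 5, 8, 1             # single output: n_L = 1
Ws = [rng.standard_normal((n1, d)),   # W_1 : R^d -> R^{n_1}
      rng.standard_normal((n2, n1))]  # W_2 : R^{n_1} -> R^{n_2} = R
x = rng.standard_normal(d)
feats = preactivations(x, Ws)
print([f.shape for f in feats])  # [(8,), (1,)]
```

With n_L = 1, the last pre-activation g_L(x) is a scalar, matching the single-output assumption above.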













