J. Hopfield
Large Associative Memory Problem in Neurobiology and Machine Learning
D. Krotov, J. Hopfield
International Conference on Learning…
TLDR: These models are effective descriptions of a more microscopic theory that has additional (hidden) neurons and requires only two-body interactions between them; they are a valid model of large associative memory with a degree of biological plausibility.
arXiv
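The retrieval dynamics studied in this line of work are often written as a softmax-weighted sum over stored patterns. Below is a minimal NumPy sketch of that style of update; the names `xi` (stored patterns), `v` (state), and `beta` (inverse temperature) are illustrative choices, not notation taken from the paper.

```python
import numpy as np

def hopfield_update(xi, v, beta=8.0):
    """One retrieval step: v <- xi^T softmax(beta * xi @ v)."""
    a = beta * (xi @ v)
    a -= a.max()              # subtract max for numerical stability
    p = np.exp(a)
    p /= p.sum()              # softmax over stored patterns
    return xi.T @ p           # weighted recombination of patterns

rng = np.random.default_rng(0)
xi = rng.choice([-1.0, 1.0], size=(5, 64))      # 5 stored binary patterns
probe = xi[2] + 0.3 * rng.standard_normal(64)   # noisy copy of pattern 2
v = probe
for _ in range(3):
    v = hopfield_update(xi, v)
# v should now be dominated by the stored pattern xi[2]
```

With a large enough `beta`, the softmax concentrates on the best-matching stored pattern, so a noisy probe snaps back to the nearest memory in a few steps.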
Bio-Inspired Hashing for Unsupervised Similarity Search
Chaitanya K. Ryali, J. Hopfield, Leopold Grinberg, D. Krotov
International Conference on Machine Learning
TLDR: This work proposes BioHash, a novel hashing algorithm that produces sparse, high-dimensional hash codes in a data-driven manner and outperforms previously published benchmarks for various hashing methods.
arXiv
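BioHash learns its projection weights from data; as a simplified illustration of the sparse winner-take-all coding scheme it builds on, here is a FlyHash-style sketch in which a random projection stands in for the learned one (the function name and parameters are illustrative assumptions, not the paper's API):

```python
import numpy as np

def wta_hash(x, W, k):
    """Sparse binary code: expand with W, keep the top-k activations."""
    h = W @ x
    code = np.zeros(W.shape[0], dtype=np.uint8)
    code[np.argsort(h)[-k:]] = 1      # winner-take-all sparsification
    return code

rng = np.random.default_rng(1)
W = rng.standard_normal((256, 32))    # expand 32 dims to 256 (random here; BioHash learns W)
x = rng.standard_normal(32)
y = x + 0.05 * rng.standard_normal(32)  # slightly perturbed copy of x
cx, cy = wta_hash(x, W, 16), wta_hash(y, W, 16)
overlap = int((cx & cy).sum())        # similar inputs share most active bits
```

Because only the top-k units fire, nearby inputs activate largely the same units, so Hamming overlap between codes tracks input similarity.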
Local Unsupervised Learning for Image Analysis
Leopold Grinberg, J. Hopfield, D. Krotov
arXiv.org
TLDR: The design of a local algorithm that can learn convolutional filters at scale on large image datasets, together with a successful transfer of learned representations between the CIFAR-10 and ImageNet 32x32 datasets, hints at the possibility that local unsupervised training might be a powerful tool for learning general representations (without specifying the task) directly from unlabeled data.
Neural networks
J. Hopfield
International Electron Devices Meeting
IEEE
Unsupervised learning by competing hidden units
D. Krotov, J. Hopfield
Proceedings of the National Academy of Sciences…
TLDR: A learning algorithm is designed that utilizes global inhibition in the hidden layer and is capable of learning early feature detectors in a completely unsupervised way; it is motivated by Hebb's idea that a change in synapse strength should be local.
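The flavor of such a rule can be sketched as a local, winner-take-all Hebbian update in which an anti-Hebbian push on the runner-up stands in for global inhibition. This is a simplified Oja-style variant for illustration only, not the exact update from the paper; the learning rate `lr` and inhibition strength `delta` are assumed names.

```python
import numpy as np

def local_wta_step(W, x, lr=0.05, delta=0.4):
    """One local update: the winning unit moves toward x (with Oja-style
    weight decay); the runner-up is weakly anti-Hebbian, mimicking
    global inhibition. Simplified sketch, not the paper's exact rule."""
    h = W @ x
    order = np.argsort(h)
    win, runner = order[-1], order[-2]
    W[win] += lr * (x - h[win] * W[win])                   # Hebbian + decay
    W[runner] -= lr * delta * (x - h[runner] * W[runner])  # anti-Hebbian
    return W

rng = np.random.default_rng(2)
W = 0.1 * rng.standard_normal((4, 16))         # 4 hidden units, 16 inputs
proto = rng.standard_normal(16)
proto /= np.linalg.norm(proto)                 # one underlying "feature"
for _ in range(500):
    x = proto + 0.1 * rng.standard_normal(16)  # noisy samples of the feature
    local_wta_step(W, x)
# the winning unit's weight vector should align with proto
```

Every term in the update involves only the unit's own weights, its input, and its activation, which is the sense in which such plasticity is "local".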
Feature to prototype transition in neural networks
D. Krotov, J. Hopfield
Dense Associative Memory Is Robust to Adversarial Inputs
D. Krotov, J. Hopfield
Neural Computation
TLDR: DAMs with higher-order energy functions are more robust to adversarial and rubbish inputs than DNNs with rectified linear units, opening up the possibility of using higher-order models for detecting and stopping malicious adversarial attacks.
MIT Press
Dense Associative Memory for Pattern Recognition
D. Krotov, J. Hopfield
Neural Information Processing Systems
TLDR: The proposed duality makes it possible to apply energy-based intuition from associative memory to analyze the computational properties of neural networks with unusual activation functions: the higher rectified polynomials, which until now have not been used in deep learning.
arXiv
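A dense associative memory of this kind replaces the classical quadratic energy with a rectified polynomial of the pattern overlaps. Below is a minimal NumPy sketch for binary ±1 neurons, using the energy E = -Σ_μ F(ξ_μ·σ) with F(x) = max(x, 0)^n; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def F(x, n=3):
    """Rectified polynomial energy kernel F(x) = max(x, 0)^n."""
    return np.maximum(x, 0.0) ** n

def dam_step(xi, sigma, n=3):
    """One asynchronous sweep lowering E = -sum_mu F(xi_mu . sigma):
    each bit takes whichever sign gives the lower energy."""
    s = sigma.copy()
    for i in range(len(s)):
        sp, sm = s.copy(), s.copy()
        sp[i], sm[i] = 1.0, -1.0
        s[i] = 1.0 if F(xi @ sp, n).sum() >= F(xi @ sm, n).sum() else -1.0
    return s

rng = np.random.default_rng(3)
xi = rng.choice([-1.0, 1.0], size=(10, 40))   # 10 stored binary patterns
corrupt = xi[0].copy()
corrupt[:8] *= -1.0                           # flip 8 of 40 bits
recovered = dam_step(xi, corrupt)
# the sweep should pull the state back toward the stored pattern xi[0]
```

Raising the exponent `n` sharpens the energy minima around each stored pattern, which is what allows many more patterns to be stored than in the classical quadratic (n = 2) model.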
Understanding Emergent Dynamics: Using a Collective Activity Coordinate of a Neural Network to Recognize Time-Varying Patterns
J. Hopfield
Neural Computation
TLDR: Describes how the emergent computational dynamics of a biologically based neural network generates a robust, natural solution to the problem of categorizing time-varying stimulus patterns such as spoken words or stereotypical animal behaviors.
MIT Press