Original poster: mingdashike22

[Statistics] Efficient supervised learning in networks with binary synapses


OP
mingdashike22 (employment verified) · posted 2022-3-6 15:34:50 from a mobile device

---
English title:
《Efficient supervised learning in networks with binary synapses》
---
Authors:
Carlo Baldassi, Alfredo Braunstein, Nicolas Brunel, Riccardo Zecchina
---
Year of latest submission:
2007
---
Classification:

Primary category: Quantitative Biology
Secondary category: Neurons and Cognition
Description: Synapse, cortex, neuronal dynamics, neural network, sensorimotor control, behavior, attention
--
Primary category: Physics
Secondary category: Statistical Mechanics
Description: Phase transitions, thermodynamics, field theory, non-equilibrium phenomena, renormalization group and scaling, integrable models, turbulence
--
Primary category: Computer Science
Secondary category: Neural and Evolutionary Computing
Description: Covers neural networks, connectionism, genetic algorithms, artificial life, adaptive behavior. Roughly includes some material in ACM Subject Classes C.1.3, I.2.6, I.5.
--
Primary category: Quantitative Biology
Secondary category: Quantitative Methods
Description: All experimental, numerical, statistical and mathematical contributions of value to biology
--

---
English abstract:
  Recent experimental studies indicate that synaptic changes induced by neuronal activity are discrete jumps between a small number of stable states. Learning in systems with discrete synapses is known to be a computationally hard problem. Here, we study a neurobiologically plausible on-line learning algorithm that derives from Belief Propagation algorithms. We show that it performs remarkably well in a model neuron with binary synapses, and a finite number of `hidden' states per synapse, that has to learn a random classification task. Such a system is able to learn a number of associations close to the theoretical limit, in time which is sublinear in system size. This is to our knowledge the first on-line algorithm that is able to achieve efficiently a finite number of patterns learned per binary synapse. Furthermore, we show that performance is optimal for a finite number of hidden states which becomes very small for sparse coding. The algorithm is similar to the standard `perceptron' learning algorithm, with an additional rule for synaptic transitions which occur only if a currently presented pattern is `barely correct'. In this case, the synaptic changes are meta-plastic only (change in hidden states and not in actual synaptic state), stabilizing the synapse in its current state. Finally, we show that a system with two visible states and K hidden states is much more robust to noise than a system with K visible states. We suggest this rule is sufficiently simple to be easily implemented by neurobiological systems or in hardware.
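Since the post only quotes the abstract, here is a toy sketch of the kind of rule it describes: a perceptron whose binary synapses each carry an integer hidden state, where wrong outputs trigger perceptron-like updates and "barely correct" outputs trigger meta-plastic (hidden-state-only) changes. This is an assumption-laden illustration, not the authors' actual algorithm: the names and values N, K, and THETA, the margin threshold, and the exact transition steps are all invented for the example.

```python
import numpy as np

# Illustrative sketch only -- NOT the paper's exact algorithm.
rng = np.random.default_rng(0)

N = 1001           # number of synapses (odd, so the margin is never exactly 0)
K = 4              # hidden states per visible state: h in {-K..-1, 1..K}
THETA = N ** 0.5   # assumed 'barely correct' margin threshold

h = rng.choice([-1, 1], size=N)   # hidden states; visible weight is sign(h)

def train(patterns, labels, epochs=50):
    """Online learning over the pattern set; returns errors in the last epoch."""
    global h
    errors = 0
    for _ in range(epochs):
        errors = 0
        for x, y in zip(patterns, labels):
            w = np.sign(h)                    # visible binary synapses
            margin = y * np.dot(w, x)
            if margin <= 0:
                # Wrong output: perceptron-like push on the hidden states.
                h2 = h + y * x
                zero = h2 == 0                # states skip 0: jump across it
                h2[zero] = (y * x)[zero]
                h = np.clip(h2, -K, K)
                errors += 1
            elif margin < THETA:
                # 'Barely correct': meta-plastic move that deepens the hidden
                # state of synapses whose contribution agrees with the label,
                # without flipping any visible weight.
                agree = np.sign(h) == y * x
                h = np.clip(h + np.where(agree, np.sign(h), 0), -K, K)
        if errors == 0:
            break
    return errors

# Random +/-1 classification task, well below the theoretical capacity.
P = 200
patterns = rng.choice([-1, 1], size=(P, N))
labels = rng.choice([-1, 1], size=P)
print("errors in final training epoch:", train(patterns, labels))
```

The one feature deliberately mirrored from the abstract is that "barely correct" presentations change only hidden states, stabilizing the visible binary weight in its current value.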
---
PDF link:
https://arxiv.org/pdf/0707.1295
Keywords: efficient supervised learning; Quantitative; Experimental; Evolutionary; Mathematical; binary; algorithm; changes; learning
