Original poster: DL-er

ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network


Posted by DL-er (employment verified) on 2018-1-9 18:00:00

Abstract: This article introduces a new neural network architecture, called ARTMAP, that autonomously learns to classify arbitrarily many, arbitrarily ordered vectors into recognition categories based on predictive success. This supervised learning system is built up from a pair of Adaptive Resonance Theory modules (ARTa and ARTb) that are capable of self-organizing stable recognition categories in response to arbitrary sequences of input patterns. During training trials, the ARTa module receives a stream {a(p)} of input patterns, and ARTb receives a stream {b(p)} of input patterns, where b(p) is the correct prediction given a(p). These ART modules are linked by an associative learning network and an internal controller that ensures autonomous system operation in real time. During test trials, the remaining patterns a(p) are presented without b(p), and their predictions at ARTb are compared with b(p). Tested on a benchmark machine learning database in both on-line and off-line simulations, the ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms, and achieves 100% accuracy after training on less than half the input patterns in the database. It achieves these properties by using an internal controller that conjointly maximizes predictive generalization and minimizes predictive error by linking predictive success to category size on a trial-by-trial basis, using only local operations. This computation increases the vigilance parameter ρa of ARTa by the minimal amount needed to correct a predictive error at ARTb. Parameter ρa calibrates the minimum confidence that ARTa must have in a category, or hypothesis, activated by an input a(p) in order for ARTa to accept that category, rather than search for a better one through an automatically controlled process of hypothesis testing.
Parameter ρa is compared with the degree of match between a(p) and the top-down learned expectation, or prototype, that is read out subsequent to activation of an ARTa category. Search occurs if the degree of match is less than ρa. ARTMAP is hereby a type of self-organizing expert system that calibrates the selectivity of its hypotheses based upon predictive success. As a result, rare but important events can be quickly and sharply distinguished even if they are similar to frequent events with different consequences. Between input trials ρa relaxes to a baseline vigilance ρ̄a. When ρ̄a is large, the system runs in a conservative mode, wherein predictions are made only if the system is confident of the outcome. Very few false-alarm errors then occur at any stage of learning, yet the system reaches asymptote with no loss of speed. Because ARTMAP learning is self-stabilizing, it can continue learning one or more databases, without degrading its corpus of memories, until its full memory capacity is utilized.
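The vigilance test and match-tracking rule described in the abstract can be sketched in a few lines of Python. This is a simplified illustration, not the paper's full ARTMAP dynamics; the function names, the epsilon increment, and the example vectors are hypothetical, and the match measure used here (elementwise minimum, as in fuzzy ART variants) is one common instantiation:

```python
import numpy as np

def match_degree(inp, weight):
    """Degree of match between an input and a learned prototype:
    |inp AND weight| / |inp| (here AND is the elementwise minimum)."""
    return np.sum(np.minimum(inp, weight)) / np.sum(inp)

def vigilance_test(inp, weight, rho):
    """Accept the active category only if the match meets vigilance rho;
    otherwise the system searches for a better category."""
    return match_degree(inp, weight) >= rho

def match_track(inp, weight, eps=1e-6):
    """On a predictive error at ARTb, raise rho_a by the minimal amount
    needed to make the currently active ARTa category fail its vigilance
    test, which triggers hypothesis testing (search)."""
    return match_degree(inp, weight) + eps

# Hypothetical example: input a(p) and a learned prototype w.
a = np.array([1, 1, 0, 1], dtype=float)
w = np.array([1, 0, 0, 1], dtype=float)
rho_a = 0.5                                # baseline vigilance
print(vigilance_test(a, w, rho_a))         # True: match 2/3 >= 0.5
rho_a = match_track(a, w)                  # raised just above 2/3
print(vigilance_test(a, w, rho_a))         # False: search is triggered
```

Between trials, ρa would relax back to the baseline ρ̄a, so the raised vigilance only affects the search on the current trial.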

原文链接:http://www.sciencedirect.com/science/article/pii/089360809190012T

Sharing is caring: if you have downloaded this resource, you are welcome to upload it in a reply to share it with everyone, and to join the CDA community for discussion. (For academic exchange only.)


Keywords: nonstationary, real-time learning
