Original poster: mingdashike22

[Computer Science] Feedback Message Passing for Inference in Gaussian Graphical Models


OP: mingdashike22 (employment verified), posted on 2022-4-8 12:50:00 from a mobile device

English title:
Feedback Message Passing for Inference in Gaussian Graphical Models
---
Authors:
Ying Liu, Venkat Chandrasekaran, Animashree Anandkumar, Alan S. Willsky
---
Year of latest submission:
2011
---
Classification:

Primary category: Statistics
Secondary category: Machine Learning
Category description: Covers machine learning papers (supervised, unsupervised, semi-supervised learning, graphical models, reinforcement learning, bandits, high dimensional inference, etc.) with a statistical or theoretical grounding.
--
Primary category: Computer Science
Secondary category: Artificial Intelligence
Category description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
--

---
English abstract:
  While loopy belief propagation (LBP) performs reasonably well for inference in some Gaussian graphical models with cycles, its performance is unsatisfactory for many others. In particular for some models LBP does not converge, and in general when it does converge, the computed variances are incorrect (except for cycle-free graphs for which belief propagation (BP) is non-iterative and exact). In this paper we propose {\em feedback message passing} (FMP), a message-passing algorithm that makes use of a special set of vertices (called a {\em feedback vertex set} or {\em FVS}) whose removal results in a cycle-free graph. In FMP, standard BP is employed several times on the cycle-free subgraph excluding the FVS while a special message-passing scheme is used for the nodes in the FVS. The computational complexity of exact inference is $O(k^2n)$, where $k$ is the number of feedback nodes, and $n$ is the total number of nodes. When the size of the FVS is very large, FMP is intractable. Hence we propose {\em approximate FMP}, where a pseudo-FVS is used instead of an FVS, and where inference in the non-cycle-free graph obtained by removing the pseudo-FVS is carried out approximately using LBP. We show that, when approximate FMP converges, it yields exact means and variances on the pseudo-FVS and exact means throughout the remainder of the graph. We also provide theoretical results on the convergence and accuracy of approximate FMP. In particular, we prove error bounds on variance computation. Based on these theoretical results, we design efficient algorithms to select a pseudo-FVS of bounded size. The choice of the pseudo-FVS allows us to explicitly trade off between efficiency and accuracy. Experimental results show that using a pseudo-FVS of size no larger than $\log(n)$, this procedure converges much more often, more quickly, and provides more accurate results than LBP on the entire graph.
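
The $O(k^2 n)$ complexity claim for exact FMP follows from the block structure described in the abstract: BP is run on the cycle-free subgraph roughly $k+1$ times, and the feedback nodes are then resolved through a small $k \times k$ system. The sketch below is not the paper's message-passing implementation; it is a minimal dense-linear-algebra illustration of the same decomposition (assuming numpy, with an illustrative function name), where direct tree-part solves stand in for the BP passes.

import numpy as np

def fvs_gaussian_inference(J, h, fvs):
    """Exact marginal means and variances via an FVS-style block elimination.

    J   : (n, n) symmetric positive-definite information matrix
    h   : (n,) potential vector, so that mean = J^{-1} h and cov = J^{-1}
    fvs : indices of the feedback vertex set F
    """
    n = J.shape[0]
    F = np.array(sorted(fvs), dtype=int)
    T = np.array([i for i in range(n) if i not in set(fvs)], dtype=int)

    J_TT = J[np.ix_(T, T)]   # cycle-free part; handled by BP in actual FMP
    J_TF = J[np.ix_(T, F)]
    J_FF = J[np.ix_(F, F)]

    # Dense stand-ins for the k + 1 BP passes on the cycle-free subgraph.
    A = np.linalg.solve(J_TT, J_TF)    # one solve per feedback node
    b = np.linalg.solve(J_TT, h[T])    # one solve with the original potentials

    # Small k x k system (Schur complement): exact means/variances on the FVS.
    S = J_FF - J_TF.T @ A
    cov_F = np.linalg.inv(S)
    mean_F = cov_F @ (h[F] - J_TF.T @ b)

    # Feed the FVS results back to correct the cycle-free part.
    mean_T = b - A @ mean_F
    var_T = np.diag(np.linalg.inv(J_TT)) + np.einsum('ij,ij->i', A @ cov_F, A)

    mean, var = np.empty(n), np.empty(n)
    mean[F], var[F] = mean_F, np.diag(cov_F)
    mean[T], var[T] = mean_T, var_T
    return mean, var

# Sanity check against brute-force inference on a small random model; the block
# algebra is exact for any partition, while the FVS structure is what makes the
# tree-part solves cheap in FMP itself.
rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8))
J = B @ B.T + 8 * np.eye(8)
h = rng.standard_normal(8)
mean, var = fvs_gaussian_inference(J, h, fvs=[0, 3])
assert np.allclose(mean, np.linalg.solve(J, h))
assert np.allclose(var, np.diag(np.linalg.inv(J)))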
---
PDF link:
https://arxiv.org/pdf/1105.1853
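
The abstract gives the size bound for the pseudo-FVS (no larger than $\log(n)$) and says its choice trades efficiency against accuracy, but it does not state the selection criterion. The greedy routine below is purely a placeholder heuristic under that size bound, assuming numpy: it scores each node by its total normalized edge strength |J_ij| / sqrt(J_ii J_jj) and repeatedly removes the highest-scoring node, one plausible way to break the most strongly coupled cycles.

import numpy as np

def select_pseudo_fvs(J, max_size=None):
    """Greedy placeholder for picking a small pseudo-FVS (illustrative only)."""
    n = J.shape[0]
    if max_size is None:
        max_size = max(1, int(np.log(n)))   # size bound suggested in the abstract
    d = np.sqrt(np.diag(J))
    W = np.abs(J) / np.outer(d, d)          # normalized strengths |J_ij|/sqrt(J_ii J_jj)
    np.fill_diagonal(W, 0.0)
    chosen = []
    for _ in range(max_size):
        scores = W.sum(axis=1)
        i = int(np.argmax(scores))
        if scores[i] == 0.0:                # nothing left to break
            break
        chosen.append(i)
        W[i, :] = 0.0                       # drop the node and its incident edges
        W[:, i] = 0.0
    return chosen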

Keywords: Presentation, Intelligence, Experimental, Approximate, Computation, theory, graphical, FMP
