Title:
Measuring the Hardness of Stochastic Sampling on Bayesian Networks with Deterministic Causalities: the k-Test
---
Authors:
Haohai Yu, Robert A. van Engelen
---
Latest submission year:
2012
---
Classification:
Primary: Computer Science
Secondary: Artificial Intelligence
Description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
---
Abstract:
Approximate Bayesian inference is NP-hard. Dagum and Luby defined the Local Variance Bound (LVB) to measure the approximation hardness of Bayesian inference on Bayesian networks, assuming the networks model strictly positive joint probability distributions, i.e. zero probabilities are not permitted. This paper introduces the k-test to measure the approximation hardness of inference on Bayesian networks with deterministic causalities in the probability distribution, i.e. when zero conditional probabilities are permitted. Approximation by stochastic sampling is a widely used inference method that is known to suffer from inefficiencies due to sample rejection. The k-test predicts when rejection rates of stochastic sampling of a Bayesian network will be low, modest, high, or when sampling is intractable.
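
A minimal sketch of the sample-rejection problem the abstract refers to (this illustrates plain forward/logic sampling with evidence rejection, not the paper's k-test or the LVB): a hypothetical two-node network A → B whose conditional probability table contains a zero entry, i.e. a deterministic causality. All variable names and probability values below are assumptions chosen for illustration.

```python
import random

# Hypothetical two-node network A -> B with a deterministic causality:
# P(B=1 | A=0) = 0.0 (a zero conditional probability).
P_A = {1: 0.2, 0: 0.8}                      # prior on A
P_B_GIVEN_A = {1: {1: 0.9, 0: 0.1},         # P(B | A=1)
               0: {1: 0.0, 0: 1.0}}         # P(B | A=0): B=1 is impossible

def sample_bernoulli(p_one):
    """Return 1 with probability p_one, else 0."""
    return 1 if random.random() < p_one else 0

def logic_sampling(evidence_b, n_samples=100_000):
    """Forward (logic) sampling, rejecting samples that contradict the
    evidence B = evidence_b. Returns the estimate of P(A=1 | B=evidence_b)
    and the observed rejection rate."""
    accepted, a_one = 0, 0
    for _ in range(n_samples):
        a = sample_bernoulli(P_A[1])
        b = sample_bernoulli(P_B_GIVEN_A[a][1])
        if b != evidence_b:
            continue                        # sample rejected
        accepted += 1
        a_one += a
    rejection_rate = 1.0 - accepted / n_samples
    estimate = a_one / accepted if accepted else float("nan")
    return estimate, rejection_rate

if __name__ == "__main__":
    est, rej = logic_sampling(evidence_b=1)
    print(f"P(A=1 | B=1) ~ {est:.3f}, rejection rate ~ {rej:.3f}")
```

With the assumed numbers, P(B=1) = 0.2 × 0.9 + 0.8 × 0.0 = 0.18, so roughly 82% of forward samples are rejected when the evidence is B=1; the k-test proposed in the paper is intended to predict in advance whether such rejection rates are low, modest, high, or make sampling intractable.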
---
PDF link:
https://arxiv.org/pdf/1202.3773

