Abstract (translated):
We propose a learning algorithm for nonparametric estimation and on-line prediction for general stationary ergodic sources. We prepare histograms, each of which estimates the probability as a finite distribution, and mix them with weights to construct an estimator. The whole analysis is based on measure theory. The estimator works whether the source is discrete or continuous. If the source is stationary ergodic, then the measure-theoretically defined Kullback-Leibler information divided by the sequence length $n$ converges to zero as $n$ goes to infinity. In particular, for continuous sources, the method does not require the existence of a probability density function.
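To make the convergence claim concrete (the notation below is my own, not taken from the paper): writing $\mu^n$ for the true source distribution over length-$n$ sequences and $\nu^n$ for the distribution assigned by the estimator, the claim is that

$$\frac{1}{n}\, D\!\left(\mu^n \,\middle\|\, \nu^n\right) \;=\; \frac{1}{n} \int \log \frac{d\mu^n}{d\nu^n}\, d\mu^n \;\longrightarrow\; 0 \qquad (n \to \infty),$$

where working with the Radon-Nikodym derivative $d\mu^n/d\nu^n$ is what lets the statement cover discrete and continuous sources alike, without assuming a probability density function.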
---
English title:
Nonparametric Estimation and On-Line Prediction for General Stationary Ergodic Sources
---
Author:
Joe Suzuki
---
Year of latest submission:
2010
---
Classification:
Primary category: Computer Science
Secondary category: Information Theory
分类描述:Covers theoretical and experimental aspects of information theory and coding. Includes material in ACM Subject Class E.4 and intersects with H.1.1.
--
Primary category: Computer Science
Secondary category: Artificial Intelligence
分类描述:Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
--
Primary category: Mathematics
Secondary category: Information Theory
分类描述:math.IT is an alias for cs.IT. Covers theoretical and experimental aspects of information theory and coding.
--
Primary category: Mathematics
Secondary category: Probability
分类描述:Theory and applications of probability and stochastic processes: e.g. central limit theorems, large deviations, stochastic differential equations, models from statistical mechanics, queuing theory
--
---
English abstract:
We propose a learning algorithm for nonparametric estimation and on-line prediction for general stationary ergodic sources. We prepare histograms, each of which estimates the probability as a finite distribution, and mix them with weights to construct an estimator. The whole analysis is based on measure theory. The estimator works whether the source is discrete or continuous. If the source is stationary ergodic, then the measure-theoretically defined Kullback-Leibler information divided by the sequence length $n$ converges to zero as $n$ goes to infinity. In particular, for continuous sources, the method does not require the existence of a probability density function.
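As a rough illustration of the histogram-mixture idea (a minimal sketch only: the dyadic partitions of $[0,1)$, the Krichevsky-Trofimov-style counts, and the prior weights below are my own choices for illustration, not the paper's exact construction), the following Python snippet mixes histogram estimators of different resolutions and updates their weights on-line:

```python
import numpy as np

def kt_prob(count, total, num_bins):
    """Krichevsky-Trofimov-style estimate of the next-symbol probability
    for one histogram: (count + 1/2) / (total + num_bins/2)."""
    return (count + 0.5) / (total + num_bins / 2.0)

class HistogramMixturePredictor:
    """Sketch of an on-line predictor that mixes histogram estimators
    of different resolutions with prior weights (illustrative only)."""

    def __init__(self, max_level=8):
        # Level k partitions [0, 1) into 2**k equal-width bins.
        self.levels = list(range(1, max_level + 1))
        # Prior weights over levels; here a simple 2**(-k) prior, renormalized.
        w = np.array([2.0 ** (-k) for k in self.levels])
        self.weights = w / w.sum()
        # Bin counts per level.
        self.counts = {k: np.zeros(2 ** k) for k in self.levels}
        self.total = 0

    def predict_density(self, x):
        """Mixture estimate of the predictive density at x in [0, 1)."""
        dens = 0.0
        for w, k in zip(self.weights, self.levels):
            num_bins = 2 ** k
            b = min(int(x * num_bins), num_bins - 1)
            p_bin = kt_prob(self.counts[k][b], self.total, num_bins)
            dens += w * p_bin * num_bins  # density = bin probability / bin width
        return dens

    def update(self, x):
        """Record the observation x and reweight the levels by their
        sequential predictive performance (Bayesian-mixture update)."""
        for i, k in enumerate(self.levels):
            num_bins = 2 ** k
            b = min(int(x * num_bins), num_bins - 1)
            p_bin = kt_prob(self.counts[k][b], self.total, num_bins)
            # Posterior-style reweighting by this level's predictive density.
            self.weights[i] *= p_bin * num_bins
            self.counts[k][b] += 1
        self.weights /= self.weights.sum()
        self.total += 1

# Usage: feed data in [0, 1) and query the mixture's predictive density.
rng = np.random.default_rng(0)
model = HistogramMixturePredictor(max_level=6)
for x in rng.beta(2.0, 5.0, size=1000):
    model.update(x)
print(model.predict_density(0.2))
```

Each level-$k$ histogram estimates the probability of a bin as a finite distribution over $2^k$ bins, and a level's mixture weight grows or shrinks with its predictive performance, which is the "mix histograms with weights" construction the abstract describes; the actual paper carries this out measure-theoretically for general stationary ergodic sources.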
---
PDF link:
https://arxiv.org/pdf/1002.4453

