Title:
MDPs with Unawareness
---
Authors:
Joseph Y. Halpern, Nan Rong, Ashutosh Saxena
---
Latest submission year:
2010
---
Classification:
Primary category: Computer Science
Secondary category: Artificial Intelligence
Category description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
---
Abstract:
Markov decision processes (MDPs) are widely used for modeling decision-making problems in robotics, automated control, and economics. Traditional MDPs assume that the decision maker (DM) knows all states and actions. However, this may not be true in many situations of interest. We define a new framework, MDPs with unawareness (MDPUs), to deal with the possibility that a DM may not be aware of all possible actions. We provide a complete characterization of when a DM can learn to play near-optimally in an MDPU, and give an algorithm that learns to play near-optimally when it is possible to do so, as efficiently as possible. In particular, we characterize when a near-optimal solution can be found in polynomial time.
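For readers less familiar with the traditional MDP setting the paper extends, the sketch below solves a tiny two-state MDP by standard value iteration. It is purely illustrative: the toy transition table, state/action names, and tolerance are invented here, and the example does not implement the paper's MDPU framework, in which some actions may be unknown to the DM.

```python
# Toy MDP (hypothetical, for illustration only; not from the paper).
# transitions[s][a] = list of (probability, next_state, reward) outcomes.
transitions = {
    0: {"stay": [(1.0, 0, 0.0)], "move": [(1.0, 1, 1.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "move": [(1.0, 0, 0.0)]},
}
GAMMA = 0.9  # discount factor


def value_iteration(transitions, gamma, eps=1e-8):
    """Compute optimal state values of a finite MDP by value iteration.

    Iterates the Bellman optimality update until the largest change
    in any state's value falls below eps.
    """
    values = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s in transitions:
            best = max(
                sum(p * (r + gamma * values[s2]) for p, s2, r in outcomes)
                for outcomes in transitions[s].values()
            )
            delta = max(delta, abs(best - values[s]))
            values[s] = best
        if delta < eps:
            return values


V = value_iteration(transitions, GAMMA)
# In state 1, "stay" earns reward 2 forever: V(1) = 2 / (1 - 0.9) = 20.
# In state 0, "move" earns 1 + 0.9 * V(1) = 19, which beats staying.
```

The key assumption value iteration makes, and the one MDPUs drop, is that every action of every state is known in advance, so the `max` over `transitions[s].values()` ranges over the complete action set.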
---
PDF link:
https://arxiv.org/pdf/1006.2204