Title:
Gaussian Process Bandits for Tree Search: Theory and Application to Planning in Discounted MDPs
---
Authors:
Louis Dorard and John Shawe-Taylor
---
Year of latest submission:
2011
---
Categories:
Primary category: Computer Science
Secondary category: Machine Learning
Description: Papers on all aspects of machine learning research (supervised, unsupervised, reinforcement learning, bandit problems, and so on), including also robustness, explanation, fairness, and methodology. cs.LG is also an appropriate primary category for applications of machine learning methods.
--
Primary category: Computer Science
Secondary category: Artificial Intelligence
Description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
--
---
Abstract:
We motivate and analyse a new Tree Search algorithm, GPTS, based on recent theoretical advances in the use of Gaussian Processes for Bandit problems. We consider tree paths as arms and we assume the target/reward function is drawn from a GP distribution. The posterior mean and variance, after observing data, are used to define confidence intervals for the function values, and we sequentially play arms with highest upper confidence bounds. We give an efficient implementation of GPTS and we adapt previous regret bounds by determining the decay rate of the eigenvalues of the kernel matrix on the whole set of tree paths. We consider two kernels in the feature space of binary vectors indexed by the nodes of the tree: linear and Gaussian. The regret grows in square root of the number of iterations T, up to a logarithmic factor, with a constant that improves with bigger Gaussian kernel widths. We focus on practical values of T, smaller than the number of arms. Finally, we apply GPTS to Open Loop Planning in discounted Markov Decision Processes by modelling the reward as a discounted sum of independent Gaussian Processes. We report similar regret bounds to those of the OLOP algorithm.
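The selection rule described above (a GP posterior over arm values, with the highest upper confidence bound played at each round) can be illustrated with a short sketch. The Python/NumPy code below is only an illustration under stated assumptions, not the paper's efficient GPTS implementation: the tree depth, Gaussian kernel width, noise level, exploration weight `beta`, number of rounds, and the toy reward drawn from the prior are all assumptions made for the example.

```python
# A minimal sketch of the GP-UCB arm-selection loop over tree paths,
# NOT the paper's efficient GPTS implementation. Arms are root-to-leaf paths
# of a complete binary tree, encoded as binary vectors indexed by the nodes.
# Depth, kernel width, noise, `beta`, T and the toy reward are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def path_vectors(depth):
    """Encode every root-to-leaf path of a complete binary tree as a binary
    vector over its nodes (heap numbering: node i has children 2i+1, 2i+2)."""
    n_nodes = 2 ** (depth + 1) - 1
    vecs = []
    for choices in range(2 ** depth):          # bit d = branch taken at depth d
        v = np.zeros(n_nodes)
        node = 0
        for d in range(depth):
            node = 2 * node + 1 + ((choices >> d) & 1)
            v[node] = 1.0
        vecs.append(v)
    return np.array(vecs)

def gaussian_kernel(A, B, width=2.0):
    """Gaussian (RBF) kernel on the binary path features; a linear kernel
    would simply be A @ B.T."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * width ** 2))

X = path_vectors(depth=4)                      # 16 arms (tree paths)
K_all = gaussian_kernel(X, X)
noise, beta, T = 0.1, 2.0, 30                  # illustrative constants

# Toy target: one draw from the assumed GP prior over arm values.
true_f = rng.multivariate_normal(np.zeros(len(X)), K_all + 1e-8 * np.eye(len(X)))

played, rewards = [], []
for t in range(T):
    if not played:                             # no data yet: prior mean/variance
        mu, var = np.zeros(len(X)), np.diag(K_all).copy()
    else:                                      # GP posterior given observed rewards
        K_obs = gaussian_kernel(X[played], X[played]) + noise ** 2 * np.eye(len(played))
        k_star = gaussian_kernel(X, X[played])
        mu = k_star @ np.linalg.solve(K_obs, np.array(rewards))
        var = np.diag(K_all) - np.einsum("ij,ij->i", k_star,
                                         np.linalg.solve(K_obs, k_star.T).T)
    ucb = mu + beta * np.sqrt(np.maximum(var, 0.0))
    arm = int(np.argmax(ucb))                  # play the path with highest UCB
    played.append(arm)
    rewards.append(true_f[arm] + noise * rng.standard_normal())

print("arm with highest posterior mean:", int(np.argmax(mu)),
      "| true best arm:", int(np.argmax(true_f)))
```

This naive version enumerates all paths and pays a cubic cost in the number of observations for each posterior update; the paper's contribution includes an efficient implementation and regret bounds for this setting, and the open-loop planning application replaces the single GP by a discounted sum of independent GPs over the actions along a path.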
---
PDF link:
https://arxiv.org/pdf/1009.0605