
[Computer Science] Knapsack-based Optimal Policies for Budget-Limited Multi-Armed Bandits


OP
kedemingshi (employment verified) | Posted 2022-04-12 18:10:00 from mobile

Abstract (translated):
In budget-limited multi-armed bandit (MAB) problems, the learner's actions are costly and constrained by a fixed budget. Consequently, an optimal exploitation policy may not be to pull the optimal arm repeatedly, as in other MAB variants, but rather to pull the sequence of different arms that maximises the agent's total reward within the budget. This difference from existing MABs means that new approaches to maximising the total reward are required. Given this, we propose two pulling policies, namely: (i) KUBE; and (ii) fractional KUBE. The former provides up to 40% better performance in our experimental settings, while the latter is computationally cheaper. We also prove logarithmic upper bounds on the regret of both policies and show that these bounds are asymptotically optimal (i.e. they differ from the optimal regret only by a constant factor).
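The abstract only states that the regret bounds are logarithmic and asymptotically optimal. As a rough sketch of what such a guarantee looks like (assuming, as is standard for budget-limited bandits, that regret is measured against the total budget B rather than the number of rounds; the exact constants and problem-dependent terms are in the paper itself):

% Shape of a logarithmic regret bound for a budget-limited MAB policy pi.
% B is the total budget, R^*(B) the expected reward of the best
% budget-feasible pulling sequence, R_pi(B) the expected reward of pi;
% C is a problem-dependent constant (illustrative notation, not the paper's).
\mathrm{Regret}_\pi(B) \;=\; R^*(B) - R_\pi(B) \;\le\; C \ln B + O(1)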
---
English title:
《Knapsack based Optimal Policies for Budget-Limited Multi-Armed Bandits》
---
Authors:
Long Tran-Thanh, Archie Chapman, Alex Rogers, Nicholas R. Jennings
---
Latest submission year:
2012
---
Classification:

Primary category: Computer Science
Secondary category: Artificial Intelligence
Category description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
--
Primary category: Computer Science
Secondary category: Machine Learning
Category description: Papers on all aspects of machine learning research (supervised, unsupervised, reinforcement learning, bandit problems, and so on) including also robustness, explanation, fairness, and methodology. cs.LG is also an appropriate primary category for applications of machine learning methods.
--

---
English abstract:
  In budget-limited multi-armed bandit (MAB) problems, the learner's actions are costly and constrained by a fixed budget. Consequently, an optimal exploitation policy may not be to pull the optimal arm repeatedly, as is the case in other variants of MAB, but rather to pull the sequence of different arms that maximises the agent's total reward within the budget. This difference from existing MABs means that new approaches to maximising the total reward are required. Given this, we develop two pulling policies, namely: (i) KUBE; and (ii) fractional KUBE. Whereas the former provides better performance up to 40% in our experimental settings, the latter is computationally less expensive. We also prove logarithmic upper bounds for the regret of both policies, and show that these bounds are asymptotically optimal (i.e. they only differ from the best possible regret by a constant factor).
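Since the paper itself is not reproduced in this post, the following is only a minimal, self-contained sketch of the setting the abstract describes: each arm has a fixed pulling cost, the learner stops when the budget is exhausted, and arms are chosen greedily by an upper-confidence reward-per-cost index, in the spirit of (but not identical to) the fractional KUBE policy. All arm means, costs, and the exact index are illustrative assumptions, not values or formulas taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem instance: Bernoulli reward means and fixed pull costs.
true_means = np.array([0.2, 0.5, 0.7])
costs      = np.array([1.0, 2.0, 4.0])
budget     = 200.0

n_arms  = len(true_means)
pulls   = np.zeros(n_arms)   # number of times each arm has been pulled
rewards = np.zeros(n_arms)   # cumulative reward observed per arm
total_reward, spent, t = 0.0, 0.0, 0

# Initialisation: pull every affordable arm once to obtain an estimate.
for a in range(n_arms):
    if spent + costs[a] > budget:
        continue
    r = rng.binomial(1, true_means[a])
    pulls[a] += 1
    rewards[a] += r
    total_reward += r
    spent += costs[a]
    t += 1

# Main loop: among arms that still fit in the remaining budget, pull the one
# with the highest upper-confidence reward *density* (estimated mean plus an
# exploration bonus, divided by cost), i.e. the greedy fractional-knapsack choice.
while True:
    affordable = [a for a in range(n_arms) if spent + costs[a] <= budget]
    if not affordable:
        break
    means = rewards / np.maximum(pulls, 1)
    bonus = np.sqrt(2.0 * np.log(t + 1) / np.maximum(pulls, 1))
    density = (means + bonus) / costs
    a = max(affordable, key=lambda i: density[i])
    r = rng.binomial(1, true_means[a])
    pulls[a] += 1
    rewards[a] += r
    total_reward += r
    spent += costs[a]
    t += 1

print(f"spent {spent:.0f} of budget {budget:.0f}, total reward {total_reward:.0f}")

The greedy density rule above is the fractional relaxation of the underlying knapsack problem; the abstract's remark that fractional KUBE trades some performance for lower computational cost is consistent with replacing a per-step knapsack computation by such a simple greedy index, but the precise formulation used by KUBE and fractional KUBE is given in the paper, not here.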
---
PDF link:
https://arxiv.org/pdf/1204.1909

Keywords: optimal policy, Intelligence, exploitation, Experimental, Presentation, budget, reward, regret, maximisation, environment
