
[Computer Science] Competitive Safety Analysis: Robust Decision-Making in Multi-Agent Systems

OP: nandehutu2022 (employment verified), posted 2022-3-22 11:35:01 from mobile

Abstract (translated):
Much work in AI concerns selecting an appropriate action in a given (known or unknown) environment. How to select an appropriate action when facing other agents, however, is far less clear. Most work in AI adopts classical game-theoretic equilibrium analysis to predict agent behavior in such settings, but this approach provides no guarantee for our agent. This paper introduces competitive safety analysis, an approach that bridges the gap between the desired normative AI approach, in which a strategy is selected so as to guarantee a desired payoff, and equilibrium analysis. We show that in several classical computer science settings a safety-level strategy can guarantee the value obtained in a Nash equilibrium. We then discuss the notion of competitive safety strategies and illustrate their use in a decentralized load-balancing setting typical of network problems; in particular, we show that with many agents it is possible to guarantee an expected payoff that is a factor of 8/9 of the Nash equilibrium payoff. Our competitive safety analysis of decentralized load balancing is further developed to handle multiple communication links and arbitrary speeds. Finally, we discuss the extension of these concepts to Bayesian games and illustrate their use in a basic auction setting.
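To make the core idea concrete, here is a minimal sketch (my own illustration, not code or a model from the paper): a player's safety level is the best expected payoff it can guarantee regardless of what the other agents do, and it can be computed as a small linear program. The payoff matrix below is an assumed two-agent, two-link load-balancing toy in which agents sharing a link split a unit of bandwidth; it is not necessarily the paper's exact model.

# A minimal sketch (my own illustration, not the paper's code): compute the row
# player's safety level, i.e. the largest expected payoff it can guarantee
# against any behavior of the other agent, via a small linear program.
import numpy as np
from scipy.optimize import linprog

# Assumed toy payoffs: same link -> share a unit of bandwidth (0.5 each),
# different links -> a full unit each.
A = np.array([[0.5, 1.0],
              [1.0, 0.5]])

def safety_level(A):
    """max over mixed strategies x of min over opponent actions j of (x^T A)_j."""
    n_rows, n_cols = A.shape
    # Decision variables: [x_1, ..., x_n, v]; linprog minimizes, so use -v.
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0
    # For every opponent action j:  v - sum_i x_i * A[i, j] <= 0
    A_ub = np.hstack([-A.T, np.ones((n_cols, 1))])
    b_ub = np.zeros(n_cols)
    # x is a probability distribution: sum_i x_i = 1, x_i >= 0.
    A_eq = np.hstack([np.ones((1, n_rows)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n_rows + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:-1], res.x[-1]

x, v = safety_level(A)
print("safety-level strategy:", x.round(3))        # [0.5 0.5]
print("guaranteed expected payoff:", round(v, 3))  # 0.75

In this toy game the safety level (0.75) coincides with the payoff of the symmetric mixed Nash equilibrium, which is the kind of coincidence the abstract describes; the pure equilibria, where the two agents split across the links, pay 1.0 each, and the paper's 8/9 factor concerns the many-agent version of such load-balancing games.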
---
English title:
Competitive Safety Analysis: Robust Decision-Making in Multi-Agent Systems
---
Author:
M. Tennenholtz
---
Year of latest submission:
2011
---
Classification:

Primary category: Computer Science
Secondary category: Computer Science and Game Theory
Description: Covers all theoretical and applied aspects at the intersection of computer science and game theory, including work in mechanism design, learning in games (which may overlap with Learning), foundations of agent modeling in games (which may overlap with Multiagent systems), coordination, specification and formal methods for non-cooperative computational environments. The area also deals with applications of game theory to areas such as electronic commerce.
--
Primary category: Computer Science
Secondary category: Artificial Intelligence
Description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
--

---
English abstract:
  Much work in AI deals with the selection of proper actions in a given (known or unknown) environment. However, the way to select a proper action when facing other agents is quite unclear. Most work in AI adopts classical game-theoretic equilibrium analysis to predict agent behavior in such settings. This approach however does not provide us with any guarantee for the agent. In this paper we introduce competitive safety analysis. This approach bridges the gap between the desired normative AI approach, where a strategy should be selected in order to guarantee a desired payoff, and equilibrium analysis. We show that a safety level strategy is able to guarantee the value obtained in a Nash equilibrium, in several classical computer science settings. Then, we discuss the concept of competitive safety strategies, and illustrate its use in a decentralized load balancing setting, typical to network problems. In particular, we show that when we have many agents, it is possible to guarantee an expected payoff which is a factor of 8/9 of the payoff obtained in a Nash equilibrium. Our discussion of competitive safety analysis for decentralized load balancing is further developed to deal with many communication links and arbitrary speeds. Finally, we discuss the extension of the above concepts to Bayesian games, and illustrate their use in a basic auctions setup.
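As a complement to the sketch above, here is a second small example (again my own, using the same assumed two-link load-balancing toy rather than the paper's model): it enumerates the pure Nash equilibria of a bimatrix game by best-response checks and reports the row player's safety level alongside the equilibrium payoffs, which is the guarantee-to-equilibrium ratio, i.e. the "competitive safety" factor, that the abstract refers to.

# A minimal sketch (my own illustration, not from the paper): enumerate the pure
# Nash equilibria of a bimatrix game (A for the row player, B for the column
# player) and compare the row player's equilibrium payoff with its safety level.
import itertools
import numpy as np

def pure_nash_equilibria(A, B):
    """All pure-strategy Nash equilibria (i, j): neither player gains by deviating."""
    eqs = []
    for i, j in itertools.product(range(A.shape[0]), range(A.shape[1])):
        row_cannot_improve = A[i, j] >= A[:, j].max()
        col_cannot_improve = B[i, j] >= B[i, :].max()
        if row_cannot_improve and col_cannot_improve:
            eqs.append((i, j))
    return eqs

def pure_safety_level(A):
    """Safety level restricted to pure strategies: max_i min_j A[i, j]."""
    return A.min(axis=1).max()

# Same assumed toy as above: share a link and split the bandwidth.
A = np.array([[0.5, 1.0],
              [1.0, 0.5]])
B = A.T  # symmetric game

for i, j in pure_nash_equilibria(A, B):
    print(f"pure Nash equilibrium at (row {i}, col {j}), row payoff {A[i, j]}")
print("row safety level over pure strategies:", pure_safety_level(A))
# Output: two pure equilibria with row payoff 1.0 and a pure safety level of 0.5;
# mixing uniformly raises the guarantee to 0.75, matching the symmetric mixed
# equilibrium payoff. The paper's 8/9 bound is the analogous guarantee-to-
# equilibrium ratio in its many-agent load-balancing model.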
---
PDF link:
https://arxiv.org/pdf/1106.4570

Keywords: agents, safety analysis, environments, systems, guarantees, strategies
