OP: 能者818

[Computer Science] Integrating Learning from Examples into the Search for Diagnostic Policies


OP
能者818 (employment verified) · posted 2022-3-28 12:35:00 from mobile

Translated abstract:
This paper studies the problem of learning diagnostic policies from training examples. A diagnostic policy is a complete description of a diagnostician's decision-making actions (i.e., the tests to perform, followed by a diagnostic decision) for every possible combination of test results. An optimal diagnostic policy is one that minimizes the expected total cost, which is the sum of measurement costs and misdiagnosis costs; in most diagnostic settings there is a tradeoff between these two kinds of cost. The paper formalizes diagnostic decision making as a Markov decision process (MDP) and introduces a new family of systematic search algorithms, based on the AO* algorithm, to solve this MDP. To make AO* efficient, it describes an admissible heuristic that allows AO* to prune large parts of the search space. It also introduces several greedy algorithms, including improvements over previously published methods. The paper then turns to learning diagnostic policies from examples: when the probabilities of diseases and test results are estimated from training data, there is a serious danger of overfitting, so regularizers are integrated into the search algorithms. Finally, the proposed methods are compared on five benchmark diagnostic data sets. The studies show that in most cases the systematic search methods produce better diagnostic policies than the greedy methods, and that for training sets of realistic size the systematic search algorithms are practical on today's desktop computers.
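
To make the cost structure concrete, here is a minimal sketch of how a diagnostic policy could be represented and how its expected total cost (measurement costs plus misdiagnosis costs) could be estimated from labeled examples. This is only an illustration of the cost decomposition described in the abstract, not the paper's implementation; all names (Test, Diagnose, test_costs, misdiag_costs, the "disease" field) are assumptions made for this example.

# Illustrative sketch, not the paper's code: a diagnostic policy as a tree of
# tests branching on observed results, with a diagnosis at each leaf.
from dataclasses import dataclass, field
from typing import Dict, Union

@dataclass
class Diagnose:
    disease: str                # leaf action: stop testing and output a diagnosis

@dataclass
class Test:
    name: str                   # which test to perform next
    branches: Dict[str, Union["Test", "Diagnose"]] = field(default_factory=dict)

def total_cost(policy, example, test_costs, misdiag_costs):
    """Measurement costs of the tests actually executed for one labeled example
    (a dict of test results plus its true 'disease'), plus the misdiagnosis cost
    charged at the leaf (assumed zero when the diagnosis matches the true disease)."""
    cost, node = 0.0, policy
    while isinstance(node, Test):
        cost += test_costs[node.name]
        node = node.branches[example[node.name]]    # follow the observed result
    return cost + misdiag_costs[(node.disease, example["disease"])]

def expected_total_cost(policy, examples, test_costs, misdiag_costs):
    """Average total cost over a set of labeled examples; an optimal policy
    minimizes this quantity."""
    return sum(total_cost(policy, ex, test_costs, misdiag_costs)
               for ex in examples) / len(examples)

A policy that tests little pays more in misdiagnosis cost, while one that tests exhaustively pays more in measurement cost; the search algorithms in the paper look for the policy that best balances the two.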
---
English title:
Integrating Learning from Examples into the Search for Diagnostic Policies
---
Authors:
V. Bayer-Zubek, T. G. Dietterich
---
Latest submission year:
2011
---
Classification:

Primary category: Computer Science
Secondary category: Artificial Intelligence
Category description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.

---
English abstract:
  This paper studies the problem of learning diagnostic policies from training examples. A diagnostic policy is a complete description of the decision-making actions of a diagnostician (i.e., tests followed by a diagnostic decision) for all possible combinations of test results. An optimal diagnostic policy is one that minimizes the expected total cost, which is the sum of measurement costs and misdiagnosis costs. In most diagnostic settings, there is a tradeoff between these two kinds of costs. This paper formalizes diagnostic decision making as a Markov Decision Process (MDP). The paper introduces a new family of systematic search algorithms based on the AO* algorithm to solve this MDP. To make AO* efficient, the paper describes an admissible heuristic that enables AO* to prune large parts of the search space. The paper also introduces several greedy algorithms including some improvements over previously published methods. The paper then addresses the question of learning diagnostic policies from examples. When the probabilities of diseases and test results are computed from training data, there is a great danger of overfitting. To reduce overfitting, regularizers are integrated into the search algorithms. Finally, the paper compares the proposed methods on five benchmark diagnostic data sets. The studies show that in most cases the systematic search methods produce better diagnostic policies than the greedy methods. In addition, the studies show that for training sets of realistic size, the systematic search algorithms are practical on today's desktop computers.
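
On the overfitting point: probabilities estimated from a small training set can assign probability zero to test results that simply did not occur in the sample, which makes a candidate policy look better during the search than it will be in practice. The snippet below shows one standard regularizer for such estimates, Laplace (add-lambda) smoothing; it is only a sketch of the idea, and the regularizers actually integrated into the paper's search algorithms may take a different form.

# Illustrative only: smoothed probability estimates pulled toward uniform by
# lam pseudo-counts per possible value, so no outcome gets probability zero.
from collections import Counter

def smoothed_probs(observations, support, lam=1.0):
    counts = Counter(observations)
    total = len(observations) + lam * len(support)
    return {v: (counts[v] + lam) / total for v in support}

# Example: a binary test observed positive 3 times and negative 0 times.
# The unsmoothed estimate would claim "negative is impossible"; smoothing
# keeps a small probability for the unseen outcome.
print(smoothed_probs(["+", "+", "+"], support=["+", "-"]))
# {'+': 0.8, '-': 0.2}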
---
PDF link:
https://arxiv.org/pdf/1109.2127

Keywords: Improvements, Intelligence, Presentation, Combinations, Integrating, policies, training, desktop, existing
