Title:
Totally Corrective Boosting for Regularized Risk Minimization
---
Authors:
Chunhua Shen, Hanxi Li, Nick Barnes
---
Year of latest submission:
2011
---
Classification:
Primary: Computer Science
Secondary: Artificial Intelligence
Description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
---
Abstract:
Consideration of the primal and dual problems together leads to important new insights into the characteristics of boosting algorithms. In this work, we propose a general framework that can be used to design new boosting algorithms. A wide variety of machine learning problems essentially minimize a regularized risk functional. We show that the proposed boosting framework, termed CGBoost, can accommodate various loss functions and different regularizers in a totally-corrective optimization fashion. We show that, by solving the primal rather than the dual, a large body of totally-corrective boosting algorithms can actually be solved efficiently, without sophisticated convex optimization solvers. We also demonstrate that some boosting algorithms, such as AdaBoost, can be interpreted in our framework even though their optimization is not totally corrective. We empirically show that various boosting algorithms based on the proposed framework perform similarly on the UC Irvine machine learning datasets [1] that we have used in the experiments.
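The abstract notes that AdaBoost fits the framework even though its optimization is stage-wise rather than totally corrective. As a minimal illustration of that distinction (this is not the paper's CGBoost method), the sketch below implements plain AdaBoost with decision stumps: each round fixes the weight of the newly added weak learner and never revisits earlier ones, whereas a totally-corrective method would re-optimize all coefficients at every iteration. All function names and the toy data are our own.

```python
import math

def stump(threshold, polarity):
    # Weak learner: h(x) = polarity if x > threshold, else -polarity.
    return lambda x: polarity if x > threshold else -polarity

def adaboost(X, y, thresholds, rounds=10):
    n = len(X)
    w = [1.0 / n] * n          # example weights, kept normalized
    ensemble = []              # list of (alpha, weak_learner) pairs
    for _ in range(rounds):
        # Select the stump with the lowest weighted training error.
        best, best_err = None, float("inf")
        for t in thresholds:
            for pol in (+1, -1):
                h = stump(t, pol)
                err = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
                if err < best_err:
                    best, best_err = h, err
        if best_err == 0:
            # Perfect weak learner: add it once and stop.
            ensemble.append((1.0, best))
            break
        if best_err >= 0.5:
            break  # no weak learner better than chance
        alpha = 0.5 * math.log((1 - best_err) / best_err)
        ensemble.append((alpha, best))
        # Stage-wise update: only the newest alpha was set; earlier
        # coefficients stay fixed (hence NOT totally corrective).
        w = [wi * math.exp(-alpha * yi * best(xi))
             for wi, xi, yi in zip(w, X, y)]
        Z = sum(w)
        w = [wi / Z for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * h(x) for a, h in ensemble)
    return 1 if score >= 0 else -1
```

On a toy 1-D problem (e.g. `X = [0, 1, 2, 3, 4, 5]`, `y = [-1, -1, -1, 1, 1, 1]`, candidate thresholds at midpoints) a single stump separates the classes and the loop terminates early. A totally-corrective variant would replace the closed-form `alpha` step with a joint minimization of the regularized exponential loss over all coefficients accumulated so far.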
---
PDF link:
https://arxiv.org/pdf/1008.5188