OP: 能者818

[Computer Science] Agnostic Learning of Monomials by Halfspaces is Hard

能者818 (employment verified) posted on 2022-3-31 13:05:00, from mobile

Abstract (translation):
We prove the following strong hardness result for learning: given a distribution of labeled examples from the hypercube such that there exists a monomial consistent with a $(1-\eps)$ fraction of the examples, it is NP-hard to find a halfspace that is correct on a $(1/2+\eps)$ fraction of the examples, for any constant $\eps>0$. In learning-theory terms, weak agnostic learning of monomials is hard, even if the learner is allowed to output a hypothesis from the much larger concept class of halfspaces. This hardness result subsumes a long line of previous results, including two recent hardness results for proper learning of monomials and of halfspaces. As an immediate corollary, weak agnostic learning of decision lists is also NP-hard.

Our techniques differ substantially from previous hardness proofs for learning. We define distributions on positive and negative examples for monomials whose first few moments match. Using the invariance principle, we argue that regular halfspaces (those whose coefficients all have small absolute value relative to the total norm) cannot distinguish between distributions whose first few moments match. For highly non-regular halfspaces, we use a structural lemma from recent work on fooling halfspaces to argue that they are "junta-like": all but the top few coefficients can be zeroed out without affecting the halfspace's performance. In the context of dictatorship tests / Label Cover reductions, the top few coefficients form a natural list decoding of the halfspace.

We note that, unlike previous invariance-principle-based proofs, which are only known to give Unique-Games hardness, we are able to reduce from a version of the Label Cover problem that is known to be NP-hard. This has inspired follow-up work on bypassing the Unique Games Conjecture in some optimal geometric inapproximability results.
---
English title:
《Agnostic Learning of Monomials by Halfspaces is Hard》
---
Authors:
Vitaly Feldman, Venkatesan Guruswami, Prasad Raghavendra, Yi Wu
---
Year of latest submission:
2010
---
Classification:

Primary: Computer Science
Secondary: Computational Complexity
Description: Covers models of computation, complexity classes, structural complexity, complexity tradeoffs, upper and lower bounds. Roughly includes material in ACM Subject Classes F.1 (computation by abstract devices), F.2.3 (tradeoffs among complexity measures), and F.4.3 (formal languages), although some material in formal languages may be more appropriate for Logic in Computer Science. Some material in F.2.1 and F.2.2 may also be appropriate here, but is more likely to have Data Structures and Algorithms as the primary subject area.
--
Primary: Computer Science
Secondary: Artificial Intelligence
Description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
--
Primary: Computer Science
Secondary: Machine Learning
Description: Papers on all aspects of machine learning research (supervised, unsupervised, reinforcement learning, bandit problems, and so on), including robustness, explanation, fairness, and methodology. cs.LG is also an appropriate primary category for applications of machine learning methods.
--

---
English abstract:
  We prove the following strong hardness result for learning: Given a distribution of labeled examples from the hypercube such that there exists a monomial consistent with $(1-\eps)$ of the examples, it is NP-hard to find a halfspace that is correct on $(1/2+\eps)$ of the examples, for arbitrary constants $\eps > 0$. In learning theory terms, weak agnostic learning of monomials is hard, even if one is allowed to output a hypothesis from the much bigger concept class of halfspaces. This hardness result subsumes a long line of previous results, including two recent hardness results for the proper learning of monomials and halfspaces. As an immediate corollary of our result we show that weak agnostic learning of decision lists is NP-hard.

  Our techniques are quite different from previous hardness proofs for learning. We define distributions on positive and negative examples for monomials whose first few moments match. We use the invariance principle to argue that regular halfspaces (all of whose coefficients have small absolute value relative to the total $\ell_2$ norm) cannot distinguish between distributions whose first few moments match. For highly non-regular halfspaces, we use a structural lemma from recent work on fooling halfspaces to argue that they are "junta-like" and one can zero out all but the top few coefficients without affecting the performance of the halfspace. The top few coefficients form the natural list decoding of a halfspace in the context of dictatorship tests/Label Cover reductions.

  We note that unlike previous invariance-principle-based proofs, which are only known to give Unique-Games hardness, we are able to reduce from a version of the Label Cover problem that is known to be NP-hard. This has inspired follow-up work on bypassing the Unique Games conjecture in some optimal geometric inapproximability results.
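The abstract contrasts properly learning a monomial with the easier-sounding relaxation of outputting any halfspace hypothesis. A minimal sketch (an illustration under assumed names, not code from the paper) of why this is a relaxation: every monomial over the Boolean cube $\{0,1\}^n$ is itself expressible as a halfspace, so the halfspace class strictly contains the monomials.

```python
import random

# Illustrative sketch (not from the paper): a monomial (conjunction) over
# {0,1}^n can always be rewritten as a halfspace, so allowing the learner
# to output a halfspace is a relaxation of proper monomial learning.
# All function names below are hypothetical.

def monomial(x, idx):
    """Conjunction of the variables with indices in idx: AND_{i in idx} x_i."""
    return 1 if all(x[i] == 1 for i in idx) else 0

def monomial_as_halfspace(x, idx):
    """The same conjunction as a halfspace: sum_{i in idx} x_i >= |idx|."""
    return 1 if sum(x[i] for i in idx) >= len(idx) else 0

def agreement(h, examples):
    """Fraction of labeled examples (x, y) that hypothesis h classifies correctly."""
    return sum(h(x) == y for x, y in examples) / len(examples)

random.seed(0)
n, idx = 6, [0, 2, 5]
points = [[random.randint(0, 1) for _ in range(n)] for _ in range(1000)]
examples = [(x, monomial(x, idx)) for x in points]

# The halfspace encoding agrees with the monomial on every sampled point.
print(agreement(lambda x: monomial_as_halfspace(x, idx), examples))  # 1.0
```

The hardness result says the converse task is what is intractable: even when some monomial fits a $(1-\eps)$ fraction of the examples, no efficient algorithm can find a halfspace beating the trivial $1/2+\eps$ agreement unless P = NP.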
---
PDF link:
https://arxiv.org/pdf/1012.0729
Keywords: agnostic learning; distribution; coefficients; dictatorship test; invariance principle; hardness
