Abstract (translation):
We map recently proposed notions of algorithmic fairness to economic models of equality of opportunity (EOP), an ideal of fairness that has been studied extensively in political philosophy. Through this conceptual mapping, we formally show that many existing definitions of algorithmic fairness, such as predictive value parity and equality of odds, can be interpreted as special cases of EOP. In this respect, our work serves as a unifying moral framework for understanding existing notions of algorithmic fairness. Most importantly, the framework allows us to spell out explicitly the moral assumptions underlying each fairness notion and to interpret recent fairness impossibility results in a new light. Last but not least, inspired by the luck-egalitarian model of EOP, we propose a new family of measures of algorithmic fairness. We illustrate our proposal empirically and show that employing a measure of algorithmic (un)fairness when its underlying moral assumptions are not satisfied can have devastating consequences for the disadvantaged group's welfare.
---
English title:
A Moral Framework for Understanding of Fair ML through Economic Models of Equality of Opportunity
---
Authors:
Hoda Heidari, Michele Loi, Krishna P. Gummadi, and Andreas Krause
---
Latest submission year:
2018
---
Classification:
Primary category: Computer Science
Secondary category: Machine Learning (cs.LG)
Category description: Papers on all aspects of machine learning research (supervised, unsupervised, reinforcement learning, bandit problems, and so on), including robustness, explanation, fairness, and methodology. cs.LG is also an appropriate primary category for applications of machine learning methods.
--
Primary category: Economics
Secondary category: Theoretical Economics
Category description: Includes theoretical contributions to Contract Theory, Decision Theory, Game Theory, General Equilibrium, Growth, Learning and Evolution, Macroeconomics, Market and Mechanism Design, and Social Choice.
--
Primary category: Statistics
Secondary category: Machine Learning
Category description: Covers machine learning papers (supervised, unsupervised, semi-supervised learning, graphical models, reinforcement learning, bandits, high dimensional inference, etc.) with a statistical or theoretical grounding.
--
---
English abstract:
We map the recently proposed notions of algorithmic fairness to economic models of Equality of Opportunity (EOP)---an extensively studied ideal of fairness in political philosophy. We formally show that through our conceptual mapping, many existing definitions of algorithmic fairness, such as predictive value parity and equality of odds, can be interpreted as special cases of EOP. In this respect, our work serves as a unifying moral framework for understanding existing notions of algorithmic fairness. Most importantly, this framework allows us to explicitly spell out the moral assumptions underlying each notion of fairness, and interpret recent fairness impossibility results in a new light. Last but not least, inspired by luck egalitarian models of EOP, we propose a new family of measures for algorithmic fairness. We illustrate our proposal empirically and show that employing a measure of algorithmic (un)fairness when its underlying moral assumptions are not satisfied can have devastating consequences for the disadvantaged group's welfare.
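For reference, the two fairness notions named in the abstract admit standard formulations; these are textbook definitions rather than quotations from the paper, and the symbols $\hat{Y}$ (the prediction), $Y$ (the true label), and $A$ (the group attribute) are introduced here only for illustration:
Equalized odds: $\Pr[\hat{Y} = 1 \mid Y = y, A = a]$ is equal across groups $a$ for each $y \in \{0, 1\}$, i.e., equal true positive and false positive rates across groups.
Predictive value parity: $\Pr[Y = 1 \mid \hat{Y} = \hat{y}, A = a]$ is equal across groups $a$ for each $\hat{y} \in \{0, 1\}$, i.e., equal positive and negative predictive values across groups.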
---
PDF link:
https://arxiv.org/pdf/1809.03400

