
Latest Publications: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society

Assessing Post-hoc Explainability of the BKT Algorithm
Pub Date : 2020-02-07 DOI: 10.1145/3375627.3375856
Tongyu Zhou, Haoyu Sheng, I. Howley
As machine intelligence is increasingly incorporated into educational technologies, it becomes imperative for instructors and students to understand the potential flaws of the algorithms on which their systems rely. This paper describes the design and implementation of an interactive post-hoc explanation of the Bayesian Knowledge Tracing algorithm which is implemented in learning analytics systems used across the United States. After a user-centered design process to smooth out interaction design difficulties, we ran a controlled experiment to evaluate whether the interactive or static version of the explainable led to increased learning. Our results reveal that learning about an algorithm through an explainable depends on users' educational background. For other contexts, designers of post-hoc explainables must consider their users' educational background to best determine how to empower more informed decision-making with AI-enhanced systems.
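For readers unfamiliar with the algorithm being explained, the following is a minimal sketch of the standard Bayesian Knowledge Tracing update, not the paper's interactive explainable; the slip, guess, and learning parameters and the prior are hypothetical placeholder values.

```python
# A minimal sketch of the standard Bayesian Knowledge Tracing update, not the
# paper's interactive explainable; the slip/guess/learn parameters and the prior
# below are hypothetical placeholder values.
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Return the updated probability that the student has mastered the skill."""
    if correct:
        # Posterior after a correct answer: mastered and did not slip, vs. guessed.
        posterior = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        # Posterior after an incorrect answer: mastered but slipped, vs. failed to guess.
        posterior = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Learning transition: the student may acquire the skill at this opportunity.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # hypothetical prior probability of mastery (p-init)
for answer in [True, False, True, True]:
    p = bkt_update(p, answer)
    print(f"correct={answer}: P(mastered) = {p:.3f}")
```

Each observed answer first updates the posterior probability of mastery via Bayes' rule and then applies the learning transition, which is the behavior an explainable of BKT would need to convey.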
Citations: 7
A Just Approach Balancing Rawlsian Leximax Fairness and Utilitarianism
Pub Date : 2020-02-07 DOI: 10.1145/3375627.3375844
V. Chen, J. Hooker
Numerous AI-assisted resource allocation decisions need to balance the conflicting goals of fairness and efficiency. Our paper studies the challenging task of defining and modeling a proper fairness-efficiency trade-off. We define fairness with Rawlsian leximax fairness, which views the lexicographic maximum among all feasible outcomes as the most equitable; and define efficiency with Utilitarianism, which seeks to maximize the sum of utilities received by entities regardless of individual differences. Motivated by a justice-driven trade-off principle -- prioritize fairness to benefit the less advantaged unless too much efficiency is sacrificed -- we propose a sequential optimization procedure to balance leximax fairness and utilitarianism in decision-making. Each iteration of our approach maximizes a social welfare function, and we provide a practical mixed integer/linear programming (MILP) formulation for each maximization problem. We illustrate our method on a budget allocation example. Compared with existing approaches to balancing equity and efficiency, our method is more interpretable in terms of parameter selection, and incorporates a strong equity criterion with a thoroughly balanced perspective.
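To make the contrast concrete, here is a small illustrative sketch, not the paper's MILP formulation: a toy budget allocation with three groups and hypothetical marginal utilities, comparing the leximax-optimal allocation, which raises the worst-off group first, with the utilitarian-optimal one.

```python
from itertools import product

def leximax_key(utilities):
    # Leximax compares utility vectors sorted in ascending order: raise the
    # worst-off outcome first, then the second worst, and so on.
    return tuple(sorted(utilities))

# Hypothetical setup: three groups, a budget of 6 units, and a different marginal
# utility per unit for each group (the last group benefits most per unit).
marginal = [1.0, 2.0, 4.0]
budget = 6

best_leximax = best_utilitarian = None
for alloc in product(range(budget + 1), repeat=3):
    if sum(alloc) != budget:
        continue  # only feasible allocations that spend the whole budget
    utils = [a * m for a, m in zip(alloc, marginal)]
    if best_leximax is None or leximax_key(utils) > leximax_key(best_leximax[1]):
        best_leximax = (alloc, utils)
    if best_utilitarian is None or sum(utils) > sum(best_utilitarian[1]):
        best_utilitarian = (alloc, utils)

print("leximax-optimal:    ", best_leximax)      # (3, 2, 1) -> utilities (3, 4, 4)
print("utilitarian-optimal:", best_utilitarian)  # (0, 0, 6) -> utilities (0, 0, 24)
```

In this toy setting the leximax solution spreads the budget to protect the least efficient group, while the utilitarian optimum concentrates everything on the most efficient one, which is exactly the tension the paper's sequential procedure is designed to manage.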
Citations: 18
Technocultural Pluralism: A "Clash of Civilizations" in Technology?
Pub Date : 2020-02-07 DOI: 10.1145/3375627.3375834
Osonde A. Osoba
At the end of the Cold War, the renowned political scientist, Samuel Huntington, argued that future conflicts were more likely to stem from cultural frictions -- ideologies, social norms, and political systems -- rather than political or economic frictions. Huntington focused his concern on the future of geopolitics in a rapidly shrinking world. This paper argues that a similar dynamic is at play in the interaction of technology cultures. We emphasize the role of culture in the evolution of technology and identify the particular role that culture (esp. privacy culture) plays in the development of AI/ML technologies. Then we examine some implications that this perspective brings to the fore.
Citations: 1
Auditing Algorithms: On Lessons Learned and the Risks of Data Minimization
Pub Date : 2020-02-07 DOI: 10.1145/3375627.3375852
G. G. Clavell, M. M. Zamorano, C. Castillo, Oliver Smith, A. Matic
In this paper, we present the Algorithmic Audit (AA) of REM!X, a personalized well-being recommendation app developed by Telefónica Innovación Alpha. The main goal of the AA was to identify and mitigate algorithmic biases in the recommendation system that could lead to discrimination against protected groups. The audit was conducted through a qualitative methodology that included five focus groups with developers and a digital ethnography relying on users' comments reported in the Google Play Store. To minimize the collection of personal information, as required by best practice and the GDPR [1], the REM!X app did not collect gender, age, race, religion, or other protected attributes from its users. This limited the algorithmic assessment and the ability to control for different algorithmic biases. Indirect evidence was thus used as a partial mitigation for the lack of data on protected attributes, and allowed the AA to identify four domains where bias and discrimination were still possible, even without direct personal identifiers. Our analysis provides important insights into how general data ethics principles such as data minimization, fairness, non-discrimination and transparency can be operationalized via algorithmic auditing, their potential and limitations, and how the collaboration between developers and algorithmic auditors can lead to better technologies.
Citations: 21
When Your Only Tool Is A Hammer: Ethical Limitations of Algorithmic Fairness Solutions in Healthcare Machine Learning
Pub Date : 2020-02-07 DOI: 10.1145/3375627.3375824
M. Mccradden, M. Mazwi, Shalmali Joshi, James A. Anderson
It is no longer a hypothetical worry that artificial intelligence - more specifically, machine learning (ML) - can propagate the effects of pernicious bias in healthcare. To address these problems, some have proposed the development of 'algorithmic fairness' solutions. The primary goal of these solutions is to constrain the effect of pernicious bias with respect to a given outcome of interest as a function of one's protected identity (i.e., characteristics generally protected by civil or human rights legislation). The technical limitations of these solutions have been well-characterized. Ethically, the problematic implication - for developers and, potentially, end users - is that by virtue of algorithmic fairness solutions a model can be rendered 'objective' (i.e., free from the influence of pernicious bias). The ostensible neutrality of these solutions may unintentionally prompt new consequences for vulnerable groups by obscuring downstream problems due to the persistence of real-world bias. The main epistemic limitation of algorithmic fairness is that it assumes the relationship between the extent of bias's impact on a given health outcome and one's protected identity is mathematically quantifiable. The reality is that social and structural factors converge in complex and unknown ways to produce health inequalities. Some of these are biologic in nature, and differences like these are directly relevant to predicting a health event and should be incorporated into the model's design. Others are reflective of prejudice, lack of access to healthcare, or implicit bias. Sometimes, there may be a combination. With respect to any specific task, it is difficult to untangle the complex relationships between potentially influential factors, to determine which are 'fair' and which are not, and thus to inform their inclusion or mitigation in the model's design.
Citations: 8
Ethics for AI Writing: The Importance of Rhetorical Context
Pub Date : 2020-02-07 DOI: 10.1145/3375627.3375811
H. McKee, J. E. Porter
Implicit in any rhetorical interaction - between humans or between humans and machines - are ethical codes that shape the rhetorical context, the social situation in which communication happens and also the engine that drives communicative interaction. Such implicit codes are usually invisible to AI writing systems because the social factors shaping communication (the why and how of language, not the what) are not usually explicitly evident in databases the systems use to produce discourse. Can AI writing systems learn to learn rhetorical context, particularly the implicit codes for communication ethics? We see evidence that some systems do address issues of rhetorical context, at least in rudimentary ways. But we critique the information transfer communication model supporting many AI writing systems, arguing for a social context model that accounts for rhetorical context - what is, in a sense, "not there" in the data corpus but that is critical for the production of meaningful, significant, and ethical communication. We offer two ethical principles to guide design of AI writing systems: transparency about machine presence and critical data awareness, a methodological reflexivity about rhetorical context and omissions in the data that need to be provided by a human agent or accounted for in machine learning.
Citations: 5
Measuring Fairness in an Unfair World
Pub Date : 2020-02-07 DOI: 10.1145/3375627.3375854
J. Herington
Computer scientists have made great strides in characterizing different measures of algorithmic fairness, and showing that certain measures of fairness cannot be jointly satisfied. In this paper, I argue that the three most popular families of measures - unconditional independence, target-conditional independence and classification-conditional independence - make assumptions that are unsustainable in the context of an unjust world. I begin by introducing the measures and the implicit idealizations they make about the underlying causal structure of the contexts in which they are deployed. I then discuss how these idealizations fall apart in the context of historical injustice, ongoing unmodeled oppression, and the permissibility of using sensitive attributes to rectify injustice. In the final section, I suggest an alternative framework for measuring fairness in the context of existing injustice: distributive fairness.
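Assuming the standard correspondence of these three families with demographic parity, equalized odds, and predictive parity respectively, the sketch below computes the associated group gaps on synthetic data; the data-generating probabilities are hypothetical.

```python
import numpy as np

# Synthetic data only; the base rates and classifier accuracy below are hypothetical.
rng = np.random.default_rng(0)
n = 10_000
a = rng.integers(0, 2, n)                           # protected attribute (two groups)
y = rng.binomial(1, np.where(a == 1, 0.6, 0.4))     # true outcome, base rate differs by group
yhat = rng.binomial(1, np.where(y == 1, 0.8, 0.2))  # an imperfect classifier's prediction

def rate(event, cohort):
    # P(event | cohort), estimated as a conditional frequency.
    return event[cohort].mean()

# Unconditional independence (demographic parity): P(Yhat=1 | A) equal across groups.
dp_gap = abs(rate(yhat == 1, a == 0) - rate(yhat == 1, a == 1))

# Target-conditional independence (separation / equalized odds): equal TPR and FPR across groups.
tpr_gap = abs(rate(yhat == 1, (a == 0) & (y == 1)) - rate(yhat == 1, (a == 1) & (y == 1)))
fpr_gap = abs(rate(yhat == 1, (a == 0) & (y == 0)) - rate(yhat == 1, (a == 1) & (y == 0)))

# Classification-conditional independence (sufficiency / predictive parity):
# equal P(Y=1 | Yhat=1) across groups.
ppv_gap = abs(rate(y == 1, (a == 0) & (yhat == 1)) - rate(y == 1, (a == 1) & (yhat == 1)))

print(f"demographic parity gap:        {dp_gap:.3f}")
print(f"equalized odds gaps (TPR/FPR): {tpr_gap:.3f} / {fpr_gap:.3f}")
print(f"predictive parity gap:         {ppv_gap:.3f}")
```

The paper's point is precisely that such gap statistics take the observed data's causal structure at face value, which is the idealization that breaks down under historical and ongoing injustice.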
Citations: 11
Adoption Dynamics and Societal Impact of AI Systems in Complex Networks
Pub Date : 2020-02-07 DOI: 10.1145/3375627.3375847
Pedro M. Fernandes, F. C. Santos, Manuel Lopes
We propose a game-theoretical model to simulate the dynamics of AI adoption in adaptive networks. This formalism allows us to understand the impact of the adoption of AI systems on society as a whole, addressing some of the concerns about the need for regulation. Using this model we study the adoption of AI systems, the distribution of the different types of AI (from selfish to utilitarian), the appearance of clusters of specific AI types, and the impact on the fitness of each individual. We suggest that the entangled evolution of individual strategy and network structure constitutes a key mechanism for the sustainability of utilitarian and human-conscious AI. In contrast, in the absence of rewiring, a minority of the population can easily foster the adoption of selfish AI and gain a benefit at the expense of the remaining majority.
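The sketch below is a heavily simplified toy, not the authors' model: agents on a small-world network running either "selfish" or "utilitarian" AI imitate better-performing neighbours or rewire away from selfish partners, with payoff values chosen only for illustration.

```python
import random
import networkx as nx

# A heavily simplified toy, not the authors' model: agents imitate better-performing
# neighbours or rewire away from selfish partners. All payoff values are hypothetical.
random.seed(1)
G = nx.watts_strogatz_graph(n=200, k=6, p=0.1)
strategy = {v: random.choice(["selfish", "utilitarian"]) for v in G}

def payoff(v):
    # Hypothetical pairwise payoffs: selfish AI exploits utilitarian partners,
    # utilitarian AI does best when surrounded by other utilitarian AI.
    total = 0.0
    for u in G.neighbors(v):
        if strategy[v] == "selfish":
            total += 1.5 if strategy[u] == "utilitarian" else 0.5
        else:
            total += 1.0 if strategy[u] == "utilitarian" else 0.2
    return total

REWIRE = 0.3  # probability of adjusting a link instead of a strategy
for _ in range(5000):
    v = random.choice(list(G.nodes))
    neighbours = list(G.neighbors(v))
    if not neighbours:
        continue
    u = random.choice(neighbours)
    if random.random() < REWIRE and strategy[u] == "selfish":
        # Structural update: drop the link to a selfish partner, connect elsewhere.
        w = random.choice([x for x in G.nodes if x != v and not G.has_edge(v, x)])
        G.remove_edge(v, u)
        G.add_edge(v, w)
    elif payoff(u) > payoff(v):
        # Strategy update: imitate the more successful neighbour.
        strategy[v] = strategy[u]

frac = sum(s == "utilitarian" for s in strategy.values()) / len(strategy)
print(f"fraction running utilitarian AI after co-evolution: {frac:.2f}")
```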
Citations: 0
Computerize the Race Problem?: Why We Must Plan for a Just AI Future
Pub Date : 2020-02-07 DOI: 10.1145/3375627.3377140
Charlton D. McIlwain
1960s civil rights and racial justice activists tried to warn us about our technological ways, but we didn't hear them talk. The so-called wizards who stayed up late ignored or dismissed black voices, calling out from street corners to pulpits, union halls to the corridors of Congress. Instead, the men who took the first giant leaps towards conceiving and building our earliest "thinking" and "learning" machines aligned themselves with industry, government and their elite science and engineering institutions. Together, they conspired to make those fighting for racial justice the problem that their new computing machines would be designed to solve. And solve that problem they did, through color-coded, automated, and algorithmically-driven indignities and inhumanities that thrive to this day. But what if yesterday's technological elite had listened to those Other voices? What if they had let them into their conversations, their classrooms, their labs, boardrooms and government task forces to help determine what new tools to build, how to build them and - most importantly - how to deploy them? What might our world look like today if the advocates for racial justice had been given the chance to frame the day's most preeminent technological question for the world and ask, "Computerize the Race Problem?" Better yet, what might our AI-driven future look like if we ask ourselves this question today?
Citations: 2
Towards Just, Fair and Interpretable Methods for Judicial Subset Selection
Pub Date : 2020-02-07 DOI: 10.1145/3375627.3375848
Lingxiao Huang, Julia Wei, Elisa Celis
In many judicial systems -- including the United States courts of appeals, the European Court of Justice, the UK Supreme Court and the Supreme Court of Canada -- a subset of judges is selected from the entire judicial body for each case in order to hear the arguments and decide the judgment. Ideally, the subset selected is representative, i.e., the decision of the subset would match what the decision of the entire judicial body would have been had they all weighed in on the case. Further, the process should be fair in that all judges should have similar workloads, and the selection process should not allow for certain judge's opinions to be silenced or amplified via case assignments. Lastly, in order to be practical and trustworthy, the process should also be interpretable, easy to use, and (if algorithmic) computationally efficient. In this paper, we propose an algorithmic method for the judicial subset selection problem that satisfies all of the above criteria. The method satisfies fairness by design, and we prove that it has optimal representativeness asymptotically for a large range of parameters and under noisy information models about judge opinions -- something no existing methods can provably achieve. We then assess the benefits of our approach empirically by counterfactually comparing against the current practice and recent alternative algorithmic approaches using cases from the United States courts of appeals database.
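As a rough illustration of the workload-balance requirement only, not the paper's representativeness-optimal algorithm, the sketch below assigns hypothetical three-judge panels by always seating the least-loaded judges; the pool size and panel size are illustrative assumptions.

```python
import random
from collections import Counter

# An illustrative sketch of workload balance only, not the paper's algorithm;
# the pool of 9 judges and the 3-judge panel size are hypothetical.
random.seed(0)
judges = [f"J{i}" for i in range(9)]
num_cases, panel_size = 12, 3
workload = Counter({j: 0 for j in judges})

panels = []
for _ in range(num_cases):
    # Rank judges by current workload, breaking ties at random, and seat the least loaded.
    ranked = sorted(judges, key=lambda j: (workload[j], random.random()))
    panel = ranked[:panel_size]
    panels.append(panel)
    for j in panel:
        workload[j] += 1

print(workload)     # 12 cases * 3 seats / 9 judges = 4 cases per judge, exactly balanced
print(panels[:3])
```

A scheme like this keeps workloads even and avoids silencing or amplifying any judge through assignments, but it says nothing about representativeness, which is the property the paper's method additionally optimizes and proves guarantees for.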
Citations: 5