
Latest Publications: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society

A Geometric Solution to Fair Representations
Pub Date: 2020-02-07 DOI: 10.1145/3375627.3375864
Yuzi He, K. Burghardt, Kristina Lerman
To reduce human error and prejudice, many high-stakes decisions have been turned over to machine algorithms. However, recent research suggests that this does not remove discrimination, and can perpetuate harmful stereotypes. While algorithms have been developed to improve fairness, they typically face at least one of three shortcomings: they are not interpretable, their prediction quality deteriorates quickly compared to unbiased equivalents, and they are not easily transferable across models (e.g., methods to reduce bias in random forests cannot be extended to neural networks). To address these shortcomings, we propose a geometric method that removes correlations between data and any number of protected variables. Further, we can control the strength of debiasing through an adjustable parameter to address the trade-off between prediction quality and fairness. The resulting features are interpretable and can be used with many popular models, such as linear regression, random forests, and multilayer perceptrons. The resulting predictions are found to be more accurate and fair than several state-of-the-art fair AI algorithms across a variety of benchmark datasets. Our work shows that debiasing data is a simple and effective solution toward improving fairness.
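The abstract describes the method only at a high level. As a minimal sketch of this kind of geometric decorrelation (not the authors' published implementation), assume a feature matrix `X`, protected attributes `Z`, and a tunable `strength` parameter, all hypothetical names:

```python
import numpy as np

def debias_features(X, Z, strength=1.0):
    """Remove linear correlations between features X and protected variables Z.

    X:        (n_samples, n_features) feature matrix.
    Z:        (n_samples, n_protected) protected attributes (any number).
    strength: 0.0 leaves X unchanged; 1.0 removes all linear correlation,
              exposing the fairness/accuracy trade-off as a single knob.
    """
    Xc = X - X.mean(axis=0)  # center so the projection captures correlation
    Zc = Z - Z.mean(axis=0)
    # Least-squares coefficients of X on Z: the component of X explained by Z.
    B, *_ = np.linalg.lstsq(Zc, Xc, rcond=None)
    explained = Zc @ B
    # Subtract a fraction of that component; the result stays in feature space,
    # so it remains interpretable and usable by any downstream model.
    return Xc - strength * explained
```

At full strength every debiased column is linearly uncorrelated with every column of `Z`; intermediate values trade residual correlation for prediction quality, mirroring the adjustable parameter the abstract describes.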
{"title":"A Geometric Solution to Fair Representations","authors":"Yuzi He, K. Burghardt, Kristina Lerman","doi":"10.1145/3375627.3375864","DOIUrl":"https://doi.org/10.1145/3375627.3375864","url":null,"abstract":"To reduce human error and prejudice, many high-stakes decisions have been turned over to machine algorithms. However, recent research suggests that this does not remove discrimination, and can perpetuate harmful stereotypes. While algorithms have been developed to improve fairness, they typically face at least one of three shortcomings: they are not interpretable, their prediction quality deteriorates quickly compared to unbiased equivalents, and %the methodology cannot easily extend other algorithms they are not easily transferable across models% (e.g., methods to reduce bias in random forests cannot be extended to neural networks) . To address these shortcomings, we propose a geometric method that removes correlations between data and any number of protected variables. Further, we can control the strength of debiasing through an adjustable parameter to address the trade-off between prediction quality and fairness. The resulting features are interpretable and can be used with many popular models, such as linear regression, random forest, and multilayer perceptrons. The resulting predictions are found to be more accurate and fair compared to several state-of-the-art fair AI algorithms across a variety of benchmark datasets. Our work shows that debiasing data is a simple and effective solution toward improving fairness.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"49 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77737279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
How to Put the Data Subject's Sovereignty into Practice. Ethical Considerations and Governance Perspectives
Pub Date: 2020-02-07 DOI: 10.1145/3375627.3377142
P. Dabrock
Ethical considerations and governance approaches to AI are at a crossroads. Either one tries to convey the impression that one can bring back a status quo ante of our given "onlife" era [1,2], or one accepts getting responsibly involved in a digital world in which informational self-determination can no longer be safeguarded and fostered through the old-fashioned data protection principles of informed consent, purpose limitation, and data economy [3,4,6]. The main focus of the talk is on how, under the given conditions of AI and machine learning, data sovereignty (interpreted as controllability [not control (!)] of the data subject over the use of her data throughout the entire data processing cycle [5]) can be strengthened without hindering the innovation dynamics of the digital economy and the social cohesion of fully digitized societies. To put this approach into practice, the talk combines a presentation of the concept of data sovereignty put forward by the German Ethics Council [3] with recent research trends in effectively applying the AI ethics principles of explainability and enforceability [4-9].
{"title":"How to Put the Data Subject's Sovereignty into Practice. Ethical Considerations and Governance Perspectives","authors":"P. Dabrock","doi":"10.1145/3375627.3377142","DOIUrl":"https://doi.org/10.1145/3375627.3377142","url":null,"abstract":"Ethical considerations and governance approaches of AI are at a crossroads. Either one tries to convey the impression that one can bring back a status quo ante of our given \"onlife\"-era [1,2], or one accepts to get responsibly involved in a digital world in which informational self-determination can no longer be safeguarded and fostered through the old fashioned data protection principles of informed consent, purpose limitation and data economy [3,4,6]. The main focus of the talk is on how under the given conditions of AI and machine learning, data sovereignty (interpreted as controllability [not control (!)] of the data subject over the use of her data throughout the entire data processing cycle [5]) can be strengthened without hindering innovation dynamics of digital economy and social cohesion of fully digitized societies. In order to put this approach into practice the talk combines a presentation of the concept of data sovereignty put forward by the German Ethics Council [3] with recent research trends in effectively applying the AI ethics principles of explainability and enforceability [4-9].","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90805130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
"The Global South is everywhere, but also always somewhere": National Policy Narratives and AI Justice “全球南方无处不在,但也总是在某个地方”:国家政策叙事和人工智能正义
Pub Date : 2020-02-07 DOI: 10.1145/3375627.3375859
Amba Kak
There is more attention than ever on the social implications of AI. In contrast to universalized paradigms of ethics and fairness, a growing body of critical work highlights bias and discrimination in AI within the frame of social justice and human rights ("AI justice"). However, the geographical location of much of this critique in the West could be engendering its own blind spots. The global supply chain of AI (data, computational power, natural resources, labor) today replicates historical colonial inequities and the continued subordination of Global South countries. This paper draws attention to official narratives from the Indian government and the United Nations Conference on Trade and Development (UNCTAD) advocating for the role (and place) of these regions in the AI economy. Domestically, these policies are being contested for their top-down formulation, and they reflect narrow industry interests. This underscores the need to approach the political economy of AI from varying altitudes - global, national, and from the perspective of communities whose lives and livelihoods are most directly impacted in this economy. Without a deliberate effort at centering this conversation, it is inevitable that mainstream discourse on AI justice will grow parallel to (and potentially undercut) demands emanating from Global South governments and communities.
{"title":"\"The Global South is everywhere, but also always somewhere\": National Policy Narratives and AI Justice","authors":"Amba Kak","doi":"10.1145/3375627.3375859","DOIUrl":"https://doi.org/10.1145/3375627.3375859","url":null,"abstract":"There is more attention than ever on the social implications of AI. In contrast to universalized paradigms of ethics and fairness, a growing body of critical work highlights bias and discrimination in AI within the frame of social justice and human rights (\"AI justice\"). However, the geographical location of much of this critique in the West could be engendering its own blind spots. The global supply chain of AI (data, computational power, natural resources, labor) today replicates historical colonial inequities, and the continued subordination of Global South countries. This paper draws attention to official narratives from the Indian government and the United Nations Conference on Trade and Development (UNCTAD) advocating for the role (and place) of these regions in the AI economy. Domestically, these policies are being contested for their top-down formulation, and reflect narrow industry interests. This underscores the need to approach the political economy of AI from varying altitudes - global, national, and from the perspective of communities whose lives and livelihoods are most directly impacted in this economy. Without a deliberate effort at centering this conversation it is inevitable that mainstream discourse on AI justice will grow parallel to (and potentially undercut) demands emanating from Global South governments and communities","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77631689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
The AI-development Connection - A View from the South
Pub Date: 2020-02-07 DOI: 10.1145/3375627.3377139
Anita Gurumurthy
The socialisation of Artificial Intelligence and the reality of an intelligence economy mark an epochal moment. The impacts of AI are now systemic - restructuring economic organisation and value chains, public sphere architectures and sociality. These shifts carry deep geo-political implications, reinforcing historical exclusions and power relations and disrupting the norms and rules that hold ideas of equality and justice together. At the centre of this rapid change is the intelligent corporation and its obsessive pursuit of data. Directly impinging on bodies and places, the de facto rules forged by the intelligent corporation are disenfranchising the already marginal subjects of development. Using trade deals to liberalise data flows, tighten trade secret rules and enclose AI-based innovation, Big Tech and their political masters have effectively taken away the economic and political autonomy of states in the global south. Big Tech's impunity extends to a brazen exploitation - enslaving labour through data over-reach and violating female bodies to universalise data markets. Thinking through the governance of AI needs new frameworks that can grapple with the fraught questions of data sovereignty, economic democracy, and institutional ethics in a global world with local aspirations. Any effort towards norm development in this domain will need to see the geo-economics of digital intelligence and the geo-politics of development ideologies as two sides of the same coin.
{"title":"The AI-development Connection - A View from the South","authors":"Anita Gurumurthy","doi":"10.1145/3375627.3377139","DOIUrl":"https://doi.org/10.1145/3375627.3377139","url":null,"abstract":"The socialisation of Artificial Intelligence and the reality of an intelligence economy mark an epochal moment. The impacts of AI are now systemic - restructuring economic organisation and value chains, public sphere architectures and sociality. These shifts carry deep geo-political implications, reinforcing historical exclusions and power relations and disrupting the norms and rules that hold ideas of equality and justice together. At the centre of this rapid change is the intelligent corporation and its obsessive pursuit of data. Directly impinging on bodies and places, the de facto rules forged by the intelligent corporation are disenfranchising the already marginal subjects of development. Using trade deals to liberalise data flows, tighten trade secret rules and enclose AI-based innovation, Big Tech and their political masters have effectively taken away the economic and political autonomy of states in the global south. Big Tech's impunity extends to a brazen exploitation - enslaving labour through data over-reach and violating female bodies to universalise data markets. Thinking through the governance of AI needs new frameworks that can grapple with the fraught questions of data sovereignty, economic democracy, and institutional ethics in a global world with local aspirations. Any effort towards norm development in this domain will need to see the geo-economics of digital intelligence and the geo-politics of development ideologies as two sides of the same coin.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87618795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Empirical Approach to Capture Moral Uncertainty in AI
Pub Date: 2020-02-07 DOI: 10.1145/3375627.3375805
Andreia Martinho, M. Kroesen, C. Chorus
As AI Systems become increasingly autonomous, they are expected to engage in complex moral decision-making processes. For the purpose of guiding such processes, theoretical and empirical solutions have been sought. In this research we integrate both theoretical and empirical lines of thought to address the matters of moral reasoning in AI Systems. We reconceptualize a metanormative framework for decision-making under moral uncertainty within the Discrete Choice Analysis domain, and we operationalize it through a latent class choice model. The discrete choice analysis-based formulation of the metanormative framework is theory-rooted and practical, as it captures moral uncertainty through a small set of latent classes. To illustrate our approach, we conceptualize a society in which AI Systems are in charge of making policy choices. In the proof of concept, two AI systems make policy choices on behalf of a society, but while one of the systems uses a baseline morally certain model, the other uses a morally uncertain model. It was observed that there are cases in which the AI Systems disagree about the policy to be chosen, which we believe is an indication of the relevance of moral uncertainty.
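The abstract does not include the model specification. As a rough sketch of how a latent class choice model can encode moral uncertainty, choice probabilities can be computed as a mixture of class-specific logits; all names below are hypothetical, and the authors' actual estimation procedure is not reproduced here:

```python
import numpy as np

def latent_class_choice_probs(X, class_weights, class_shares):
    """Choice probabilities for a simple latent class logit.

    X:             (n_alternatives, n_attributes) attributes of each policy.
    class_weights: (n_classes, n_attributes) taste parameters, one moral
                   outlook per latent class.
    class_shares:  (n_classes,) prior probability of each class (sums to 1).
    """
    utilities = class_weights @ X.T                       # (n_classes, n_alternatives)
    exp_u = np.exp(utilities - utilities.max(axis=1, keepdims=True))
    per_class = exp_u / exp_u.sum(axis=1, keepdims=True)  # softmax within each class
    # Moral uncertainty shows up as disagreement between the per-class rows;
    # the marginal choice mixes them by the class shares.
    return class_shares @ per_class

# Two policies, two attributes; two latent moral outlooks weighting them oppositely.
X = np.array([[1.0, 0.0], [0.0, 1.0]])
weights = np.array([[2.0, -1.0], [-1.0, 2.0]])
print(latent_class_choice_probs(X, weights, np.array([0.5, 0.5])))
# symmetric shares -> roughly even split, reflecting unresolved moral uncertainty
```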
{"title":"An Empirical Approach to Capture Moral Uncertainty in AI","authors":"Andreia Martinho, M. Kroesen, C. Chorus","doi":"10.1145/3375627.3375805","DOIUrl":"https://doi.org/10.1145/3375627.3375805","url":null,"abstract":"As AI Systems become increasingly autonomous they are expected to engage in complex moral decision-making processes. For the purpose of guidance of such processes theoretical and empirical solutions have been sought. In this research we integrate both theoretical and empirical lines of thought to address the matters of moral reasoning in AI Systems. We reconceptualize a metanormative framework for decision-making under moral uncertainty within the Discrete Choice Analysis domain and we operationalize it through a latent class choice model. The discrete choice analysis-based formulation of the metanormative framework is theory-rooted and practical as it captures moral uncertainty through a small set of latent classes. To illustrate our approach we conceptualize a society in which AI Systems are in charge of making policy choices. In the proof of concept two AI systems make policy choices on behalf of a society but while one of the systems uses a baseline moral certain model the other uses a moral uncertain model. It was observed that there are cases in which the AI Systems disagree about the policy to be chosen which we believe is an indication about the relevance of moral uncertainty.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"61 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74485270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Machines Judging Humans: The Promise and Perils of Formalizing Evaluative Criteria
Pub Date: 2020-02-07 DOI: 10.1145/3375627.3375839
Frank A. Pasquale
Over the past decade, algorithmic accountability has become an important concern for social scientists, computer scientists, journalists, and lawyers [1]. Exposés have sparked vibrant debates about algorithmic sentencing. Researchers have exposed tech giants showing women ads for lower-paying jobs, discriminating against the aged, deploying deceptive dark patterns to trick consumers into buying things, and manipulating users toward rabbit holes of extremist content. Public-spirited regulators have begun to address algorithmic transparency and online fairness, building on the work of legal scholars who have called for technological due process, platform neutrality, and nondiscrimination principles. This policy work is just beginning, as experts translate academic research and activist demands into statutes and regulations. Lawmakers are proposing bills requiring basic standards of algorithmic transparency and auditing. We are starting down a long road toward ensuring that AI-based hiring practices and financial underwriting are not used if they have a disparate impact on historically marginalized communities. And just as this "first wave" of algorithmic accountability research and activism has targeted existing systems, an emerging "second wave" of algorithmic accountability has begun to address more structural concerns. Both waves will be essential to ensure a fairer, and more genuinely emancipatory, political economy of technology. Second-wave work is particularly important when it comes to illuminating the promise and perils of formalizing evaluative criteria.
{"title":"Machines Judging Humans: The Promise and Perils of Formalizing Evaluative Criteria","authors":"Frank A. Pasquale","doi":"10.1145/3375627.3375839","DOIUrl":"https://doi.org/10.1145/3375627.3375839","url":null,"abstract":"Over the past decade, algorithmic accountability has become an important concern for social scientists, computer scientists, journalists, and lawyers [1]. Exposés have sparked vibrant debates about algorithmic sentencing. Researchers have exposed tech giants showing women ads for lower-paying jobs, discriminating against the aged, deploying deceptive dark patterns to trick consumers into buying things, and manipulating users toward rabbit holes of extremist content. Public-spirited regulators have begun to address algorithmic transparency and online fairness, building on the work of legal scholars who have called for technological due process, platform neutrality, and nondiscrimination principles. This policy work is just beginning, as experts translate academic research and activist demands into statutes and regulations. Lawmakers are proposing bills requiring basic standards of algorithmic transparency and auditing. We are starting down on a long road toward ensuring that AI-based hiring practices and financial underwriting are not used if they have a disparate impact on historically marginalized communities. And just as this \"first wave\" of algorithmic accountability research and activism has targeted existing systems, an emerging \"second wave\" of algorithmic accountability has begun to address more structural concerns. Both waves will be essential to ensure a fairer, and more genuinely emancipatory, political economy of technology. Second wave work is particularly important when it comes to illuminating the promise & perils of formalizing evaluative criteria.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"34 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82858879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Normative Principles for Evaluating Fairness in Machine Learning
Pub Date: 2020-02-07 DOI: 10.1145/3375627.3375808
D. Leben
There are many incompatible ways to measure fair outcomes for machine learning algorithms. The goal of this paper is to characterize rates of success and error across protected groups (race, gender, sexual orientation) as a distribution problem, and describe the possible solutions to this problem according to different normative principles from moral and political philosophy. These normative principles are based on various competing attributes within a distribution problem: intentions, compensation, desert, consent, and consequences. Each principle will be applied to a sample risk-assessment classifier to demonstrate the philosophical arguments underlying different sets of fairness metrics.
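As a small illustration of the incompatibility the abstract opens with, here is a hedged sketch (hypothetical function names; binary predictions and two groups assumed) of two of the competing measures a risk-assessment classifier might be scored on:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Gap in positive-prediction rates between groups 0 and 1."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true- and false-positive rates between groups 0 and 1."""
    gaps = []
    for label in (0, 1):  # label 1 -> TPR comparison, label 0 -> FPR comparison
        mask = y_true == label
        r0 = y_pred[mask & (group == 0)].mean()
        r1 = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)
```

When base rates differ across groups, an imperfect classifier generally cannot drive both gaps to zero at once, which is the kind of conflict the paper's normative principles are meant to adjudicate.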
{"title":"Normative Principles for Evaluating Fairness in Machine Learning","authors":"D. Leben","doi":"10.1145/3375627.3375808","DOIUrl":"https://doi.org/10.1145/3375627.3375808","url":null,"abstract":"There are many incompatible ways to measure fair outcomes for machine learning algorithms. The goal of this paper is to characterize rates of success and error across protected groups (race, gender, sexual orientation) as a distribution problem, and describe the possible solutions to this problem according to different normative principles from moral and political philosophy. These normative principles are based on various competing attributes within a distribution problem: intentions, compensation, desert, consent, and consequences. Each principle will be applied to a sample risk-assessment classifier to demonstrate the philosophical arguments underlying different sets of fairness metrics.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86111238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 33
Algorithmized but not Atomized? How Digital Platforms Engender New Forms of Worker Solidarity in Jakarta
Pub Date: 2020-02-07 DOI: 10.1145/3375627.3375816
Rida Qadri
Jakarta's roads are green, filled as they are with the fluorescent green jackets, bright green logos, and fluttering green banners of basecamps created by the city's digitized, 'online' motorbike-taxi drivers (ojol). These spaces function as waiting posts, regulatory institutions, information networks, and spaces of solidarity for the ojol working for the mobility-app companies Grab and GoJek. Their existence, though, presents a puzzle. In the world of on-demand matching, the literature either predicts an isolated, atomized, disempowered digital worker or expects workers to have only temporary, online, ephemeral networks of mutual aid. Yet Jakarta's ojol introduce us to a new form of labor action that relies on an interface of the physical world and the digital realm, complete with permanent shelters, quirky names, emblems, social media accounts, and even their own emergency response service. This paper explores the contours of these labor formations and asks why digital workers in Jakarta are able to create collective structures of solidarity, even as app-mediated work may force them toward an individualized labor regime. I argue that these digital labor collectives are not accidental but a product of interactions between the histories of social organization structures in Jakarta and the affordances created by technological mediation. Through participant observation and semi-structured interviews, I excavate the bi-directional conversation between globalizing digital platforms and the social norms, civic culture, and labor market conditions of Jakarta that has allowed particular forms of digital worker resistance to emerge. I recover power for the digital worker, who provides us with a path to resisting the algorithmization of work while still participating in it, through agentic labor actions rooted in shared identities, enabled by technological fluency, and born of a desire for community.
{"title":"Algorithmized but not Atomized? How Digital Platforms Engender New Forms of Worker Solidarity in Jakarta","authors":"Rida Qadri","doi":"10.1145/3375627.3375816","DOIUrl":"https://doi.org/10.1145/3375627.3375816","url":null,"abstract":"Jakarta's roads are green, filled as they are with the fluorescent green jackets, bright green logos and fluttering green banners of basecamps created by the city's digitized, 'online' motorbike-taxi drivers (ojol). These spaces function as waiting posts, regulatory institutions, information networks and spaces of solidarity for the ojol working for mobility-app companies, Grab and GoJek. Their existence though, presents a puzzle. In the world of on-demand matching, literature either predicts an isolated, atomized, disempowered digital worker or expects workers to have only temporary, online, ephemeral networks of mutual aid. Yet, Jakarta's ojol then introduce us to a new form of labor action that relies on an interface of the physical world and digital realm, complete with permanent shelters, quirky names, emblems, social media accounts and even their own emergency response service. This paper explores the contours of these labor formations and asks why digital workers in Jakarta are able to create collective structures of solidarity, even as app-mediated work may force them towards an individualized labor regime? I argue that these digital labor collectives are not accidental but a product of interactions between histories of social organization structures in Jakarta and affordances created by technological-mediation. Through participant observation and semi-structured interviews I excavate the bi-directional conversation between globalizing digital platforms and social norms, civic culture and labor market conditions in Jakarta which has allowed for particular forms of digital worker resistances to emerge. I recover power for the digital worker, who provides us with a path to resisting algorithmization of work while still participating in it through agentic labor actions rooted in shared identities, enabled by technological fluency and borne out of a desire for community.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81718799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
The Perils of Objectivity: Towards a Normative Framework for Fair Judicial Decision-Making
Pub Date: 2020-02-07 DOI: 10.1145/3375627.3375869
Andi Peng, Malina Simard-Halm
Fair decision-making in criminal justice relies on the recognition and incorporation of infinite shades of grey. In this paper, we detail how algorithmic risk assessment tools counteract fair legal proceedings in social institutions where desired states of the world are contested ethically and practically. We provide a normative framework for assessing fair judicial decision-making, one that does not seek the elimination of human bias from decision-making, as algorithmic fairness efforts currently do, but instead centers on sophisticating the incorporation of individualized or discretionary bias--a process that is requisitely human. Through analysis of a case study on social disadvantage, we use this framework to assess potential features of consideration, such as political disempowerment and demographic exclusion, that current algorithmic efforts cannot reconcile, and we recommend their incorporation in future reform.
{"title":"The Perils of Objectivity: Towards a Normative Framework for Fair Judicial Decision-Making","authors":"Andi Peng, Malina Simard-Halm","doi":"10.1145/3375627.3375869","DOIUrl":"https://doi.org/10.1145/3375627.3375869","url":null,"abstract":"Fair decision-making in criminal justice relies on the recognition and incorporation of infinite shades of grey. In this paper, we detail how algorithmic risk assessment tools are counteractive to fair legal proceedings in social institutions where desired states of the world are contested ethically and practically. We provide a normative framework for assessing fair judicial decision-making, one that does not seek the elimination of human bias from decision-making as algorithmic fairness efforts currently focus on, but instead centers on sophisticating the incorporation of individualized or discretionary bias--a process that is requisitely human. Through analysis of a case study on social disadvantage, we use this framework to provide an assessment of potential features of consideration, such as political disempowerment and demographic exclusion, that are irreconcilable by current algorithmic efforts and recommend their incorporation in future reform.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75045485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Bayesian Sensitivity Analysis for Offline Policy Evaluation
Pub Date: 2020-02-07 DOI: 10.1145/3375627.3375822
Jongbin Jung, Ravi Shroff, A. Feller, Sharad Goel
On a variety of complex decision-making tasks, from doctors prescribing treatment to judges setting bail, machine learning algorithms have been shown to outperform expert human judgments. One complication, however, is that it is often difficult to anticipate the effects of algorithmic policies prior to deployment, as one generally cannot use historical data to directly observe what would have happened had the actions recommended by the algorithm been taken. A common strategy is to model potential outcomes for alternative decisions assuming that there are no unmeasured confounders (i.e., to assume ignorability). But if this ignorability assumption is violated, the predicted and actual effects of an algorithmic policy can diverge sharply. In this paper we present a flexible Bayesian approach to gauge the sensitivity of predicted policy outcomes to unmeasured confounders. In particular, and in contrast to past work, our modeling framework easily enables confounders to vary with the observed covariates. We demonstrate the efficacy of our method on a large dataset of judicial actions, in which one must decide whether defendants awaiting trial should be required to pay bail or can be released without payment.
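The abstract leaves the model unspecified here. A stripped-down sketch of the general idea, in the paper's bail setting, is a Monte Carlo pass with priors over an unmeasured confounder's prevalence and effect; names are hypothetical, and none of the paper's covariate-dependent structure is reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def release_rate_sensitivity(p_success_released, n_draws=10_000):
    """Draws of the success rate had all defendants been released.

    p_success_released: observed success rate among defendants who were released.
    Under ignorability this rate transfers to the detained; here we instead draw
    an unmeasured confounder's prevalence and log-odds effect from priors and
    propagate them, yielding an uncertainty band instead of a point estimate.
    """
    prevalence = rng.beta(2.0, 2.0, size=n_draws)   # share affected by the confounder
    gamma = rng.normal(0.0, 1.0, size=n_draws)      # log-odds penalty; 0 = ignorable
    logit = np.log(p_success_released / (1.0 - p_success_released))
    p_affected = 1.0 / (1.0 + np.exp(-(logit - gamma)))
    # Mix the observed rate with the confounded rate by prevalence.
    return (1.0 - prevalence) * p_success_released + prevalence * p_affected

draws = release_rate_sensitivity(0.8)
print(np.percentile(draws, [2.5, 50.0, 97.5]))  # how far confounding could move the estimate
```

If the resulting band still favors the algorithmic policy over the status quo, the conclusion is robust to confounding of the assumed magnitude.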
{"title":"Bayesian Sensitivity Analysis for Offline Policy Evaluation","authors":"Jongbin Jung, Ravi Shroff, A. Feller, Sharad Goel","doi":"10.1145/3375627.3375822","DOIUrl":"https://doi.org/10.1145/3375627.3375822","url":null,"abstract":"On a variety of complex decision-making tasks, from doctors prescribing treatment to judges setting bail, machine learning algorithms have been shown to outperform expert human judgments. One complication, however, is that it is often difficult to anticipate the effects of algorithmic policies prior to deployment, as one generally cannot use historical data to directly observe what would have happened had the actions recommended by the algorithm been taken. A common strategy is to model potential outcomes for alternative decisions assuming that there are no unmeasured confounders (i.e., to assume ignorability). But if this ignorability assumption is violated, the predicted and actual effects of an algorithmic policy can diverge sharply. In this paper we present a flexible Bayesian approach to gauge the sensitivity of predicted policy outcomes to unmeasured confounders. In particular, and in contrast to past work, our modeling framework easily enables confounders to vary with the observed covariates. We demonstrate the efficacy of our method on a large dataset of judicial actions, in which one must decide whether defendants awaiting trial should be required to pay bail or can be released without payment.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"205 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72940005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9