
Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society: Latest Publications

Measuring Automated Influence: Between Empirical Evidence and Ethical Values
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462532
Daniel Susser, Vincent Grimaldi
Automated influence, delivered by digital targeting technologies such as targeted advertising, digital nudges, and recommender systems, has attracted significant interest from empirical researchers, on the one hand, and from critical scholars and policymakers, on the other. In this paper, we argue for closer integration of these efforts. Critical scholars and policymakers, who focus primarily on the social, ethical, and political effects of these technologies, need empirical evidence to substantiate and motivate their concerns. However, existing empirical research investigating the effectiveness of these technologies (or lack thereof) neglects other morally relevant effects, which can be felt regardless of whether or not the technologies "work" in the sense of fulfilling the promises of their designers. Drawing from the ethics and policy literature, we enumerate a range of questions begging for empirical analysis (the outline of a research agenda bridging these fields) and issue a call to action for more empirical research that takes these urgent ethics and policy questions as its starting point.
Citations: 12
Machine Learning Practices Outside Big Tech: How Resource Constraints Challenge Responsible Development
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462527
Aspen K. Hopkins, S. Booth
Practitioners from diverse occupations and backgrounds are increasingly using machine learning (ML) methods. Nonetheless, studies of ML practitioners typically draw populations from Big Tech and academia, as researchers have easier access to these communities. Through this selection bias, past research often excludes the broader, lesser-resourced ML community: for example, practitioners working at startups, at non-tech companies, and in the public sector. These practitioners share many of the same ML development difficulties and ethical conundrums as their Big Tech counterparts; however, their experiences are subject to additional, under-studied challenges stemming from deploying ML with limited resources, increased existential risk, and a lack of access to in-house research teams. We contribute a qualitative analysis of 17 interviews with stakeholders from organizations that are underrepresented in prior studies. We uncover a number of tensions that are introduced or exacerbated by these organizations' resource constraints: tensions between privacy and ubiquity, resource management and performance optimization, and access and monopolization. Increased academic focus on these practitioners can facilitate a more holistic understanding of ML limitations, and is therefore useful for prescribing a research agenda that facilitates responsible ML development for all.
Citations: 28
Gender Bias and Under-Representation in Natural Language Processing Across Human Languages
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462530
Abigail V. Matthews
Natural Language Processing (NLP) systems are at the heart of many critical automated decision-making systems making crucial recommendations about our future world. However, these systems reflect a wide range of biases, from gender bias to a bias in which voices they represent. In this paper, a team including speakers of 9 languages (Chinese, Spanish, English, Arabic, German, French, Farsi, Urdu, and Wolof) reports and analyzes measurements of gender bias in the Wikipedia corpora for these 9 languages. In the process, we also document how our work exposes crucial gaps in the NLP pipeline for many languages. Despite substantial investments in multilingual support, the modern NLP pipeline still systematically and dramatically under-represents the majority of human voices in the NLP-guided decisions that are shaping our collective future. We develop extensions to profession-level and corpus-level gender bias metric calculations originally designed for English and apply them to 8 other languages, including languages such as Spanish, Arabic, German, French, and Urdu that have grammatically gendered nouns, including distinct feminine, masculine, and neuter profession words. We compare these gender bias measurements across the Wikipedia corpora in different languages as well as across some corpora of more traditional literature.
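The abstract does not spell out the metrics themselves, so the following Python sketch is illustrative only: the English word lists, the ten-token context window, and the log-ratio score are assumptions rather than the paper's actual profession-level and corpus-level calculations, and the sketch handles only English-style pronoun gendering rather than grammatically gendered nouns.

```python
# Illustrative sketch of a corpus-level gender bias measure from co-occurrence
# counts. Word lists, window size, and the log-ratio score are assumptions.
import math
import re
from collections import Counter

MASCULINE = {"he", "him", "his", "man", "men"}
FEMININE = {"she", "her", "hers", "woman", "women"}
PROFESSIONS = {"doctor", "nurse", "engineer", "teacher"}  # hypothetical subset

def profession_bias(tokens, window=10):
    """Per-profession bias: log((masculine co-occurrences + 1) / (feminine co-occurrences + 1))."""
    masc, fem = Counter(), Counter()
    for i, tok in enumerate(tokens):
        if tok in PROFESSIONS:
            ctx = tokens[max(0, i - window): i + window + 1]
            masc[tok] += sum(1 for w in ctx if w in MASCULINE)
            fem[tok] += sum(1 for w in ctx if w in FEMININE)
    return {p: math.log((masc[p] + 1) / (fem[p] + 1)) for p in PROFESSIONS}

def corpus_bias(text, window=10):
    """Corpus-level score: mean absolute profession-level bias (0 means balanced)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return sum(abs(s) for s in profession_bias(tokens, window).values()) / len(PROFESSIONS)

if __name__ == "__main__":
    sample = "The doctor said he would call her. The nurse said she was busy."
    print(profession_bias(re.findall(r"[a-z]+", sample.lower())))
    print(round(corpus_bias(sample), 3))
```

A per-language version would swap in language-specific tokenization and gendered word lists, which is roughly where the extensions to gendered-noun languages described above would enter.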
Citations: 22
Artificial Intelligence and the Purpose of Social Systems
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462526
Sebastian Benthall, Jake Goldenfein
The law and ethics of Western democratic states have their basis in liberalism. This extends to the regulation and ethical discussion of technology and of businesses doing data processing. Liberalism relies on the privacy and autonomy of individuals, their ordering through a public market, and, more recently, a measure of equality guaranteed by the state. We argue that these forms of regulation and ethical analysis are largely incompatible with the techno-political and techno-economic dimensions of artificial intelligence. By analyzing liberal regulatory solutions in the form of privacy and data protection, regulation of public markets, and fairness in AI, we expose how the data economy and artificial intelligence have transcended liberal legal imagination. Organizations use artificial intelligence to exceed the bounded rationality of individuals and each other. This has led to the private consolidation of markets and an unequal hierarchy of control operating mainly for the purpose of shareholder value. An artificial intelligence will be only as ethical as the purpose of the social system that operates it. Inspired by the science of artificial life as an alternative to artificial intelligence, we consider data intermediaries: sociotechnical systems composed of individuals associated around collectively pursued purposes. An attention cooperative, which prioritizes its incoming and outgoing data flows, is one model of a social system that could form and maintain its own autonomous purpose.
Citations: 10
Alienation in the AI-Driven Workplace
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462520
Kate Vredenburgh
This paper asks whether explanations of one's workplace and economic institutions are valuable in and of themselves. In doing so, it departs from much of the explainability literature in law, computer science, philosophy, and the social sciences, which examines the instrumental values of explainable AI: explainable systems increase accountability and user trust, or reduce the risk of harm due to increased robustness. Think, however, of how you might feel if you went to your local administrative agency to apply for some benefit, or were handed down a decision by a judge in a court. Let's stipulate that you know the decision was just, even though neither the civil servant nor the judge explains to you why it was made, and you don't know the relevant rules; you just brought all the information you had about yourself and hoped for the best. Is such a decision process defective? I argue that such a decision process is defective because it prevents individuals from accessing the normative explanations that are necessary to form an appropriate practical orientation towards their social world. A practical orientation is a reflective stance towards one's social world, which is expressed in one's actions and draws on one's cognitive architecture that allows one to navigate the various social practices and institutions. A practical orientation can range from rejection to silent endorsement, and is the sort of attitude for which there are the right kind of reasons, grounded in the world's normative character. It also determines how one fills out one's role obligations and, more broadly, guides one's actions in the relevant institution: a teacher in the American South during the time of enforced racial segregation, for example, might choose where to teach on the basis of her rejection of the segregation of education. To form an appropriate practical orientation, one must have an understanding of the social world's normative character, which requires a normative explanation. And since we spend so much of our lives at work and are constrained by economic institutions, we must understand their structure and how they function.
Citations: 1
Can We Obtain Fairness For Free?
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462614
Rashidul Islam, Shimei Pan, James R. Foulds
There is growing awareness that AI and machine learning systems can in some cases learn to behave in unfair and discriminatory ways with harmful consequences. However, despite an enormous amount of research, techniques for ensuring AI fairness have yet to see widespread deployment in real systems. One of the main barriers is the conventional wisdom that fairness comes at a cost in predictive performance metrics such as accuracy, which could affect an organization's bottom line. In this paper, we take a closer look at this concern. Clearly, fairness/performance trade-offs exist, but are they inevitable? In contrast to the conventional wisdom, we find that it is frequently possible, indeed straightforward, to improve on a trained model's fairness without sacrificing predictive performance. We systematically study the behavior of fair learning algorithms on a range of benchmark datasets, showing that it is possible to improve fairness to some degree with no loss (or even an improvement) in predictive performance via a sensible hyper-parameter selection strategy. Our results reveal a pathway toward increasing the deployment of fair AI methods, with potentially substantial positive real-world impacts.
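The abstract only names a "sensible hyper-parameter selection strategy" without detail, so the sketch below is one plausible reading rather than the authors' procedure. It assumes scikit-learn, a synthetic dataset, a logistic-regression C grid, and the demographic parity gap as the single fairness measure: among configurations whose validation accuracy is within a small tolerance of the best, it keeps the one with the smallest fairness gap.

```python
# Hedged sketch of fairness-aware hyper-parameter selection: pick, among the
# near-best-accuracy models, the one with the smallest demographic parity gap.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                        # hypothetical sensitive attribute
X = rng.normal(size=(n, 5)) + group[:, None] * 0.3   # mild group-dependent shift
y = (X[:, 0] + 0.5 * group + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_va, y_tr, y_va, g_tr, g_va = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

def dp_gap(y_pred, g):
    """Demographic parity gap: |P(pred=1 | g=0) - P(pred=1 | g=1)|."""
    return abs(y_pred[g == 0].mean() - y_pred[g == 1].mean())

results = []
for C in [0.01, 0.1, 1.0, 10.0, 100.0]:              # hyper-parameter grid
    clf = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr)
    pred = clf.predict(X_va)
    results.append((C, accuracy_score(y_va, pred), dp_gap(pred, g_va)))

best_acc = max(acc for _, acc, _ in results)
tolerance = 0.005                                     # accept models within 0.5% of best accuracy
candidates = [r for r in results if r[1] >= best_acc - tolerance]
C_star, acc_star, gap_star = min(candidates, key=lambda r: r[2])
print(f"chosen C={C_star}: accuracy={acc_star:.3f}, DP gap={gap_star:.3f}")
```

The design choice matches the claim above: the selection rule never accepts a model outside the accuracy tolerance, so any fairness improvement it finds is essentially free.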
Citations: 14
Trustworthy AI for the People?
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462470
Clàudia Figueras, H. Verhagen, Teresa Cerratto Pargman
As AI systems become more pervasive, their social impact is increasingly hard to measure. To help mitigate possible risks and guide practitioners toward more responsible design, diverse organizations have released AI ethics frameworks. However, it remains unclear how ethical issues are dealt with in the everyday practices of AI developers. To this end, we have carried out an exploratory empirical study, interviewing AI developers working for Swedish public organizations to understand how ethics are enacted in practice. Our analysis found that several AI ethics issues are not consistently tackled, and AI systems are not fully recognized as part of a broader sociotechnical system.
Citations: 5
Computing Plans that Signal Normative Compliance
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462607
Alban Grastien, Claire Benn, S. Thiébaux
There has been increasing acceptance that agents must act in a way that is sensitive to ethical considerations. These considerations have been cashed out as constraints, such that some actions are permissible, while others are impermissible. In this paper, we claim that, in addition to only performing those actions that are permissible, agents should only perform those courses of action that are _unambiguously_ permissible. By doing so, they signal normative compliance: they communicate their understanding of, and commitment to abiding by, the normative constraints in play. Those courses of action (or plans) that succeed in signalling compliance in this sense, we term 'acceptable'. The problem this paper addresses is how to compute plans that signal compliance, that is, how to find plans that are acceptable as well as permissible. We do this by identifying those plans such that, were an observer to see only part of a plan's execution, that observer would infer that the plan enacted was permissible. This paper provides a formal definition of compliance signalling within the domain of AI planning, describes an algorithm for computing compliance signalling plans, provides preliminary experimental results, and discusses possible improvements. The signalling of compliance is vital for communication, coordination, and cooperation in situations where the agent is partially observed. It is equally vital, therefore, to solve the computational problem of finding those plans that signal compliance. This is what this paper does.
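The formal definition and algorithm live in the paper, not in this abstract; the toy Python sketch below only illustrates the acceptability idea as stated informally above, under strong simplifying assumptions (an explicitly enumerated finite set of candidate plans, and observations that are exact prefixes of the executed plan): a plan is treated as acceptable when it is permissible and every prefix of it is consistent only with permissible plans.

```python
# Toy sketch of the acceptability idea: a plan is acceptable if every prefix
# of it is consistent only with permissible plans, so any partial observation
# already implies permissibility. The plan sets below are hypothetical.

def prefixes(plan):
    """All non-empty prefixes of a plan (a tuple of action names)."""
    return [plan[:i] for i in range(1, len(plan) + 1)]

def consistent(prefix, plan):
    """An observed prefix is consistent with any plan that starts with it."""
    return plan[:len(prefix)] == prefix

def is_acceptable(plan, all_plans, permissible):
    """True if `plan` is permissible and no prefix leaves an impermissible plan in doubt."""
    if plan not in permissible:
        return False
    for prefix in prefixes(plan):
        for other in all_plans:
            if consistent(prefix, other) and other not in permissible:
                return False   # an observer could still suspect an impermissible plan
    return True

if __name__ == "__main__":
    all_plans = {
        ("unlock", "enter", "log"),    # permissible
        ("unlock", "enter", "take"),   # impermissible
        ("request", "enter", "log"),   # permissible
    }
    permissible = {("unlock", "enter", "log"), ("request", "enter", "log")}
    for p in sorted(all_plans):
        print(p, "acceptable" if is_acceptable(p, all_plans, permissible) else "not acceptable")
```

In this toy example, ("unlock", "enter", "log") is permissible but not acceptable, because its first observed action is also consistent with the impermissible plan; ("request", "enter", "log") signals compliance from its very first step.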
Citations: 2
Automating Procedurally Fair Feature Selection in Machine Learning
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462585
Clara Belitz, Lan Jiang, Nigel Bosch
In recent years, machine learning has become more common in everyday applications. Consequently, numerous studies have explored issues of unfairness against specific groups or individuals in the context of these applications. Much of the previous work on unfairness in machine learning has focused on the fairness of outcomes rather than process. We propose a feature selection method inspired by fair process (procedural fairness) in addition to fair outcome. Specifically, we introduce the notion of unfairness weight, which indicates how heavily to weight unfairness versus accuracy when measuring the marginal benefit of adding a new feature to a model. Our goal is to maintain accuracy while reducing unfairness, as defined by six common statistical definitions. We show that this approach demonstrably decreases unfairness as the unfairness weight is increased, for most combinations of metrics and classifiers used. A small subset of all the combinations of datasets (4), unfairness metrics (6), and classifiers (3), however, demonstrated relatively low unfairness initially. For these specific combinations, neither unfairness nor accuracy was affected as the unfairness weight changed, demonstrating that this method does not reduce accuracy unless there is also an equivalent decrease in unfairness. We also show that this approach selects unfair features and sensitive features for the model less frequently as the unfairness weight increases. As such, this procedure is an effective approach to constructing classifiers that both reduce unfairness and are less likely to include unfair features in the modeling process.
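The sketch below illustrates the unfairness-weight idea in the simplest possible setting; it is an assumption-laden toy, not the paper's method. It assumes greedy forward selection with scikit-learn logistic regression, a synthetic dataset, and the demographic parity gap standing in for the six statistical definitions, scoring each candidate feature by validation accuracy minus the unfairness weight times the resulting unfairness.

```python
# Hedged sketch: greedy forward feature selection where each candidate feature
# is scored by accuracy gain minus (unfairness weight) x (unfairness).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def dp_gap(pred, g):
    """Demographic parity gap: |P(pred=1 | g=0) - P(pred=1 | g=1)|."""
    return abs(pred[g == 0].mean() - pred[g == 1].mean())

def score(features, data, unfairness_weight):
    """Validation accuracy minus weighted unfairness for one feature subset."""
    X_tr, X_va, y_tr, y_va, g_va = data
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, features], y_tr)
    pred = clf.predict(X_va[:, features])
    return accuracy_score(y_va, pred) - unfairness_weight * dp_gap(pred, g_va)

def greedy_select(n_features, data, unfairness_weight):
    """Add features while the best candidate still improves the weighted objective."""
    selected, best = [], -np.inf
    while True:
        candidates = [f for f in range(n_features) if f not in selected]
        if not candidates:
            return selected
        top, f_best = max((score(selected + [f], data, unfairness_weight), f) for f in candidates)
        if top <= best:        # no remaining feature has positive marginal benefit
            return selected
        best, selected = top, selected + [f_best]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 1500
    g = rng.integers(0, 2, n)                      # hypothetical sensitive attribute
    X = np.column_stack([
        rng.normal(size=n),                        # informative, group-neutral
        g + rng.normal(scale=0.2, size=n),         # informative but a proxy for the group
        rng.normal(size=n),                        # pure noise
    ])
    y = (X[:, 0] + 0.8 * g + rng.normal(scale=0.7, size=n) > 0.4).astype(int)
    X_tr, X_va, y_tr, y_va, g_tr, g_va = train_test_split(
        X, y, g, test_size=0.3, random_state=0)
    data = (X_tr, X_va, y_tr, y_va, g_va)
    for w in [0.0, 1.0, 5.0]:
        print(f"unfairness weight {w}: selected features {greedy_select(3, data, w)}")
```

With a weight of zero the proxy feature tends to be worth adding; as the weight grows, its accuracy gain no longer outweighs the unfairness it introduces, which is the qualitative behavior described above.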
Citations: 7
Co-design and Ethical Artificial Intelligence for Health: Myths and Misconceptions
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462537
J. Donia, J. Shaw
Applications of artificial intelligence / machine learning (AI/ML) are dynamic and rapidly growing, and, although multi-purpose, are particularly consequential in health care. One strategy for anticipating and addressing ethical challenges related to AI/ML for health care is co-design, or involvement of end users in design. Co-design has a diverse intellectual and practical history, however, and has been conceptualized in many different ways. Moreover, the unique features of AI/ML introduce challenges to co-design that are often underappreciated. This review summarizes the research literature on involvement in health care and design and, informed by critical data studies, examines the extent to which co-design as commonly conceptualized is capable of addressing the range of normative issues raised by AI/ML for health. We suggest that AI/ML technologies have amplified existing challenges related to co-design and created entirely new challenges. We outline five co-design 'myths and misconceptions' related to AI/ML for health that form the basis for future research and practice. We conclude by suggesting that the normative strength of a co-design approach to AI/ML for health can be considered at three levels: technological, health care system, and societal. We also suggest research directions for a 'new era' of co-design capable of addressing these challenges. Link to full text: https://bit.ly/3yZrb3y
Citations: 7