
Latest Publications from the Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society

AI Alignment and Human Reward
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462570
Patrick Butlin
According to a prominent approach to AI alignment, AI agents should be built to learn and promote human values. However, humans value things in several different ways: we have desires and preferences of various kinds, and if we engage in reinforcement learning, we also have reward functions. One research project to which this approach gives rise is therefore to say which of these various classes of human values should be promoted. This paper takes on part of this project by assessing the proposal that human reward functions should be the target for AI alignment. There is some reason to believe that powerful AI agents which were aligned to values of this form would help us to lead good lives, but there is also considerable uncertainty about this claim, arising from unresolved empirical and conceptual issues in human psychology.
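To make the notion of a human reward function concrete (an editorial illustration, not part of Butlin's paper), the sketch below fits a scalar reward per item from pairwise human choices under the Bradley-Terry model, one standard way reward-like quantities are estimated from human preference data; the toy data, learning rate, and step count are all assumptions.

```python
import numpy as np

def fit_reward(pairs, n_items, lr=0.1, steps=2000):
    """Fit one scalar reward per item from pairwise choices, assuming the
    Bradley-Terry model: P(i chosen over j) = sigmoid(r[i] - r[j])."""
    r = np.zeros(n_items)
    for _ in range(steps):
        grad = np.zeros(n_items)
        for i, j in pairs:  # each pair records that i was chosen over j
            p = 1.0 / (1.0 + np.exp(-(r[i] - r[j])))
            grad[i] += 1.0 - p  # gradient of the log-likelihood
            grad[j] -= 1.0 - p
        r += lr * grad
    return r - r.mean()  # rewards are identified only up to a constant

# Hypothetical data: item 0 beat item 1 three times; item 1 beat item 2 twice.
print(fit_reward([(0, 1)] * 3 + [(1, 2)] * 2, n_items=3))
```

One of the paper's conceptual worries can be read off even this toy: the fitted numbers are underdetermined by the choice data, so which quantity deserves to be called the person's "reward" is partly a modeling decision.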
Citations: 6
The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-making Systems
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462606
A. Kasirzadeh, C. Klein
Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more general ones. We defend this thesis by adapting Marr's famous 1982 framework for understanding information-processing systems. We show how this framework allows one to situate ethical problems at the appropriate level of abstraction, which in turn can be used to target appropriate interventions.
Citations: 5
Fair Equality of Chances for Prediction-based Decisions
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462613
M. Loi, Anders Herlitz, Hoda Heidari
This is a one-page summary of the paper "A Philosophical Theory of Fairness for Prediction-based Decisions." The full paper is available on SSRN at the following link: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3450300
Citations: 8
Causality in Neural Networks - An Extended Abstract
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462467
Abbavaram Gowtham Reddy
Causal reasoning is the main learning and explanation tool used by humans. AI systems should possess causal reasoning capabilities if they are to be deployed in the real world with trust and reliability. Introducing ideas from causality into machine learning helps provide better learning and more explainable models. Explainability and causal disentanglement are important aspects of any machine learning model: causal explanations are required to trust a model's decisions, and causal disentanglement learning is important for transfer learning applications. We exploit ideas from causality in deep learning models to achieve better, causally explainable models that are useful for fairness, disentangled representation, and related goals.
Citations: 1
What's Fair about Individual Fairness?
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462621
W. Fleisher
One of the main lines of research in algorithmic fairness involves individual fairness (IF) methods. Individual fairness is motivated by an intuitive principle, similar treatment, which requires that similar individuals be treated similarly. IF offers a precise account of this principle using distance metrics to evaluate the similarity of individuals. Proponents of individual fairness have argued that it gives the correct definition of algorithmic fairness, and that it should therefore be preferred to other methods for determining fairness. I argue that individual fairness cannot serve as a definition of fairness. Moreover, IF methods should not be given priority over other fairness methods, nor used in isolation from them. To support these conclusions, I describe four in-principle problems for individual fairness as a definition and as a method for ensuring fairness: (1) counterexamples show that similar treatment (and therefore IF) are insufficient to guarantee fairness; (2) IF methods for learning similarity metrics are at risk of encoding human implicit bias; (3) IF requires prior moral judgments, limiting its usefulness as a guide for fairness and undermining its claim to define fairness; and (4) the incommensurability of relevant moral values makes similarity metrics impossible for many tasks. In light of these limitations, I suggest that individual fairness cannot be a definition of fairness, and instead should be seen as one tool among several for ameliorating algorithmic bias.
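As a minimal sketch of the Lipschitz formulation of IF that the abstract refers to (Dwork et al.'s similar-treatment condition), the following checks |f(x) - f(y)| <= L * d(x, y) over all pairs of individuals. The Euclidean metric and the example scores are editorial assumptions; choosing and justifying the metric is exactly where problems (2)-(4) above bite.

```python
import numpy as np

def lipschitz_violations(X, scores, metric, L=1.0):
    """Return index pairs violating the similar-treatment condition
    |f(x) - f(y)| <= L * d(x, y)."""
    bad = []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if abs(scores[i] - scores[j]) > L * metric(X[i], X[j]):
                bad.append((i, j))
    return bad

# Toy usage with Euclidean distance standing in for a task-specific metric.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
scores = np.array([0.2, 0.9, 0.5])  # model outputs in [0, 1]
print(lipschitz_violations(X, scores, lambda a, b: np.linalg.norm(a - b)))
# -> [(0, 1)]: two nearly identical individuals received very different scores.
```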
Citations: 31
The Earth Is Flat and the Sun Is Not a Star: The Susceptibility of GPT-2 to Universal Adversarial Triggers
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462578
Hunter Scott Heidenreich, J. Williams
This work considers universal adversarial triggers, a method of adversarially disrupting natural language models, and asks whether such triggers can be used to affect both the topic and the stance of conditional text generation models. Considering four "controversial" topics, this work demonstrates success at identifying triggers that cause the GPT-2 model to produce text about targeted topics as well as influence the stance the text takes towards the topic. We show that, while the more fringe topics are more challenging to identify triggers for, they do appear to discriminate aspects like stance more effectively. We view this both as an indication of the dangerous potential for controllability and, perhaps, as a reflection of the nature of the disconnect between conflicting views on these topics, something that future work could use to question the nature of filter bubbles and whether they are reflected within models trained on internet content. In demonstrating the feasibility and ease of such an attack, this work seeks to raise awareness that neural language models are susceptible to this influence, even if the model is already deployed and adversaries lack internal model access, and advocates immediate safeguarding against this type of adversarial attack in order to prevent potential harm to human users.
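For readers unfamiliar with the attack surface, the sketch below shows only the deployment side of such an attack: prepending a fixed trigger string to an otherwise benign prompt before generating with an off-the-shelf GPT-2 via the Hugging Face transformers library. The trigger string here is a hypothetical placeholder, and the gradient-based search that finds real universal triggers (the hard part of the attack) is not shown.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

trigger = "TH PEACEammers immortal"  # hypothetical placeholder, not a real trigger
prompt = trigger + " Climate change is"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point the paper stresses is visible in the shape of this snippet: nothing here requires gradients or internal access, so a trigger found offline can be deployed against any public interface to the model.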
Citations: 7
Minimax Group Fairness: Algorithms and Experiments
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462523
Emily Diana, Wesley Gill, Michael Kearns, K. Kenthapadi, Aaron Roth
We consider a recently introduced framework in which fairness is measured by worst-case outcomes across groups, rather than by the more standard differences between group outcomes. In this framework we provide provably convergent oracle-efficient learning algorithms (or equivalently, reductions to non-fair learning) for minimax group fairness. Here the goal is that of minimizing the maximum loss across all groups, rather than equalizing group losses. Our algorithms apply to both regression and classification settings and support both overall error and false positive or false negative rates as the fairness measure of interest. They also support relaxations of the fairness constraints, thus permitting study of the tradeoff between overall accuracy and minimax fairness. We compare the experimental behavior and performance of our algorithms across a variety of fairness-sensitive data sets and show empirical cases in which minimax fairness is strictly and strongly preferable to equal outcome notions.
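The minimax objective (minimize, over models, the maximum loss over groups) can be approached with simple two-player dynamics: a regulator multiplicatively upweights whichever groups currently suffer the worst error, and the learner best-responds to the reweighted sample. The sketch below is a toy version of that loop using scikit-learn logistic regression as the learner; it is not the authors' provably convergent, oracle-efficient algorithm, and the learning rate and round count are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def minimax_fit(X, y, groups, rounds=50, eta=1.0):
    """Toy minimax loop: push sample weight toward the worst-off group
    (multiplicative weights), then refit the learner on the weighted data."""
    groups = np.asarray(groups)
    gids = np.unique(groups)
    idx = {g: i for i, g in enumerate(gids)}
    w = np.ones(len(gids)) / len(gids)  # one weight per group
    for _ in range(rounds):
        sample_w = np.array([w[idx[g]] for g in groups])
        model = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=sample_w)
        errs = np.array([(model.predict(X[groups == g]) != y[groups == g]).mean()
                         for g in gids])
        w = w * np.exp(eta * errs)  # upweight high-error groups
        w = w / w.sum()
    return model, dict(zip(gids, errs))
```

Comparing the final per-group errors against those of an unweighted fit illustrates the tradeoff the abstract mentions: the worst group's error falls, typically at some cost to overall accuracy.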
Citations: 65
A Framework for Understanding AI-Induced Field Change: How AI Technologies are Legitimized and Institutionalized
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462591
B. Larsen
Artificial intelligence (AI) systems operate in increasingly diverse areas, from healthcare to facial recognition, the stock market, autonomous vehicles, and so on. While the underlying digital infrastructure of AI systems is developing rapidly, each area of implementation is subject to different degrees and processes of legitimization. By combining elements from institutional theory and information systems theory, this paper presents a conceptual framework to analyze and understand AI-induced field change. The introduction of novel AI agents into new or existing fields creates a dynamic in which algorithms (re)shape organizations and institutions while existing institutional infrastructures determine the scope and speed at which organizational change is allowed to occur. Where institutional infrastructure and governance arrangements, such as standards, rules, and regulations, are still rudimentary, the field can move fast but is also more likely to be contested. The institutional infrastructure surrounding AI-induced fields is generally underdeveloped, which could be an obstacle to the broader institutionalization of AI systems going forward.
Citations: 1
Ethically Compliant Planning within Moral Communities
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462522
Samer B. Nashed, Justin Svegliato, S. Zilberstein
Ethically compliant autonomous systems (ECAS) are the state of the art for solving sequential decision-making problems under uncertainty while respecting constraints that encode ethical considerations. This paper defines a novel concept in the context of ECAS drawn from moral philosophy, the moral community, which leads to a nuanced taxonomy of explicit ethical agents. We then propose new ethical frameworks that extend the applicability of ECAS to domains where a moral community is required. Next, we provide a formal analysis of the proposed ethical frameworks and conduct experiments that illustrate their differences. Finally, we discuss the implications of explicit moral communities, which could shape research on standards and guidelines for ethical agents in order to better understand and predict common errors in their design and communicate their capabilities.
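To illustrate, in an editorial sketch rather than the authors' formalism, how the choice of moral community changes what such a system may do, the function below performs constrained action selection: an action is permissible only if its predicted harms fall on no member of the moral community, and the agent maximizes utility over the permissible set. All names, including the harms predicate, are hypothetical.

```python
from typing import Callable, Iterable

def ethically_compliant_action(
    actions: Iterable[str],
    utility: Callable[[str], float],
    harms: Callable[[str, str], bool],  # harms(action, patient) -> bool
    moral_community: Iterable[str],
) -> str:
    """Return the highest-utility action that harms no member of the
    moral community; widening the community shrinks the permissible set."""
    community = list(moral_community)
    permissible = [a for a in actions
                   if not any(harms(a, m) for m in community)]
    if not permissible:
        raise ValueError("no ethically compliant action exists")
    return max(permissible, key=utility)

# Hypothetical toy: the fast route harms a bystander, so including
# bystanders in the community forces the slower route.
print(ethically_compliant_action(
    ["fast", "slow"],
    utility={"fast": 2.0, "slow": 1.0}.get,
    harms=lambda a, m: a == "fast" and m == "bystander",
    moral_community={"user", "bystander"},
))  # -> "slow"
```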
Citations: 8
An AI Ethics Course Highlighting Explicit Ethical Agents
Pub Date : 2021-07-21 DOI: 10.1145/3461702.3462552
N. Green
This is an experience report describing a pilot AI Ethics course for undergraduate computer science majors. In addition to teaching students about different ethical approaches and using them to analyze ethical issues, the course covered how ethics has been incorporated into the implementation of explicit ethical agents, and required students to implement an explicit ethical agent for a simple application. This report describes the course objectives and design, the topics covered, and a qualitative evaluation with suggestions for future offerings of the course.
Citations: 10