
Latest Publications: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society

Good Explanation for Algorithmic Transparency
Pub Date : 2020-02-04 DOI: 10.2139/ssrn.3503603
Joy Lu, Dokyun Lee, Tae Wan Kim, D. Danks
Machine learning algorithms have gained widespread usage across a variety of domains, both in providing predictions to expert users and recommending decisions to everyday users. However, these AI systems are often black boxes, and end-users are rarely provided with an explanation. The critical need for explanation by AI systems has led to calls for algorithmic transparency, including the "right to explanation" in the EU General Data Protection Regulation (GDPR). These initiatives presuppose that we know what constitutes a meaningful or good explanation, but there has actually been surprisingly little research on this question in the context of AI systems. In this paper, we (1) develop a generalizable framework grounded in philosophy, psychology, and interpretable machine learning to investigate and define characteristics of good explanation, and (2) conduct a large-scale lab experiment to measure the impact of different factors on people's perceptions of understanding, usage intention, and trust of AI systems. The framework and study together provide a concrete guide for managers on how to present algorithmic prediction rationales to end-users to foster trust and adoption, and elements of explanation and transparency to be considered by AI researchers and engineers in designing, developing, and deploying transparent or explainable algorithms.
Citations: 29
FACE
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3375850
Rafael Poyiadzi, Kacper Sokol, Raúl Santos-Rodríguez, Tijl De Bie, Peter A. Flach
Work in Counterfactual Explanations tends to focus on the principle of "the closest possible world" that identifies small changes leading to the desired outcome. In this paper we argue that while this approach might initially seem intuitively appealing, it exhibits shortcomings not addressed in the current literature. First, a counterfactual example generated by state-of-the-art systems is not necessarily representative of the underlying data distribution, and may therefore prescribe unachievable goals (e.g., an unsuccessful life insurance applicant with severe disability may be advised to do more sports). Secondly, the counterfactuals may not be based on a "feasible path" between the current state of the subject and the suggested one, making actionable recourse infeasible (e.g., low-skilled unsuccessful mortgage applicants may be told to double their salary, which may be hard without first increasing their skill level). These two shortcomings may render counterfactual explanations impractical and sometimes outright offensive. To address these two major flaws, we first propose a new line of Counterfactual Explanations research aimed at providing actionable and feasible paths to transform a selected instance into one that meets a certain goal. Secondly, we propose FACE: an algorithmically sound way of uncovering these "feasible paths" based on the shortest path distances defined via density-weighted metrics. Our approach generates counterfactuals that are coherent with the underlying data distribution and supported by the "feasible paths" of change, which are achievable and can be tailored to the problem at hand.
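The density-weighted shortest-path idea can be sketched compactly. Below is a minimal, illustrative Python implementation (not the authors' code): it links points that lie within a radius, weights each edge by its length divided by an estimated density at the edge midpoint, and runs Dijkstra's algorithm from the query instance until it reaches a point the classifier assigns to the desired class. The radius, kernel bandwidth, and exact weighting are assumptions made for illustration; the paper defines its own density-weighted metrics.

```python
# A FACE-style counterfactual search, sketched under the assumptions stated above.
import numpy as np
from heapq import heappush, heappop
from sklearn.neighbors import KernelDensity

def face_counterfactual(X, clf, x0_idx, target_class=1, eps=1.0, bandwidth=0.5):
    """Return (index, path cost) of the closest 'feasible' counterfactual, or (None, inf)."""
    kde = KernelDensity(bandwidth=bandwidth).fit(X)
    dist = np.full(len(X), np.inf)
    dist[x0_idx] = 0.0
    heap, visited = [(0.0, x0_idx)], set()
    while heap:
        d, i = heappop(heap)
        if i in visited:
            continue
        visited.add(i)
        # Stop at the first settled point that the model assigns to the target class.
        if i != x0_idx and clf.predict(X[i:i + 1])[0] == target_class:
            return i, d
        # Relax edges to neighbours within eps; cost = length / density at the midpoint,
        # so paths through dense, well-supported regions of the data are preferred.
        gaps = np.linalg.norm(X - X[i], axis=1)
        for j in np.where((gaps > 0) & (gaps <= eps))[0]:
            midpoint = (X[i] + X[j]) / 2.0
            density = np.exp(kde.score_samples(midpoint.reshape(1, -1)))[0]
            cost = gaps[j] / (density + 1e-12)
            if d + cost < dist[j]:
                dist[j] = d + cost
                heappush(heap, (dist[j], j))
    return None, np.inf
```

Because every step of the returned path stays within eps of observed data and avoids low-density regions, the endpoint is reachable through a sequence of small, data-supported changes rather than a single arbitrary jump.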
Citations: 2
Different "Intelligibility" for Different Folks
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3375810
Yishan Zhou, D. Danks
Many arguments have concluded that our autonomous technologies must be intelligible, interpretable, or explainable, even if that property comes at a performance cost. In this paper, we consider the reasons why properties like these might be valuable, and conclude that there is not simply one kind of 'intelligibility', but rather different types for different individuals and uses. In particular, different interests and goals require different types of intelligibility (or explanations, or other related notions). We thus provide a typology of 'intelligibility' that distinguishes various notions, and draw methodological conclusions about how autonomous technologies should be designed and deployed in different ways, depending on whose intelligibility is required.
Citations: 21
Social Contracts for Non-Cooperative Games
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3375829
Alan Davoust, Michael Rovatsos
In future agent societies, we might see AI systems engaging in selfish, calculated behavior, furthering their owners' interests instead of socially desirable outcomes. How can we promote morally sound behaviour in such settings, in order to obtain more desirable outcomes? A solution from moral philosophy is the concept of a social contract, a set of rules that people would voluntarily commit to in order to obtain better outcomes than those brought by anarchy. We adapt this concept to a game-theoretic setting, to systematically modify the payoffs of a non-cooperative game, so that agents will rationally pursue socially desirable outcomes. We show that for any game, a suitable social contract can be designed to produce an optimal outcome in terms of social welfare. We then investigate the limitations of applying this approach to alternative moral objectives, and establish that, for any alternative moral objective that is significantly different from social welfare, there are games for which no such social contract will be feasible that produces non-negligible social benefit compared to collective selfish behaviour.
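As a toy illustration of the payoff-modification idea (not an example from the paper), the Python sketch below takes a Prisoner's Dilemma, whose only Nash equilibrium is mutual defection, and applies a hypothetical social contract that fines defection; after the transformation, mutual cooperation, the welfare-maximizing outcome, is the unique equilibrium.

```python
# Payoff modification as a "social contract": a hypothetical fine on defection.
import itertools

# payoffs[(a1, a2)] = (payoff to player 1, payoff to player 2); action 0 = cooperate, 1 = defect
base = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}

def apply_contract(payoffs, fine=3):
    """Subtract a fine from any player who defects (an illustrative contract rule)."""
    return {acts: tuple(p - fine if a == 1 else p for p, a in zip(ps, acts))
            for acts, ps in payoffs.items()}

def nash_equilibria(payoffs):
    """Brute-force pure-strategy Nash equilibria of a 2x2 game."""
    eqs = []
    for a1, a2 in itertools.product((0, 1), repeat=2):
        u1, u2 = payoffs[(a1, a2)]
        best1 = all(u1 >= payoffs[(b, a2)][0] for b in (0, 1))
        best2 = all(u2 >= payoffs[(a1, b)][1] for b in (0, 1))
        if best1 and best2:
            eqs.append((a1, a2))
    return eqs

print(nash_equilibria(base))                  # [(1, 1)]: mutual defection
print(nash_equilibria(apply_contract(base)))  # [(0, 0)]: mutual cooperation
```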
Citations: 3
Steps Towards Value-Aligned Systems
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3375872
Osonde A. Osoba, Benjamin Boudreaux, Douglas Yeung
Algorithmic (including AI/ML) decision-making artifacts are an established and growing part of our decision-making ecosystem. They are now indispensable tools that help us manage the flood of information we use to try to make effective decisions in a complex world. The current literature is full of examples of how individual artifacts violate societal norms and expectations (e.g. violations of fairness, privacy, or safety norms). Against this backdrop, this discussion highlights an under-emphasized perspective in the body of research focused on assessing value misalignment in AI-equipped sociotechnical systems. The research on value misalignment so far has a strong focus on the behavior of individual tech artifacts. This discussion argues for a more structured systems-level approach for assessing value-alignment in sociotechnical systems. We rely primarily on the research on fairness to make our arguments more concrete. And we use the opportunity to highlight how adopting a system perspective improves our ability to explain and address value misalignments better. Our discussion ends with an exploration of priority questions that demand attention if we are to assure the value alignment of whole systems, not just individual artifacts.
Citations: 4
A Deontic Logic for Programming Rightful Machines
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3375867
A. T. Wright
A "rightful machine" is an explicitly moral, autonomous machine agent whose behavior conforms to principles of justice and the positive public law of a legitimate state. In this paper, I set out some basic elements of a deontic logic appropriate for capturing conflicting legal obligations for purposes of programming rightful machines. Justice demands that the prescriptive system of enforceable public laws be consistent, yet statutes or case holdings may often describe legal obligations that contradict; moreover, even fundamental constitutional rights may come into conflict. I argue that a deontic logic of the law should not try to work around such conflicts but, instead, identify and expose them so that the rights and duties that generate inconsistencies in public law can be explicitly qualified and the conflicts resolved. I then argue that a credulous, non-monotonic deontic logic can describe inconsistent legal obligations while meeting the normative demand for consistency in the prescriptive system of public law. I propose an implementation of this logic via a modified form of "answer set programming," which I demonstrate with some simple examples.
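The stance that conflicts should be surfaced and then explicitly qualified can be illustrated with a toy sketch. The Python below is not the paper's answer set program and is far weaker than a non-monotonic deontic logic; it merely detects contradictory obligations and drops one only when an explicit priority, standing in for a legal qualification, says which prevails.

```python
# Toy conflict detection and resolution over obligations, under the assumptions above.
# An obligation is either an atom ("disclose") or its negation (("not", "disclose")).

def conflicts(obligations):
    """Return pairs of obligations that prescribe contradictory actions."""
    return [(p, q) for p in obligations for q in obligations if p == ("not", q)]

def resolve(obligations, priority):
    """Keep the higher-priority member of each conflicting pair (an illustrative rule)."""
    out = set(obligations)
    for p, q in conflicts(obligations):
        loser = p if priority.get(q, 0) > priority.get(p, 0) else q
        out.discard(loser)
    return out

# Example: a statute obliges disclosure; a constitutional right obliges non-disclosure.
obs = {"disclose", ("not", "disclose")}
print(conflicts(obs))  # the inconsistency is exposed rather than hidden
print(resolve(obs, priority={("not", "disclose"): 2, "disclose": 1}))  # {('not', 'disclose')}
```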
Citations: 1
Joint Optimization of AI Fairness and Utility: A Human-Centered Approach
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3375862
Yunfeng Zhang, R. Bellamy, Kush R. Varshney
Today, AI is increasingly being used in many high-stakes decision-making applications in which fairness is an important concern. Already, there are many examples of AI being biased and making questionable and unfair decisions. The AI research community has proposed many methods to measure and mitigate unwanted biases, but few of them involve inputs from human policy makers. We argue that because different fairness criteria sometimes cannot be simultaneously satisfied, and because achieving fairness often requires sacrificing other objectives such as model accuracy, it is key to acquire and adhere to human policy makers' preferences on how to make the tradeoff among these objectives. In this paper, we propose a framework and some exemplar methods for eliciting such preferences and for optimizing an AI model according to these preferences.
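One minimal way to operationalize an elicited preference, under the assumption that fairness is summarized by a demographic-parity gap and the preference by a single weight, is to score candidate models on a weighted combination of the two objectives. This is only a sketch; the paper's elicitation and optimization methods are more elaborate.

```python
# Selecting among fitted candidate models with an elicited fairness/accuracy weight.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    g = np.asarray(group, dtype=bool)
    y_pred = np.asarray(y_pred)
    return abs(y_pred[g].mean() - y_pred[~g].mean())

def select_model(candidates, X, y, group, fairness_weight=0.5):
    """Score each candidate by (1 - w) * accuracy - w * parity gap; return the best."""
    best, best_score = None, -np.inf
    for model in candidates:
        y_pred = np.asarray(model.predict(X))
        accuracy = (y_pred == np.asarray(y)).mean()
        gap = demographic_parity_gap(y_pred, group)
        score = (1 - fairness_weight) * accuracy - fairness_weight * gap
        if score > best_score:
            best, best_score = model, score
    return best
```

The single weight `fairness_weight` stands in for the policy maker's stated tradeoff; eliciting that preference carefully, and choosing which fairness criterion to encode, is exactly the part the paper focuses on.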
Citations: 27
Deepfakes for Medical Video De-Identification: Privacy Protection and Diagnostic Information Preservation
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3375849
Bingquan Zhu, Hao Fang, Yanan Sui, Luming Li
Data sharing for medical research has been difficult, as open-sourcing clinical data may violate patient privacy. Traditional methods for face de-identification wipe out facial information entirely, making it impossible to analyze facial behavior. Recent advances in whole-body keypoint detection also rely on facial input to estimate body keypoints. Both facial and body keypoints are critical in some medical diagnoses, and keypoint invariance after de-identification is of great importance. Here, we propose a solution using deepfake technology: the face swapping technique. While this swapping method has been criticized for invading privacy and portrait rights, it could conversely protect privacy in medical video: patients' faces could be swapped to a proper target face and become unrecognizable. However, it remains an open question to what extent this swapping-based de-identification method affects the automatic detection of body keypoints. In this study, we apply deepfake technology to Parkinson's disease examination videos to de-identify subjects, and quantitatively show that face swapping as a de-identification approach is reliable and keeps the keypoints almost invariant, significantly better than traditional methods. This study proposes a pipeline for video de-identification and keypoint preservation, clearing up some ethical restrictions for medical data sharing. This work could make open-source, high-quality medical video datasets more feasible and promote future medical research that benefits our society.
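The keypoint-invariance check described above can be sketched as a simple before/after comparison. In the snippet below, `detect_keypoints` is a hypothetical placeholder for whatever body-pose estimator is used, not a real API, and the metric (mean per-frame Euclidean displacement of keypoints) is an assumption made for illustration.

```python
# Quantifying how much body keypoints move after face-swapping de-identification.
import numpy as np

def keypoint_shift(frames_original, frames_deidentified, detect_keypoints):
    """Mean Euclidean displacement of keypoints between paired frames (lower is better)."""
    shifts = []
    for original, deidentified in zip(frames_original, frames_deidentified):
        k_orig = detect_keypoints(original)        # expected shape: (n_keypoints, 2)
        k_deid = detect_keypoints(deidentified)
        shifts.append(np.linalg.norm(k_orig - k_deid, axis=1).mean())
    return float(np.mean(shifts))
```

A de-identification method that preserves diagnostic information should keep this shift close to zero on examination videos.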
Citations: 41
Why Reliabilism Is not Enough: Epistemic and Moral Justification in Machine Learning
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3375866
A. Smart, Larry James, B. Hutchinson, Simone Wu, Shannon Vallor
In this paper we argue that standard calls for explainability that focus on the epistemic inscrutability of black-box machine learning models may be misplaced. If we presume, for the sake of this paper, that machine learning can be a source of knowledge, then it makes sense to wonder what kind of justification it involves. How do we rationalize, on the one hand, the seemingly justificatory black box with the observed wide adoption of machine learning? We argue that, in general, people implicitly adopt reliabilism regarding machine learning. Reliabilism is an epistemological theory of epistemic justification according to which a belief is warranted if it has been produced by a reliable process or method (Goldman, 2012). We argue that, in cases where model deployments require moral justification, reliabilism is not sufficient; instead, justifying deployment requires establishing robust human processes as a moral "wrapper" around machine outputs. We then suggest that, in certain high-stakes domains with moral consequences, reliabilism does not provide another kind of necessary justification: moral justification. Finally, we offer cautions relevant to the (implicit or explicit) adoption of the reliabilist interpretation of machine learning.
Citations: 5
Biased Priorities, Biased Outcomes: Three Recommendations for Ethics-oriented Data Annotation Practices
Pub Date : 2020-02-04 DOI: 10.1145/3375627.3375809
Gunay Kazimzade, Milagros Miceli
In this paper, we analyze the relation between data-related biases and practices of data annotation, by placing them in the context of the market economy. We understand annotation as a praxis related to the sensemaking of data and investigate annotation practices for vision models by focusing on the values that are prioritized by industrial decision-makers and practitioners. The quality of data is critical for machine learning models, as it holds the power to (mis-)represent the population it is intended to analyze. For autonomous systems to be able to make sense of the world, humans first need to make sense of the data these systems will be trained on. This paper addresses this issue, guided by the following research questions: Which goals are prioritized by decision-makers at the data annotation stage? How do these priorities correlate with data-related bias issues? Focusing on work practices and their context, our research goal aims at understanding the logics driving companies and their impact on the performed annotations. The study follows a qualitative design and is based on 24 interviews with relevant actors and extensive participatory observations, including several weeks of fieldwork at two companies dedicated to data annotation for vision models in Buenos Aires, Argentina and Sofia, Bulgaria. The prevalence of market-oriented values over socially responsible approaches is argued based on three corporate priorities that inform work practices in this field and directly shape the annotations performed: profit (short deadlines connected to the strive for profit are prioritized over alternative approaches that could prevent biased outcomes), standardization (the strive for standardized and, in many cases, reductive or biased annotations to make data fit the products and revenue plans of clients), and opacity (related to clients' power to impose their criteria on the annotations that are performed, criteria that most of the time remain opaque due to corporate confidentiality). Finally, we introduce three elements, aiming at developing ethics-oriented practices of data annotation, that could help prevent biased outcomes: transparency (regarding the documentation of data transformations, including information on responsibilities and criteria for decision-making), education (training on the potential harms caused by AI and its ethical implications, which could help data annotators and related roles adopt a more critical approach towards the interpretation and labeling of data), and regulations (clear guidelines for ethical AI developed at the governmental level and applied in both private and public organizations).
Citations: 15