{"title":"The Value of Trustworthy AI","authors":"D. Danks","doi":"10.1145/3306618.3314228","DOIUrl":null,"url":null,"abstract":"Trust is one of the most critical relations in our human lives, whether trust in one another, trust in the artifacts that we use everyday, or trust of an AI system. Even a cursory examination of the literatures in human-computer interaction, human-robot interaction, and numerous other disciplines reveals a deep, persistent concern with the nature of trust in AI, and the conditions under which it can be generated, reduced, repaired, or influenced. At a high level, we often understand trust as a relation in which the trustor makes oneself vulnerable based on positive expectations about the behavior or intentions of the trustee [1]. For example, when I trust my car to start in the morning, I make myself vulnerable (e.g., I risk that I will be late to work if it does not start) because I have the positive expectation that it actually will start. This high-level characterization is relatively unhelpful, however, particularly given the wide range of disciplines that have examined the relation of trust, ranging from organizational behavior to game theory to ethics to cognitive science. The picture that emerges from, for example, social psychology (i.e., two distinct kinds of trust depending on whether one knows the trustee's behaviors or intentions/ values) appears to be quite different from the one that emerges from moral philosophy (i.e., a single, highly-moralized notion), even though both are consistent with this high-level characterization. This talk first introduces that diversity of types of 'trust', but then argues that we can make progress towards a unified characterization by focusing on the function of trust. That is, we should ask why care whether we can trust our artifacts, AI, or fellow humans, as that can help to illuminate features of trust that are shared across domains, trustors, and trustees. I contend that one reason to desire trust is an \"almost-necessary\" condition on ethical action: namely, that the user has a reasonable belief that the system (whether human or machine) will behave approximately as intended. This condition is obviously not sufficient for ethical use, nor is it strictly necessary since the best available option might nonetheless be one for which the user lacks appropriate reasonable beliefs. Nonetheless, it provides a reasonable starting point for an analysis of 'trust'. More precisely, I propose that this condition indicates a role for trust as providing precisely those reasonable beliefs, at least when we have appropriately grounded trust. That is, we can understand 'appropriate trust' as obtaining when the trustor has justified beliefs that the trustee has suitable dispositions. As there is variation in the trustor's goals and values, and also the openness of the context of use, then different specific versions of 'appropriate trust' result as those variations lead to different types of focal dispositions, specific dispositions, or observability of dispositions, respectively. For example, in an open context (i.e., one where the possibilities cannot be exhaustively enumerated), the trustee's full dispositions will not be directly observable, but rather must be inferred from observations. This framework provides a unification of the different theories of 'trust' developed in different disciplines. Moreover, it provides clarity about one key function of trust, and thereby helps us to understand the value of (appropriate) trust. 
We need to trust our AI systems because that is a precondition for the ethical, responsible use of them.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"78 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3306618.3314228","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 16

Abstract

Trust is one of the most critical relations in our human lives, whether trust in one another, trust in the artifacts that we use every day, or trust in an AI system. Even a cursory examination of the literatures in human-computer interaction, human-robot interaction, and numerous other disciplines reveals a deep, persistent concern with the nature of trust in AI, and the conditions under which it can be generated, reduced, repaired, or influenced. At a high level, we often understand trust as a relation in which the trustor makes themselves vulnerable based on positive expectations about the behavior or intentions of the trustee [1]. For example, when I trust my car to start in the morning, I make myself vulnerable (e.g., I risk being late to work if it does not start) because I have the positive expectation that it actually will start. This high-level characterization is relatively unhelpful, however, particularly given the wide range of disciplines that have examined the relation of trust, ranging from organizational behavior to game theory to ethics to cognitive science. The picture that emerges from, for example, social psychology (i.e., two distinct kinds of trust depending on whether one knows the trustee's behaviors or intentions/values) appears to be quite different from the one that emerges from moral philosophy (i.e., a single, highly moralized notion), even though both are consistent with this high-level characterization. This talk first introduces that diversity of types of 'trust', but then argues that we can make progress towards a unified characterization by focusing on the function of trust. That is, we should ask why we care whether we can trust our artifacts, AI, or fellow humans, as that can help to illuminate features of trust that are shared across domains, trustors, and trustees. I contend that one reason to desire trust comes from an "almost-necessary" condition on ethical action: namely, that the user has a reasonable belief that the system (whether human or machine) will behave approximately as intended. This condition is obviously not sufficient for ethical use, nor is it strictly necessary, since the best available option might nonetheless be one for which the user lacks appropriate reasonable beliefs. Nonetheless, it provides a reasonable starting point for an analysis of 'trust'. More precisely, I propose that this condition indicates a role for trust as providing precisely those reasonable beliefs, at least when we have appropriately grounded trust. That is, we can understand 'appropriate trust' as obtaining when the trustor has justified beliefs that the trustee has suitable dispositions. Because there is variation in the trustor's goals and values, as well as in the openness of the context of use, different specific versions of 'appropriate trust' result, as those variations lead to different types of focal dispositions, specific dispositions, or observability of dispositions, respectively. For example, in an open context (i.e., one where the possibilities cannot be exhaustively enumerated), the trustee's full dispositions will not be directly observable, but rather must be inferred from observations. This framework provides a unification of the different theories of 'trust' developed in different disciplines. Moreover, it provides clarity about one key function of trust, and thereby helps us to understand the value of (appropriate) trust. We need to trust our AI systems because that is a precondition for their ethical, responsible use.