It is Not “Accuracy vs. Explainability”—We Need Both for Trustworthy AI Systems

Dragutin Petkovic
{"title":"It is Not “Accuracy vs. Explainability”—We Need Both for Trustworthy AI Systems","authors":"Dragutin Petkovic","doi":"10.1109/TTS.2023.3239921","DOIUrl":null,"url":null,"abstract":"We are witnessing the emergence of an “AI economy and society” where AI technologies and applications are increasingly impacting health care, business, transportation, defense and many aspects of everyday life. Many successes have been reported where AI systems even surpassed the accuracy of human experts. However, AI systems may produce errors, can exhibit bias, may be sensitive to noise in the data, and often lack technical and judicial transparency resulting in reduction in trust and challenges to their adoption. These recent shortcomings and concerns have been documented in both the scientific and general press such as accidents with self-driving cars, biases in healthcare or hiring and face recognition systems for people of color, and seemingly correct decisions later found to be made due to wrong reasons etc. This has resulted in the emergence of many government and regulatory initiatives requiring trustworthy and ethical AI to provide accuracy and robustness, some form of explainability, human control and oversight, elimination of bias, judicial transparency and safety. The challenges in delivery of trustworthy AI systems have motivated intense research on explainable AI systems (XAI). The original aim of XAI is to provide human understandable information of how AI systems make their decisions in order to increase user trust. In this paper we first very briefly summarize current XAI work and then challenge the recent arguments that present “accuracy vs. explainability” as being mutually exclusive and for focusing mainly on deep learning with its limited XAI capabilities. We then present our recommendations for the broad use of XAI in all stages of delivery of high stakes trustworthy AI systems, e.g., development; validation/certification; and trustworthy production and maintenance.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 1","pages":"46-53"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on technology and society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10029927/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

We are witnessing the emergence of an “AI economy and society” in which AI technologies and applications increasingly impact health care, business, transportation, defense, and many aspects of everyday life. Many successes have been reported in which AI systems even surpassed the accuracy of human experts. However, AI systems may produce errors, can exhibit bias, may be sensitive to noise in the data, and often lack technical and judicial transparency, resulting in reduced trust and challenges to their adoption. These shortcomings and concerns have been documented in both the scientific and general press: accidents involving self-driving cars, bias in healthcare, hiring, and face-recognition systems affecting people of color, and seemingly correct decisions later found to have been made for the wrong reasons. This has led to many government and regulatory initiatives requiring trustworthy and ethical AI to provide accuracy and robustness, some form of explainability, human control and oversight, elimination of bias, judicial transparency, and safety. The challenges of delivering trustworthy AI systems have motivated intense research on explainable AI (XAI). The original aim of XAI is to provide human-understandable information about how AI systems make their decisions, in order to increase user trust. In this paper we first briefly summarize current XAI work and then challenge recent arguments that present “accuracy vs. explainability” as mutually exclusive and that focus mainly on deep learning, with its limited XAI capabilities. We then present our recommendations for the broad use of XAI in all stages of delivering high-stakes trustworthy AI systems, e.g., development; validation/certification; and trustworthy production and maintenance.
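To make the notion of “human-understandable information about how AI systems make their decisions” concrete, the sketch below shows one widely used XAI technique: permutation feature importance computed for a trained classifier. This is an illustrative example, not the method proposed in the paper; the dataset, model, and all parameters are assumptions chosen for a self-contained demonstration with scikit-learn.

# Minimal XAI sketch: permutation feature importance for a trained model.
# Illustrative assumptions throughout -- dataset, model, and parameters
# are not from the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a public dataset and hold out a validation split.
data = load_breast_cancer()
X_train, X_val, y_train, y_val = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0
)

# Train an accurate but otherwise opaque ensemble model.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Explain the model globally: how much does randomly shuffling each
# feature degrade validation accuracy? Larger drops mean the model
# relies more heavily on that feature.
result = permutation_importance(
    model, X_val, y_val, n_repeats=10, random_state=0
)

# Report the features ranked by mean importance, in human-readable form.
ranked = sorted(
    zip(data.feature_names, result.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")

Such a ranking is one example of the explanation output XAI aims for: it lets a domain expert check whether an accurate model is also relying on medically plausible features, rather than on spurious correlations, which is exactly the “right answer for the wrong reason” failure the abstract describes.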