Establishing and evaluating trustworthy AI: overview and research challenges.

Frontiers in Big Data (IF 2.4, Q3, Computer Science, Information Systems). Publication date: 2024-11-29; eCollection date: 2024-01-01. DOI: 10.3389/fdata.2024.1467222
Dominik Kowald, Sebastian Scher, Viktoria Pammer-Schindler, Peter Müllner, Kerstin Waxnegger, Lea Demelius, Angela Fessl, Maximilian Toller, Inti Gabriel Mendoza Estrada, Ilija Šimić, Vedran Sabol, Andreas Trügler, Eduardo Veas, Roman Kern, Tomislav Nad, Simone Kopeinik
{"title":"Establishing and evaluating trustworthy AI: overview and research challenges.","authors":"Dominik Kowald, Sebastian Scher, Viktoria Pammer-Schindler, Peter Müllner, Kerstin Waxnegger, Lea Demelius, Angela Fessl, Maximilian Toller, Inti Gabriel Mendoza Estrada, Ilija Šimić, Vedran Sabol, Andreas Trügler, Eduardo Veas, Roman Kern, Tomislav Nad, Simone Kopeinik","doi":"10.3389/fdata.2024.1467222","DOIUrl":null,"url":null,"abstract":"<p><p>Artificial intelligence (AI) technologies (re-)shape modern life, driving innovation in a wide range of sectors. However, some AI systems have yielded unexpected or undesirable outcomes or have been used in questionable manners. As a result, there has been a surge in public and academic discussions about aspects that AI systems must fulfill to be considered trustworthy. In this paper, we synthesize existing conceptualizations of trustworthy AI along six requirements: (1) human agency and oversight, (2) fairness and non-discrimination, (3) transparency and explainability, (4) robustness and accuracy, (5) privacy and security, and (6) accountability. For each one, we provide a definition, describe how it can be established and evaluated, and discuss requirement-specific research challenges. Finally, we conclude this analysis by identifying overarching research challenges across the requirements with respect to (1) interdisciplinary research, (2) conceptual clarity, (3) context-dependency, (4) dynamics in evolving systems, and (5) investigations in real-world contexts. Thus, this paper synthesizes and consolidates a wide-ranging and active discussion currently taking place in various academic sub-communities and public forums. It aims to serve as a reference for a broad audience and as a basis for future research directions.</p>","PeriodicalId":52859,"journal":{"name":"Frontiers in Big Data","volume":"7 ","pages":"1467222"},"PeriodicalIF":2.4000,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11638207/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Big Data","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/fdata.2024.1467222","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Artificial intelligence (AI) technologies (re-)shape modern life, driving innovation in a wide range of sectors. However, some AI systems have yielded unexpected or undesirable outcomes or have been used in questionable ways. As a result, there has been a surge in public and academic discussions about aspects that AI systems must fulfill to be considered trustworthy. In this paper, we synthesize existing conceptualizations of trustworthy AI along six requirements: (1) human agency and oversight, (2) fairness and non-discrimination, (3) transparency and explainability, (4) robustness and accuracy, (5) privacy and security, and (6) accountability. For each one, we provide a definition, describe how it can be established and evaluated, and discuss requirement-specific research challenges. Finally, we conclude this analysis by identifying overarching research challenges across the requirements with respect to (1) interdisciplinary research, (2) conceptual clarity, (3) context-dependency, (4) dynamics in evolving systems, and (5) investigations in real-world contexts. Thus, this paper synthesizes and consolidates a wide-ranging and active discussion currently taking place in various academic sub-communities and public forums. It aims to serve as a reference for a broad audience and as a basis for future research directions.
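As a loose illustration of what "establishing and evaluating" such requirements can look like in practice, the following minimal Python sketch (not taken from the paper; the metric choices, toy data, and function names are illustrative assumptions) computes two simple quantities for a binary classifier's outputs: a demographic-parity difference as one possible fairness check (requirement 2) and plain accuracy (part of requirement 4).

# Hypothetical sketch: evaluating two trustworthy-AI requirements on toy data.
# The metrics and data here are illustrative only, not the paper's methodology.

from typing import Sequence


def accuracy(y_true: Sequence[int], y_pred: Sequence[int]) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)


def demographic_parity_difference(y_pred: Sequence[int],
                                  group: Sequence[str]) -> float:
    """Absolute gap in positive-prediction rates between two groups."""
    groups = sorted(set(group))
    assert len(groups) == 2, "sketch assumes exactly two protected groups"
    rates = []
    for g in groups:
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        rates.append(sum(preds) / len(preds))
    return abs(rates[0] - rates[1])


if __name__ == "__main__":
    # Toy labels, model predictions, and a protected attribute per instance.
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
    group = ["a", "a", "a", "a", "b", "b", "b", "b"]

    print(f"accuracy: {accuracy(y_true, y_pred):.2f}")
    print(f"demographic parity difference: "
          f"{demographic_parity_difference(y_pred, group):.2f}")

In practice, the paper argues that such metrics are only one ingredient: which fairness notion, robustness test, or privacy guarantee is appropriate depends on the application context, which is one of the overarching research challenges the authors identify.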
