Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction

Melkamu Mersha, Khang Lam, Joseph Wood, Ali AlShami, Jugal Kalita
{"title":"可解释的人工智能:需求、技术、应用和未来方向调查","authors":"Melkamu Mersha, Khang Lam, Joseph Wood, Ali AlShami, Jugal Kalita","doi":"arxiv-2409.00265","DOIUrl":null,"url":null,"abstract":"Artificial intelligence models encounter significant challenges due to their\nblack-box nature, particularly in safety-critical domains such as healthcare,\nfinance, and autonomous vehicles. Explainable Artificial Intelligence (XAI)\naddresses these challenges by providing explanations for how these models make\ndecisions and predictions, ensuring transparency, accountability, and fairness.\nExisting studies have examined the fundamental concepts of XAI, its general\nprinciples, and the scope of XAI techniques. However, there remains a gap in\nthe literature as there are no comprehensive reviews that delve into the\ndetailed mathematical representations, design methodologies of XAI models, and\nother associated aspects. This paper provides a comprehensive literature review\nencompassing common terminologies and definitions, the need for XAI,\nbeneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI\nmethods in different application areas. 
The survey is aimed at XAI researchers,\nXAI practitioners, AI model developers, and XAI beneficiaries who are\ninterested in enhancing the trustworthiness, transparency, accountability, and\nfairness of their AI models.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction\",\"authors\":\"Melkamu Mersha, Khang Lam, Joseph Wood, Ali AlShami, Jugal Kalita\",\"doi\":\"arxiv-2409.00265\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial intelligence models encounter significant challenges due to their\\nblack-box nature, particularly in safety-critical domains such as healthcare,\\nfinance, and autonomous vehicles. Explainable Artificial Intelligence (XAI)\\naddresses these challenges by providing explanations for how these models make\\ndecisions and predictions, ensuring transparency, accountability, and fairness.\\nExisting studies have examined the fundamental concepts of XAI, its general\\nprinciples, and the scope of XAI techniques. However, there remains a gap in\\nthe literature as there are no comprehensive reviews that delve into the\\ndetailed mathematical representations, design methodologies of XAI models, and\\nother associated aspects. This paper provides a comprehensive literature review\\nencompassing common terminologies and definitions, the need for XAI,\\nbeneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI\\nmethods in different application areas. 
The survey is aimed at XAI researchers,\\nXAI practitioners, AI model developers, and XAI beneficiaries who are\\ninterested in enhancing the trustworthiness, transparency, accountability, and\\nfairness of their AI models.\",\"PeriodicalId\":501479,\"journal\":{\"name\":\"arXiv - CS - Artificial Intelligence\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.00265\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.00265","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Artificial intelligence models encounter significant challenges due to their black-box nature, particularly in safety-critical domains such as healthcare, finance, and autonomous vehicles. Explainable Artificial Intelligence (XAI) addresses these challenges by providing explanations for how these models make decisions and predictions, ensuring transparency, accountability, and fairness. Existing studies have examined the fundamental concepts of XAI, its general principles, and the scope of XAI techniques. However, there remains a gap in the literature as there are no comprehensive reviews that delve into the detailed mathematical representations, design methodologies of XAI models, and other associated aspects. This paper provides a comprehensive literature review encompassing common terminologies and definitions, the need for XAI, beneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI methods in different application areas. The survey is aimed at XAI researchers, XAI practitioners, AI model developers, and XAI beneficiaries who are interested in enhancing the trustworthiness, transparency, accountability, and fairness of their AI models.
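To make concrete what "providing explanations for how these models make decisions" can mean in practice, below is a minimal sketch of one well-known model-agnostic technique, permutation feature importance, which treats the model as an opaque predictor and measures how much its error grows when one input feature is scrambled. The toy model and all names here are illustrative assumptions for this note, not methods taken from the surveyed paper.

```python
import random

# Toy "black-box" model: the explainer below never inspects its internals,
# it only calls predict(). True weights: 3.0 on feature 0, 0.5 on feature 1.
def predict(x):
    return 3.0 * x[0] + 0.5 * x[1]

def mse(model, X, y):
    """Mean squared error of the model over a dataset."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, n_repeats=20, seed=0):
    """Mean increase in MSE when one feature column is randomly shuffled.

    A larger increase means the model relies more on that feature.
    """
    rng = random.Random(seed)
    base = mse(model, X, y)
    increases = []
    for _ in range(n_repeats):
        col = [x[feature] for x in X]
        rng.shuffle(col)
        X_perm = [list(x) for x in X]      # copy rows, then overwrite one column
        for row, v in zip(X_perm, col):
            row[feature] = v
        increases.append(mse(model, X_perm, y) - base)
    return sum(increases) / n_repeats

# Synthetic data; labels come from the model itself, so the base error is zero.
rng = random.Random(1)
X = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(200)]
y = [predict(x) for x in X]

imp = [permutation_importance(predict, X, y, f) for f in range(2)]
print(imp)  # feature 0 (weight 3.0) should dominate feature 1 (weight 0.5)
```

Because the explainer only needs query access to `predict()`, the same loop applies unchanged to any classifier or regressor, which is why post-hoc, model-agnostic methods of this kind feature prominently in XAI taxonomies.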