Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction
Melkamu Mersha, Khang Lam, Joseph Wood, Ali AlShami, Jugal Kalita
arXiv:2409.00265 (cs.AI), 30 August 2024
Artificial intelligence models encounter significant challenges due to their
black-box nature, particularly in safety-critical domains such as healthcare,
finance, and autonomous vehicles. Explainable Artificial Intelligence (XAI)
addresses these challenges by providing explanations for how these models make
decisions and predictions, ensuring transparency, accountability, and fairness.
Existing studies have examined the fundamental concepts of XAI, its general
principles, and the scope of XAI techniques. However, a gap remains in the
literature: no comprehensive review delves into the detailed mathematical
representations and design methodologies of XAI models, along with other
associated aspects. This paper provides a comprehensive literature review
encompassing common terminologies and definitions, the need for XAI, the
beneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI
methods across different domains. The survey is aimed at XAI researchers,
XAI practitioners, AI model developers, and XAI beneficiaries who are
interested in enhancing the trustworthiness, transparency, accountability, and
fairness of their AI models.
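
As an illustration of the kind of model-agnostic technique such a survey covers, the sketch below computes permutation feature importance: the drop in a model's accuracy when each input feature is shuffled, which serves as a simple post-hoc explanation of which features the model relies on. The model, data, and function names are illustrative assumptions, not drawn from the paper itself.

```python
# Minimal sketch of permutation feature importance, one common model-agnostic
# XAI technique. The toy model and data below are illustrative assumptions.
import numpy as np

def permutation_importance(model_predict, X, y, n_repeats=10, seed=0):
    """Mean drop in accuracy when each feature column is randomly permuted."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(model_predict(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Destroy feature j's relationship with the target.
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drops.append(baseline - np.mean(model_predict(X_perm) == y))
        importances[j] = np.mean(drops)
    return importances

# Toy usage: synthetic data where only feature 0 determines the label.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
predict = lambda X: (X[:, 0] > 0).astype(int)  # stand-in "model"
print(permutation_importance(predict, X, y))   # feature 0 dominates
```

A larger importance score indicates a feature whose corruption hurts predictive accuracy more, which is the intuition behind many post-hoc, model-agnostic explanation methods discussed in XAI taxonomies.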