The application of eXplainable artificial intelligence in studying cognition: A scoping review

Shakran Mahmood, Colin Teo, Jeremy Sim, Wei Zhang, Jiang Muyun, R. Bhuvana, Kejia Teo, Tseng Tsai Yeo, Jia Lu, Balazs Gulyas, Cuntai Guan
{"title":"可解释人工智能在认知研究中的应用:范围综述","authors":"Shakran Mahmood, Colin Teo, Jeremy Sim, Wei Zhang, Jiang Muyun, R. Bhuvana, Kejia Teo, Tseng Tsai Yeo, Jia Lu, Balazs Gulyas, Cuntai Guan","doi":"10.1002/ibra.12174","DOIUrl":null,"url":null,"abstract":"<p>The rapid advancement of artificial intelligence (AI) has sparked renewed discussions on its trustworthiness and the concept of eXplainable AI (XAI). Recent research in neuroscience has emphasized the relevance of XAI in studying cognition. This scoping review aims to identify and analyze various XAI methods used to study the mechanisms and features of cognitive function and dysfunction. In this study, the collected evidence is qualitatively assessed to develop an effective framework for approaching XAI in cognitive neuroscience. Based on the Joanna Briggs Institute and preferred reporting items for systematic reviews and meta-analyses extension for scoping review guidelines, we searched for peer-reviewed articles on MEDLINE, Embase, Web of Science, Cochrane Central Register of Controlled Trials, and Google Scholar. Two reviewers performed data screening, extraction, and thematic analysis in parallel. Twelve eligible experimental studies published in the past decade were included. The results showed that the majority (75%) focused on normal cognitive functions such as perception, social cognition, language, executive function, and memory, while others (25%) examined impaired cognition. The predominant XAI methods employed were intrinsic XAI (58.3%), followed by attribution-based (41.7%) and example-based (8.3%) post hoc methods. Explainability was applied at a local (66.7%) or global (33.3%) scope. The findings, predominantly correlational, were anatomical (83.3%) or nonanatomical (16.7%). In conclusion, while these XAI techniques were lauded for their predictive power, robustness, testability, and plausibility, limitations included oversimplification, confounding factors, and inconsistencies. The reviewed studies showcased the potential of XAI models while acknowledging current challenges in causality and oversimplification, particularly emphasizing the need for reproducibility.</p>","PeriodicalId":94030,"journal":{"name":"Ibrain","volume":"10 3","pages":"245-265"},"PeriodicalIF":0.0000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ibra.12174","citationCount":"0","resultStr":"{\"title\":\"The application of eXplainable artificial intelligence in studying cognition: A scoping review\",\"authors\":\"Shakran Mahmood, Colin Teo, Jeremy Sim, Wei Zhang, Jiang Muyun, R. Bhuvana, Kejia Teo, Tseng Tsai Yeo, Jia Lu, Balazs Gulyas, Cuntai Guan\",\"doi\":\"10.1002/ibra.12174\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The rapid advancement of artificial intelligence (AI) has sparked renewed discussions on its trustworthiness and the concept of eXplainable AI (XAI). Recent research in neuroscience has emphasized the relevance of XAI in studying cognition. This scoping review aims to identify and analyze various XAI methods used to study the mechanisms and features of cognitive function and dysfunction. In this study, the collected evidence is qualitatively assessed to develop an effective framework for approaching XAI in cognitive neuroscience. 
Based on the Joanna Briggs Institute and preferred reporting items for systematic reviews and meta-analyses extension for scoping review guidelines, we searched for peer-reviewed articles on MEDLINE, Embase, Web of Science, Cochrane Central Register of Controlled Trials, and Google Scholar. Two reviewers performed data screening, extraction, and thematic analysis in parallel. Twelve eligible experimental studies published in the past decade were included. The results showed that the majority (75%) focused on normal cognitive functions such as perception, social cognition, language, executive function, and memory, while others (25%) examined impaired cognition. The predominant XAI methods employed were intrinsic XAI (58.3%), followed by attribution-based (41.7%) and example-based (8.3%) post hoc methods. Explainability was applied at a local (66.7%) or global (33.3%) scope. The findings, predominantly correlational, were anatomical (83.3%) or nonanatomical (16.7%). In conclusion, while these XAI techniques were lauded for their predictive power, robustness, testability, and plausibility, limitations included oversimplification, confounding factors, and inconsistencies. The reviewed studies showcased the potential of XAI models while acknowledging current challenges in causality and oversimplification, particularly emphasizing the need for reproducibility.</p>\",\"PeriodicalId\":94030,\"journal\":{\"name\":\"Ibrain\",\"volume\":\"10 3\",\"pages\":\"245-265\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ibra.12174\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Ibrain\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ibra.12174\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ibrain","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ibra.12174","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The rapid advancement of artificial intelligence (AI) has sparked renewed discussion of its trustworthiness and of the concept of eXplainable AI (XAI). Recent research in neuroscience has emphasized the relevance of XAI to the study of cognition. This scoping review aims to identify and analyze the XAI methods used to study the mechanisms and features of cognitive function and dysfunction. The collected evidence is qualitatively assessed to develop an effective framework for approaching XAI in cognitive neuroscience. Following the Joanna Briggs Institute (JBI) methodology and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines, we searched for peer-reviewed articles in MEDLINE, Embase, Web of Science, the Cochrane Central Register of Controlled Trials, and Google Scholar. Two reviewers performed data screening, extraction, and thematic analysis in parallel. Twelve eligible experimental studies published in the past decade were included. The majority (75%) focused on normal cognitive functions such as perception, social cognition, language, executive function, and memory, while the remainder (25%) examined impaired cognition. The predominant XAI methods were intrinsic XAI (58.3%), followed by attribution-based (41.7%) and example-based (8.3%) post hoc methods. Explainability was applied at a local (66.7%) or global (33.3%) scope. The findings, predominantly correlational, were anatomical (83.3%) or nonanatomical (16.7%). In conclusion, while these XAI techniques were lauded for their predictive power, robustness, testability, and plausibility, their limitations included oversimplification, confounding factors, and inconsistencies. The reviewed studies showcase the potential of XAI models while acknowledging current challenges in causality and oversimplification, and they particularly emphasize the need for reproducibility.
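To make the taxonomy in the abstract concrete, the sketch below contrasts an intrinsic XAI model (a linear classifier whose coefficients act as a global explanation) with a simple post hoc, attribution-based explanation computed at local scope for one input. This is an illustrative example only, not code from any of the reviewed studies; the synthetic data, the perturbation size, and the use of scikit-learn are all assumptions.

```python
# Minimal, hypothetical sketch of the intrinsic vs. post hoc distinction.
# Assumes NumPy and scikit-learn; data are synthetic, not from the review.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Intrinsic XAI: a linear model is interpretable by construction --
# each fitted coefficient is a global, model-wide feature weight.
model = LogisticRegression(max_iter=1000).fit(X, y)
print("global coefficients:", np.round(model.coef_.ravel(), 4))

# Post hoc, attribution-based XAI at local scope: perturb each feature
# of a single sample and record the change in predicted probability.
sample = X[0:1]
baseline = model.predict_proba(sample)[0, 1]
attributions = []
for j in range(X.shape[1]):
    perturbed = sample.copy()
    perturbed[0, j] += 0.1  # small perturbation of feature j (assumed size)
    attributions.append(model.predict_proba(perturbed)[0, 1] - baseline)
print("local attributions for sample 0:", np.round(attributions, 4))
```

In the review's terms, the coefficient readout is intrinsic and global in scope, while the perturbation loop is a local, post hoc attribution; gradient-based saliency or SHAP values would play the same role for nonlinear models.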