Explainable artificial intelligence approaches for brain-computer interfaces: a review and design space.

Param Rajpura, Hubert Cecotti, Yogesh Kumar Meena
{"title":"用于脑机接口的可解释人工智能方法:回顾与设计空间。","authors":"Param Rajpura, Hubert Cecotti, Yogesh Kumar Meena","doi":"10.1088/1741-2552/ad6593","DOIUrl":null,"url":null,"abstract":"<p><p><i>Objective.</i>This review paper provides an integrated perspective of Explainable Artificial Intelligence (XAI) techniques applied to Brain-Computer Interfaces (BCIs). BCIs use predictive models to interpret brain signals for various high-stake applications. However, achieving explainability in these complex models is challenging as it compromises accuracy. Trust in these models can be established by incorporating reasoning or causal relationships from domain experts. The field of XAI has emerged to address the need for explainability across various stakeholders, but there is a lack of an integrated perspective in XAI for BCI (XAI4BCI) literature. It is necessary to differentiate key concepts like explainability, interpretability, and understanding, often used interchangeably in this context, and formulate a comprehensive framework.<i>Approach.</i>To understand the need of XAI for BCI, we pose six key research questions for a systematic review and meta-analysis, encompassing its purposes, applications, usability, and technical feasibility. We employ the PRISMA methodology-preferred reporting items for systematic reviews and meta-analyses to review (<i>n</i> = 1246) and analyse (<i>n</i> = 84) studies published in 2015 and onwards for key insights.<i>Main results.</i>The results highlight that current research primarily focuses on interpretability for developers and researchers, aiming to justify outcomes and enhance model performance. We discuss the unique approaches, advantages, and limitations of XAI4BCI from the literature. We draw insights from philosophy, psychology, and social sciences. We propose a design space for XAI4BCI, considering the evolving need to visualise and investigate predictive model outcomes customised for various stakeholders in the BCI development and deployment lifecycle.<i>Significance.</i>This paper is the first to focus solely on reviewing XAI4BCI research articles. This systematic review and meta-analysis findings with the proposed design space prompt important discussions on establishing standards for BCI explanations, highlighting current limitations, and guiding the future of XAI in BCI.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Explainable artificial intelligence approaches for brain-computer interfaces: a review and design space.\",\"authors\":\"Param Rajpura, Hubert Cecotti, Yogesh Kumar Meena\",\"doi\":\"10.1088/1741-2552/ad6593\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p><i>Objective.</i>This review paper provides an integrated perspective of Explainable Artificial Intelligence (XAI) techniques applied to Brain-Computer Interfaces (BCIs). BCIs use predictive models to interpret brain signals for various high-stake applications. However, achieving explainability in these complex models is challenging as it compromises accuracy. Trust in these models can be established by incorporating reasoning or causal relationships from domain experts. The field of XAI has emerged to address the need for explainability across various stakeholders, but there is a lack of an integrated perspective in XAI for BCI (XAI4BCI) literature. 
It is necessary to differentiate key concepts like explainability, interpretability, and understanding, often used interchangeably in this context, and formulate a comprehensive framework.<i>Approach.</i>To understand the need of XAI for BCI, we pose six key research questions for a systematic review and meta-analysis, encompassing its purposes, applications, usability, and technical feasibility. We employ the PRISMA methodology-preferred reporting items for systematic reviews and meta-analyses to review (<i>n</i> = 1246) and analyse (<i>n</i> = 84) studies published in 2015 and onwards for key insights.<i>Main results.</i>The results highlight that current research primarily focuses on interpretability for developers and researchers, aiming to justify outcomes and enhance model performance. We discuss the unique approaches, advantages, and limitations of XAI4BCI from the literature. We draw insights from philosophy, psychology, and social sciences. We propose a design space for XAI4BCI, considering the evolving need to visualise and investigate predictive model outcomes customised for various stakeholders in the BCI development and deployment lifecycle.<i>Significance.</i>This paper is the first to focus solely on reviewing XAI4BCI research articles. This systematic review and meta-analysis findings with the proposed design space prompt important discussions on establishing standards for BCI explanations, highlighting current limitations, and guiding the future of XAI in BCI.</p>\",\"PeriodicalId\":94096,\"journal\":{\"name\":\"Journal of neural engineering\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of neural engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1088/1741-2552/ad6593\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of neural engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1088/1741-2552/ad6593","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract


Objective. This review paper provides an integrated perspective on Explainable Artificial Intelligence (XAI) techniques applied to Brain-Computer Interfaces (BCIs). BCIs use predictive models to interpret brain signals for various high-stakes applications. However, achieving explainability in these complex models is challenging, as it often trades off against accuracy. Trust in these models can be established by incorporating reasoning or causal relationships from domain experts. The field of XAI has emerged to address the need for explainability across various stakeholders, but the XAI for BCI (XAI4BCI) literature lacks an integrated perspective. It is necessary to differentiate key concepts such as explainability, interpretability, and understanding, which are often used interchangeably in this context, and to formulate a comprehensive framework.

Approach. To understand the need for XAI in BCI, we pose six key research questions for a systematic review and meta-analysis, encompassing purposes, applications, usability, and technical feasibility. We employ the PRISMA methodology (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) to review (n = 1246) and analyse (n = 84) studies published from 2015 onwards for key insights.

Main results. The results highlight that current research primarily focuses on interpretability for developers and researchers, aiming to justify outcomes and enhance model performance. We discuss the unique approaches, advantages, and limitations of XAI4BCI reported in the literature, drawing insights from philosophy, psychology, and the social sciences. We propose a design space for XAI4BCI, considering the evolving need to visualise and investigate predictive model outcomes customised for the various stakeholders in the BCI development and deployment lifecycle.

Significance. This paper is the first to focus solely on reviewing XAI4BCI research articles. The findings of this systematic review and meta-analysis, together with the proposed design space, prompt important discussions on establishing standards for BCI explanations, highlight current limitations, and guide the future of XAI in BCI.
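For readers unfamiliar with what a post hoc explanation of a BCI decoder looks like in practice, the sketch below is a minimal, hypothetical illustration, not a method taken from the reviewed paper: it applies permutation feature importance, a common model-agnostic XAI technique, to a toy classifier trained on synthetic EEG band-power features. The feature names, data, and model are assumptions made purely for the example; only scikit-learn's public API is used.

```python
# Minimal sketch (hypothetical data and model): explaining a toy EEG
# classifier with permutation feature importance from scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic "band-power" features for 200 trials: delta, theta, alpha, beta.
# The alpha band is made (noisily) predictive of the binary class label.
feature_names = ["delta", "theta", "alpha", "beta"]
X = rng.normal(size=(200, 4))
y = (X[:, 2] + 0.5 * rng.normal(size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = LogisticRegression().fit(X_train, y_train)

# Permutation importance: how much does test accuracy drop when a single
# feature column is shuffled? Larger drops mean the model relies on it.
result = permutation_importance(
    clf, X_test, y_test, n_repeats=30, random_state=0
)
for name, mean, std in zip(
    feature_names, result.importances_mean, result.importances_std
):
    print(f"{name:>6}: {mean:.3f} +/- {std:.3f}")
```

In a real XAI4BCI setting, the same pattern would be applied to actual EEG features and a trained decoder, and the resulting importances would be presented differently for different stakeholders, e.g. as topographic maps for researchers or plain-language summaries for end users, which is the kind of stakeholder-specific customisation the proposed design space addresses.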
