{"title":"CrossConvPyramid: Deep Multimodal Fusion for Epileptic Magnetoencephalography Spike Detection.","authors":"Liang Zhang, Shurong Sheng, Xiongfei Wang, Jia-Hong Gao, Yi Sun, Kuntao Xiao, Wanli Yang, Pengfei Teng, Guoming Luan, Zhao Lv","doi":"10.1109/JBHI.2025.3538582","DOIUrl":null,"url":null,"abstract":"<p><p>Magnetoencephalography (MEG) is a vital non-invasive tool for epilepsy analysis, as it captures high-resolution signals that reflect changes in brain activity over time. The automated detection of epileptic spikes within these signals can significantly reduce the labor and time required for manual annotation of MEG recording data, thereby aiding clinicians in identifying epileptogenic foci and evaluating treatment prognosis. Research in this domain often utilizes the raw, multi-channel signals from MEG scans for spike detection, commonly neglecting the multi-channel spiking patterns from spatially adjacent channels. Moreover, epileptic spikes share considerable morphological similarities with artifact signals within the recordings, posing a challenge for models to differentiate between the two. In this paper, we introduce a multimodal fusion framework that addresses these two challenges collectively. Instead of relying solely on the signal recordings, our framework also mines knowledge from their corresponding topography-map images, which encapsulate the spatial context and amplitude distribution of the input signals. To facilitate more effective data fusion, we present a novel multimodal feature fusion technique called CrossConvPyramid, built upon a convolutional pyramid architecture augmented by an attention mechanism. It initially employs cross-attention and a convolutional pyramid to encode inter-modal correlations within the intermediate features extracted by individual unimodal networks. 
Subsequently, it utilizes a self-attention mechanism to refine and select the most salient features from both inter-modal and unimodal features, specifically tailored for the spike classification task. Our method achieved the average F1 scores of 92.88% and 95.23% across two distinct real-world MEG datasets from separate centers, respectively outperforming the current state-of-the-art by 2.31% and 0.88%. We plan to release the code on GitHub later.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7000,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Biomedical and Health Informatics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1109/JBHI.2025.3538582","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
CrossConvPyramid: Deep Multimodal Fusion for Epileptic Magnetoencephalography Spike Detection.
Magnetoencephalography (MEG) is a vital non-invasive tool for epilepsy analysis, as it captures high-resolution signals that reflect changes in brain activity over time. Automated detection of epileptic spikes in these signals can significantly reduce the labor and time required for manual annotation of MEG recordings, thereby aiding clinicians in identifying epileptogenic foci and evaluating treatment prognosis. Research in this domain typically uses the raw, multi-channel signals from MEG scans for spike detection while neglecting the multi-channel spiking patterns of spatially adjacent channels. Moreover, epileptic spikes share considerable morphological similarities with artifact signals in the recordings, making it challenging for models to differentiate between the two. In this paper, we introduce a multimodal fusion framework that addresses these two challenges jointly. Instead of relying solely on the signal recordings, our framework also mines knowledge from their corresponding topography-map images, which encapsulate the spatial context and amplitude distribution of the input signals. To facilitate more effective data fusion, we present a novel multimodal feature fusion technique called CrossConvPyramid, built upon a convolutional pyramid architecture augmented with an attention mechanism. It first employs cross-attention and a convolutional pyramid to encode inter-modal correlations within the intermediate features extracted by the individual unimodal networks. It then applies a self-attention mechanism to refine and select the most salient of the inter-modal and unimodal features, specifically tailored to the spike classification task. Our method achieved average F1 scores of 92.88% and 95.23% on two distinct real-world MEG datasets from separate centers, outperforming the current state-of-the-art by 2.31% and 0.88%, respectively. We plan to release the code on GitHub.
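The fusion pipeline the abstract describes (unimodal features from each modality, cross-attention between them, a pyramid over the cross-modal features, then self-attention to select salient features) can be sketched in plain NumPy. This is a minimal illustrative sketch only, not the authors' implementation: the token counts, feature dimension, and the average-pooling stand-in for the convolutional pyramid are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: each query attends over all keys.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

# Hypothetical intermediate features from the two unimodal encoders:
# 64 signal tokens and 64 topography-image tokens, each 32-dimensional.
signal_feats = rng.standard_normal((64, 32))
image_feats = rng.standard_normal((64, 32))

# Cross-attention: each modality queries the other to encode
# inter-modal correlations.
sig2img = attention(signal_feats, image_feats, image_feats)
img2sig = attention(image_feats, signal_feats, signal_feats)

def avg_pool(x, k):
    # Toy stand-in for one level of the convolutional pyramid:
    # downsample the token sequence by averaging groups of k tokens.
    n = x.shape[0] // k
    return x[: n * k].reshape(n, k, -1).mean(axis=1)

pyramid = np.concatenate([avg_pool(sig2img, 2), avg_pool(img2sig, 2)], axis=0)

# Self-attention over the pooled cross-modal tokens together with the
# unimodal tokens, letting the model select the most salient features.
tokens = np.concatenate([pyramid, signal_feats, image_feats], axis=0)
fused = attention(tokens, tokens, tokens)

# Global average pooling yields one feature vector that a classifier
# head would map to spike vs. non-spike.
feature_vec = fused.mean(axis=0)
print(feature_vec.shape)  # (32,)
```

In the actual model, the attention blocks would carry learned query/key/value projections and the pyramid would use strided convolutions; the sketch only shows how the three stages compose.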
Journal introduction:
IEEE Journal of Biomedical and Health Informatics publishes original papers presenting recent advances where information and communication technologies intersect with health, healthcare, life sciences, and biomedicine. Topics include acquisition, transmission, storage, retrieval, management, and analysis of biomedical and health information. The journal covers applications of information technologies in healthcare, patient monitoring, preventive care, early disease diagnosis, therapy discovery, and personalized treatment protocols. It explores electronic medical and health records, clinical information systems, decision support systems, medical and biological imaging informatics, wearable systems, body area/sensor networks, and more. Integration-related topics like interoperability, evidence-based medicine, and secure patient data are also addressed.