Cross-modal credibility modelling for EEG-based multimodal emotion recognition

IF 3.7 · CAS Tier 3 (Medicine) · Q2 (Engineering, Biomedical) · Journal of Neural Engineering · Pub Date: 2024-04-11 · DOI: 10.1088/1741-2552/ad3987
Yuzhe Zhang, Huan Liu, Di Wang, Dalin Zhang, Tianyu Lou, Qinghua Zheng, Chai Quek
{"title":"基于脑电图的多模态情感识别的跨模态可信度建模","authors":"Yuzhe Zhang, Huan Liu, Di Wang, Dalin Zhang, Tianyu Lou, Qinghua Zheng, Chai Quek","doi":"10.1088/1741-2552/ad3987","DOIUrl":null,"url":null,"abstract":"<italic toggle=\"yes\">Objective.</italic> The study of emotion recognition through electroencephalography (EEG) has garnered significant attention recently. Integrating EEG with other peripheral physiological signals may greatly enhance performance in emotion recognition. Nonetheless, existing approaches still suffer from two predominant challenges: modality heterogeneity, stemming from the diverse mechanisms across modalities, and fusion credibility, which arises when one or multiple modalities fail to provide highly credible signals. <italic toggle=\"yes\">Approach.</italic> In this paper, we introduce a novel multimodal physiological signal fusion model that incorporates both intra-inter modality reconstruction and sequential pattern consistency, thereby ensuring a computable and credible EEG-based multimodal emotion recognition. For the modality heterogeneity issue, we first implement a local self-attention transformer to obtain intra-modal features for each respective modality. Subsequently, we devise a pairwise cross-attention transformer to reveal the inter-modal correlations among different modalities, thereby rendering different modalities compatible and diminishing the heterogeneity concern. For the fusion credibility issue, we introduce the concept of sequential pattern consistency to measure whether different modalities evolve in a consistent way. Specifically, we propose to measure the varying trends of different modalities, and compute the inter-modality consistency scores to ascertain fusion credibility. <italic toggle=\"yes\">Main results.</italic> We conduct extensive experiments on two benchmarked datasets (DEAP and MAHNOB-HCI) with the subject-dependent paradigm. For the DEAP dataset, our method improves the accuracy by 4.58%, and the F1 score by 0.63%, compared to the state-of-the-art baseline. Similarly, for the MAHNOB-HCI dataset, our method improves the accuracy by 3.97%, and the F1 score by 4.21%. In addition, we gain much insight into the proposed framework through significance test, ablation experiments, confusion matrices and hyperparameter analysis. Consequently, we demonstrate the effectiveness of the proposed credibility modelling through statistical analysis and carefully designed experiments. <italic toggle=\"yes\">Significance.</italic> All experimental results demonstrate the effectiveness of our proposed architecture and indicate that credibility modelling is essential for multimodal emotion recognition.","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":null,"pages":null},"PeriodicalIF":3.7000,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cross-modal credibility modelling for EEG-based multimodal emotion recognition\",\"authors\":\"Yuzhe Zhang, Huan Liu, Di Wang, Dalin Zhang, Tianyu Lou, Qinghua Zheng, Chai Quek\",\"doi\":\"10.1088/1741-2552/ad3987\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<italic toggle=\\\"yes\\\">Objective.</italic> The study of emotion recognition through electroencephalography (EEG) has garnered significant attention recently. Integrating EEG with other peripheral physiological signals may greatly enhance performance in emotion recognition. 
Nonetheless, existing approaches still suffer from two predominant challenges: modality heterogeneity, stemming from the diverse mechanisms across modalities, and fusion credibility, which arises when one or multiple modalities fail to provide highly credible signals. <italic toggle=\\\"yes\\\">Approach.</italic> In this paper, we introduce a novel multimodal physiological signal fusion model that incorporates both intra-inter modality reconstruction and sequential pattern consistency, thereby ensuring a computable and credible EEG-based multimodal emotion recognition. For the modality heterogeneity issue, we first implement a local self-attention transformer to obtain intra-modal features for each respective modality. Subsequently, we devise a pairwise cross-attention transformer to reveal the inter-modal correlations among different modalities, thereby rendering different modalities compatible and diminishing the heterogeneity concern. For the fusion credibility issue, we introduce the concept of sequential pattern consistency to measure whether different modalities evolve in a consistent way. Specifically, we propose to measure the varying trends of different modalities, and compute the inter-modality consistency scores to ascertain fusion credibility. <italic toggle=\\\"yes\\\">Main results.</italic> We conduct extensive experiments on two benchmarked datasets (DEAP and MAHNOB-HCI) with the subject-dependent paradigm. For the DEAP dataset, our method improves the accuracy by 4.58%, and the F1 score by 0.63%, compared to the state-of-the-art baseline. Similarly, for the MAHNOB-HCI dataset, our method improves the accuracy by 3.97%, and the F1 score by 4.21%. In addition, we gain much insight into the proposed framework through significance test, ablation experiments, confusion matrices and hyperparameter analysis. Consequently, we demonstrate the effectiveness of the proposed credibility modelling through statistical analysis and carefully designed experiments. <italic toggle=\\\"yes\\\">Significance.</italic> All experimental results demonstrate the effectiveness of our proposed architecture and indicate that credibility modelling is essential for multimodal emotion recognition.\",\"PeriodicalId\":16753,\"journal\":{\"name\":\"Journal of neural engineering\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2024-04-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of neural engineering\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1088/1741-2552/ad3987\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of neural engineering","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1088/1741-2552/ad3987","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Objective. The study of emotion recognition through electroencephalography (EEG) has garnered significant attention recently. Integrating EEG with other peripheral physiological signals may greatly enhance performance in emotion recognition. Nonetheless, existing approaches still suffer from two predominant challenges: modality heterogeneity, stemming from the diverse mechanisms across modalities, and fusion credibility, which arises when one or more modalities fail to provide highly credible signals.

Approach. In this paper, we introduce a novel multimodal physiological signal fusion model that incorporates both intra-/inter-modality reconstruction and sequential pattern consistency, thereby ensuring computable and credible EEG-based multimodal emotion recognition. For the modality heterogeneity issue, we first implement a local self-attention transformer to obtain intra-modal features for each modality. Subsequently, we devise a pairwise cross-attention transformer to reveal the inter-modal correlations among the modalities, thereby rendering them compatible and diminishing the heterogeneity concern. For the fusion credibility issue, we introduce the concept of sequential pattern consistency to measure whether different modalities evolve in a consistent way. Specifically, we propose to measure the varying trends of the modalities and compute inter-modality consistency scores to ascertain fusion credibility.

Main results. We conduct extensive experiments on two benchmark datasets (DEAP and MAHNOB-HCI) under the subject-dependent paradigm. On the DEAP dataset, our method improves accuracy by 4.58% and the F1 score by 0.63% over the state-of-the-art baseline. Similarly, on the MAHNOB-HCI dataset, our method improves accuracy by 3.97% and the F1 score by 4.21%. In addition, we gain insight into the proposed framework through significance tests, ablation experiments, confusion matrices, and hyperparameter analysis. Consequently, we demonstrate the effectiveness of the proposed credibility modelling through statistical analysis and carefully designed experiments.

Significance. All experimental results demonstrate the effectiveness of our proposed architecture and indicate that credibility modelling is essential for multimodal emotion recognition.
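The abstract is the only technical description available on this page, but its two-stage attention design can be illustrated concretely. Below is a minimal PyTorch sketch of that pipeline: a local self-attention encoder per modality for intra-modal features, followed by pairwise cross-attention where each modality queries the others. This is not the authors' released implementation; the module layout, dimensions, pooling, and the choice to share one cross-attention block across modality pairs are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class PairwiseCrossAttentionFusion(nn.Module):
    def __init__(self, dim=64, heads=4, n_modalities=2):
        super().__init__()
        # One transformer encoder layer per modality: self-attention extracts
        # intra-modal features independently for each physiological signal.
        self.self_attn = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
            for _ in range(n_modalities)
        )
        # A single cross-attention block (shared across pairs here for brevity)
        # lets each modality query the others, exposing inter-modal correlations.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats):
        # feats: list of (batch, seq_len, dim) tensors, one per modality,
        # e.g. windowed EEG features and a peripheral (GSR/ECG) feature stream.
        intra = [enc(x) for enc, x in zip(self.self_attn, feats)]
        fused = []
        for i, query in enumerate(intra):
            # Keys/values come from all other modalities, concatenated in time.
            others = torch.cat([x for j, x in enumerate(intra) if j != i], dim=1)
            out, _ = self.cross_attn(query, others, others)
            fused.append(out)
        # Mean-pool over time, then concatenate modality embeddings for a
        # downstream emotion-classifier head.
        return torch.cat([f.mean(dim=1) for f in fused], dim=-1)

eeg = torch.randn(8, 30, 64)   # 8 trials, 30 windows, 64-d EEG features
gsr = torch.randn(8, 30, 64)   # matching peripheral-signal features
print(PairwiseCrossAttentionFusion()([eeg, gsr]).shape)  # torch.Size([8, 128])
```

Likewise, one plausible toy reading of "sequential pattern consistency": summarise each modality per time window, take the sign of first differences as its trend, and score cross-modal trend agreement. The trend definition and the use of the score as a fusion weight are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def consistency_score(a, b):
    """a, b: (batch, seq_len) per-window summaries of two modalities
    (e.g. mean EEG band power vs. smoothed skin-conductance level).
    Returns a (batch,) score in [0, 1]; 1 means the trends always agree."""
    trend_a = torch.sign(a[:, 1:] - a[:, :-1])  # +1 rising, -1 falling, 0 flat
    trend_b = torch.sign(b[:, 1:] - b[:, :-1])
    return (trend_a == trend_b).float().mean(dim=1)

# Toy usage: treat the score as a credibility weight on the fused prediction,
# so trials where the modalities evolve inconsistently contribute less.
eeg_level = torch.randn(8, 30).cumsum(dim=1)
gsr_level = torch.randn(8, 30).cumsum(dim=1)
credibility = consistency_score(eeg_level, gsr_level)  # shape: (8,)
```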
Source journal: Journal of Neural Engineering (Engineering, Biomedical)
CiteScore: 7.80
Self-citation rate: 12.50%
Articles published: 319
Review time: 4.2 months
Aims and scope: The goal of Journal of Neural Engineering (JNE) is to act as a forum for the interdisciplinary field of neural engineering where neuroscientists, neurobiologists and engineers can publish their work in one periodical that bridges the gap between neuroscience and engineering. The journal publishes articles in the field of neural engineering at the molecular, cellular and systems levels. The scope of the journal encompasses experimental, computational, theoretical, clinical and applied aspects of: Innovative neurotechnology; Brain-machine (computer) interface; Neural interfacing; Bioelectronic medicines; Neuromodulation; Neural prostheses; Neural control; Neuro-rehabilitation; Neurorobotics; Optical neural engineering; Neural circuits: artificial & biological; Neuromorphic engineering; Neural tissue regeneration; Neural signal processing; Theoretical and computational neuroscience; Systems neuroscience; Translational neuroscience; Neuroimaging.
Latest articles in this journal
PDMS/CNT electrodes with bioamplifier for practical in-the-ear and conventional biosignal recordings.
DOCTer: a novel EEG-based diagnosis framework for disorders of consciousness.
I see artifacts: ICA-based EEG artifact removal does not improve deep network decoding across three BCI tasks.
Integrating spatial and temporal features for enhanced artifact removal in multi-channel EEG recordings.
PD-ARnet: a deep learning approach for Parkinson's disease diagnosis from resting-state fMRI.