Emotion Recognition by Learning the Manifold of Fused Multiscale Information of EEG Signals

IF 9.8 · CAS Region 2 (Computer Science) · Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · IEEE Transactions on Affective Computing · Pub Date: 2025-03-28 · DOI: 10.1109/TAFFC.2025.3555226
Cunbo Li;Shuhan Zhang;Yufeng Mu;Lei Yang;Yueheng Peng;Fali Li;Yangsong Zhang;Zhen Liang;Zehong Cao;Feng Wan;Dezhong Yao;Peiyang Li;Peng Xu
IEEE Transactions on Affective Computing, Volume 16, Issue 3, Pages 2172–2188. Journal Article. Not open access.
Citations: 0

Abstract

Recent research has consistently indicated that fusing electroencephalography (EEG) features from multiple modalities can integrate cognitive state expressions across diverse dimensions, resulting in a substantial increase in emotion recognition accuracy. However, redundant information within the fused multimodal features can lead to the curse of dimensionality and overfitting of the learning model. In this work, we propose a multiscale EEG feature fusion and representation strategy for EEG emotion recognition named manifold of multiscale information fusion (MMIF), in which the optimal manifold of the multiscale fusion of local and global brain activation patterns can be automatically learned to realize an efficient representation of emotional EEG signals. To evaluate its performance, both offline and online EEG emotion recognition experiments were conducted, and the experimental results consistently verified the effectiveness and feasibility of MMIF in real-time emotion decoding systems. Furthermore, the analytical experiments confirmed the discriminative capability and cognitive interpretability of MMIF. In summary, the proposed MMIF model may provide an efficient avenue for exploring representations and enhancing the discrimination of multimodal fusion features, and may also offer a promising solution for designing online affective brain-computer interaction systems.
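The abstract's core pipeline — fuse multiscale (local and global) EEG features, then learn a low-dimensional embedding to suppress redundancy — can be illustrated with a minimal sketch. This is not the authors' MMIF method: the feature shapes are invented, and PCA via SVD stands in as a simple linear surrogate for the manifold-learning step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for multiscale EEG features: band powers over short
# (local) and long (global) windows for 32 channels x 5 bands.
# Shapes are illustrative assumptions, not from the paper.
n_trials = 40
local_feats = rng.standard_normal((n_trials, 32 * 5))   # fine-scale patterns
global_feats = rng.standard_normal((n_trials, 32 * 5))  # coarse-scale patterns

# Step 1: fuse the multiscale features by concatenation,
# which doubles the dimensionality and introduces redundancy.
fused = np.hstack([local_feats, global_feats])           # shape (40, 320)

# Step 2: learn a low-dimensional embedding via PCA (SVD) as a
# linear surrogate for manifold learning: project onto the top-k
# principal directions to discard redundant dimensions.
fused_centered = fused - fused.mean(axis=0)
U, S, Vt = np.linalg.svd(fused_centered, full_matrices=False)
k = 10                                                   # embedding dimension
embedding = fused_centered @ Vt[:k].T                    # shape (40, 10)

print(fused.shape, embedding.shape)
```

A nonlinear manifold learner (e.g., an autoencoder or Isomap) would replace Step 2 in practice; the point here is only the fuse-then-embed structure the abstract describes.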
Source journal: IEEE Transactions on Affective Computing (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, CYBERNETICS)
CiteScore: 15.00
Self-citation rate: 6.20%
Articles published: 174
Journal description: The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.
Latest articles in this journal:
- Explainable Affective Body Expression Recognition with Multi-Scale Spatiotemporal Encoding and LLM-Based Reasoning
- Personality Traits and Demographics Analysis in Online Mental Health Discourse
- EEG-Based Emotion Classification Using Deep Capsule Networks for Subject-Independent and Dependent Scenarios
- Nasal Dominance and Nostril Breathing Variability: Potential Biomarkers of Acute Stress
- Charting the Unspoken: Causal Inference-Guided LLM Augmentation for Emotion Recognition in Conversation