SEED-VII: A Multimodal Dataset of Six Basic Emotions With Continuous Labels for Emotion Recognition

IF 9.8 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · IEEE Transactions on Affective Computing · Pub Date: 2024-10-23 · DOI: 10.1109/TAFFC.2024.3485057
Wei-Bang Jiang;Xuan-Hao Liu;Wei-Long Zheng;Bao-Liang Lu
{"title":"SEED-VII: A Multimodal Dataset of Six Basic Emotions With Continuous Labels for Emotion Recognition","authors":"Wei-Bang Jiang;Xuan-Hao Liu;Wei-Long Zheng;Bao-Liang Lu","doi":"10.1109/TAFFC.2024.3485057","DOIUrl":null,"url":null,"abstract":"Recognizing emotions from physiological signals is a topic that has garnered widespread interest, and research continues to develop novel techniques for perceiving emotions. However, the emergence of deep learning has highlighted the need for comprehensive and high-quality emotional datasets that enable the accurate decoding of human emotions. To systematically explore human emotions, we develop a multimodal dataset consisting of six basic (happiness, sadness, fear, disgust, surprise, and anger) emotions and the neutral emotion, named SEED-VII. This multimodal dataset includes electroencephalography (EEG) and eye movement signals. The seven emotions in SEED-VII are elicited by 80 different videos and fully investigated with continuous labels that indicate the intensity levels of the corresponding emotions. Additionally, we propose a novel Multimodal Adaptive Emotion Transformer (MAET), that can flexibly process both unimodal and multimodal inputs. Adversarial training is utilized in the MAET to mitigate subject discrepancies, which enhances domain generalization. Our extensive experiments, encompassing both subject-dependent and cross-subject conditions, demonstrate the superior performance of the MAET in terms of handling various inputs. Continuous labels are used to filter the data with high emotional intensity, and this strategy is proven to be effective for attaining improved emotion recognition performance. Furthermore, complementary properties between the EEG signals and eye movements and stable neural patterns of the seven emotions are observed.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"16 2","pages":"969-985"},"PeriodicalIF":9.8000,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Affective Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10731546/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Recognizing emotions from physiological signals has garnered widespread interest, and research continues to develop novel techniques for perceiving emotions. However, the emergence of deep learning has highlighted the need for comprehensive, high-quality emotional datasets that enable accurate decoding of human emotions. To systematically explore human emotions, we develop SEED-VII, a multimodal dataset covering the six basic emotions (happiness, sadness, fear, disgust, surprise, and anger) and the neutral emotion. The dataset includes electroencephalography (EEG) and eye movement signals. The seven emotions are elicited by 80 different videos and fully investigated with continuous labels that indicate the intensity of the corresponding emotions. Additionally, we propose a novel Multimodal Adaptive Emotion Transformer (MAET) that can flexibly process both unimodal and multimodal inputs. Adversarial training is used in the MAET to mitigate subject discrepancies, which enhances domain generalization. Extensive experiments under both subject-dependent and cross-subject conditions demonstrate the superior performance of the MAET in handling various inputs. Continuous labels are used to select data with high emotional intensity, a strategy shown to be effective for improving emotion recognition performance. Furthermore, we observe complementary properties between EEG signals and eye movements, as well as stable neural patterns across the seven emotions.
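Two of the techniques named in the abstract lend themselves to a brief illustration: adversarial training that suppresses subject-specific information in the learned features, and the use of continuous labels to keep only high-intensity trials. The PyTorch sketch below is a minimal illustration under assumptions of our own; the feature dimensions, the 0.7 threshold, and all names in it (GradReverse, select_high_intensity, the toy tensors) are hypothetical and are not taken from the authors' MAET implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; negates (and scales) gradients in the
    # backward pass, so a subject classifier trained through this layer
    # pushes the encoder toward subject-invariant features.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def select_high_intensity(features, labels, intensity, threshold=0.7):
    # Keep only trials whose continuous intensity label is high enough.
    # (Threshold 0.7 is an arbitrary choice for this sketch.)
    mask = intensity >= threshold
    return features[mask], labels[mask]

# Toy data: 32 trials, 310-dim EEG features (e.g. 62 channels x 5 bands),
# 7 emotion classes, 20 subjects, one continuous intensity value per trial.
feats = torch.randn(32, 310)
emo = torch.randint(0, 7, (32,))
subj = torch.randint(0, 20, (32,))
intensity = torch.rand(32)

encoder = nn.Linear(310, 64)   # stand-in for a real feature encoder
emo_head = nn.Linear(64, 7)    # emotion classifier
subj_head = nn.Linear(64, 20)  # adversarial subject classifier

# Emotion loss computed on the high-intensity subset only.
f, e = select_high_intensity(feats, emo, intensity)
loss = F.cross_entropy(emo_head(encoder(f)), e)

# Subject loss flows through the gradient-reversal layer (all trials).
z_rev = GradReverse.apply(encoder(feats), 1.0)
loss = loss + F.cross_entropy(subj_head(z_rev), subj)
loss.backward()

The gradient-reversal trick makes the encoder and the subject classifier adversaries: the classifier tries to identify the subject, while the reversed gradients push the encoder to discard whatever made that identification possible. This is one common way to pursue the kind of cross-subject generalization the abstract describes.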
Source journal
IEEE Transactions on Affective Computing
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, CYBERNETICS
CiteScore: 15.00
Self-citation rate: 6.20%
Articles published: 174
About the journal: The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.
Latest articles in this journal
- EmoSENSE: Modeling Sentiment-Semantic Knowledge with Hierarchical Reinforcement Learning for Emotional Image Generation
- Emo-DiT: Emotional Speech Synthesis With a Diffusion Model Approach to Enhance Naturalness and Emotional Expressiveness
- InterARM: Interpretable Affective Reasoning Model for Multimodal Sarcasm Detection
- Exploring canine emotions: A transfer learning and 3DCNN-based study for small databases
- Fine-grained EEG emotion recognition using lite residual convolution-based transformer neural network