{"title":"从脑电图到眼动:基于约束对抗网络的双注意跨模态情绪识别","authors":"Yiting Wang;Jia-Wen Liu;Bao-Liang Lu;Wei-Long Zheng","doi":"10.1109/TAFFC.2024.3524418","DOIUrl":null,"url":null,"abstract":"Emotion recognition is a fundamental part of affective computing, obtaining performance gain from multimodal methods. Electroencephalography (EEG) and eye movements are extensively used as they contain complementary information. However, the inconvenient acquisition of EEG is hindering the extensive adoption of multimodal emotion recognition in daily applications while eye movements are more convenient to collect but with lower performance. To tackle this issue, we propose a Constrained Adversarial Network with Dual Attention (CANDA), exploiting the complementary information from multiple modalities during training to improve the test-time performance of single easily acquired modality, i.e., transferring knowledge from a stronger modality to a weaker modality. During training, a common joint space is learned to diminish the distribution discrepancy among different modalities and incorporate the multimodal representations. During test, single modality is converted to the common space achieving comparable performance to multiple modalities. Extensive experiments demonstrate that our model achieves the state-of-the-art performance for cross-modal emotion recognition. Specifically, the mean accuracy increases around 15% on SEED, 15% on SEED-IV, and 2% on SEED-V compared to the latest baseline for emotion recognition. Visualization of features in the joint space illustrates that the distribution of different modalities aligns together with the discriminative ability regarding various emotions.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"16 3","pages":"1543-1556"},"PeriodicalIF":9.8000,"publicationDate":"2024-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"From EEG to Eye Movements: Cross-Modal Emotion Recognition Using Constrained Adversarial Network With Dual Attention\",\"authors\":\"Yiting Wang;Jia-Wen Liu;Bao-Liang Lu;Wei-Long Zheng\",\"doi\":\"10.1109/TAFFC.2024.3524418\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Emotion recognition is a fundamental part of affective computing, obtaining performance gain from multimodal methods. Electroencephalography (EEG) and eye movements are extensively used as they contain complementary information. However, the inconvenient acquisition of EEG is hindering the extensive adoption of multimodal emotion recognition in daily applications while eye movements are more convenient to collect but with lower performance. To tackle this issue, we propose a Constrained Adversarial Network with Dual Attention (CANDA), exploiting the complementary information from multiple modalities during training to improve the test-time performance of single easily acquired modality, i.e., transferring knowledge from a stronger modality to a weaker modality. During training, a common joint space is learned to diminish the distribution discrepancy among different modalities and incorporate the multimodal representations. During test, single modality is converted to the common space achieving comparable performance to multiple modalities. Extensive experiments demonstrate that our model achieves the state-of-the-art performance for cross-modal emotion recognition. 
Specifically, the mean accuracy increases around 15% on SEED, 15% on SEED-IV, and 2% on SEED-V compared to the latest baseline for emotion recognition. Visualization of features in the joint space illustrates that the distribution of different modalities aligns together with the discriminative ability regarding various emotions.\",\"PeriodicalId\":13131,\"journal\":{\"name\":\"IEEE Transactions on Affective Computing\",\"volume\":\"16 3\",\"pages\":\"1543-1556\"},\"PeriodicalIF\":9.8000,\"publicationDate\":\"2024-12-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Affective Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10818647/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Affective Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10818647/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Emotion recognition is a fundamental part of affective computing, and multimodal methods offer notable performance gains. Electroencephalography (EEG) and eye movements are widely used because they carry complementary information. However, the inconvenience of EEG acquisition hinders the widespread adoption of multimodal emotion recognition in everyday applications, whereas eye movements are easier to collect but yield lower performance. To tackle this issue, we propose a Constrained Adversarial Network with Dual Attention (CANDA) that exploits the complementary information of multiple modalities during training to improve the test-time performance of a single, easily acquired modality, i.e., it transfers knowledge from a stronger modality to a weaker one. During training, a common joint space is learned to reduce the distribution discrepancy among modalities and to integrate the multimodal representations. At test time, a single modality is mapped into this common space and achieves performance comparable to that of multiple modalities. Extensive experiments demonstrate that our model achieves state-of-the-art performance for cross-modal emotion recognition. Specifically, the mean accuracy increases by around 15% on SEED, 15% on SEED-IV, and 2% on SEED-V compared with the latest baseline for emotion recognition. Visualization of features in the joint space shows that the distributions of the different modalities align while retaining the ability to discriminate among emotions.
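To make the training scheme described in the abstract more concrete, below is a minimal, hypothetical PyTorch-style sketch of the general adversarial cross-modal alignment idea: modality-specific encoders map EEG and eye-movement features into a shared joint space, a modality discriminator is trained adversarially so the two feature distributions align, and an emotion classifier operates on the joint space. This is an illustrative sketch only, not the authors' CANDA implementation; the feature dimensions, loss weight, and the omitted dual-attention and constraint modules are assumptions made for the example.

# Illustrative sketch only: generic adversarial cross-modal alignment
# (encoders -> shared joint space, modality discriminator, emotion classifier).
# NOT the authors' CANDA; layer sizes, loss weight, and omitted dual-attention
# module are hypothetical placeholders.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps one modality's features into a shared joint space."""
    def __init__(self, in_dim, joint_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, joint_dim))
    def forward(self, x):
        return self.net(x)

# Assumed feature sizes for illustration: 310-d EEG features, 33-d eye features.
eeg_enc, eye_enc = Encoder(310), Encoder(33)
classifier = nn.Linear(64, 3)   # e.g., 3 emotion classes
discriminator = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

cls_loss = nn.CrossEntropyLoss()
adv_loss = nn.BCEWithLogitsLoss()
opt_main = torch.optim.Adam(list(eeg_enc.parameters()) + list(eye_enc.parameters())
                            + list(classifier.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def train_step(eeg_x, eye_x, y):
    # 1) Update the discriminator: tell which modality a joint-space feature came from.
    with torch.no_grad():
        z_eeg, z_eye = eeg_enc(eeg_x), eye_enc(eye_x)
    d_logits = torch.cat([discriminator(z_eeg), discriminator(z_eye)])
    d_labels = torch.cat([torch.ones(len(z_eeg), 1), torch.zeros(len(z_eye), 1)])
    loss_d = adv_loss(d_logits, d_labels)
    opt_disc.zero_grad(); loss_d.backward(); opt_disc.step()

    # 2) Update encoders + classifier: classify emotions while fooling the
    #    discriminator, so both modalities' distributions align in the joint space.
    z_eeg, z_eye = eeg_enc(eeg_x), eye_enc(eye_x)
    loss_cls = cls_loss(classifier(z_eeg), y) + cls_loss(classifier(z_eye), y)
    fool = adv_loss(discriminator(z_eye), torch.ones(len(z_eye), 1))  # eye features mimic EEG
    loss = loss_cls + 0.1 * fool   # 0.1 is an arbitrary illustrative weight
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    return loss_cls.item(), loss_d.item()

# At test time, only the easily acquired modality would be needed:
#   logits = classifier(eye_enc(eye_x_test))

At deployment, only the eye-movement encoder and the classifier would be retained, mirroring the paper's claim that a single, easily acquired modality mapped into the joint space can approach multimodal performance.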
About the journal:
The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.