{"title":"From EEG to Eye Movements: Cross-Modal Emotion Recognition Using Constrained Adversarial Network With Dual Attention","authors":"Yiting Wang;Jia-Wen Liu;Bao-Liang Lu;Wei-Long Zheng","doi":"10.1109/TAFFC.2024.3524418","DOIUrl":null,"url":null,"abstract":"Emotion recognition is a fundamental part of affective computing, obtaining performance gain from multimodal methods. Electroencephalography (EEG) and eye movements are extensively used as they contain complementary information. However, the inconvenient acquisition of EEG is hindering the extensive adoption of multimodal emotion recognition in daily applications while eye movements are more convenient to collect but with lower performance. To tackle this issue, we propose a Constrained Adversarial Network with Dual Attention (CANDA), exploiting the complementary information from multiple modalities during training to improve the test-time performance of single easily acquired modality, i.e., transferring knowledge from a stronger modality to a weaker modality. During training, a common joint space is learned to diminish the distribution discrepancy among different modalities and incorporate the multimodal representations. During test, single modality is converted to the common space achieving comparable performance to multiple modalities. Extensive experiments demonstrate that our model achieves the state-of-the-art performance for cross-modal emotion recognition. Specifically, the mean accuracy increases around 15% on SEED, 15% on SEED-IV, and 2% on SEED-V compared to the latest baseline for emotion recognition. Visualization of features in the joint space illustrates that the distribution of different modalities aligns together with the discriminative ability regarding various emotions.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"16 3","pages":"1543-1556"},"PeriodicalIF":9.8000,"publicationDate":"2024-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Affective Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10818647/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Emotion recognition is a fundamental part of affective computing and benefits considerably from multimodal methods. Electroencephalography (EEG) and eye movements are widely used because they contain complementary information. However, the inconvenient acquisition of EEG hinders the broad adoption of multimodal emotion recognition in daily applications, while eye movements are easier to collect but yield lower performance. To tackle this issue, we propose a Constrained Adversarial Network with Dual Attention (CANDA), which exploits the complementary information from multiple modalities during training to improve the test-time performance of a single, easily acquired modality, i.e., it transfers knowledge from a stronger modality to a weaker one. During training, a common joint space is learned to diminish the distribution discrepancy among modalities and to incorporate the multimodal representations. At test time, the single modality is mapped into this common space, achieving performance comparable to using multiple modalities. Extensive experiments demonstrate that our model achieves state-of-the-art performance for cross-modal emotion recognition. Specifically, the mean accuracy increases by around 15% on SEED, 15% on SEED-IV, and 2% on SEED-V compared with the latest emotion-recognition baselines. Visualization of features in the joint space shows that the distributions of different modalities align while remaining discriminative across emotions.
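To make the training-versus-test asymmetry described in the abstract concrete, below is a minimal PyTorch sketch of the general idea: both modalities are encoded into a shared joint space, a modality discriminator is trained adversarially so the two distributions align, an emotion classifier operates on the joint representation, and at test time only the eye-movement encoder is used. All module names, feature dimensions, and loss weights are illustrative assumptions, not the authors' actual CANDA implementation; the dual-attention mechanism and the distribution constraints from the paper are omitted for brevity.

```python
# Hedged sketch of adversarial cross-modal training (assumed names and sizes).
import torch
import torch.nn as nn

LATENT_DIM, N_EMOTIONS = 128, 5   # assumed joint-space size; SEED-V has 5 emotion classes

class Encoder(nn.Module):
    """Maps one modality's features into the shared joint space."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Predicts which modality a joint-space embedding came from (the adversary)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, z):
        return self.net(z)

# Hypothetical input dimensionalities for EEG features and eye-movement features.
eeg_enc, eye_enc = Encoder(in_dim=310), Encoder(in_dim=33)
disc = Discriminator()
classifier = nn.Linear(LATENT_DIM, N_EMOTIONS)

ce, bce = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss()
opt_main = torch.optim.Adam(
    list(eeg_enc.parameters()) + list(eye_enc.parameters()) + list(classifier.parameters()),
    lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)

def training_step(eeg_x, eye_x, labels):
    z_eeg, z_eye = eeg_enc(eeg_x), eye_enc(eye_x)

    # 1) Train the discriminator to tell the two modalities apart in the joint space.
    d_loss = (bce(disc(z_eeg.detach()), torch.ones(len(eeg_x), 1)) +
              bce(disc(z_eye.detach()), torch.zeros(len(eye_x), 1)))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Train encoders + classifier: predict emotions from both modalities while
    #    fooling the discriminator, so eye-movement embeddings move toward the
    #    (stronger) EEG distribution and the joint space becomes modality-invariant.
    adv_loss = bce(disc(z_eye), torch.ones(len(eye_x), 1))
    cls_loss = ce(classifier(z_eeg), labels) + ce(classifier(z_eye), labels)
    g_loss = cls_loss + 0.1 * adv_loss   # 0.1 is an arbitrary placeholder weight
    opt_main.zero_grad(); g_loss.backward(); opt_main.step()
    return d_loss.item(), g_loss.item()

@torch.no_grad()
def predict_from_eye_only(eye_x):
    # Test time: only the easily acquired modality (eye movements) is needed.
    return classifier(eye_enc(eye_x)).argmax(dim=1)
```

The key design point the sketch illustrates is that EEG is required only during training: once the joint space is learned, inference runs entirely on eye-movement input, which is what makes the approach attractive for everyday applications where EEG acquisition is impractical.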
Journal description:
The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.