
Latest Articles in IEEE Transactions on Affective Computing

Individual-Aware Attention Modulation for Unseen Speaker Emotion Recognition
IF 11.2 | Zone 2, Computer Science | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-11-15 | DOI: 10.1109/taffc.2024.3498937
Yuanbo Fang, Xiaofen Xing, Zhaojie Chu, Yifeng Du, Xiangmin Xu
{"title":"Individual-Aware Attention Modulation for Unseen Speaker Emotion Recognition","authors":"Yuanbo Fang, Xiaofen Xing, Zhaojie Chu, Yifeng Du, Xiangmin Xu","doi":"10.1109/taffc.2024.3498937","DOIUrl":"https://doi.org/10.1109/taffc.2024.3498937","url":null,"abstract":"","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"11230 1","pages":""},"PeriodicalIF":11.2,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142642627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sparse Emotion Dictionary and CWT Spectrogram Fusion with Multi-head Self-Attention for Depression Recognition in Parkinson's Disease Patients
IF 11.2 | Zone 2, Computer Science | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-11-14 | DOI: 10.1109/taffc.2024.3498009
Jian Li, Yuliang Zhao, Yinghao Liu, Huawei Zhang, Peng Shan, Yuanyi Wu, Wanyue Wang, Yulin Wang
{"title":"Sparse Emotion Dictionary and CWT Spectrogram Fusion with Multi-head Self-Attention for Depression Recognition in Parkinson's Disease Patients","authors":"Jian Li, Yuliang Zhao, Yinghao Liu, Huawei Zhang, Peng Shan, Yuanyi Wu, Wanyue Wang, Yulin Wang","doi":"10.1109/taffc.2024.3498009","DOIUrl":"https://doi.org/10.1109/taffc.2024.3498009","url":null,"abstract":"","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"98 1","pages":""},"PeriodicalIF":11.2,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142637239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
EEG-Based Cross-Subject Emotion Recognition Using Sparse Bayesian Learning with Enhanced Covariance Alignment
IF 11.2 | Zone 2, Computer Science | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-11-14 | DOI: 10.1109/taffc.2024.3497897
Wenlong Wang, Feifei Qi, Weichen Huang, Yuanqing Li, Zhuliang Yu, Wei Wu
{"title":"EEG-Based Cross-Subject Emotion Recognition Using Sparse Bayesian Learning with Enhanced Covariance Alignment","authors":"Wenlong Wang, Feifei Qi, Weichen Huang, Yuanqing Li, Zhuliang Yu, Wei Wu","doi":"10.1109/taffc.2024.3497897","DOIUrl":"https://doi.org/10.1109/taffc.2024.3497897","url":null,"abstract":"","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"160 1","pages":""},"PeriodicalIF":11.2,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142637288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Low-Rank Matching Attention Based Cross-Modal Feature Fusion Method for Conversational Emotion Recognition
IF 11.2 | Zone 2, Computer Science | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-11-14 | DOI: 10.1109/taffc.2024.3498443
Yuntao Shou, Huan Liu, Xiangyong Cao, Deyu Meng, Bo Dong
{"title":"A Low-Rank Matching Attention Based Cross-Modal Feature Fusion Method for Conversational Emotion Recognition","authors":"Yuntao Shou, Huan Liu, Xiangyong Cao, Deyu Meng, Bo Dong","doi":"10.1109/taffc.2024.3498443","DOIUrl":"https://doi.org/10.1109/taffc.2024.3498443","url":null,"abstract":"","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"46 1","pages":""},"PeriodicalIF":11.2,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142637243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Image-to-Text Conversion and Aspect-Oriented Filtration for Multimodal Aspect-Based Sentiment Analysis
IF 9.6 | Zone 2, Computer Science | Q1 Computer Science, Artificial Intelligence | Pub Date: 2023-11-15 | DOI: 10.1109/TAFFC.2023.3333200
Qianlong Wang;Hongling Xu;Zhiyuan Wen;Bin Liang;Min Yang;Bing Qin;Ruifeng Xu
Multimodal aspect-based sentiment analysis (MABSA) aims to determine the sentiment polarity of each aspect mentioned in the text based on multimodal content. Various approaches have been proposed to model multimodal sentiment features for each aspect via modal interactions. However, most existing approaches have two shortcomings: (1) The representation gap between textual and visual modalities may increase the risk of misalignment in modal interactions; (2) In some examples where the image is not related to the text, the visual information may not enrich the textual modality when learning aspect-based sentiment features. In such cases, blindly leveraging visual information may introduce noise when reasoning about aspect-based sentiment expressions. To tackle these shortcomings, we propose an end-to-end MABSA framework with image conversion and noise filtration. Specifically, to bridge the representation gap between different modalities, we attempt to translate images into the input space of a pre-trained language model (PLM). To this end, we develop an image-to-text conversion module that can convert an image to an implicit sequence of token embeddings. Moreover, an aspect-oriented filtration module, consisting of two attention operations, is devised to alleviate the noise in the implicit token embeddings. After filtering the noise, we leverage a PLM to encode the text, aspect, and image prompt derived from the filtered implicit token embeddings as sentiment features to perform aspect-based sentiment prediction. Experimental results on two MABSA datasets show that our framework achieves state-of-the-art performance. Furthermore, extensive experimental analysis demonstrates that the proposed framework has superior robustness and efficiency.
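To make the aspect-oriented filtration idea concrete, the sketch below projects a pooled image feature into a PLM embedding space and applies aspect-guided attention followed by a learned gate to suppress aspect-irrelevant visual content. It is a minimal PyTorch sketch under assumed dimensions; the class and attribute names (AspectOrientedFiltration, to_tokens, gate) are hypothetical and this is not the authors' implementation.

```python
import torch.nn as nn


class AspectOrientedFiltration(nn.Module):
    """Hypothetical sketch: convert an image to implicit token embeddings and
    filter them with aspect-guided attention plus a learned gate."""

    def __init__(self, image_dim=2048, plm_dim=768, num_visual_tokens=16):
        super().__init__()
        # Image-to-text conversion: map a pooled image feature to a short
        # sequence of implicit token embeddings in the PLM input space.
        self.to_tokens = nn.Linear(image_dim, plm_dim * num_visual_tokens)
        self.num_visual_tokens = num_visual_tokens
        self.plm_dim = plm_dim
        # First attention operation: the aspect queries the visual tokens.
        self.cross_attn = nn.MultiheadAttention(plm_dim, num_heads=8, batch_first=True)
        # Second operation (a gating step here): down-weight noisy content.
        self.gate = nn.Sequential(nn.Linear(plm_dim, 1), nn.Sigmoid())

    def forward(self, image_feat, aspect_emb):
        # image_feat: (B, image_dim); aspect_emb: (B, L_aspect, plm_dim)
        batch = image_feat.size(0)
        visual_tokens = self.to_tokens(image_feat).view(
            batch, self.num_visual_tokens, self.plm_dim)
        attended, _ = self.cross_attn(aspect_emb, visual_tokens, visual_tokens)
        filtered = self.gate(attended) * attended   # (B, L_aspect, plm_dim)
        # The filtered embeddings would be prepended to the PLM input as an
        # image prompt for aspect-based sentiment prediction.
        return filtered
```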
{"title":"Image-to-Text Conversion and Aspect-Oriented Filtration for Multimodal Aspect-Based Sentiment Analysis","authors":"Qianlong Wang;Hongling Xu;Zhiyuan Wen;Bin Liang;Min Yang;Bing Qin;Ruifeng Xu","doi":"10.1109/TAFFC.2023.3333200","DOIUrl":"10.1109/TAFFC.2023.3333200","url":null,"abstract":"Multimodal aspect-based sentiment analysis (MABSA) aims to determine the sentiment polarity of each aspect mentioned in the text based on multimodal content. Various approaches have been proposed to model multimodal sentiment features for each aspect via modal interactions. However, most existing approaches have two shortcomings: (1) The representation gap between textual and visual modalities may increase the risk of misalignment in modal interactions; (2) In some examples where the image is not related to the text, the visual information may not enrich the textual modality when learning aspect-based sentiment features. In such cases, blindly leveraging visual information may introduce noises in reasoning the aspect-based sentiment expressions. To tackle these shortcomings, we propose an end-to-end MABSA framework with image conversion and noise filtration. Specifically, to bridge the representation gap in different modalities, we attempt to translate images into the input space of a pre-trained language model (PLM). To this end, we develop an image-to-text conversion module that can convert an image to an implicit sequence of token embedding. Moreover, an aspect-oriented filtration module is devised to alleviate the noise in the implicit token embeddings, which consists of two attention operations. After filtering the noise, we leverage a PLM to encode the text, aspect, and image prompt derived from filtered implicit token embeddings as sentiment features to perform aspect-based sentiment prediction. Experimental results on two MABSA datasets show that our framework achieves state-of-the-art performance. Furthermore, extensive experimental analysis demonstrates the proposed framework has superior robustness and efficiency.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"15 3","pages":"1264-1278"},"PeriodicalIF":9.6,"publicationDate":"2023-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135709632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dual Learning for Conversational Emotion Recognition and Emotional Response Generation
IF 9.6 | Zone 2, Computer Science | Q1 Computer Science, Artificial Intelligence | Pub Date: 2023-11-14 | DOI: 10.1109/TAFFC.2023.3332631
Shuhe Zhang;Haifeng Hu;Songlong Xing
Emotion recognition in conversation (ERC) and emotional response generation (ERG) are two important NLP tasks. ERC aims to detect the utterance-level emotion from a dialogue, while ERG focuses on expressing a desired emotion. Essentially, ERC is a classification task, with its input and output domains being the utterance text and emotion labels, respectively. On the other hand, ERG is a generation task with its input and output domains being the opposite. These two tasks are highly related, but surprisingly, they are addressed independently without making use of their duality in prior works. Therefore, in this article, we propose to solve these two tasks in a dual learning framework. Our contributions are fourfold: (1) We propose a dual learning framework for ERC and ERG. (2) Within the proposed framework, two models can be trained jointly, so that the duality between them can be utilised. (3) Instead of a symmetric framework that deals with two tasks of the same data domain, we propose a dual learning framework that performs on a pair of asymmetric input and output spaces, i.e., the natural language space and the emotion labels. (4) Experiments are conducted on benchmark datasets to demonstrate the effectiveness of our framework.
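The duality described in the abstract can be illustrated with a joint training step in which the ERC classifier and the ERG generator are updated together under a probabilistic-duality penalty. The sketch below is a generic dual-supervised-learning formulation in PyTorch; the model interfaces, the marginal estimates log_p_text and log_p_emotion, and the exact penalty are illustrative assumptions adapted to the asymmetric text/label spaces, not the paper's objective.

```python
import torch.nn.functional as F


def dual_learning_step(erc_model, erg_model, optimizer, batch,
                       log_p_text, log_p_emotion, lambda_dual=0.1):
    """One joint update for ERC (utterance -> emotion) and ERG
    (utterance + target emotion -> response). Hypothetical interfaces."""
    text, emotion, response = batch["text"], batch["emotion"], batch["response"]

    # Primal task: classify the utterance-level emotion.
    erc_logits = erc_model(text)                        # (B, num_emotions)
    loss_erc = F.cross_entropy(erc_logits, emotion)

    # Dual task: generate a response that expresses the target emotion.
    erg_logits = erg_model(text, emotion)               # (B, T, vocab)
    loss_erg = F.cross_entropy(erg_logits.transpose(1, 2), response)

    # Duality regularizer (generic form): the two factorizations of the
    # joint log-likelihood should roughly agree across the batch.
    log_p_e_given_t = F.log_softmax(erc_logits, dim=-1).gather(
        1, emotion.unsqueeze(1)).squeeze(1)             # (B,)
    log_p_r_given_e = -F.cross_entropy(
        erg_logits.transpose(1, 2), response, reduction="none").sum(-1)  # (B,)
    duality_gap = (log_p_text + log_p_e_given_t
                   - log_p_emotion - log_p_r_given_e) ** 2

    loss = loss_erc + loss_erg + lambda_dual * duality_gap.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```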
{"title":"Dual Learning for Conversational Emotion Recognition and Emotional Response Generation","authors":"Shuhe Zhang;Haifeng Hu;Songlong Xing","doi":"10.1109/TAFFC.2023.3332631","DOIUrl":"10.1109/TAFFC.2023.3332631","url":null,"abstract":"Emotion recognition in conversation (ERC) and emotional response generation (ERG) are two important NLP tasks. ERC aims to detect the utterance-level emotion from a dialogue, while ERG focuses on expressing a desired emotion. Essentially, ERC is a classification task, with its input and output domains being the utterance text and emotion labels, respectively. On the other hand, ERG is a generation task with its input and output domains being the opposite. These two tasks are highly related, but surprisingly, they are addressed independently without making use of their duality in prior works. Therefore, in this article, we propose to solve these two tasks in a dual learning framework. Our contributions are fourfold: (1) We propose a dual learning framework for ERC and ERG. (2) Within the proposed framework, two models can be trained jointly, so that the duality between them can be utilised. (3) Instead of a symmetric framework that deals with two tasks of the same data domain, we propose a dual learning framework that performs on a pair of asymmetric input and output spaces, i.e., the natural language space and the emotion labels. (4) Experiments are conducted on benchmark datasets to demonstrate the effectiveness of our framework.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"15 3","pages":"1241-1252"},"PeriodicalIF":9.6,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135703713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Empathy by Design: The Influence of Trembling AI Voices on Prosocial Behavior
IF 9.6 | Zone 2, Computer Science | Q1 Computer Science, Artificial Intelligence | Pub Date: 2023-11-14 | DOI: 10.1109/TAFFC.2023.3332742
Fotis Efthymiou;Christian Hildebrand
Recent advances in artificial speech synthesis and machine learning equip AI-powered conversational agents, from voice assistants to social robots, with the ability to mimic human emotional expression during their interactions with users. One unexplored development is the ability to design machine-generated voices that induce varying levels of “shakiness” (i.e., trembling) in the agents’ voices. In the current work, we examine how the trembling voice of a conversational AI impacts users’ perceptions, affective experiences, and their subsequent behavior. Across three studies, we demonstrate that a trembling voice enhances the perceived psychological vulnerability of the agent, followed by a heightened sense of empathic concern, ultimately increasing people's willingness to donate in a prosocial charity context. We provide further evidence from a large-scale field experiment that conversational agents with a trembling voice lead to increased click-through rates and decreased costs-per-impression in an online charity advertising setting. These findings deepen our understanding of the nuanced impact of intentionally designed voices of conversational AI agents on humans and highlight the ethical and societal challenges that arise.
{"title":"Empathy by Design: The Influence of Trembling AI Voices on Prosocial Behavior","authors":"Fotis Efthymiou;Christian Hildebrand","doi":"10.1109/TAFFC.2023.3332742","DOIUrl":"10.1109/TAFFC.2023.3332742","url":null,"abstract":"Recent advances in artificial speech synthesis and machine learning equip AI-powered conversational agents, from voice assistants to social robots, with the ability to mimic human emotional expression during their interactions with users. One unexplored development is the ability to design machine-generated voices that induce varying levels of “shakiness” (i.e., trembling) in the agents’ voices. In the current work, we examine how the trembling voice of a conversational AI impacts users’ perceptions, affective experiences, and their subsequent behavior. Across three studies, we demonstrate that a trembling voice enhances the perceived psychological vulnerability of the agent, followed by a heightened sense of empathic concern, ultimately increasing people's willingness to donate in a prosocial charity context. We provide further evidence from a large-scale field experiment that conversational agents with a trembling voice lead to increased click-through rates and decreased costs-per-impression in an online charity advertising setting. These findings deepen our understanding of the nuanced impact of intentionally designed voices of conversational AI agents on humans and highlight the ethical and societal challenges that arise.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"15 3","pages":"1253-1263"},"PeriodicalIF":9.6,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10316625","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135704990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Annotate Smarter, not Harder: Using Active Learning to Reduce Emotional Annotation Effort
IF 9.6 | Zone 2, Computer Science | Q1 Computer Science, Artificial Intelligence | Pub Date: 2023-11-02 | DOI: 10.1109/TAFFC.2023.3329563
Soraia M. Alarcão;Vânia Mendonça;Cláudia Sevivas;Carolina Maruta;Manuel J. Fonseca
The success of supervised models for emotion recognition on images heavily depends on the availability of properly annotated images. Although millions of images are presently available, only a few are annotated with reliable emotional information. Current emotion recognition solutions either use large amounts of weakly-labeled web images, which often contain noise that is unrelated to the emotions of the image, or transfer learning, which usually results in performance losses. Thus, it would be desirable to know which images are worth annotating, to avoid an extensive annotation effort. In this paper, we propose a novel approach based on active learning to choose which images are most relevant to annotate. Our approach dynamically combines multiple active learning strategies and learns the best ones (without prior knowledge of which they are). Experiments using nine benchmark datasets revealed that: (i) active learning reduces the annotation effort while reaching or surpassing the performance of a supervised baseline with as little as 3% to 18% of the baseline's training set in classification tasks; (ii) our online combination of multiple strategies converges to the performance of the best individual strategies, while avoiding the experimentation overhead needed to identify them.
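One standard way to combine several active-learning strategies online and learn which work best is an adversarial-bandit scheme such as EXP3 over the candidate strategies. The sketch below is a minimal illustration under that assumption; the strategy interface, the reward definition (a validation-accuracy gain scaled to [0, 1]), and the update rule are not taken from the paper.

```python
import numpy as np


class OnlineStrategyCombiner:
    """EXP3-style weighting over several active-learning query strategies
    (e.g., entropy, margin, least-confidence sampling). Illustrative only."""

    def __init__(self, strategies, gamma=0.1, seed=0):
        self.strategies = strategies          # callables: (model, pool, k) -> indices
        self.weights = np.ones(len(strategies))
        self.gamma = gamma
        self.rng = np.random.default_rng(seed)
        self.last_choice = None

    def _probs(self):
        w = self.weights / self.weights.sum()
        return (1 - self.gamma) * w + self.gamma / len(self.strategies)

    def select(self, model, pool, batch_size):
        # Sample one strategy from the current mixture and query with it.
        probs = self._probs()
        self.last_choice = self.rng.choice(len(self.strategies), p=probs)
        return self.strategies[self.last_choice](model, pool, batch_size)

    def update(self, reward):
        # reward in [0, 1], e.g., validation-accuracy gain after labeling the batch.
        probs = self._probs()
        estimate = reward / probs[self.last_choice]   # importance-weighted reward
        self.weights[self.last_choice] *= np.exp(
            self.gamma * estimate / len(self.strategies))
```

In each annotation round one would call select, have annotators label the returned indices, retrain the classifier, and feed the resulting accuracy gain back through update, so the mixture gradually concentrates on the strategies that pay off.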
{"title":"Annotate Smarter, not Harder: Using Active Learning to Reduce Emotional Annotation Effort","authors":"Soraia M. Alarcão;Vânia Mendonça;Cláudia Sevivas;Carolina Maruta;Manuel J. Fonseca","doi":"10.1109/TAFFC.2023.3329563","DOIUrl":"10.1109/TAFFC.2023.3329563","url":null,"abstract":"The success of supervised models for emotion recognition on images heavily depends on the availability of images properly annotated. Although millions of images are presently available, only a few are annotated with reliable emotional information. Current emotion recognition solutions either use large amounts of weakly-labeled web images, which often contain noise that is unrelated to the emotions of the image, or transfer learning, which usually results in performance losses. Thus, it would be desirable to know which images would be useful to be annotated to avoid an extensive annotation effort. In this paper, we propose a novel approach based on active learning to choose which images are more relevant to be annotated. Our approach dynamically combines multiple active learning strategies and learns the best ones (without prior knowledge of the best ones). Experiments using nine benchmark datasets revealed that: (i) active learning allows to reduce the annotation effort, while reaching or surpassing the performance of a supervised baseline with as little as 3% to 18% of the baseline's training set, in classification tasks; (ii) our online combination of multiple strategies converges to the performance of the best individual strategies, while avoiding the experimentation overhead needed to identify them.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"15 3","pages":"1213-1227"},"PeriodicalIF":9.6,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134890813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing EEG-Based Decision-Making Performance Prediction by Maximizing Mutual Information Between Emotion and Decision-Relevant Features
IF 9.6 | Zone 2, Computer Science | Q1 Computer Science, Artificial Intelligence | Pub Date: 2023-11-02 | DOI: 10.1109/TAFFC.2023.3329526
Xinyuan Wang;Danli Wang;Xuange Gao;Yanyan Zhao;Steve C. Chiu
Emotions are important factors in decision-making. With the advent of brain-computer interface (BCI) techniques, researchers have developed a strong interest in predicting decisions based on emotions, which is a challenging task. To predict decision-making performance using emotion, we have proposed the Maximizing Mutual Information between Emotion and Decision relevant features (MMI-ED) method, with three modules: (1) a temporal-spatial encoding module that captures spatial correlation and temporal dependence from electroencephalogram (EEG) signals; (2) a relevant feature decomposition module that extracts emotion-relevant features and decision-relevant features; (3) a relevant feature fusion module that maximizes the mutual information to incorporate useful emotion-related feature information during the decision-making prediction process. To construct a dataset that uses emotions to predict decision-making performance, we designed an experiment involving emotion elicitation and decision-making tasks and collected EEG, behavioral, and subjective data. We compared our model with several emotion recognition and motor imagery models using our dataset. The results demonstrate that our model achieves state-of-the-art performance, with a classification accuracy of 92.96%, which is 6.83% higher than the best-performing comparison model. Furthermore, we conducted an ablation study to demonstrate the validity of each module and provided explanations for the brain regions associated with the relevant features.
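The abstract does not specify which mutual-information estimator the fusion module uses; a common surrogate is an InfoNCE lower bound between the emotion-relevant and decision-relevant features of the same trial, as sketched below in PyTorch. The function name and the batch-contrastive formulation are illustrative assumptions, not the paper's MMI-ED loss.

```python
import torch
import torch.nn.functional as F


def infonce_mi_loss(emotion_feat, decision_feat, temperature=0.1):
    """InfoNCE-style lower bound on the mutual information between the
    emotion-relevant and decision-relevant features of the same EEG trial.
    Minimizing this cross-entropy tightens the bound. Illustrative sketch."""
    z_e = F.normalize(emotion_feat, dim=-1)      # (B, D)
    z_d = F.normalize(decision_feat, dim=-1)     # (B, D)
    logits = z_e @ z_d.t() / temperature         # (B, B) similarity matrix
    # Diagonal pairs (same trial) are positives; other trials are negatives.
    targets = torch.arange(z_e.size(0), device=z_e.device)
    return F.cross_entropy(logits, targets)
```

During training this term would be added to the decision-performance classification loss so that the fused representation retains emotion information that is predictive of the decision outcome.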
{"title":"Enhancing EEG-Based Decision-Making Performance Prediction by Maximizing Mutual Information Between Emotion and Decision-Relevant Features","authors":"Xinyuan Wang;Danli Wang;Xuange Gao;Yanyan Zhao;Steve C. Chiu","doi":"10.1109/TAFFC.2023.3329526","DOIUrl":"10.1109/TAFFC.2023.3329526","url":null,"abstract":"Emotions are important factors in decision-making. With the advent of brain-computer interface (BCI) techniques, researchers developed a strong interest in predicting decisions based on emotions, which is a challenging task. To predict decision-making performance using emotion, we have proposed the Maximizing Mutual Information between Emotion and Decision relevant features (MMI-ED) method, with three modules: (1) Temporal-spatial encoding module captures spatial correlation and temporal dependence from electroencephalogram (EEG) signals; (2) Relevant feature decomposition module extracts emotion-relevant features and decision-relevant features; (3) Relevant feature fusion module maximizes the mutual information to incorporate useful emotion-related feature information during the decision-making prediction process. To construct a dataset that uses emotions to predict decision-making performance, we designed an experiment involving emotion elicitation and decision-making tasks and collected EEG, behavioral, and subjective data. We performed a comparison of our model with several emotion recognition and motion imagery models using our dataset. The results demonstrate that our model achieved state-of-the-art performance, achieving a classification accuracy of 92.96\u0000<inline-formula><tex-math>$%$</tex-math></inline-formula>\u0000. This accuracy is 6.83\u0000<inline-formula><tex-math>$%$</tex-math></inline-formula>\u0000 higher than the best-performing model. Furthermore, we conducted an ablation study to demonstrate the validity of each module and provided explanations for the brain regions associated with the relevant features.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"15 3","pages":"1228-1240"},"PeriodicalIF":9.6,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134890392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
EmoStim: A Database of Emotional Film Clips With Discrete and Componential Assessment
IF 9.6 | Zone 2, Computer Science | Q1 Computer Science, Artificial Intelligence | Pub Date: 2023-10-31 | DOI: 10.1109/TAFFC.2023.3328900
Rukshani Somarathna;Patrik Vuilleumier;Gelareh Mohammadi
Emotion elicitation using emotional film clips is one of the most common and ecologically valid methods in Affective Computing. However, selecting and validating appropriate materials that evoke a range of emotions is challenging. Here, we present EmoStim: A Database of Emotional Film Clips, a film library with rich and varied content. EmoStim is designed for researchers interested in studying emotions in relation to either discrete or componential models of emotion. To create the database, 139 film clips were selected from the literature and then annotated by 638 participants through the CrowdFlower platform. We selected 99 film clips based on the distribution of subjective ratings that effectively distinguished between emotions defined by the discrete model. We show that the selected film clips reliably induce a range of specific emotions according to the discrete model. Further, we describe relationships between emotions, emotion organization in the componential space, and underlying dimensions representing emotional experience. The EmoStim database and participant annotations are freely available for research purposes. The database can be used to further enrich our understanding of emotions and serve as a guide for selecting or creating additional materials.
{"title":"EmoStim: A Database of Emotional Film Clips With Discrete and Componential Assessment","authors":"Rukshani Somarathna;Patrik Vuilleumier;Gelareh Mohammadi","doi":"10.1109/TAFFC.2023.3328900","DOIUrl":"10.1109/TAFFC.2023.3328900","url":null,"abstract":"Emotion elicitation using emotional film clips is one of the most common and ecologically valid methods in Affective Computing. However, selecting and validating appropriate materials that evoke a range of emotions is challenging. Here, we present EmoStim: A Database of Emotional Film Clips as a film library with rich and varied content. EmoStim is designed for researchers interested in studying emotions in relation to either discrete or componential models of emotion. To create the database, 139 film clips were selected from literature and then annotated by 638 participants through the CrowdFlower platform. We selected 99 film clips based on the distribution of subjective ratings that effectively distinguished between emotions defined by the discrete model. We show that the selected film clips reliably induce a range of specific emotions according to the discrete model. Further, we describe relationships between emotions, emotion organization in the componential space, and underlying dimensions representing emotional experience. The EmoStim database and participant annotations are freely available for research purposes. The database can be used to enrich our understanding of emotions further and serve as a guide to select or creating additional materials.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"15 3","pages":"1202-1212"},"PeriodicalIF":9.6,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135312044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0