A Multimodal Low Complexity Neural Network Approach for Emotion Recognition

Human Behavior and Emerging Technologies · IF 4.3, Q1 (Psychology, Multidisciplinary) · Published: 2024-11-11 · DOI: 10.1155/2024/5581443
Adrian Rodriguez Aguiñaga, Margarita Ramirez Ramirez, Maria del Consuelo Salgado Soto, Maria de los Angeles Quezada Cisnero

Abstract

This paper introduces a neural network-based model designed for classifying emotional states by leveraging multimodal physiological signals. The model utilizes data from the AMIGOS and SEED-V databases. The AMIGOS database integrates inputs from electroencephalogram (EEG), electrocardiogram (ECG), and galvanic skin response (GSR) to analyze emotional responses, while the SEED-V database provides continuously recorded EEG signals. We implemented a sequential neural network architecture featuring two hidden layers, which underwent substantial hyperparameter tuning to achieve optimal performance. Our model’s effectiveness was tested through binary classification tasks focusing on arousal and valence, as well as a more complex four-class classification that delineates emotional quadrants for the emotional tags: happy, sad, neutral, and disgust. Across these scenarios, the model consistently achieved accuracy between 79% and 86% on the AMIGOS database and up to 97% on SEED-V. A notable aspect of our approach is the model’s ability to accurately recognize emotions without the need for extensive signal preprocessing, a common challenge in multimodal emotion analysis. This feature enhances the practical applicability of our model in real-world scenarios where rapid and efficient emotion recognition is essential.
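The abstract describes a sequential network with two hidden layers mapping concatenated physiological features to four emotion classes. The paper's actual layer widths, input dimensionality, and training details are not given here, so the NumPy sketch below is only a rough illustration of that architecture family: the hidden sizes (128, 64), the 310-dimensional input, and the random initialization scheme are assumptions, not the authors' values; only the four output classes (happy, sad, neutral, disgust) come from the abstract.

```python
import numpy as np

def relu(x):
    """Element-wise ReLU activation."""
    return np.maximum(0.0, x)

def softmax(x):
    """Row-wise softmax, numerically stabilized by subtracting the row max."""
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class TwoHiddenLayerNet:
    """Sequential net: input -> hidden1 (ReLU) -> hidden2 (ReLU) -> softmax."""

    def __init__(self, n_features, h1=128, h2=64, n_classes=4, seed=0):
        rng = np.random.default_rng(seed)
        # He-style initialization for the ReLU layers (an assumption).
        self.W1 = rng.normal(0, np.sqrt(2 / n_features), (n_features, h1))
        self.b1 = np.zeros(h1)
        self.W2 = rng.normal(0, np.sqrt(2 / h1), (h1, h2))
        self.b2 = np.zeros(h2)
        self.W3 = rng.normal(0, np.sqrt(2 / h2), (h2, n_classes))
        self.b3 = np.zeros(n_classes)

    def forward(self, X):
        """X: (batch, n_features) of concatenated EEG/ECG/GSR features."""
        a1 = relu(X @ self.W1 + self.b1)
        a2 = relu(a1 @ self.W2 + self.b2)
        return softmax(a2 @ self.W3 + self.b3)

# Toy batch: 8 windows of 310 hypothetical concatenated features.
net = TwoHiddenLayerNet(n_features=310)
probs = net.forward(np.random.default_rng(1).normal(size=(8, 310)))
print(probs.shape)  # (8, 4): one probability distribution per window
```

Each output row is a probability distribution over the four emotion labels; the predicted class is its argmax.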

Source Journal

Human Behavior and Emerging Technologies (Social Sciences, all)
CiteScore: 17.20 · Self-citation rate: 8.70% · Articles published: 73

About the journal: Human Behavior and Emerging Technologies is an interdisciplinary journal dedicated to publishing high-impact research that enhances understanding of the complex interactions between diverse human behavior and emerging digital technologies.