A Multimodal Low Complexity Neural Network Approach for Emotion Recognition
Adrian Rodriguez Aguiñaga, Margarita Ramirez Ramirez, Maria del Consuelo Salgado Soto, Maria de los Angeles Quezada Cisnero
Human Behavior and Emerging Technologies, vol. 2024, no. 1. Published 2024-11-11. DOI: 10.1155/2024/5581443. https://onlinelibrary.wiley.com/doi/10.1155/2024/5581443
Abstract
This paper introduces a neural network-based model for classifying emotional states from multimodal physiological signals. The model uses data from the AMIGOS and SEED-V databases. The AMIGOS database integrates electroencephalogram (EEG), electrocardiogram (ECG), and galvanic skin response (GSR) recordings to analyze emotional responses, while the SEED-V database provides continuously recorded EEG signals. We implemented a sequential neural network architecture with two hidden layers and performed substantial hyperparameter tuning to reach optimal performance. Our model’s effectiveness was tested on binary classification tasks for arousal and valence, as well as on a more complex four-class task covering the emotional labels happy, sad, neutral, and disgust. Across these scenarios, the model consistently achieved accuracies of 79% to 86% on the AMIGOS database and up to 97% on SEED-V. A notable aspect of our approach is the model’s ability to recognize emotions accurately without extensive signal preprocessing, a common challenge in multimodal emotion analysis. This enhances the practical applicability of our model in real-world scenarios where rapid and efficient emotion recognition is essential.
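For illustration, the sketch below shows the general kind of low-complexity sequential network the abstract describes: two fully connected hidden layers applied to a fused vector of multimodal physiological features, with a softmax output for either the binary (arousal/valence) or four-class setting. The framework (Keras), layer sizes, activations, optimizer, and feature dimension are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code) of a two-hidden-layer sequential
# classifier over fused EEG/ECG/GSR features. All sizes are assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_FEATURES = 128   # assumed length of the fused multimodal feature vector
N_CLASSES = 4      # e.g., happy, sad, neutral, disgust (use 2 for arousal or valence)

model = keras.Sequential([
    layers.Input(shape=(N_FEATURES,)),
    layers.Dense(64, activation="relu"),   # first hidden layer (size assumed)
    layers.Dense(32, activation="relu"),   # second hidden layer (size assumed)
    layers.Dense(N_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Dummy data only to demonstrate the training call; in practice the inputs
# would be features extracted from the AMIGOS or SEED-V recordings.
X = np.random.rand(256, N_FEATURES).astype("float32")
y = np.random.randint(0, N_CLASSES, size=256)
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```

Hyperparameters such as the hidden-layer widths, learning rate, and batch size would be the kind of values tuned in the study; the specific settings above are placeholders.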
About the Journal
Human Behavior and Emerging Technologies is an interdisciplinary journal dedicated to publishing high-impact research that enhances understanding of the complex interactions between diverse human behavior and emerging digital technologies.