Romeo Lanzino, Danilo Avola, Federico Fontana, Luigi Cinque, Francesco Scarcello, Gian Luca Foresti
DOI: 10.1142/S0129065725500029
Journal: International Journal of Neural Systems, pp. 2550002
Published: 2024-11-20 (Journal Article)
SATEER: Subject-Aware Transformer for EEG-Based Emotion Recognition.
This study presents a Subject-Aware Transformer-based neural network designed for the Electroencephalogram (EEG) Emotion Recognition task (SATEER), which entails the analysis of EEG signals to classify and interpret human emotional states. SATEER processes the EEG waveforms by transforming them into Mel spectrograms, which can be seen as particular cases of images with the number of channels equal to the number of electrodes used during the recording process; this type of data can thus be processed using a Computer Vision pipeline. Distinct from preceding approaches, this model addresses the variability in individual responses to identical stimuli by incorporating a User Embedder module. This module enables the association of individual profiles with their EEGs, thereby enhancing classification accuracy. The efficacy of the model was rigorously evaluated using four publicly available datasets, demonstrating superior performance over existing methods in all conducted benchmarks. For instance, on the AMIGOS dataset (A dataset for Multimodal research of affect, personality traits, and mood on Individuals and GrOupS), SATEER exceeds 99.8% accuracy across all labels, an improvement of 0.47% over the state of the art. Furthermore, an exhaustive ablation study underscores the pivotal role of the User Embedder module and every other component of the presented model in achieving these advancements.
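The preprocessing step described above — converting multi-electrode EEG into a Mel-spectrogram "image" whose channel count equals the number of electrodes — can be sketched as follows. This is an illustrative reconstruction, not the paper's exact pipeline: the sampling rate, FFT size, and number of Mel bands used here (128 Hz, 64, 16) are hypothetical placeholders, and the Mel filterbank is a standard textbook construction rather than the authors' implementation.

```python
import numpy as np
from scipy.signal import stft

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters spaced evenly on the Mel scale, 0 Hz to Nyquist.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        if center > left:   # rising slope of the triangle
            fb[i - 1, left:center] = (np.arange(left, center) - left) / (center - left)
        if right > center:  # falling slope of the triangle
            fb[i - 1, center:right] = (right - np.arange(center, right)) / (right - center)
    return fb

def eeg_to_mel_image(eeg, sr=128, n_fft=64, n_mels=16):
    """Map EEG of shape (n_electrodes, n_samples) to a log-Mel 'image'
    of shape (n_electrodes, n_mels, n_frames): one image channel per electrode."""
    fb = mel_filterbank(n_mels, n_fft, sr)
    channels = []
    for ch in eeg:
        _, _, Z = stft(ch, fs=sr, nperseg=n_fft)
        power = np.abs(Z) ** 2                      # (n_fft//2 + 1, n_frames)
        channels.append(np.log(fb @ power + 1e-10))  # log-Mel energies
    return np.stack(channels)

# Example: 14 electrodes, 4 s of synthetic signal at 128 Hz (hypothetical values).
rng = np.random.default_rng(0)
eeg = rng.standard_normal((14, 4 * 128))
img = eeg_to_mel_image(eeg)
print(img.shape)  # (14, 16, n_frames) — 14 image channels, one per electrode
```

The resulting array has the image-like layout the abstract describes, so it can feed a standard Computer Vision backbone directly.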