Artificial intelligence with greater cane rat algorithm driven robust speech emotion recognition approach

Alya Alshammari, Nazir Ahmad, Muhammad Swaileh A. Alzaidi, Somia A. Asklany, Hanan Al Sultan, Nief AL-Gamdi, Jawhara Aljabri, Mahir Mohammed Sharif

Alexandria Engineering Journal, Volume 121, Pages 426–435. Published 2025-03-05. DOI: 10.1016/j.aej.2025.02.090
Citations: 0
Abstract
Speech emotion recognition is a crucial research area that can help improve and maintain public health and contributes to the continuing development of health information technologies. Many speech emotion recognition systems have been developed using deep learning (DL) techniques and novel temporal and acoustic features. Speech is the essential medium of human communication, and the combination of words conveys the meaning of an utterance. Furthermore, every word carries an emotional charge that reflects the speaker's emotional state. Automated speech emotion recognition could support several real-time public health applications by detecting emotions and inferring valuable information about patients' mental and emotional states. Speech emotion recognition has been an active research area in the artificial intelligence (AI) community. This article devises an Artificial Intelligence with Greater Cane Rat Algorithm driven Robust Speech Emotion Recognition (AIGCRA-RSER) approach. The main aim of the AIGCRA-RSER technique is to recognize emotions in speech data. First, the technique converts the speech signals into mel-spectrogram representations. Next, it employs the MobileNetV3 model to extract feature vectors, with its hyperparameters selected by the greater cane rat algorithm (GCRA). Finally, an extreme learning machine (ELM) classifier identifies the speech emotions. An extensive simulation analysis validates the improvements of the AIGCRA-RSER technique, which achieved a superior accuracy of 92.04 % over existing models under diverse measures.
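The final stage of the pipeline described in the abstract is an extreme learning machine: a single hidden layer whose input weights are random and untrained, with only the output weights solved in closed form by least squares. The sketch below illustrates that classifier step on synthetic data; the random feature vectors are stand-ins for the MobileNetV3 embeddings, and all names and dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "emotion" dataset: 200 feature vectors, 4 classes, acting as
# stand-ins for MobileNetV3 feature embeddings of speech spectrograms.
n, d, n_classes, n_hidden = 200, 16, 4, 64
X = rng.normal(size=(n, d))
y = rng.integers(0, n_classes, size=n)
# Shift each class's mean so the classes are separable.
X += np.eye(n_classes)[y] @ rng.normal(scale=3.0, size=(n_classes, d))

# One-hot target matrix for least-squares fitting.
T = np.eye(n_classes)[y]

# ELM hidden layer: random, fixed input weights and biases.
W = rng.normal(size=(d, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)  # hidden-layer activations

# Output weights solved in closed form (no iterative training).
beta, *_ = np.linalg.lstsq(H, T, rcond=None)

pred = np.argmax(H @ beta, axis=1)
train_acc = (pred == y).mean()
print(f"training accuracy: {train_acc:.2f}")
```

Because the only learned parameters are `beta`, training reduces to one linear solve, which is why ELMs are often paired with a metaheuristic (here, GCRA tuning the upstream MobileNetV3 hyperparameters) rather than backpropagation.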
Journal overview:
Alexandria Engineering Journal is an international journal devoted to publishing high-quality papers in the field of engineering and applied science. Alexandria Engineering Journal is cited in the Engineering Information Services (EIS) and the Chemical Abstracts (CA). The papers published in Alexandria Engineering Journal are grouped into five sections, according to the following classification:
• Mechanical, Production, Marine and Textile Engineering
• Electrical Engineering, Computer Science and Nuclear Engineering
• Civil and Architecture Engineering
• Chemical Engineering and Applied Sciences
• Environmental Engineering