Wenjin Liu, Jiaqi Shi, Shudong Zhang, Lijuan Zhou, Haoming Liu
{"title":"电子语音:开发用于语音情感识别和分析的数据集","authors":"Wenjin Liu, Jiaqi Shi, Shudong Zhang, Lijuan Zhou, Haoming Liu","doi":"10.1155/2024/5410080","DOIUrl":null,"url":null,"abstract":"<div>\n <p>Speech emotion recognition plays a crucial role in analyzing psychological disorders, behavioral decision-making, and human-machine interaction applications. However, the majority of current methods for speech emotion recognition heavily rely on data-driven approaches, and the scarcity of emotion speech datasets limits the progress in research and development of emotion analysis and recognition. To address this issue, this study introduces a new English speech dataset specifically designed for emotion analysis and recognition. This dataset consists of 5503 voices from over 60 English speakers in different emotional states. Furthermore, to enhance emotion analysis and recognition, fast Fourier transform (FFT), short-time Fourier transform (STFT), mel-frequency cepstral coefficients (MFCCs), and continuous wavelet transform (CWT) are employed for feature extraction from the speech data. Utilizing these algorithms, the spectrum images of the speeches are obtained, forming four datasets consisting of different speech feature images. Furthermore, to evaluate the dataset, 16 classification models and 19 detection algorithms are selected. The experimental results demonstrate that the majority of classification and detection models achieve exceptionally high recognition accuracy on this dataset, confirming its effectiveness and utility. The dataset proves to be valuable in advancing research and development in the field of emotion recognition.</p>\n </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2024 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/2024/5410080","citationCount":"0","resultStr":"{\"title\":\"E-Speech: Development of a Dataset for Speech Emotion Recognition and Analysis\",\"authors\":\"Wenjin Liu, Jiaqi Shi, Shudong Zhang, Lijuan Zhou, Haoming Liu\",\"doi\":\"10.1155/2024/5410080\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n <p>Speech emotion recognition plays a crucial role in analyzing psychological disorders, behavioral decision-making, and human-machine interaction applications. However, the majority of current methods for speech emotion recognition heavily rely on data-driven approaches, and the scarcity of emotion speech datasets limits the progress in research and development of emotion analysis and recognition. To address this issue, this study introduces a new English speech dataset specifically designed for emotion analysis and recognition. This dataset consists of 5503 voices from over 60 English speakers in different emotional states. Furthermore, to enhance emotion analysis and recognition, fast Fourier transform (FFT), short-time Fourier transform (STFT), mel-frequency cepstral coefficients (MFCCs), and continuous wavelet transform (CWT) are employed for feature extraction from the speech data. Utilizing these algorithms, the spectrum images of the speeches are obtained, forming four datasets consisting of different speech feature images. Furthermore, to evaluate the dataset, 16 classification models and 19 detection algorithms are selected. 
The experimental results demonstrate that the majority of classification and detection models achieve exceptionally high recognition accuracy on this dataset, confirming its effectiveness and utility. The dataset proves to be valuable in advancing research and development in the field of emotion recognition.</p>\\n </div>\",\"PeriodicalId\":14089,\"journal\":{\"name\":\"International Journal of Intelligent Systems\",\"volume\":\"2024 1\",\"pages\":\"\"},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2024-10-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1155/2024/5410080\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Intelligent Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1155/2024/5410080\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1155/2024/5410080","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
E-Speech: Development of a Dataset for Speech Emotion Recognition and Analysis
Speech emotion recognition plays a crucial role in the analysis of psychological disorders, behavioral decision-making, and human-machine interaction. However, most current speech emotion recognition methods rely heavily on data-driven approaches, and the scarcity of emotional speech datasets limits progress in emotion analysis and recognition research. To address this issue, this study introduces a new English speech dataset designed specifically for emotion analysis and recognition. The dataset consists of 5503 utterances from more than 60 English speakers in different emotional states. To support emotion analysis and recognition, the fast Fourier transform (FFT), short-time Fourier transform (STFT), mel-frequency cepstral coefficients (MFCCs), and continuous wavelet transform (CWT) are employed to extract features from the speech data. These algorithms yield spectral images of the utterances, forming four datasets of different speech feature images. To evaluate the dataset, 16 classification models and 19 detection algorithms are selected. The experimental results demonstrate that most of the classification and detection models achieve very high recognition accuracy on this dataset, confirming its effectiveness and utility. The dataset is therefore a valuable resource for advancing research and development in the field of emotion recognition.
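To make the feature-extraction pipeline described in the abstract concrete, the sketch below shows how a single utterance could be converted into the four kinds of spectral images (FFT, STFT, MFCC, CWT). It uses librosa, PyWavelets, and matplotlib; the file paths, sampling rate, and transform parameters are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the four feature-extraction steps named in the abstract
# (FFT, STFT, MFCC, CWT). Paths, sampling rate, and parameters are assumptions.
import numpy as np
import librosa
import pywt
import matplotlib.pyplot as plt


def extract_feature_images(wav_path: str, out_prefix: str, sr: int = 16000) -> None:
    """Compute FFT, STFT, MFCC, and CWT representations of one utterance
    and save each as an image (one image per feature-image dataset)."""
    y, sr = librosa.load(wav_path, sr=sr)

    # 1. FFT: global magnitude spectrum of the whole utterance.
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / sr)
    plt.figure()
    plt.plot(freqs, spectrum)
    plt.xlabel("Frequency (Hz)")
    plt.ylabel("Magnitude")
    plt.savefig(f"{out_prefix}_fft.png")
    plt.close()

    # 2. STFT: time-frequency spectrogram in decibels.
    stft_db = librosa.amplitude_to_db(
        np.abs(librosa.stft(y, n_fft=1024, hop_length=256)), ref=np.max
    )
    plt.figure()
    plt.imshow(stft_db, origin="lower", aspect="auto", cmap="magma")
    plt.savefig(f"{out_prefix}_stft.png")
    plt.close()

    # 3. MFCCs: 13 mel-frequency cepstral coefficients per frame.
    mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    plt.figure()
    plt.imshow(mfccs, origin="lower", aspect="auto", cmap="viridis")
    plt.savefig(f"{out_prefix}_mfcc.png")
    plt.close()

    # 4. CWT: Morlet-wavelet scalogram over a range of scales
    #    (can be slow on long signals; downsampling is an option).
    scales = np.arange(1, 128)
    coeffs, _ = pywt.cwt(y, scales, "morl", sampling_period=1.0 / sr)
    plt.figure()
    plt.imshow(np.abs(coeffs), origin="lower", aspect="auto", cmap="magma")
    plt.savefig(f"{out_prefix}_cwt.png")
    plt.close()


# Example usage (hypothetical file names):
# extract_feature_images("e_speech/angry_spk01_001.wav", "features/angry_spk01_001")
```

Each saved image would then populate one of the four feature-image datasets on which the classification and detection models are benchmarked.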
Journal description:
The International Journal of Intelligent Systems serves as a forum for individuals interested in tapping into the vast body of theory behind the construction of intelligent systems. In its peer-reviewed format, the journal features editorials written by today's experts in the field. Because new developments are introduced every day, there is much to be learned: examination, analysis, creation, information retrieval, human-computer interaction, and more. The International Journal of Intelligent Systems uses charts and illustrations to demonstrate these ground-breaking topics and encourages readers to share their thoughts and experiences.