{"title":"基于优化CNN-RF-QPSO模型的音乐情感分类","authors":"Rui Tian, Ruheng Yin, Feng Gan","doi":"10.1108/dta-07-2022-0267","DOIUrl":null,"url":null,"abstract":"PurposeMusic sentiment analysis helps to promote the diversification of music information retrieval methods. Traditional music emotion classification tasks suffer from high manual workload and low classification accuracy caused by difficulty in feature extraction and inaccurate manual determination of hyperparameter. In this paper, the authors propose an optimized convolution neural network-random forest (CNN-RF) model for music sentiment classification which is capable of optimizing the manually selected hyperparameters to improve the accuracy of music sentiment classification and reduce labor costs and human classification errors.Design/methodology/approachA CNN-RF music sentiment classification model is designed based on quantum particle swarm optimization (QPSO). First, the audio data are transformed into a Mel spectrogram, and feature extraction is conducted by a CNN. Second, the music features extracted are processed by RF algorithm to complete a preliminary emotion classification. Finally, to select the suitable hyperparameters for a CNN, the QPSO algorithm is adopted to extract the best hyperparameters and obtain the final classification results.FindingsThe model has gone through experimental validations and achieved a classification accuracy of 97 per cent for different sentiment categories with shortened training time. The proposed method with QPSO achieved 1.2 and 1.6 per cent higher accuracy than that with particle swarm optimization and genetic algorithm, respectively. The proposed model had great potential for music sentiment classification.Originality/valueThe dual contribution of this work comprises the proposed model which integrated two deep learning models and the introduction of a QPSO into model optimization. With these two innovations, the efficiency and accuracy of music emotion recognition and classification have been significantly improved.","PeriodicalId":56156,"journal":{"name":"Data Technologies and Applications","volume":" ","pages":""},"PeriodicalIF":1.7000,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Music sentiment classification based on an optimized CNN-RF-QPSO model\",\"authors\":\"Rui Tian, Ruheng Yin, Feng Gan\",\"doi\":\"10.1108/dta-07-2022-0267\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"PurposeMusic sentiment analysis helps to promote the diversification of music information retrieval methods. Traditional music emotion classification tasks suffer from high manual workload and low classification accuracy caused by difficulty in feature extraction and inaccurate manual determination of hyperparameter. In this paper, the authors propose an optimized convolution neural network-random forest (CNN-RF) model for music sentiment classification which is capable of optimizing the manually selected hyperparameters to improve the accuracy of music sentiment classification and reduce labor costs and human classification errors.Design/methodology/approachA CNN-RF music sentiment classification model is designed based on quantum particle swarm optimization (QPSO). First, the audio data are transformed into a Mel spectrogram, and feature extraction is conducted by a CNN. Second, the music features extracted are processed by RF algorithm to complete a preliminary emotion classification. 
Finally, to select the suitable hyperparameters for a CNN, the QPSO algorithm is adopted to extract the best hyperparameters and obtain the final classification results.FindingsThe model has gone through experimental validations and achieved a classification accuracy of 97 per cent for different sentiment categories with shortened training time. The proposed method with QPSO achieved 1.2 and 1.6 per cent higher accuracy than that with particle swarm optimization and genetic algorithm, respectively. The proposed model had great potential for music sentiment classification.Originality/valueThe dual contribution of this work comprises the proposed model which integrated two deep learning models and the introduction of a QPSO into model optimization. With these two innovations, the efficiency and accuracy of music emotion recognition and classification have been significantly improved.\",\"PeriodicalId\":56156,\"journal\":{\"name\":\"Data Technologies and Applications\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2023-03-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Data Technologies and Applications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1108/dta-07-2022-0267\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data Technologies and Applications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1108/dta-07-2022-0267","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Music sentiment classification based on an optimized CNN-RF-QPSO model
Purpose – Music sentiment analysis helps to promote the diversification of music information retrieval methods. Traditional music emotion classification suffers from a heavy manual workload and low classification accuracy, caused by the difficulty of feature extraction and the inaccuracy of manually chosen hyperparameters. In this paper, the authors propose an optimized convolutional neural network-random forest (CNN-RF) model for music sentiment classification that optimizes the manually selected hyperparameters, improving classification accuracy while reducing labor costs and human classification errors.

Design/methodology/approach – A CNN-RF music sentiment classification model is designed based on quantum particle swarm optimization (QPSO). First, the audio data are transformed into a Mel spectrogram, and features are extracted by a CNN. Second, the extracted music features are processed by the RF algorithm to complete a preliminary emotion classification. Finally, to select suitable hyperparameters for the CNN, the QPSO algorithm is adopted to find the best hyperparameters and obtain the final classification results.

Findings – The model was validated experimentally and achieved a classification accuracy of 97 per cent across the sentiment categories with a shortened training time. The proposed QPSO-based method achieved 1.2 and 1.6 per cent higher accuracy than particle swarm optimization and the genetic algorithm, respectively. The proposed model shows great potential for music sentiment classification.

Originality/value – The dual contribution of this work comprises the proposed model, which integrates two machine learning models (CNN and RF), and the introduction of QPSO into model optimization. With these two innovations, the efficiency and accuracy of music emotion recognition and classification are significantly improved.
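To make the described pipeline concrete, the following is a minimal sketch in Python of the three stages named in the abstract (Mel spectrogram, CNN feature extraction, RF classification). The library choices (librosa, PyTorch, scikit-learn), layer sizes, hyperparameters and the synthetic audio data are illustrative assumptions on our part, not the authors' implementation.

```python
# Minimal sketch of the Mel-spectrogram -> CNN features -> RF pipeline.
# All sizes and hyperparameters are illustrative, not the paper's setup.
import numpy as np
import librosa
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

def mel_spectrogram(wave, sr=22050, n_mels=64):
    """Step 1: turn raw audio into a log-Mel spectrogram."""
    S = librosa.feature.melspectrogram(y=wave, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(S, ref=np.max)

class CNNFeatureExtractor(nn.Module):
    """Step 2: a small CNN whose last-layer activations serve as features."""
    def __init__(self, feature_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.fc = nn.Linear(32 * 4 * 4, feature_dim)

    def forward(self, x):  # x: (batch, 1, n_mels, frames)
        return self.fc(self.conv(x).flatten(1))

# Toy data: random waveforms standing in for labelled music clips.
waves = [np.random.randn(22050).astype(np.float32) for _ in range(8)]
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])

specs = np.stack([mel_spectrogram(w) for w in waves])      # (8, 64, frames)
x = torch.from_numpy(specs).unsqueeze(1).float()

extractor = CNNFeatureExtractor()
with torch.no_grad():
    feats = extractor(x).numpy()                            # CNN features

# Step 3: a random forest performs the (preliminary) emotion classification.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(feats, labels)
print(rf.predict(feats[:2]))
```

The QPSO stage can likewise be sketched as a generic search loop. Below, a toy sphere objective stands in for the paper's actual fitness (presumably the validation error of the CNN-RF model for a given hyperparameter vector); the contraction-expansion schedule, bounds and particle count are assumptions.

```python
# Minimal quantum-behaved PSO (QPSO) sketch; the objective here is a toy
# sphere function, not the paper's CNN-RF validation loss.
import numpy as np

def qpso(objective, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    pbest = x.copy()                                    # personal best positions
    pbest_val = np.array([objective(p) for p in pbest])
    gbest = pbest[pbest_val.argmin()].copy()            # global best position

    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters                    # contraction-expansion coefficient
        mbest = pbest.mean(axis=0)                       # mean of personal bests
        for i in range(n_particles):
            phi = rng.random(dim)
            p = phi * pbest[i] + (1 - phi) * gbest       # local attractor
            u = 1.0 - rng.random(dim)                    # u in (0, 1]
            sign = np.where(rng.random(dim) < 0.5, 1.0, -1.0)
            x[i] = np.clip(p + sign * beta * np.abs(mbest - x[i]) * np.log(1.0 / u), lo, hi)
            val = objective(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i].copy(), val
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy objective: minimise the sphere function in place of model validation loss.
best, best_val = qpso(lambda v: float(np.sum(v ** 2)), dim=3)
print(best, best_val)
```

In an actual hyperparameter search, each particle position would encode candidate CNN hyperparameters (e.g. learning rate, filter counts) and the objective would train and evaluate the CNN-RF model for that setting; that mapping is not specified in the abstract and is left out of the sketch.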