Darshana Subhash, Jyothish Lal G., Premjith B., Vinayakumar Ravi
{"title":"基于变模分解的鲁棒口音分类系统","authors":"Darshana Subhash , Jyothish Lal G. , Premjith B. , Vinayakumar Ravi","doi":"10.1016/j.engappai.2024.109512","DOIUrl":null,"url":null,"abstract":"<div><div>State-of-the-art automatic speech recognition models often struggle to capture nuanced features inherent in accented speech, leading to sub-optimal performance in speaker recognition based on regional accents. Despite substantial progress in the field of automatic speech recognition, ensuring robustness to accents and generalization across dialects remains a persistent challenge, particularly in real-time settings. In response, this study introduces a novel approach leveraging Variational Mode Decomposition (VMD) to enhance accented speech signals, aiming to mitigate noise interference and improve generalization on unseen accented speech datasets. Our method employs decomposed modes of the VMD algorithm for signal reconstruction, followed by feature extraction using Mel-Frequency Cepstral Coefficients (MFCC). These features are subsequently classified using machine learning models such as 1D Convolutional Neural Network (1D-CNN), Support Vector Machine (SVM), Random Forest, and Decision Trees, as well as a deep learning model based on a 2D Convolutional Neural Network (2D-CNN). Experimental results demonstrate superior performance, with the SVM classifier achieving an accuracy of approximately 87.5% on a standard dataset and 99.3% on the AccentBase dataset. The 2D-CNN model further improves the results in multi-class accent classification tasks. This research contributes to advancing automatic speech recognition robustness and accent-inclusive speaker recognition, addressing critical challenges in real-world applications.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"139 ","pages":"Article 109512"},"PeriodicalIF":7.5000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A robust accent classification system based on variational mode decomposition\",\"authors\":\"Darshana Subhash , Jyothish Lal G. , Premjith B. , Vinayakumar Ravi\",\"doi\":\"10.1016/j.engappai.2024.109512\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>State-of-the-art automatic speech recognition models often struggle to capture nuanced features inherent in accented speech, leading to sub-optimal performance in speaker recognition based on regional accents. Despite substantial progress in the field of automatic speech recognition, ensuring robustness to accents and generalization across dialects remains a persistent challenge, particularly in real-time settings. In response, this study introduces a novel approach leveraging Variational Mode Decomposition (VMD) to enhance accented speech signals, aiming to mitigate noise interference and improve generalization on unseen accented speech datasets. Our method employs decomposed modes of the VMD algorithm for signal reconstruction, followed by feature extraction using Mel-Frequency Cepstral Coefficients (MFCC). These features are subsequently classified using machine learning models such as 1D Convolutional Neural Network (1D-CNN), Support Vector Machine (SVM), Random Forest, and Decision Trees, as well as a deep learning model based on a 2D Convolutional Neural Network (2D-CNN). 
Experimental results demonstrate superior performance, with the SVM classifier achieving an accuracy of approximately 87.5% on a standard dataset and 99.3% on the AccentBase dataset. The 2D-CNN model further improves the results in multi-class accent classification tasks. This research contributes to advancing automatic speech recognition robustness and accent-inclusive speaker recognition, addressing critical challenges in real-world applications.</div></div>\",\"PeriodicalId\":50523,\"journal\":{\"name\":\"Engineering Applications of Artificial Intelligence\",\"volume\":\"139 \",\"pages\":\"Article 109512\"},\"PeriodicalIF\":7.5000,\"publicationDate\":\"2024-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Engineering Applications of Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0952197624016701\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197624016701","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
A robust accent classification system based on variational mode decomposition
State-of-the-art automatic speech recognition models often struggle to capture nuanced features inherent in accented speech, leading to sub-optimal performance in speaker recognition based on regional accents. Despite substantial progress in the field of automatic speech recognition, ensuring robustness to accents and generalization across dialects remains a persistent challenge, particularly in real-time settings. In response, this study introduces a novel approach leveraging Variational Mode Decomposition (VMD) to enhance accented speech signals, aiming to mitigate noise interference and improve generalization on unseen accented speech datasets. Our method employs decomposed modes of the VMD algorithm for signal reconstruction, followed by feature extraction using Mel-Frequency Cepstral Coefficients (MFCC). These features are subsequently classified using machine learning models such as 1D Convolutional Neural Network (1D-CNN), Support Vector Machine (SVM), Random Forest, and Decision Trees, as well as a deep learning model based on a 2D Convolutional Neural Network (2D-CNN). Experimental results demonstrate superior performance, with the SVM classifier achieving an accuracy of approximately 87.5% on a standard dataset and 99.3% on the AccentBase dataset. The 2D-CNN model further improves the results in multi-class accent classification tasks. This research contributes to advancing automatic speech recognition robustness and accent-inclusive speaker recognition, addressing critical challenges in real-world applications.
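The pipeline described in the abstract (VMD-based signal reconstruction, MFCC feature extraction, then a conventional classifier) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the vmdpy and librosa libraries, the scikit-learn SVM, and every hyper-parameter value (number of modes, bandwidth penalty, 13 MFCCs, RBF kernel) are chosen only for demonstration.

```python
# Minimal sketch of a VMD -> MFCC -> SVM accent-classification pipeline.
# Library choices and all hyper-parameters are illustrative assumptions.
import numpy as np
import librosa
from vmdpy import VMD            # pip install vmdpy
from sklearn.svm import SVC

def vmd_mfcc_features(path, n_modes=5, n_mfcc=13):
    """Reconstruct a speech signal from its VMD modes and return mean MFCCs."""
    signal, sr = librosa.load(path, sr=16000)
    # Trim to an even number of samples; VMD implementations typically assume this.
    signal = signal[: len(signal) - (len(signal) % 2)]

    # Assumed VMD settings: alpha = bandwidth penalty, tau = noise tolerance,
    # DC = no DC mode imposed, init = uniform initialization, tol = convergence tolerance.
    alpha, tau, DC, init, tol = 2000, 0.0, 0, 1, 1e-7
    modes, _, _ = VMD(signal, alpha, tau, n_modes, DC, init, tol)

    # Signal reconstruction: sum the decomposed modes back into one waveform.
    reconstructed = np.sum(modes, axis=0)

    # Feature extraction: MFCCs of the reconstructed signal, averaged over time
    # to give a fixed-length vector per utterance.
    mfcc = librosa.feature.mfcc(y=reconstructed, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical usage with an accent-labelled file list (paths, labels):
# X = np.stack([vmd_mfcc_features(p) for p in paths])
# clf = SVC(kernel="rbf").fit(X, labels)
```

The SVM here stands in for any of the classifiers compared in the paper; the 2D-CNN variant would instead consume the full MFCC time-frequency matrix rather than its time average.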
Journal introduction:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, with remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.