Evaluating Acoustic Feature Maps in 2D-CNN for Speaker Identification
Ali Shariq Imran, Vetle Haflan, Abdolreza Sabzi Shahrebabaki, Negar Olfati, T. Svendsen
International Conference on Machine Learning and Computing, 2019-02-22. DOI: 10.1145/3318299.3318386
Abstract
This paper presents a study evaluating different acoustic feature map representations as input to two-dimensional convolutional neural networks (2D-CNN) on speech datasets covering various speech-related tasks. Specifically, the task involves identifying useful 2D-CNN input feature maps for enhancing speaker identification, with the ultimate goal of improving speaker authentication and enabling voice as a biometric feature. Voice, in contrast to fingerprints and image-based biometrics, is a natural choice for hands-free communication systems where touch interfaces are inconvenient or dangerous to use. Effective input feature map representations may help a CNN exploit intrinsic voice features that not only address the instability of voice as an identifier for text-independent speaker authentication while preserving privacy, but also assist in developing effective voice-enabled interfaces. Three different acoustic features, with three possible feature map representations, are evaluated in this study. Results obtained on three speech corpora show that an interpolated baseline spectrogram performs best compared to Mel frequency spectral coefficients (MFSC) and Mel frequency cepstral coefficients (MFCC) when tested with 5-fold cross-validation using a 2D-CNN. On both text-dependent and text-independent datasets, the raw spectrogram's accuracy is 4% higher than that of the traditional acoustic features.
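To make the compared feature maps concrete, the sketch below shows one common way to derive the three representations named in the abstract (linear spectrogram, MFSC as a log-Mel spectrogram, and MFCC) and a minimal 2D-CNN speaker classifier. This is an illustrative sketch, not the authors' exact pipeline: the frame settings, filterbank sizes, fixed-size interpolation step, and network layout below are assumptions chosen for brevity.

```python
# Sketch of feature-map extraction and a small 2D-CNN for speaker ID.
# Assumed parameters (sr, n_fft, hop, n_mels, n_mfcc, layer sizes) are
# illustrative and not taken from the paper.
import numpy as np
import librosa
import tensorflow as tf


def feature_maps(wav_path, sr=16000, n_fft=512, hop=160, n_mels=40, n_mfcc=40):
    """Return the three candidate feature maps for one utterance."""
    y, _ = librosa.load(wav_path, sr=sr)
    # Linear-frequency power spectrogram (the "raw spectrogram" baseline).
    spec = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop)) ** 2
    # MFSC: log energies of a Mel filterbank applied to the power spectrogram.
    mfsc = librosa.power_to_db(
        librosa.feature.melspectrogram(S=spec, sr=sr, n_mels=n_mels))
    # MFCC: DCT of the log-Mel energies.
    mfcc = librosa.feature.mfcc(S=mfsc, sr=sr, n_mfcc=n_mfcc)
    # The paper interpolates maps to a fixed size before the CNN; one way to
    # do that is e.g. tf.image.resize(map[..., None], target_shape).
    return spec, mfsc, mfcc


def small_cnn(input_shape, n_speakers):
    """Minimal 2D-CNN over a (freq, time, 1) feature map, softmax over speakers."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(n_speakers, activation="softmax"),
    ])
```

In a setup like this, each representation would be resized to a common map shape and evaluated with the same network under 5-fold cross-validation, so that accuracy differences can be attributed to the input feature map rather than the classifier.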