sMFCC: exploiting sparseness in speech for fast acoustic feature extraction on mobile devices -- a feasibility study

S. Nirjon, Robert F. Dickerson, J. Stankovic, G. Shen, Xiaofan Jiang

Proceedings. IEEE Workshop on Mobile Computing Systems and Applications, p. 8. Published 2013-02-26.
DOI: 10.1145/2444776.2444787
Citations: 11
Abstract
Due to limited processing capability, contemporary smartphones cannot extract frequency-domain acoustic features in real-time on the device when the sampling rate is high. We propose a solution to this problem which exploits the sparseness in speech to extract frequency-domain acoustic features inside a smartphone in real-time, without requiring any support from a remote server, even when the sampling rate is as high as 44.1 kHz. We perform an empirical study to quantify the sparseness in speech recorded on a smartphone and use it to efficiently obtain a highly accurate and sparse approximation of a widely used speech feature, the Mel-Frequency Cepstral Coefficients (MFCC). We name the new feature the sparse MFCC, or sMFCC for short. We experimentally determine the trade-offs between the approximation error and the expected speedup of sMFCC. We implement a simple spoken word recognition application using both MFCC and sMFCC features, show that sMFCC is expected to be up to 5.84 times faster while its accuracy remains within 1.1% -- 3.9% of that of MFCC, and determine the conditions under which sMFCC runs in real-time.
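For context, the baseline the paper accelerates is the standard MFCC pipeline: window a frame, take the FFT power spectrum, pool it through a mel-scale triangular filterbank, take logs, and apply a DCT. The sketch below is a minimal NumPy rendering of that conventional pipeline, not the paper's sMFCC approximation (whose sparse-FFT details are not given in the abstract); all function names and parameter defaults here are illustrative choices, not from the paper.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale, mapped to FFT bins.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, c):
            fb[i - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fb[i - 1, k] = (hi - k) / max(hi - c, 1)
    return fb

def mfcc_frame(frame, sr, n_filters=26, n_coeffs=13):
    # 1. Window and take the FFT power spectrum (the step sMFCC sparsifies).
    n_fft = len(frame)
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(n_fft))) ** 2
    # 2. Mel filterbank energies, then log compression.
    log_e = np.log(mel_filterbank(n_filters, n_fft, sr) @ spectrum + 1e-10)
    # 3. DCT-II of the log energies yields the cepstral coefficients.
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1) / (2 * n_filters))
    return dct @ log_e

# One 1024-sample frame of a synthetic 440 Hz tone at 44.1 kHz.
sr = 44100
t = np.arange(1024) / sr
coeffs = mfcc_frame(np.sin(2 * np.pi * 440.0 * t), sr)
print(coeffs.shape)  # (13,)
```

The paper's speedup comes from replacing step 1: because speech spectra are sparse, an approximate sparse FFT can recover the dominant frequency components with far fewer operations than a full FFT, which is what makes real-time extraction at 44.1 kHz feasible on 2013-era phones.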