Authors: Yan Yin, Hui Jiang
DOI: 10.1109/ASRU.2007.4430130
Published in: 2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU), December 2007
Citation count: 7
A compact semidefinite programming (SDP) formulation for large margin estimation of HMMs in speech recognition
In this paper, we study a new semidefinite programming (SDP) formulation that improves optimization efficiency for large margin estimation (LME) of HMMs in speech recognition. We re-formulate the LME problem as a set of smaller-scale SDP problems to speed up SDP-based LME training, especially for large model sets. In the new formulation, instead of building the SDP problem from a single huge variable matrix, we formulate it from many small independent variable matrices, each built separately from a single Gaussian mean vector. Moreover, we propose to further decompose the feature vectors and Gaussian mean vectors into their static, delta, and acceleration components to build even more compact variable matrices. This method significantly reduces the total number of free variables and yields much smaller SDP problems for the same model set. The proposed LME/SDP methods have been evaluated on a connected digit string recognition task using the TIDIGITS database. Experimental results show that the new formulation significantly improves optimization efficiency (about 30-50 times faster for large model sets) while providing slightly better optimization accuracy and recognition performance than our previous SDP formulation.
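The efficiency gain described above comes from reducing the number of free variables in the SDP. A symmetric m×m variable matrix contributes m(m+1)/2 free entries, so replacing one huge matrix over all stacked means with many small per-mean blocks (and further splitting each mean into static/delta/acceleration sub-blocks) shrinks the variable count dramatically. The sketch below is only an illustration of this counting argument, not the authors' code; the assumption that each variable matrix carries one extra border row/column (as in a rank-relaxed outer-product construction) is ours, since the abstract does not specify the exact matrix sizes.

```python
# Illustrative variable-count comparison for the three SDP formulations
# sketched in the abstract. Assumptions (ours, not from the paper):
#   - N Gaussian mean vectors, each of dimension d
#   - each variable matrix has one extra border row/column (size m+1)

def sym_vars(m):
    """Free variables in a symmetric m x m matrix."""
    return m * (m + 1) // 2

def single_matrix_vars(n_means, dim):
    """One huge matrix built from all means stacked together."""
    return sym_vars(n_means * dim + 1)

def per_mean_vars(n_means, dim):
    """One small (dim+1) x (dim+1) block per Gaussian mean."""
    return n_means * sym_vars(dim + 1)

def decomposed_vars(n_means, dim):
    """Static/delta/acceleration split: three (dim/3)-sized blocks per mean."""
    return n_means * 3 * sym_vars(dim // 3 + 1)

if __name__ == "__main__":
    # e.g. 1000 Gaussians with standard 39-dim MFCC features
    N, d = 1000, 39
    print("single matrix:", single_matrix_vars(N, d))
    print("per-mean blocks:", per_mean_vars(N, d))
    print("static/delta/accel blocks:", decomposed_vars(N, d))
```

For 1000 Gaussians with 39-dimensional features, the monolithic matrix has hundreds of millions of free variables, while the per-mean formulation has under a million and the decomposed one a few hundred thousand, which is consistent with the order-of-magnitude speedups the abstract reports.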