Title: Crowd-sourcing for difficult transcription of speech
Pub Date: 2011-12-01 | DOI: 10.1109/ASRU.2011.6163988
J. Williams, I. D. Melamed, Tirso Alonso, B. Hollister, J. Wilpon
Crowd-sourcing is a promising method for fast and cheap transcription of large volumes of speech data. However, this method cannot achieve the accuracy of expert transcribers on speech that is difficult to transcribe. Faced with such speech data, we developed three new methods of crowd-sourcing, which allow explicit trade-offs among precision, recall, and cost. The methods are: incremental redundancy, treating ASR as a transcriber, and using a regression model to predict transcription reliability. Even though the accuracy of individual crowd-workers is only 55% on our data, our best method achieves 90% accuracy on 93% of the utterances, using only 1.3 crowd-worker transcriptions per utterance on average. When forced to transcribe all utterances, our best method matches the accuracy of previous crowd-sourcing methods using only one third as many transcriptions. We also study the effects of various task design factors on transcription latency and accuracy, some of which have not been reported before.
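As a rough illustration of the incremental-redundancy idea mentioned in the abstract, the sketch below requests crowd transcriptions for an utterance one at a time and stops as soon as two workers agree or a small budget is exhausted. The `request_transcription` callback and the normalization step are hypothetical stand-ins, not the authors' implementation.

```python
# Hedged sketch of incremental redundancy: pay for one transcription at a
# time and stop once two independent workers agree (or the budget runs out).
from collections import Counter

def normalize(text):
    """Crude normalization so near-identical transcriptions can match."""
    return " ".join(text.lower().split())

def transcribe_incrementally(utterance, request_transcription, max_workers=5):
    """Return (transcript, number_of_paid_transcriptions)."""
    counts = Counter()
    for cost in range(1, max_workers + 1):
        hyp = normalize(request_transcription(utterance))  # one paid crowd job
        counts[hyp] += 1
        if counts[hyp] >= 2:          # two workers agree -> accept and stop
            return hyp, cost
    # No agreement within budget: fall back to a majority vote (or reject).
    best, _ = counts.most_common(1)[0]
    return best, max_workers
```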
{"title":"Crowd-sourcing for difficult transcription of speech","authors":"J. Williams, I. D. Melamed, Tirso Alonso, B. Hollister, J. Wilpon","doi":"10.1109/ASRU.2011.6163988","DOIUrl":"https://doi.org/10.1109/ASRU.2011.6163988","url":null,"abstract":"Crowd-sourcing is a promising method for fast and cheap transcription of large volumes of speech data. However, this method cannot achieve the accuracy of expert transcribers on speech that is difficult to transcribe. Faced with such speech data, we developed three new methods of crowd-sourcing, which allow explicit trade-offs among precision, recall, and cost. The methods are: incremental redundancy, treating ASR as a transcriber, and using a regression model to predict transcription reliability. Even though the accuracy of individual crowd-workers is only 55% on our data, our best method achieves 90% accuracy on 93% of the utterances, using only 1.3 crowd-worker transcriptions per utterance on average. When forced to transcribe all utterances, our best method matches the accuracy of previous crowd-sourcing methods using only one third as many transcriptions. We also study the effects of various task design factors on transcription latency and accuracy, some of which have not been reported before.","PeriodicalId":338241,"journal":{"name":"2011 IEEE Workshop on Automatic Speech Recognition & Understanding","volume":"16 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122365417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Maximum kurtosis beamforming with a subspace filter for distant speech recognition
Pub Date: 2011-12-01 | DOI: 10.1109/ASRU.2011.6163927
K. Kumatani, J. McDonough, B. Raj
This paper presents a new beamforming method for distant speech recognition (DSR). The dominant mode subspace is considered in order to efficiently estimate the active weight vectors for maximum kurtosis (MK) beamforming with the generalized sidelobe canceler (GSC). We demonstrated in [1], [2], [3] that beamforming based on the maximum kurtosis criterion can remove reverberation and noise effects without the signal cancellation encountered in conventional beamforming algorithms. The MK beamforming algorithm, however, required a relatively large amount of data to reliably estimate the active weight vector because it relies on a numerical optimization algorithm. In order to achieve efficient estimation, we propose to cascade the subspace (eigenspace) filter [4, §6.8] with the active weight vector. The subspace filter decomposes the output of the blocking matrix into directional signals and ambient noise components. The ambient noise components are then averaged and subtracted from the beamformer's output, which leads to reliable estimation as well as a significant reduction in computation. We show the effectiveness of our method through a set of distant speech recognition experiments on real microphone array data captured in a real environment. Our new beamforming algorithm provided the best recognition performance among conventional beamforming techniques, a word error rate (WER) of 5.3%, which is comparable to the WER of 4.2% obtained with a close-talking microphone. Moreover, it achieved better recognition performance with less adaptation data than the conventional MK beamformer.
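A minimal sketch of the eigenspace idea the abstract refers to is given below: the blocking-matrix outputs are split into a dominant "directional" subspace and the remaining "ambient noise" subspace, and the averaged ambient estimate is subtracted from the beamformer output. Shapes, the rank choice, and the subtraction step are illustrative assumptions, not the authors' derivation.

```python
# Hedged sketch of a dominant-mode (eigenspace) filter on blocking-matrix
# outputs; not the paper's full MK/GSC formulation.
import numpy as np

def subspace_noise_estimate(Z, rank):
    """Z: (channels, frames) blocking-matrix output; rank: # dominant modes."""
    R = (Z @ Z.conj().T) / Z.shape[1]            # sample covariance
    w, V = np.linalg.eigh(R)                      # eigenvalues in ascending order
    V_noise = V[:, : Z.shape[0] - rank]           # minor modes ~ ambient noise
    noise = V_noise @ (V_noise.conj().T @ Z)      # projection onto noise subspace
    return noise.mean(axis=0)                     # average over channels

def beamform_with_subspace(y_fixed, Z, rank=2):
    """Subtract the averaged ambient-noise estimate from the fixed beamformer output."""
    return y_fixed - subspace_noise_estimate(Z, rank)
```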
{"title":"Maximum kurtosis beamforming with a subspace filter for distant speech recognition","authors":"K. Kumatani, J. McDonough, B. Raj","doi":"10.1109/ASRU.2011.6163927","DOIUrl":"https://doi.org/10.1109/ASRU.2011.6163927","url":null,"abstract":"This paper presents a new beamforming method for distant speech recognition (DSR). The dominant mode subspace is considered in order to efficiently estimate the active weight vectors for maximum kurtosis (MK) beamforming with the generalized sidelobe canceler (GSC). We demonstrated in [1], [2], [3] that the beamforming method based on the maximum kurtosis criterion can remove reverberant and noise effects without signal cancellation encountered in the conventional beamforming algorithms. The MK beamforming algorithm, however, required a relatively large amount of data for reliably estimating the active weight vector because it relies on a numerical optimization algorithm. In order to achieve efficient estimation, we propose to cascade the subspace (eigenspace) filter [4, §6.8] with the active weight vector. The subspace filter can decompose the output of the blocking matrix into directional signals and ambient noise components. Then, the ambient noise components are averaged and would be subtracted from the beamformer's output, which leads to reliable estimation as well as significant computational reduction. We show the effectiveness of our method through a set of distant speech recognition experiments on real microphone array data captured in the real environment. Our new beamforming algorithm provided the best recognition performance among conventional beamforming techniques, a word error rate (WER) of 5.3 %, which is comparable to the WER of 4.2 % obtained with a close-talking microphone. Moreover, it achieved better recognition performance with a fewer amounts of adaptation data than the conventional MK beamformer.","PeriodicalId":338241,"journal":{"name":"2011 IEEE Workshop on Automatic Speech Recognition & Understanding","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127980694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: N-best rescoring by AdaBoost phoneme classifiers for isolated word recognition
Pub Date: 2011-12-01 | DOI: 10.1109/ASRU.2011.6163910
Hiroshi Fujimura, Masanobu Nakamura, Yusuke Shinohara, T. Masuko
This paper proposes a novel technique to exploit generative and discriminative models for speech recognition. Speech recognition using discriminative models has attracted much attention in the past decade. In particular, a rescoring framework using discriminative word classifiers with generative-model-based features was shown to be effective in small-vocabulary tasks. However, a straightforward application of the framework to large-vocabulary tasks is difficult because the number of classifiers increases in proportion to the number of word pairs. We extend this framework to exploit generative and discriminative models in large-vocabulary tasks. N-best hypotheses obtained in the first pass are rescored using AdaBoost phoneme classifiers, where generative-model-based features, in particular difference-of-likelihood features, are used for the classifiers. Special care is taken to use context-dependent hidden Markov models (CDHMMs) as generative models, since most state-of-the-art speech recognizers use CDHMMs. Experimental results show that the proposed method reduces word errors by 32.68% relative in a one-million-vocabulary isolated word recognition task.
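The sketch below shows one plausible shape of the rescoring step described above: each first-pass N-best hypothesis receives an additional score from per-phoneme classifiers applied to difference-of-likelihood features, and the list is re-ranked by a weighted combination. The `hyp.phonemes` attribute, the `decision_function` interface, and the feature extractor are assumed names, not the paper's API.

```python
# Hedged sketch of N-best rescoring with phoneme classifiers.
def rescore_nbest(nbest, phoneme_classifiers, diff_likelihood_features, alpha=0.5):
    """nbest: list of (hypothesis, first_pass_score).
    phoneme_classifiers: dict phoneme -> classifier with .decision_function(X).
    diff_likelihood_features: fn(hypothesis, phoneme) -> feature matrix X."""
    rescored = []
    for hyp, first_pass in nbest:
        boost = 0.0
        for ph in hyp.phonemes:                      # phoneme sequence of this hypothesis
            X = diff_likelihood_features(hyp, ph)
            boost += float(phoneme_classifiers[ph].decision_function(X).sum())
        rescored.append((hyp, (1 - alpha) * first_pass + alpha * boost))
    return max(rescored, key=lambda t: t[1])[0]      # best hypothesis after rescoring
```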
{"title":"N-Best rescoring by adaboost phoneme classifiers for isolated word recognition","authors":"Hiroshi Fujimura, Masanobu Nakamura, Yusuke Shinohara, T. Masuko","doi":"10.1109/ASRU.2011.6163910","DOIUrl":"https://doi.org/10.1109/ASRU.2011.6163910","url":null,"abstract":"This paper proposes a novel technique to exploit generative and discriminative models for speech recognition. Speech recognition using discriminative models has attracted much attention in the past decade. In particular, a rescoring framework using discriminative word classifiers with generative-model-based features was shown to be effective in small-vocabulary tasks. However, a straightforward application of the framework to large-vocabulary tasks is difficult because the number of classifiers increases in proportion to the number of word pairs. We extend this framework to exploit generative and discriminative models in large-vocabulary tasks. N-best hypotheses obtained in the first pass are rescored using AdaBoost phoneme classifiers, where generative-model-based features, i.e. difference-of-likelihood features in particular, are used for the classifiers. Special care is taken to use context-dependent hidden Markov models (CDHMMs) as generative models, since most of the state-of-the-art speech recognizers use CDHMMs. Experimental results show that the proposed method reduces word errors by 32.68% relatively in a one-million-vocabulary isolated word recognition task.","PeriodicalId":338241,"journal":{"name":"2011 IEEE Workshop on Automatic Speech Recognition & Understanding","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129867360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Evaluating prosodic features for automated scoring of non-native read speech
Pub Date: 2011-12-01 | DOI: 10.1109/ASRU.2011.6163975
K. Zechner, Xiaoming Xi, L. Chen
We evaluate two types of prosodic features, based on automatically generated stress and tone labels, for their applicability to automated scoring of non-native read speech. Neither type of feature has been used in the context of automated scoring of non-native read speech to date.
{"title":"Evaluating prosodic features for automated scoring of non-native read speech","authors":"K. Zechner, Xiaoming Xi, L. Chen","doi":"10.1109/ASRU.2011.6163975","DOIUrl":"https://doi.org/10.1109/ASRU.2011.6163975","url":null,"abstract":"We evaluate two types of prosodic features utilizing automatically generated stress and tone labels for non-native read speech in terms of their applicability for automated speech scoring. oth types of features have not been used in the context of automated scoring of non-native read speech to date.","PeriodicalId":338241,"journal":{"name":"2011 IEEE Workshop on Automatic Speech Recognition & Understanding","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115198316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Study of probabilistic and Bottle-Neck features in multilingual environment
Pub Date: 2011-12-01 | DOI: 10.1109/ASRU.2011.6163958
F. Grézl, M. Karafiát, M. Janda
This study focuses on the performance of Probabilistic and Bottle-Neck features on a language other than the one they were trained for. It is shown that such porting is possible and that the features remain competitive with PLP features. Further, several combination techniques are evaluated; the performance of the combined features is close to that of the best performing system. Finally, larger NNs were trained on large amounts of data from a different domain. The resulting features outperformed the previously trained systems, and combination with them further improved system performance.
{"title":"Study of probabilistic and Bottle-Neck features in multilingual environment","authors":"F. Grézl, M. Karafiát, M. Janda","doi":"10.1109/ASRU.2011.6163958","DOIUrl":"https://doi.org/10.1109/ASRU.2011.6163958","url":null,"abstract":"This study is focused on the performance of Probabilistic and Bottle-Neck features on different language than they were trained for. It is shown, that such porting is possible and that the features are still competitive to PLP features. Further, several combination techniques are evaluated. The performance of combined features is close to the best performing system. Finally, bigger NNs were trained on large data from different domain. The resulting features outperformed previously trained systems and combination with them further improved the system performance.","PeriodicalId":338241,"journal":{"name":"2011 IEEE Workshop on Automatic Speech Recognition & Understanding","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115795579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Efficient determinization of tagged word lattices using categorial and lexicographic semirings
Pub Date: 2011-12-01 | DOI: 10.1109/ASRU.2011.6163945
Izhak Shafran, R. Sproat, M. Yarmohammadi, Brian Roark
Speech and language processing systems routinely face the need to apply finite state operations (e.g., POS tagging) on results from intermediate stages (e.g., ASR output) that are naturally represented in a compact lattice form. Currently, such needs are met by converting the lattices into linear sequences (n-best scoring sequences) before and after applying the finite state operations. In this paper, we eliminate the need for this unnecessary conversion by addressing the problem of picking only the single best-scoring output labels for every input sequence. For this purpose, we define a categorial semiring that allows determinization over strings and incorporate it into a 〈Tropical, Categorial〉 lexicographic semiring. Through examples and empirical evaluations we show how determinization in this lexicographic semiring produces the desired output. The proposed solution is general in nature and can be applied to multi-tape weighted transducers that arise in many applications.
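To make the semiring intuition concrete, here is a toy weight pair in the spirit of the 〈Tropical, Categorial〉 construction, but with plain label strings in the second component: "plus" keeps the pair with the smaller tropical weight (ties broken on the labels) and "times" adds weights while concatenating labels. It illustrates why determinization in such a semiring retains only the single best-scoring tag sequence per input path; it is a simplification, not the paper's categorial semiring.

```python
# Hedged toy sketch of a lexicographic <weight, label-string> pair.
from dataclasses import dataclass

@dataclass(frozen=True)
class TropicalString:
    weight: float      # tropical component (lower is better)
    labels: tuple      # output tag sequence

    def plus(self, other):                       # semiring "addition" = lexicographic min
        return min(self, other, key=lambda x: (x.weight, x.labels))

    def times(self, other):                      # semiring "multiplication"
        return TropicalString(self.weight + other.weight,
                              self.labels + other.labels)

ONE = TropicalString(0.0, ())                    # multiplicative identity

# Two competing taggings of the same input path: only the cheaper one survives.
a = ONE.times(TropicalString(1.2, ("NN",)))
b = ONE.times(TropicalString(0.7, ("VB",)))
assert a.plus(b).labels == ("VB",)
```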
{"title":"Efficient determinization of tagged word lattices using categorial and lexicographic semirings","authors":"Izhak Shafran, R. Sproat, M. Yarmohammadi, Brian Roark","doi":"10.1109/ASRU.2011.6163945","DOIUrl":"https://doi.org/10.1109/ASRU.2011.6163945","url":null,"abstract":"Speech and language processing systems routinely face the need to apply finite state operations (e.g., POS tagging) on results from intermediate stages (e.g., ASR output) that are naturally represented in a compact lattice form. Currently, such needs are met by converting the lattices into linear sequences (n-best scoring sequences) before and after applying the finite state operations. In this paper, we eliminate the need for this unnecessary conversion by addressing the problem of picking only the single-best scoring output labels for every input sequence. For this purpose, we define a categorial semiring that allows determinzation over strings and incorporate it into a 〈Tropical, Categorial〉 lexicographic semiring. Through examples and empirical evaluations we show how determinization in this lexicographic semiring produces the desired output. The proposed solution is general in nature and can be applied to multi-tape weighted transducers that arise in many applications.","PeriodicalId":338241,"journal":{"name":"2011 IEEE Workshop on Automatic Speech Recognition & Understanding","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131272197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Applying Multiclass Bandit algorithms to call-type classification
Pub Date: 2011-12-01 | DOI: 10.1109/ASRU.2011.6163970
L. Ralaivola, Benoit Favre, Pierre Gotab, Frédéric Béchet, Géraldine Damnati
We analyze the problem of call-type classification using weakly labelled data. The training data is not systematically annotated, but we assume access to a weak or lazy oracle able to answer the question “Is sample x of class q?” with a simple ‘yes’ or ‘no’. This learning situation may be encountered in many real-world problems where the cost of labelling data is very high. We prove that it is possible to learn linear classifiers in this setting by estimating adequate expectations inspired by the Multiclass Bandit paradigm. We propose a learning strategy that builds on Kessler's construction to learn multiclass perceptrons. We test our learning procedure on two real-world datasets from spoken language understanding and provide compelling results.
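The sketch below shows one simple way such yes/no oracle feedback can drive a multiclass perceptron (one weight vector per class, in the spirit of Kessler's construction): the learner queries the oracle only on its own prediction and demotes that class on a 'no'. This is a Banditron-style simplification for illustration, not the authors' exact estimator.

```python
# Hedged sketch of learning from a weak "Is sample x of class q?" oracle.
import numpy as np

def bandit_perceptron(X, oracle, n_classes, epochs=5, seed=0):
    """X: (n, d) features; oracle(i, q) -> True iff sample i belongs to class q."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = np.zeros((n_classes, d))                 # one weight vector per call type
    for _ in range(epochs):
        for i in rng.permutation(n):
            x = X[i]
            q = int(np.argmax(W @ x))            # predicted call type
            if not oracle(i, q):                 # 'no': the guess was wrong
                W[q] -= x                        # demote the predicted class only;
                                                 # the true class is unknown, so no promotion step
    return W
```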
{"title":"Applying Multiclass Bandit algorithms to call-type classification","authors":"L. Ralaivola, Benoit Favre, Pierre Gotab, Frédéric Béchet, Géraldine Damnati","doi":"10.1109/ASRU.2011.6163970","DOIUrl":"https://doi.org/10.1109/ASRU.2011.6163970","url":null,"abstract":"We analyze the problem of call-type classification using data that is weakly labelled. The training data is not systematically annotated, but we consider we have a weak or lazy oracle able to answer the question “Is sample x of class q?” by a simple ‘yes’ or ‘no’ answer. This situation of learning might be encountered in many real-world problems where the cost of labelling data is very high. We prove that it is possible to learn linear classifiers in this setting, by estimating adequate expectations inspired by the Multiclass Bandit paradgim. We propose a learning strategy that builds on Kessler's construction to learn multiclass perceptrons. We test our learning procedure against two real-world datasets from spoken langage understanding and provide compelling results.","PeriodicalId":338241,"journal":{"name":"2011 IEEE Workshop on Automatic Speech Recognition & Understanding","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116079144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Discriminative splitting of Gaussian/log-linear mixture HMMs for speech recognition
Pub Date: 2011-12-01 | DOI: 10.1109/ASRU.2011.6163896
Muhammad Ali Tahir, R. Schlüter, H. Ney
This paper presents a method to incorporate mixture density splitting into discriminative log-linear training of the acoustic model. The standard method is to obtain a high-resolution model by maximum likelihood training and density splitting, and then train this model further discriminatively. For a single Gaussian density per state, the log-linear MMI optimization is a global maximum problem, and by further splitting and discriminative training of this model we can obtain a higher-complexity model. The mixture training is not a global maximum problem; nevertheless, we experimentally achieve large gains in the objective function and corresponding moderate gains in the word error rate on a large vocabulary corpus.
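For reference, the density-splitting step mentioned above is commonly realized by duplicating each component with its mean perturbed along the standard deviation, as in the hedged sketch below; the perturbation factor and diagonal-covariance assumption are illustrative, and the subsequent (discriminative) retraining of the enlarged mixture is not shown.

```python
# Hedged sketch of splitting a diagonal-covariance Gaussian mixture.
import numpy as np

def split_mixture(means, variances, weights, eps=0.2):
    """means, variances: (K, D); weights: (K,). Returns a 2K-component mixture."""
    offset = eps * np.sqrt(variances)                               # perturb along std. dev.
    new_means = np.concatenate([means + offset, means - offset])    # 2K components
    new_vars = np.concatenate([variances, variances])
    new_weights = np.concatenate([weights, weights]) / 2.0          # keep weights normalized
    return new_means, new_vars, new_weights
```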
{"title":"Discriminative splitting of Gaussian/log-linear mixture HMMs for speech recognition","authors":"Muhammad Ali Tahir, R. Schlüter, H. Ney","doi":"10.1109/ASRU.2011.6163896","DOIUrl":"https://doi.org/10.1109/ASRU.2011.6163896","url":null,"abstract":"This paper presents a method to incorporate mixture density splitting into the acoustic model discriminative log-linear training. The standard method is to obtain a high resolution model by maximum likelihood training and density splitting, and then further training this model discriminatively. For a single Gaussian density per state the log-linear MMI optimization is a global maximum problem, and by further splitting and discriminative training of this model we can get a higher complexity model. The mixture training is not a global maximum problem, nevertheless experimentally we achieve large gains in the objective function and corresponding moderate gains in the word error rate on a large vocabulary corpus","PeriodicalId":338241,"journal":{"name":"2011 IEEE Workshop on Automatic Speech Recognition & Understanding","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121452558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Robust seed model training for speaker adaptation using pseudo-speaker features generated by inverse CMLLR transformation
Pub Date: 2011-12-01 | DOI: 10.1109/ASRU.2011.6163925
Arata Itoh, Sunao Hara, N. Kitaoka, K. Takeda
In this paper, we propose a novel acoustic model training method which is suitable for speaker adaptation in speech recognition. Our method is based on feature generation from a small amount of speaker data. For decades, speaker adaptation methods have been widely used. Such adaptation methods need a certain amount of adaptation data, and if the data is insufficient, speech recognition performance degrades significantly. If the seed models to be adapted to a specific speaker cover a wide range of speakers, speaker adaptation can perform robustly. To build such robust seed models, we adopt inverse maximum likelihood linear regression (MLLR) transformation-based feature generation, and then train our seed models using these features. First, we obtain MLLR transformation matrices from a limited number of existing speakers. Then we extract the bases of the MLLR transformation matrices using PCA. The distribution of the weight parameters that express the MLLR transformation matrices for the existing speakers is estimated. Next, we generate pseudo-speaker MLLR transformations by sampling weight parameters from this distribution, and apply the inverse of each transformation to the normalized existing-speaker features to generate the pseudo-speakers' features. Finally, using these features, we train the acoustic seed models. Using these seed models, we obtained better speaker adaptation results than with models simply adapted to the environment.
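The pipeline described in the abstract can be sketched roughly as below: PCA over vectorized per-speaker CMLLR matrices, a Gaussian fit to the PCA weights, sampling of new weight vectors, and application of the inverse of the sampled transform to normalized features. Dimensions, the Gaussian assumption on the weights, and the perturbation details are illustrative, not the authors' exact recipe.

```python
# Hedged sketch of pseudo-speaker feature generation via inverse CMLLR.
import numpy as np

def fit_transform_space(mllr_matrices, n_bases):
    """mllr_matrices: list of (D, D+1) CMLLR transforms [A | b], one per speaker."""
    X = np.stack([M.ravel() for M in mllr_matrices])           # (S, D*(D+1))
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    bases = Vt[:n_bases]                                        # PCA bases
    weights = (X - mean) @ bases.T                              # per-speaker weights
    return mean, bases, weights.mean(axis=0), np.cov(weights, rowvar=False)

def sample_pseudo_speaker_features(feats, mean, bases, w_mu, w_cov, shape, rng):
    """Apply the inverse of a sampled transform to normalized features feats: (D, T)."""
    w = rng.multivariate_normal(w_mu, w_cov)                    # sample PCA weights
    M = (mean + w @ bases).reshape(shape)                       # sampled [A | b]
    A, b = M[:, :-1], M[:, -1:]
    return np.linalg.inv(A) @ (feats - b)                       # inverse CMLLR: A^-1 (x - b)
```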
{"title":"Robust seed model training for speaker adaptation using pseudo-speaker features generated by inverse CMLLR transformation","authors":"Arata Itoh, Sunao Hara, N. Kitaoka, K. Takeda","doi":"10.1109/ASRU.2011.6163925","DOIUrl":"https://doi.org/10.1109/ASRU.2011.6163925","url":null,"abstract":"In this paper, we propose a novel acoustic model training method which is suitable for speaker adaptation in speech recognition. Our method is based on feature generation from a small amount of speakers' data. For decades, speaker adaptation methods have been widely used. Such adaptation methods need some amount of adaptation data and if the data is not sufficient, speech recognition performance degrade significantly. If the seed models to be adapted to a specific speaker can widely cover more speakers, speaker adaptation can perform robustly. To make such robust seed models, we adopt inverse maximum likelihood linear regression (MLLR) transformation-based feature generation, and then train our seed models using these features. First we obtain MLLR transformation matrices from a limited number of existing speakers. Then we extract the bases of the MLLR transformation matrices using PCA. The distribution of the weight parameters to express the MLLR transformation matrices for the existing speakers is estimated. Next we generate pseudo-speaker MLLR transformations by sampling the weight parameters from the distribution, and apply the inverse of the transformation to the normalized existing speaker features to generate the pseudo-speakers' features. Finally, using these features, we train the acoustic seed models. Using this seed models, we obtained better speaker adaptation results than using simply environmentally adapted models.","PeriodicalId":338241,"journal":{"name":"2011 IEEE Workshop on Automatic Speech Recognition & Understanding","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115542584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Factor analysis based session variability compensation for Automatic Speech Recognition
Pub Date: 2011-12-01 | DOI: 10.1109/ASRU.2011.6163920
Mickael Rouvier, M. Bouallegue, D. Matrouf, G. Linarès
In this paper we propose a new feature normalization based on Factor Analysis (FA) for the problem of acoustic variability in Automatic Speech Recognition (ASR). The FA paradigm has previously been used in ASR to model the useful information: the HMM-state-dependent acoustic information. In this paper, we propose to use the FA paradigm to model the useless information (speaker or channel variability) in order to remove it from the acoustic data frames. The transformed training frames are then used to train new HMM models with the standard training algorithm. The transformation is also applied to the test data before decoding. With this approach we obtain, on French broadcast news, an absolute WER reduction of 1.3%.
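The compensation step described above can be sketched as follows: given an already estimated session subspace U, the per-utterance session factor x is inferred and the component U·x is subtracted from every frame before training and decoding. The MAP point estimate below assumes centered frames, an isotropic residual, and a standard-normal prior on x; it is an illustrative simplification, not the authors' full FA training.

```python
# Hedged sketch of removing session/speaker variability from acoustic frames.
import numpy as np

def remove_session_variability(frames, U, noise_var=1.0):
    """frames: (T, D) frames of one utterance; U: (D, R) session-variability subspace."""
    T = frames.shape[0]
    f_bar = frames.mean(axis=0)                                # utterance-level average
    # MAP estimate of x under frames_t ~ N(U x, noise_var * I), prior x ~ N(0, I):
    precision = np.eye(U.shape[1]) + (T / noise_var) * (U.T @ U)
    x = np.linalg.solve(precision, (T / noise_var) * (U.T @ f_bar))
    return frames - U @ x                                      # compensated frames
```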
{"title":"Factor analysis based session variability compensation for Automatic Speech Recognition","authors":"Mickael Rouvier, M. Bouallegue, D. Matrouf, G. Linarès","doi":"10.1109/ASRU.2011.6163920","DOIUrl":"https://doi.org/10.1109/ASRU.2011.6163920","url":null,"abstract":"In this paper we propose a new feature normalization based on Factor Analysis (FA) for the problem of acoustic variability in Automatic Speech Recognition (ASR). The FA paradigm was previously used in the field of ASR, in order to model the usefull information: the HMM state dependent acoustic information. In this paper, we propose to use the FA paradigm to model the useless information (speaker- or channel-variability) in order to remove it from acoustic data frames. The transformed training data frames are then used to train new HMM models using the standard training algorithm. The transformation is also applied to the test data before the decoding process. With this approach we obtain, on french broadcast news, an absolute WER reduction of 1.3%.","PeriodicalId":338241,"journal":{"name":"2011 IEEE Workshop on Automatic Speech Recognition & Understanding","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127510877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}