Pub Date: 2016-12-13 | DOI: 10.1109/SLT.2016.7846262
Further optimisations of constant Q cepstral processing for integrated utterance and text-dependent speaker verification
H. Delgado, M. Todisco, Md. Sahidullah, A. K. Sarkar, N. Evans, T. Kinnunen, Z. Tan
Many authentication applications involving automatic speaker verification (ASV) demand robust performance using short-duration, fixed or prompted text utterances. Text constraints not only reduce the phone-mismatch between enrolment and test utterances, which generally leads to improved performance, but also provide an ancillary level of security. This can take the form of explicit utterance verification (UV). An integrated UV + ASV system should then verify access attempts which contain not just the expected speaker, but also the expected text content. This paper presents such a system and introduces new features which are used for both UV and ASV tasks. Based upon multi-resolution, spectro-temporal analysis and when fused with more traditional parameterisations, the new features not only generally outperform Mel-frequency cepstral coefficients, but also are shown to be complementary when fusing systems at score level. Finally, the joint operation of UV and ASV greatly decreases false acceptances for unmatched text trials.
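For readers unfamiliar with constant Q cepstral processing, the following is a minimal sketch of constant-Q cepstral feature extraction using librosa. It is illustrative only: it omits the uniform resampling of the log-power constant-Q spectrum used in the published CQCC pipeline, and the sampling rate, bin counts and cepstral order below are assumptions, not the authors' settings.

```python
import numpy as np
import librosa
from scipy.fftpack import dct

def constant_q_cepstra(wav_path, n_bins=96, bins_per_octave=24, n_ceps=20):
    """Rough constant-Q cepstral features: CQT -> log power -> DCT.

    Simplified sketch; the CQCC features additionally resample the
    log-power constant-Q spectrum on a uniform scale before the DCT,
    which is skipped here for brevity.
    """
    y, sr = librosa.load(wav_path, sr=16000)
    C = librosa.cqt(y, sr=sr, fmin=librosa.note_to_hz('C1'),
                    n_bins=n_bins, bins_per_octave=bins_per_octave)
    log_power = np.log(np.abs(C) ** 2 + 1e-10)            # (n_bins, n_frames)
    cepstra = dct(log_power, type=2, axis=0, norm='ortho')[:n_ceps]
    return cepstra.T                                      # (n_frames, n_ceps)
```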
{"title":"Further optimisations of constant Q cepstral processing for integrated utterance and text-dependent speaker verification","authors":"H. Delgado, M. Todisco, Md. Sahidullah, A. K. Sarkar, N. Evans, T. Kinnunen, Z. Tan","doi":"10.1109/SLT.2016.7846262","DOIUrl":"https://doi.org/10.1109/SLT.2016.7846262","url":null,"abstract":"Many authentication applications involving automatic speaker verification (ASV) demand robust performance using short-duration, fixed or prompted text utterances. Text constraints not only reduce the phone-mismatch between enrolment and test utterances, which generally leads to improved performance, but also provide an ancillary level of security. This can take the form of explicit utterance verification (UV). An integrated UV + ASV system should then verify access attempts which contain not just the expected speaker, but also the expected text content. This paper presents such a system and introduces new features which are used for both UV and ASV tasks. Based upon multi-resolution, spectro-temporal analysis and when fused with more traditional parameterisations, the new features not only generally outperform Mel-frequency cepstral coefficients, but also are shown to be complementary when fusing systems at score level. Finally, the joint operation of UV and ASV greatly decreases false acceptances for unmatched text trials.","PeriodicalId":281635,"journal":{"name":"2016 IEEE Spoken Language Technology Workshop (SLT)","volume":"336 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115669425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-12-13 | DOI: 10.1109/SLT.2016.7846239
A study of speech distortion conditions in real scenarios for speech processing applications
D. González, E. Vincent, J. Lara
The growing demand for robust speech processing applications able to operate in adverse scenarios calls for new evaluation protocols and datasets beyond artificial laboratory conditions. The characteristics of real data for a given scenario are rarely discussed in the literature. As a result, methods are often tested based on the authors' expertise and not always in scenarios with actual practical value. This paper aims to open this discussion by identifying some of the main problems with the data simulation and collection procedures used so far and by summarizing the important characteristics of real scenarios to be taken into account, including the properties of reverberation, noise and the Lombard effect. Finally, we provide some preliminary guidelines for designing experimental setups and reporting speech recognition results for proposal validation.
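As a toy illustration of the kind of data simulation the paper examines, the snippet below convolves clean speech with a room impulse response and adds noise at a chosen SNR. The file names and the mono-audio assumption are hypothetical, and the paper's point is precisely that such simple simulations may not reflect real conditions.

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

def simulate(clean_path, rir_path, noise_path, snr_db, out_path):
    """Naive simulation: speech * RIR + noise scaled to a target SNR.

    Assumes mono recordings sharing the same sampling rate.
    """
    speech, sr = sf.read(clean_path)
    rir, _ = sf.read(rir_path)
    noise, _ = sf.read(noise_path)

    reverberant = fftconvolve(speech, rir)[:len(speech)]
    noise = np.resize(noise, len(reverberant))

    # Scale the noise so that 10*log10(P_speech / P_noise) == snr_db.
    p_speech = np.mean(reverberant ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    noise = noise * np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))

    sf.write(out_path, reverberant + noise, sr)
```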
{"title":"A study of speech distortion conditions in real scenarios for speech processing applications","authors":"D. González, E. Vincent, J. Lara","doi":"10.1109/SLT.2016.7846239","DOIUrl":"https://doi.org/10.1109/SLT.2016.7846239","url":null,"abstract":"The growing demand for robust speech processing applications able to operate in adverse scenarios calls for new evaluation protocols and datasets beyond artificial laboratory conditions. The characteristics of real data for a given scenario are rarely discussed in the literature. As a result, methods are often tested based on the author expertise and not always in scenarios with actual practical value. This paper aims to open this discussion by identifying some of the main problems with data simulation or collection procedures used so far and summarizing the important characteristics of real scenarios to be taken into account, including the properties of reverberation, noise and Lombard effect. At last, we provide some preliminary guidelines towards designing experimental setup and speech recognition results for proposal validation.","PeriodicalId":281635,"journal":{"name":"2016 IEEE Spoken Language Technology Workshop (SLT)","volume":"186 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134162937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-12-13 | DOI: 10.1109/SLT.2016.7846251
Learning dialogue dynamics with the method of moments
M. Barlier, R. Laroche, O. Pietquin
In this paper, we introduce a novel framework to encode the dynamics of dialogues into a probabilistic graphical model. Traditionally, Hidden Markov Models (HMMs) would be used to address this problem, involving a first step of hand-crafting to build a dialogue model (e.g. defining potential hidden states) followed by applying expectation-maximisation (EM) algorithms to refine it. Recently, an alternative class of algorithms based on the Method of Moments (MoM) has proven successful in avoiding issues of EM-like algorithms such as convergence towards local optima, tractability issues, initialization issues and the lack of theoretical guarantees. In this work, we show that dialogues may be modeled by SP-RFA, a class of graphical models efficiently learnable within the MoM framework and directly usable in planning algorithms (such as reinforcement learning). Experiments are conducted on the Ubuntu corpus, with dialogues treated as sequences of dialogue acts represented by their Latent Dirichlet Allocation (LDA) and Latent Semantic Analysis (LSA) representations. We show that a MoM-based algorithm can learn a compact model of sequences of such acts.
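To give a rough idea of how method-of-moments estimators for sequence models work, here is a minimal sketch of the classic spectral learning recipe (in the style of Hsu et al.) applied to integer-coded dialogue-act sequences. It is not the SP-RFA algorithm of the paper, and the moment estimates and variable names are illustrative assumptions.

```python
import numpy as np

def spectral_learn(sequences, n_symbols, rank):
    """Method-of-moments sketch: estimate low-order moments of symbol
    sequences, then recover operator parameters through an SVD.

    Returns (b1, binf, B) such that the probability of a sequence
    x1..xt is approximated by binf @ B[xt] @ ... @ B[x1] @ b1.
    """
    P1 = np.zeros(n_symbols)                              # unigram moments
    P21 = np.zeros((n_symbols, n_symbols))                # P21[j, i] ~ P(x2=j, x1=i)
    P3x1 = np.zeros((n_symbols, n_symbols, n_symbols))    # P3x1[x][k, i] ~ P(x3=k, x2=x, x1=i)

    for seq in sequences:
        for t in range(len(seq) - 2):
            i, x, k = seq[t], seq[t + 1], seq[t + 2]
            P1[i] += 1
            P21[x, i] += 1
            P3x1[x][k, i] += 1
    P1 /= P1.sum()
    P21 /= P21.sum()
    P3x1 /= P3x1.sum()

    U, _, _ = np.linalg.svd(P21)
    U = U[:, :rank]                                       # top left singular vectors

    b1 = U.T @ P1
    binf = np.linalg.pinv(P21.T @ U) @ P1
    B = [U.T @ P3x1[x] @ np.linalg.pinv(U.T @ P21) for x in range(n_symbols)]
    return b1, binf, B

def score(seq, b1, binf, B):
    """Approximate probability of an observed dialogue-act sequence."""
    state = b1
    for x in seq:
        state = B[x] @ state
    return float(binf @ state)
```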
{"title":"Learning dialogue dynamics with the method of moments","authors":"M. Barlier, R. Laroche, O. Pietquin","doi":"10.1109/SLT.2016.7846251","DOIUrl":"https://doi.org/10.1109/SLT.2016.7846251","url":null,"abstract":"In this paper, we introduce a novel framework to encode the dynamics of dialogues into a probabilistic graphical model. Traditionally, Hidden Markov Models (HMMs) would be used to address this problem, involving a first step of hand-crafting to build a dialogue model (e.g. defining potential hidden states) followed by applying expectation-maximisation (EM) algorithms to refine it. Recently, an alternative class of algorithms based on the Method of Moments (MoM) has proven successful in avoiding issues of the EM-like algorithms such as convergence towards local optima, tractability issues, initialization issues or the lack of theoretical guarantees. In this work, we show that dialogues may be modeled by SP-RFA, a class of graphical models efficiently learnable within the MoM and directly usable in planning algorithms (such as reinforcement learning). Experiments are led on the Ubuntu corpus and dialogues are considered as sequences of dialogue acts, represented by their Latent Dirichlet Allocation (LDA) and Latent Semantic Analysis (LSA). We show that a MoM-based algorithm can learn a compact model of sequences of such acts.","PeriodicalId":281635,"journal":{"name":"2016 IEEE Spoken Language Technology Workshop (SLT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125781481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-12-01 | DOI: 10.1109/SLT.2016.7846247
A nonparametric Bayesian approach for automatic discovery of a lexicon and acoustic units
A. Torbati, J. Picone
State-of-the-art speech recognition systems use context-dependent phonemes as acoustic units. However, these approaches do not work well for low-resource languages, where large amounts of training data or resources such as a lexicon are not available. For such languages, automatic discovery of acoustic units can be important. In this paper, we demonstrate the application of nonparametric Bayesian models to acoustic unit discovery. We show that the discovered units are linguistically meaningful. We also present a semi-supervised learning algorithm that uses a nonparametric Bayesian model to learn a mapping between words and acoustic units. We demonstrate that a speech recognition system using these discovered resources can approach the performance of a speech recognizer trained using resources developed by experts. We show that unsupervised discovery of acoustic units combined with semi-supervised discovery of the lexicon achieves performance (9.8% WER) comparable to other published high-complexity systems. This nonparametric approach enables the rapid development of speech recognition systems for low-resource languages.
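As a very rough illustration of nonparametric Bayesian clustering of acoustic frames into units (not the authors' model, which is considerably more elaborate), a Dirichlet-process mixture can be approximated with scikit-learn's variational BayesianGaussianMixture; the MFCC front end and the parameter choices below are assumptions.

```python
import numpy as np
import librosa
from sklearn.mixture import BayesianGaussianMixture

def discover_units(wav_paths, max_units=100):
    """Cluster MFCC frames with a truncated Dirichlet-process mixture;
    the occupied components act as the discovered acoustic-unit inventory."""
    frames = []
    for path in wav_paths:
        y, sr = librosa.load(path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # (n_frames, 13)
        frames.append(mfcc)
    X = np.vstack(frames)

    dpgmm = BayesianGaussianMixture(
        n_components=max_units,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="diag",
        max_iter=200,
    )
    dpgmm.fit(X)
    labels = dpgmm.predict(X)       # per-frame acoustic-unit assignments
    return dpgmm, labels
```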
{"title":"A nonparametric Bayesian approach for automatic discovery of a lexicon and acoustic units","authors":"A. Torbati, J. Picone","doi":"10.1109/SLT.2016.7846247","DOIUrl":"https://doi.org/10.1109/SLT.2016.7846247","url":null,"abstract":"State of the art speech recognition systems use context-dependent phonemes as acoustic units. However, these approaches do not work well for low resourced languages where large amounts of training data or resources such as a lexicon are not available. For such languages, automatic discovery of acoustic units can be important. In this paper, we demonstrate the application of nonparametric Bayesian models to acoustic unit discovery. We show that the discovered units are linguistically meaningful. We also present a semi-supervised learning algorithm that uses a nonparametric Bayesian model to learn a mapping between words and acoustic units. We demonstrate that a speech recognition system using these discovered resources can approach the performance of a speech recognizer trained using resources developed by experts. We show that unsupervised discovery of acoustic units combined with semi-supervised discovery of the lexicon achieved performance (9.8% WER) comparable to other published high complexity systems. This nonparametric approach enables the rapid development of speech recognition systems in low resourced languages.","PeriodicalId":281635,"journal":{"name":"2016 IEEE Spoken Language Technology Workshop (SLT)","volume":"223 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127177933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-12-01 | DOI: 10.1109/SLT.2016.7846320
Look, listen, and decode: Multimodal speech recognition with images
Felix Sun, David F. Harwath, James R. Glass
In this paper, we introduce a multimodal speech recognition scenario, in which an image provides contextual information for a spoken caption to be decoded. We investigate a lattice rescoring algorithm that integrates information from the image at two different points: the image is used to augment the language model with the most likely words, and to rescore the top hypotheses using a word-level RNN. This rescoring mechanism decreases the word error rate by 3 absolute percentage points, compared to a baseline speech recognizer operating with only the speech recording.
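The rescoring step can be pictured as a simple log-linear combination of recognizer and image-conditioned language-model scores over an n-best list. The sketch below is an illustrative simplification under that assumption, not the paper's lattice-rescoring implementation, and the interpolation weight is a made-up value.

```python
def rescore_nbest(hypotheses, image_lm_score, lm_weight=0.5):
    """Re-rank n-best ASR hypotheses with an image-conditioned LM.

    hypotheses: list of (text, asr_log_score) pairs from the recognizer.
    image_lm_score: callable mapping text -> log-probability under a
        model conditioned on the image (e.g. a word-level RNN).
    """
    rescored = [
        (text, asr_score + lm_weight * image_lm_score(text))
        for text, asr_score in hypotheses
    ]
    return max(rescored, key=lambda pair: pair[1])
```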
{"title":"Look, listen, and decode: Multimodal speech recognition with images","authors":"Felix Sun, David F. Harwath, James R. Glass","doi":"10.1109/SLT.2016.7846320","DOIUrl":"https://doi.org/10.1109/SLT.2016.7846320","url":null,"abstract":"In this paper, we introduce a multimodal speech recognition scenario, in which an image provides contextual information for a spoken caption to be decoded. We investigate a lattice rescoring algorithm that integrates information from the image at two different points: the image is used to augment the language model with the most likely words, and to rescore the top hypotheses using a word-level RNN. This rescoring mechanism decreases the word error rate by 3 absolute percentage points, compared to a baseline speech recognizer operating with only the speech recording.","PeriodicalId":281635,"journal":{"name":"2016 IEEE Spoken Language Technology Workshop (SLT)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125156947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-12-01 | DOI: 10.1109/SLT.2016.7846250
Weakly supervised user intent detection for multi-domain dialogues
Ming Sun, Aasish Pappu, Yun-Nung (Vivian) Chen, Alexander I. Rudnicky
Users interact with mobile apps with certain intents, such as finding a restaurant. Some intents and their corresponding activities are complex and may involve multiple apps; for example, a restaurant app, a messenger app and a calendar app may be needed to plan a dinner with friends. However, such activities may be quite personal, and third-party developers are unlikely to build apps that specifically handle complex intents (e.g., a DinnerPlanner). Instead, we want our intelligent agent to actively learn to understand these intents and provide assistance when needed. This paper proposes a framework that enables the agent to learn an inventory of intents from a small set of task-oriented user utterances. Experiments show that, on previously unseen user activities, the agent is able to reliably recognize user intents using graph-based semi-supervised learning methods. The dataset, models, and system outputs are available to the research community.
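Graph-based semi-supervised labelling of utterances can be sketched with scikit-learn's label spreading over simple text features; this is a minimal sketch under assumed TF-IDF features and a kNN graph, not the paper's method or dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.semi_supervised import LabelSpreading

def propagate_intents(utterances, labels):
    """Graph-based semi-supervised intent labelling.

    utterances: list of strings.
    labels: list of intent ids, with -1 for unlabelled utterances.
    """
    X = TfidfVectorizer().fit_transform(utterances).toarray()
    model = LabelSpreading(kernel="knn", n_neighbors=5)
    model.fit(X, labels)
    return model.transduction_      # predicted label for every utterance
```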
{"title":"Weakly supervised user intent detection for multi-domain dialogues","authors":"Ming Sun, Aasish Pappu, Yun-Nung (Vivian) Chen, Alexander I. Rudnicky","doi":"10.1109/SLT.2016.7846250","DOIUrl":"https://doi.org/10.1109/SLT.2016.7846250","url":null,"abstract":"Users interact with mobile apps with certain intents such as finding a restaurant. Some intents and their corresponding activities are complex and may involve multiple apps; for example, a restaurant app, a messenger app and a calendar app may be needed to plan a dinner with friends. However, activities may be quite personal and third-party developers would not be building apps to specifically handle complex intents (e.g., a DinnerPlanner). Instead we want our intelligent agent to actively learn to understand these intents and provide assistance when needed. This paper proposes a framework to enable the agent to learn an inventory of intents from a small set of task-oriented user utterances. The experiments show that on previously unseen user activities, the agent is able to reliably recognize user intents using graph-based semi-supervised learning methods. The dataset, models, and the system outputs are available to research community.","PeriodicalId":281635,"journal":{"name":"2016 IEEE Spoken Language Technology Workshop (SLT)","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129257086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-12-01 | DOI: 10.1109/SLT.2016.7846273
Voice search language model adaptation using contextual information
Justin Scheiner, Ian Williams, Petar S. Aleksic
It has been shown that automatic speech recognition (ASR) system quality can be improved by augmenting n-gram language models with contextual information [1][2]. In the voice search domain, there are a large number of useful contextual signals for a given query, such as speaker location, speaker identity and the time of the query. Each of these signals comes with relevant contextual information (e.g. location-specific entities, favorite queries, recent popular queries) that is not included in the language model's training data. We show that these contextual signals can be used to improve ASR system quality. This is achieved by adjusting n-gram language model probabilities on the fly, based on the contextual information relevant to the current voice search request. We analyze three example sources of context: location, previously typed queries, and previously spoken queries, and present a set of approaches we have used to improve ASR quality using these sources. Our main objective is to automatically take advantage, in real time, of all available sources of contextual information. In addition, we investigate the challenges that come with applying our approach to a number of languages (unsegmented languages, languages with diacritics) and present the solutions used.
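One simple way to picture on-the-fly probability adjustment is to boost and interpolate the scores of n-grams that match the current request's contextual set. The sketch below is a hypothetical illustration of that idea; the boost and interpolation values are invented and this is not the production biasing mechanism described in the paper.

```python
import math

def biased_logprob(base_logprob, ngram, context_ngrams, boost=4.0, interp=0.3):
    """On-the-fly score adjustment sketch: if an n-gram matches the
    contextual set (e.g. nearby place names, recent queries), mix a
    boosted estimate into the base language-model log-probability.

    base_logprob: log P(w | history) from the static n-gram LM.
    context_ngrams: set of n-grams relevant to this request's context.
    """
    if ngram not in context_ngrams:
        return base_logprob
    boosted = base_logprob + boost
    # Linear interpolation of the two estimates, expressed in log space.
    return math.log((1 - interp) * math.exp(base_logprob)
                    + interp * math.exp(boosted))
```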
{"title":"Voice search language model adaptation using contextual information","authors":"Justin Scheiner, Ian Williams, Petar S. Aleksic","doi":"10.1109/SLT.2016.7846273","DOIUrl":"https://doi.org/10.1109/SLT.2016.7846273","url":null,"abstract":"It has been shown that automatic speech recognition (ASR) system quality can be improved by augmenting n-gram language models with contextual information [1][2]. In the voice search domain, there are a large number of useful contextual signals for a given query. Some of these signals are speaker location, speaker identity, time of the query, etc. Each of these signals comes with relevant contextual information (e.g. location specific entities, favorite queries, recent popular queries) that is not included in the language model's training data. We show that these contextual signals can be used to improve ASR system quality. This is achieved by adjusting n-gram language model probabilities on-the-fly based on the contextual information relevant for the current voice search request. We analyze three example sources of context: location context, previously entered typed and spoken queries. We present a set of approaches we have used to improve ASR quality using these sources of context. Our main objective is to automatically, in real time, take advantage of all available sources of contextual information. In addition, we investigate challenges that come with applying our approach to a number of languages (unsegmented languages, languages with diacritics) and present solutions used.","PeriodicalId":281635,"journal":{"name":"2016 IEEE Spoken Language Technology Workshop (SLT)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127740590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-12-01 | DOI: 10.1109/SLT.2016.7846290
Quaternion Neural Networks for Spoken Language Understanding
Titouan Parcollet, Mohamed Morchid, Pierre-Michel Bousquet, Richard Dufour, G. Linarès, R. Mori
Machine Learning (ML) techniques have enabled large performance improvements on several challenging Spoken Language Understanding (SLU) tasks. Among these methods, Neural Networks (NN), or Multilayer Perceptrons (MLP), have recently received great interest from researchers due to their ability to represent complex internal structures in a low-dimensional subspace. However, MLPs employ document representations based on basic word-level or topic-based features. These basic representations reveal little of a document's statistical structure, since they treat the words or topics it contains as a “bag-of-words” and ignore the relations between them. We propose to remedy this weakness by extending the quaternion-algebra-based features presented in [1] to neural networks, yielding the QMLP. This original QMLP approach relies on hyper-complex algebra to take feature dependencies within documents into account. New document features, based on the document structure itself and used as input to the QMLP, are also investigated in this paper, in comparison to those initially proposed in [1]. Experiments on an SLU task from a real framework of human spoken dialogues show that our QMLP approach, combined with the proposed document features, outperforms other approaches, with an accuracy gain of 2% over the MLP based on real numbers and more than 3% over the first quaternion-based features proposed in [1]. We finally demonstrate that fewer iterations are needed for our QMLP architecture to become effective and reach promising accuracies.
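The weight sharing that distinguishes quaternion layers comes from the Hamilton product. Below is a minimal numpy sketch of a quaternion-valued dense map built that way; the shapes, layout and lack of bias/activation are illustrative assumptions, not the authors' QMLP code.

```python
import numpy as np

def quaternion_dense(x, Wr, Wx, Wy, Wz):
    """Quaternion-valued affine map built from the Hamilton product.

    x: input of shape (4 * n_in,), laid out as [r, i, j, k] blocks.
    Wr, Wx, Wy, Wz: real matrices of shape (n_out, n_in) holding the
        four components of each quaternion weight.
    """
    assert x.shape[0] == 4 * Wr.shape[1], "input must hold 4 equal blocks"
    r, i, j, k = np.split(x, 4)
    out_r = Wr @ r - Wx @ i - Wy @ j - Wz @ k
    out_i = Wx @ r + Wr @ i - Wz @ j + Wy @ k
    out_j = Wy @ r + Wz @ i + Wr @ j - Wx @ k
    out_k = Wz @ r - Wy @ i + Wx @ j + Wr @ k
    return np.concatenate([out_r, out_i, out_j, out_k])
```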
{"title":"Quaternion Neural Networks for Spoken Language Understanding","authors":"Titouan Parcollet, Mohamed Morchid, Pierre-Michel Bousquet, Richard Dufour, G. Linarès, R. Mori","doi":"10.1109/SLT.2016.7846290","DOIUrl":"https://doi.org/10.1109/SLT.2016.7846290","url":null,"abstract":"Machine Learning (ML) techniques have allowed a great performance improvement of different challenging Spoken Language Understanding (SLU) tasks. Among these methods, Neural Networks (NN), or Multilayer Perceptron (MLP), recently received a great interest from researchers due to their representation capability of complex internal structures in a low dimensional subspace. However, MLPs employ document representations based on basic word level or topic-based features. Therefore, these basic representations reveal little in way of document statistical structure by only considering words or topics contained in the document as a “bag-of-words”, ignoring relations between them. We propose to remedy this weakness by extending the complex features based on Quaternion algebra presented in [1] to neural networks called QMLP. This original QMLP approach is based on hyper-complex algebra to take into consideration features dependencies in documents. New document features, based on the document structure itself, used as input of the QMLP, are also investigated in this paper, in comparison to those initially proposed in [1]. Experiments made on a SLU task from a real framework of human spoken dialogues showed that our QMLP approach associated with the proposed document features outperforms other approaches, with an accuracy gain of 2% with respect to the MLP based on real numbers and more than 3% with respect to the first Quaternion-based features proposed in [1]. We finally demonstrated that less iterations are needed by our QMLP architecture to be efficient and to reach promising accuracies.","PeriodicalId":281635,"journal":{"name":"2016 IEEE Spoken Language Technology Workshop (SLT)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121865038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-12-01 | DOI: 10.1109/SLT.2016.7846323
High quality agreement-based semi-supervised training data for acoustic modeling
F. D. C. Quitry, Asa Oines, P. Moreno, Eugene Weinstein
This paper describes a new technique to automatically obtain large high-quality training speech corpora for acoustic modeling. Traditional approaches select utterances based on confidence thresholds and other heuristics. We propose instead to use an ensemble approach: we transcribe each utterance using several recognizers, and only keep those on which they agree. The recognizers we use are trained on data from different dialects of the same language, and this diversity leads them to make different mistakes in transcribing speech utterances. In this work we show, however, that when they agree, this is an extremely strong signal that the transcript is correct. This allows us to produce automatically transcribed speech corpora that are superior in transcript correctness even to those manually transcribed by humans. Furthermore, we show that using the produced semi-supervised data sets, we can train new acoustic models which outperform those trained solely on previously available data sets.
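The core selection idea reduces to keeping only the utterances on which the ensemble of recognizers produces identical transcripts. The sketch below assumes hypothetical recognizer callables and exact string agreement after simple normalization; the paper's actual criterion and text normalization may differ.

```python
def agreement_filter(utterances, recognizers, normalize=str.lower):
    """Keep only utterances whose transcripts match across all recognizers.

    utterances: list of audio file paths (or any recognizer input).
    recognizers: list of callables mapping an utterance to a transcript.
    Returns a list of (utterance, agreed_transcript) pairs.
    """
    kept = []
    for utt in utterances:
        transcripts = {normalize(rec(utt)).strip() for rec in recognizers}
        if len(transcripts) == 1:            # every recognizer agrees
            kept.append((utt, transcripts.pop()))
    return kept
```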
{"title":"High quality agreement-based semi-supervised training data for acoustic modeling","authors":"F. D. C. Quitry, Asa Oines, P. Moreno, Eugene Weinstein","doi":"10.1109/SLT.2016.7846323","DOIUrl":"https://doi.org/10.1109/SLT.2016.7846323","url":null,"abstract":"This paper describes a new technique to automatically obtain large high-quality training speech corpora for acoustic modeling. Traditional approaches select utterances based on confidence thresholds and other heuristics. We propose instead to use an ensemble approach: we transcribe each utterance using several recognizers, and only keep those on which they agree. The recognizers we use are trained on data from different dialects of the same language, and this diversity leads them to make different mistakes in transcribing speech utterances. In this work we show, however, that when they agree, this is an extremely strong signal that the transcript is correct. This allows us to produce automatically transcribed speech corpora that are superior in transcript correctness even to those manually transcribed by humans. Furthermore, we show that using the produced semi-supervised data sets, we can train new acoustic models which outperform those trained solely on previously available data sets.","PeriodicalId":281635,"journal":{"name":"2016 IEEE Spoken Language Technology Workshop (SLT)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127263934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-12-01 | DOI: 10.1109/SLT.2016.7846331
DNN adaptation for recognition of children speech through automatic utterance selection
M. Matassoni, D. Falavigna, D. Giuliani
This paper describes an approach for adapting a DNN trained on adult speech to children's voices. The method extends a previous one, based on the Kullback-Leibler divergence between the original (adult) DNN output distribution and the target one, by accounting for the quality of the supervision of the adaptation utterances. In addition, starting from the observation that significant performance improvements can be achieved by gradually removing from the adaptation set the sentences with the highest WERs, we also investigate the automatic selection of adaptation utterances. To determine transcription quality, we investigate the use of confidence estimates of the recognized hypotheses. We present experiments and results on an Italian data set of children's speech, and show that the proposed DNN adaptation approach significantly reduces the WER on a given test set from 14.2% (obtained with the non-adapted DNN trained on adult speech) to 10.6%. It is worth noting that the latter result is achieved without using any training data specific to children's speech.
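The KL-divergence-based adaptation this work builds on can be pictured as a regularised cross-entropy whose targets interpolate the ground-truth labels with the adult model's posteriors, so the adapted network cannot drift too far from the original. The PyTorch snippet below is a sketch of that general recipe only; it is not the authors' implementation and it omits their confidence-based utterance selection.

```python
import torch
import torch.nn.functional as F

def kld_adaptation_loss(logits_adapted, logits_adult, labels, rho=0.5):
    """KL-divergence-regularised adaptation loss (sketch).

    logits_adapted: outputs of the network being adapted, (batch, classes).
    logits_adult:   outputs of the frozen adult-speech network.
    labels:         ground-truth class indices, (batch,).
    rho:            regularisation weight in [0, 1].
    """
    one_hot = F.one_hot(labels, num_classes=logits_adapted.size(1)).float()
    adult_posteriors = F.softmax(logits_adult, dim=1).detach()
    # Interpolated soft target: (1 - rho) * label + rho * adult posterior.
    target = (1.0 - rho) * one_hot + rho * adult_posteriors
    log_probs = F.log_softmax(logits_adapted, dim=1)
    return -(target * log_probs).sum(dim=1).mean()
```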
{"title":"DNN adaptation for recognition of children speech through automatic utterance selection","authors":"M. Matassoni, D. Falavigna, D. Giuliani","doi":"10.1109/SLT.2016.7846331","DOIUrl":"https://doi.org/10.1109/SLT.2016.7846331","url":null,"abstract":"This paper describes an approach for adapting a DNN trained on adult speech to children voices. The method extends a previous one, based on the Kullback-Leibler divergence between the original (adult) DNN output distribution and the target one, by accounting for the quality of the supervision of the adaptation utterances. In addition, starting from the observation that by gradually removing from the adaptation set the sentences with higher WERs significant performance improvements can be achieved, we also investigate the usage of automatic selection of adaptation utterances. For determining transcription quality we investigate the use of confidence estimates of recognized hypotheses. We present experiments and related results achieved on an Italian data set of children's speech. We show that the proposed DNN adaptation approach allows to significantly reduce the WER on a given test set from 14.2% (corresponding to using the non adapted DNN, trained on adult speech) to 10.6%. It is worth mentioning that the latter result has been achieved without making use of any training data specific of children's speech.","PeriodicalId":281635,"journal":{"name":"2016 IEEE Spoken Language Technology Workshop (SLT)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128897632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}