Title: Inferring linguistic structure in spoken language
Authors: M. Woszczyna, A. Waibel
Venue: Proceedings: ICSLP. International Conference on Spoken Language Processing, pages 847-850
Publication date: 1994-09-18
DOI: 10.21437/ICSLP.1994-226 (https://doi.org/10.21437/ICSLP.1994-226)
Citations: 40
Abstract
We demonstrate the application of Markov chains and HMMs to modeling the underlying structure of spontaneous spoken language. Experiments with supervised training cover the detection of the current dialog state and identification of the speech act as used by the speech translation component in our JANUS Speech-to-Speech Translation System. HMM training with hidden states is used to uncover other levels of structure in the task. The possible use of the model for perplexity reduction in a continuous speech recognition system is also demonstrated. To achieve an improvement over a state-independent bigram language model, great care must be taken to keep the number of model parameters small in the face of the limited amounts of training data available from transcribed spontaneous speech.
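To make the supervised-training idea concrete, the following is a minimal sketch (not the paper's actual implementation) of estimating a Markov chain over labeled dialog states and using it to measure the perplexity of a held-out state sequence. The state names and the add-one smoothing constant are illustrative assumptions, not taken from the paper; smoothing stands in for the paper's broader concern of keeping parameter counts small on sparse data.

```python
from collections import defaultdict
import math

def train_markov_chain(sequences, smoothing=1.0):
    """Estimate state-transition probabilities from labeled dialog-state
    sequences (supervised training). Add-one smoothing (an illustrative
    choice) keeps every transition probability nonzero on sparse data."""
    states = sorted({s for seq in sequences for s in seq})
    counts = defaultdict(lambda: defaultdict(float))
    for seq in sequences:
        for prev, cur in zip(seq, seq[1:]):
            counts[prev][cur] += 1.0
    probs = {}
    for prev in states:
        total = sum(counts[prev].values()) + smoothing * len(states)
        probs[prev] = {cur: (counts[prev][cur] + smoothing) / total
                       for cur in states}
    return probs

def perplexity(probs, seq):
    """Per-transition perplexity of a held-out dialog-state sequence:
    2 raised to the average negative log2 transition probability."""
    logp = 0.0
    n = 0
    for prev, cur in zip(seq, seq[1:]):
        logp += math.log2(probs[prev][cur])
        n += 1
    return 2.0 ** (-logp / n)

# Hypothetical dialog-state labels for illustration only.
training = [["greet", "query", "answer", "close"],
            ["greet", "query", "close"]]
model = train_markov_chain(training)
print(perplexity(model, ["greet", "query", "close"]))
```

A lower perplexity on held-out sequences would indicate that the transition model captures real structure in the dialog, which is the same criterion the paper applies when comparing against a state-independent bigram language model.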