{"title":"Multi-Span statistical language modeling for large vocabulary speech recognition","authors":"J. Bellegarda","doi":"10.21437/ICSLP.1998-640","DOIUrl":null,"url":null,"abstract":"The goal of multi-span language modeling is to integrate the various constraints, both local and global, that are present in the language. In this paper, local constraints are captured via the usual n-gram approach, while global constraints are taken into account through the use of latent semantic analysis. Anintegrative formulation is derivedfor the combination of these two paradigms, resulting in an en-tirely data-driven, multi-span framework for large vocabulary speech recognition. Because of the inherent comple-mentarity in the two types of constraints, the performance of the integrated language model compares favorably with the corresponding n-gram performance. Both perplexity and average word error rate (cid:12)gures are reported and dis-cussed.","PeriodicalId":117113,"journal":{"name":"5th International Conference on Spoken Language Processing (ICSLP 1998)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1998-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"5th International Conference on Spoken Language Processing (ICSLP 1998)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21437/ICSLP.1998-640","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 13
Abstract
The goal of multi-span language modeling is to integrate the various constraints, both local and global, that are present in the language. In this paper, local constraints are captured via the usual n-gram approach, while global constraints are taken into account through the use of latent semantic analysis. An integrative formulation is derived for the combination of these two paradigms, resulting in an entirely data-driven, multi-span framework for large vocabulary speech recognition. Because of the inherent complementarity in the two types of constraints, the performance of the integrated language model compares favorably with the corresponding n-gram performance. Both perplexity and average word error rate figures are reported and discussed.
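To make the combination concrete, below is a minimal, illustrative Python sketch (not the paper's exact integrative formulation) of how local n-gram predictions could be reweighted by global latent-semantic-analysis similarity: a toy word-document matrix is factored with a truncated SVD, a word history is folded into the latent space as a pseudo-document, and the n-gram distribution is multiplied by cosine-similarity-based scores and renormalized. The corpus, the latent dimension, and the helper names (pseudo_doc, lsa_scores, integrated_probs) are all hypothetical choices for the sketch.

```python
import numpy as np

# Toy corpus: each "document" is a short list of words (illustrative only).
docs = [
    "the stock market fell sharply today".split(),
    "the central bank raised interest rates".split(),
    "the team won the championship game".split(),
    "the player scored in the final game".split(),
]

vocab = sorted({w for d in docs for w in d})
w2i = {w: i for i, w in enumerate(vocab)}

# Word-document count matrix; LSA typically applies entropy weighting,
# but raw counts keep the sketch short.
W = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d:
        W[w2i[w], j] += 1.0

# Truncated SVD gives low-rank representations of words in a latent space.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
k = 2                                  # latent dimension (assumed for the toy example)
word_vecs = U[:, :k] * S[:k]           # one row per vocabulary word

def pseudo_doc(history):
    """Fold a word history into the latent space (centroid of its word vectors)."""
    idx = [w2i[w] for w in history if w in w2i]
    if not idx:
        return np.zeros(k)
    return word_vecs[idx].mean(axis=0)

def lsa_scores(history):
    """Cosine similarity of each word to the pseudo-document, mapped to positive weights."""
    d = pseudo_doc(history)
    sims = np.zeros(len(vocab))
    dn = np.linalg.norm(d)
    if dn > 0:
        for i in range(len(vocab)):
            nv = np.linalg.norm(word_vecs[i])
            if nv > 0:
                sims[i] = word_vecs[i] @ d / (nv * dn)
    return np.exp(sims)

def integrated_probs(ngram_probs, history):
    """Combine local n-gram probabilities with global LSA scores and renormalize."""
    combined = ngram_probs * lsa_scores(history)
    return combined / combined.sum()

# Usage: a uniform stand-in for the n-gram distribution, reweighted by a sports-like history.
uniform = np.full(len(vocab), 1.0 / len(vocab))
history = "the team played the final".split()
probs = integrated_probs(uniform, history)
print(sorted(zip(vocab, probs), key=lambda x: -x[1])[:5])
```

The product-and-renormalize step is only one simple way to fuse the two information sources; the paper derives its own integrative formulation for combining the n-gram and LSA components.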