
Latest publications: IEEE Workshop on Automatic Speech Recognition and Understanding, 2001 (ASRU '01)

Speech recognition of broadcast news for the European Portuguese language
Pub Date : 2001-12-09 DOI: 10.1109/ASRU.2001.1034651
H. Meinedo, N. Souto, J. Neto
This paper describes our work on the development of a large vocabulary continuous speech recognition system applied to a broadcast news task for the European Portuguese language in the scope of the ALERT project. We start by presenting the baseline recogniser AUDIMUS, which was originally developed with a corpus of read newspaper text. This is a hybrid system that uses a combination of phone probabilities generated by several MLPs trained on distinct feature sets. The paper details the modifications introduced in this system, namely the development of a new language model, the vocabulary and pronunciation lexicon, and the training on the new data currently available from the ALERT BN corpus. The system trained with this BN corpus achieved 18.4% WER when tested on the F0 focus condition (studio, planned, native, clean), and 35.2% when tested on all focus conditions.
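The abstract notes that AUDIMUS combines phone probabilities generated by several MLPs trained on distinct feature sets, but does not spell out the combination rule. A common choice in hybrid HMM/MLP systems is a log-domain average (geometric mean) of the posterior streams; the sketch below illustrates that assumed rule on toy data, not the paper's actual implementation:

```python
import numpy as np

def combine_phone_posteriors(posterior_streams, eps=1e-10):
    """Merge per-frame phone posteriors from several MLPs by
    averaging in the log domain (geometric mean), then renormalising.

    posterior_streams: list of (n_frames, n_phones) arrays, one per MLP.
    """
    log_avg = np.mean([np.log(p + eps) for p in posterior_streams], axis=0)
    combined = np.exp(log_avg)
    return combined / combined.sum(axis=1, keepdims=True)

# Two toy MLP output streams over 2 frames and 3 phone classes.
mlp_a = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])
mlp_b = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
merged = combine_phone_posteriors([mlp_a, mlp_b])
```

The geometric mean tends to be more conservative than an arithmetic mean: a phone gets a high combined score only when every stream agrees.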
Citations: 19
VoiceXML 2.0 and the W3C speech interface framework
Pub Date : 2001-12-09 DOI: 10.1109/ASRU.2001.1034576
J. Larson
The W3C Voice Browser Working Group has released specifications for four integrated languages for developing speech applications: VoiceXML 2.0, the Speech Synthesis Markup Language, the Speech Recognition Grammar Markup Language, and Semantic Interpretation. These languages enable developers to quickly specify conversational speech Web applications that can be accessed from any telephone or cell phone. The speech recognition and natural language communities are welcome to use these specifications and their implementations as they become available, as well as to comment on the direction and details of these evolving specifications.
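As a rough illustration of the kind of dialog document VoiceXML 2.0 describes, the sketch below builds a minimal one-field form with Python's ElementTree. The form id, field name, prompt text, and grammar URI are all invented placeholders, not taken from the specification:

```python
import xml.etree.ElementTree as ET

# A minimal VoiceXML 2.0 dialog: one form with a single field that
# prompts the caller and references an external recognition grammar.
NS = "http://www.w3.org/2001/vxml"
vxml = ET.Element("vxml", {"version": "2.0", "xmlns": NS})
form = ET.SubElement(vxml, "form", {"id": "order"})
field = ET.SubElement(form, "field", {"name": "drink"})
prompt = ET.SubElement(field, "prompt")
prompt.text = "Would you like coffee or tea?"
ET.SubElement(field, "grammar", {"src": "drinks.grxml",
                                 "type": "application/srgs+xml"})

document = ET.tostring(vxml, encoding="unicode")
```

A VoiceXML interpreter plays the prompt, matches the caller's speech against the referenced grammar, and binds the result to the `drink` field.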
Citations: 6
Natural language call routing: towards combination and boosting of classifiers
Pub Date : 2001-12-09 DOI: 10.1109/ASRU.2001.1034622
I. Zitouni, H. Kuo, Chin-Hui Lee
We describe different techniques to improve natural language call routing: boosting, relevance feedback, discriminative training, and constrained minimization. Their common goal is to reweight the data in order to let the system focus on documents judged hard to classify by a single classifier. These approaches are evaluated with the common vector-based classifier and also with the beta classifier which had given good results in the similar task of E-mail steering. We explore ways of deriving and combining uncorrelated classifiers in order to improve accuracy. Compared to the cosine and beta baseline classifiers, we report an improvement of 49% and 10%, respectively.
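The "common vector-based classifier" with a cosine baseline suggests a bag-of-words router that sends a call to the destination whose training vector is most similar to the caller's utterance. A minimal sketch of such a cosine router, with invented routes and training utterances (the paper's actual routes and weighting scheme are not given here):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy routing destinations built from example training utterances (invented).
routes = {
    "billing": Counter("question about my bill charge on my bill".split()),
    "repair": Counter("my line is broken please repair my phone line".split()),
}

def route_call(utterance):
    """Pick the destination with the highest cosine score."""
    words = Counter(utterance.lower().split())
    return max(routes, key=lambda r: cosine(words, routes[r]))
```

The reweighting techniques in the paper (boosting, relevance feedback, discriminative training) would then emphasize training documents that such a baseline misroutes.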
Citations: 5
Speech recognition using advanced HMM2 features
Pub Date : 2001-12-09 DOI: 10.1109/ASRU.2001.1034590
K. Weber, Samy Bengio, H. Bourlard
HMM2 is a particular hidden Markov model where state emission probabilities of the temporal (primary) HMM are modeled through (secondary) state-dependent frequency-based HMMs (see Weber, K. et al., Proc. ICSGP, vol.III, p.147-50, 2000). As we show in another paper (see Weber et al., Proc. Eurospeech, Sep. 2001), a secondary HMM can also be used to extract robust ASR features. Here, we further investigate this novel approach towards using a full HMM2 as feature extractor, working in the spectral domain, and extracting robust formant-like features for a standard ASR system. HMM2 performs a nonlinear, state-dependent frequency warping, and it is shown that the resulting frequency segmentation actually contains particularly discriminant features. To improve the HMM2 system further, we complement the initial spectral energy vectors with frequency information. Finally, adding temporal information to the HMM2 feature vector yields further improvements. These conclusions are experimentally validated on the Numbers95 database, where a word error rate of 15% was obtained using only a 4-dimensional feature vector (3 formant-like parameters and one time index).
Citations: 15
Recognition of negative emotions from the speech signal
Pub Date : 2001-12-09 DOI: 10.1109/ASRU.2001.1034632
C. Lee, Shrikanth S. Narayanan, R. Pieraccini
This paper reports on methods for automatic classification of spoken utterances based on the emotional state of the speaker. The data set used for the analysis comes from a corpus of human-machine dialogues recorded from a commercial application deployed by SpeechWorks. Linear discriminant classification with Gaussian class-conditional probability distributions and k-nearest neighbors methods are used to classify utterances into two basic emotion states, negative and non-negative. The features used by the classifiers are utterance-level statistics of the fundamental frequency and energy of the speech signal. To improve classification performance, two specific feature selection methods are used, namely promising-first selection and forward feature selection. Principal component analysis is used to reduce the dimensionality of the features while maximizing classification accuracy. Improvements obtained by feature selection and PCA are reported, along with the final results.
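One of the paper's classifiers is k-nearest neighbors over utterance-level statistics of fundamental frequency and energy. A minimal sketch of that idea, with invented feature values and a plain Euclidean distance (the paper's exact feature set, distance, and k are not given here):

```python
import numpy as np

def knn_classify(x, train_x, train_y, k=3):
    """Majority vote among the k nearest training utterances."""
    dists = np.linalg.norm(train_x - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [train_y[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Invented utterance-level features: [mean f0 (Hz), f0 std, mean energy (dB)].
train_x = np.array([
    [220.0, 45.0, 72.0],   # agitated: high pitch and pitch variation
    [230.0, 50.0, 74.0],
    [150.0, 12.0, 60.0],   # calm
    [145.0, 10.0, 58.0],
])
train_y = ["negative", "negative", "non-negative", "non-negative"]

label = knn_classify(np.array([225.0, 48.0, 73.0]), train_x, train_y, k=3)
```

In practice the raw features would first be normalized (and, as in the paper, reduced with feature selection or PCA) so that no single dimension dominates the distance.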
Citations: 173
Automatic transcription of spontaneous lecture speech
Pub Date : 2001-12-09 DOI: 10.1109/ASRU.2001.1034618
Tatsuya Kawahara, H. Nanjo, S. Furui
We introduce our extensive projects on spontaneous speech processing and current trials of lecture speech recognition. A large corpus of lecture presentations and talks is being collected in the project. We have trained initial baseline models and confirmed a significant difference between real lectures and written notes. In spontaneous lecture speech, the speaking rate is generally faster and changes a lot, which makes it harder to apply fixed segmentation and decoding settings. Therefore, we propose sequential decoding and speaking-rate dependent decoding strategies. The sequential decoder simultaneously performs automatic segmentation and decoding of input utterances. Then, the most adequate acoustic analysis, phone models and decoding parameters are applied according to the current speaking rate. These strategies achieve improvement on automatic transcription of real lecture speech.
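The speaking-rate dependent strategy selects acoustic analysis and decoding parameters according to the current speaking rate. The schematic sketch below shows only the selection step; the rate thresholds and parameter values are invented placeholders, not the paper's settings:

```python
def speaking_rate(n_phones, duration_sec):
    """Rough speaking-rate estimate in phones per second."""
    return n_phones / duration_sec

# Hypothetical decoder settings per rate band (values are illustrative).
PARAM_BANDS = [
    (12.0, {"frame_shift_ms": 10, "beam": 200}),          # slow/normal
    (16.0, {"frame_shift_ms": 8, "beam": 250}),           # fast
    (float("inf"), {"frame_shift_ms": 6, "beam": 300}),   # very fast
]

def select_decoding_params(rate):
    """Return the parameter set for the first band containing the rate."""
    for upper, params in PARAM_BANDS:
        if rate < upper:
            return params

params = select_decoding_params(speaking_rate(150, 10.0))  # 15 phones/sec
```

Shortening the frame shift for fast speech keeps roughly the same number of frames per phone, which is one common motivation for rate-dependent analysis.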
Citations: 36
European Language Resources Association history and recent developments
Pub Date : 2001-12-09 DOI: 10.1109/ASRU.2001.1034685
K. Choukri
This paper aims at briefly describing the rationale behind the foundation of the European Language Resources Association (ELRA) in 1995 and its activities since then. We would like to focus on the issues involved in making language resources available to different sectors of the language engineering community. ELRA is presented as a conduit for the distribution of speech, written and terminology databases, enabling all players to have access to language resources (LRs). In order to produce and provide such resources effectively to research and development groups in academic, commercial and industrial environments, it is necessary to address legal, logistic and other practical issues. This has already been done by ELRA through the establishment of an operational infrastructure that capitalizes on the investments of the European Commission and European national agencies to ensure the availability of speech, text, and terminology resources.
Citations: 15
Phoneme-to-grapheme conversion for out-of-vocabulary words in large vocabulary speech recognition
Pub Date : 2001-12-01 DOI: 10.1109/ASRU.2001.1034672
B. Decadt, J. Duchateau, Walter Daelemans, P. Wambacq
We describe a method to enhance the readability of the textual output in a large vocabulary continuous speech recognition system when out-of-vocabulary words occur. The basic idea is to replace uncertain words in the transcriptions with a phoneme recognition result that is post-processed using a phoneme-to-grapheme converter. This converter turns phoneme strings into grapheme strings and is trained using machine learning techniques. Experiments show that, even when the grapheme strings are not fully correct, the resulting transcriptions are more easily readable than the original ones.
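The phoneme-to-grapheme converter is trained with machine learning techniques on aligned data. One simple instance of that family is a context-window lookup table with majority voting, sketched below on invented toy alignments; the paper's actual learner, context size, and training corpus differ:

```python
from collections import Counter, defaultdict

def train_p2g(aligned_pairs):
    """Learn a phoneme-in-context -> grapheme table from aligned data.

    aligned_pairs: list of (phoneme_seq, grapheme_seq) with 1:1 alignment.
    """
    table = defaultdict(Counter)
    for phones, graphs in aligned_pairs:
        padded = ["#"] + list(phones) + ["#"]
        for i, g in enumerate(graphs):
            context = (padded[i], padded[i + 1], padded[i + 2])
            table[context][g] += 1
    return table

def convert(table, phones):
    """Emit the majority grapheme for each phoneme in context."""
    padded = ["#"] + list(phones) + ["#"]
    out = []
    for i in range(len(phones)):
        context = (padded[i], padded[i + 1], padded[i + 2])
        if context in table:
            out.append(table[context].most_common(1)[0][0])
        else:  # back off to the identity mapping for unseen contexts
            out.append(phones[i])
    return "".join(out)

# Toy aligned training data (invented, purely for illustration).
data = [(["k", "a", "t"], ["c", "a", "t"]), (["k", "o", "t"], ["c", "o", "t"])]
table = train_p2g(data)
word = convert(table, ["k", "a", "t"])
```

In the paper's setting, such a converter post-processes the phoneme recognizer's output for uncertain words, yielding spellings that are more readable even when not fully correct.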
Citations: 19
Maximum-likelihood training of the PLCG-based language model
Pub Date : 2001-12-01 DOI: 10.1109/ASRU.2001.1034624
D. Van Uytsel, Dirk Van Compernolle, P. Wambacq
In Van Uytsel et al. (2001) a parsing language model based on a probabilistic left-corner grammar (PLCG) was proposed and encouraging performance on a speech recognition task using the PLCG-based language model was reported. In this paper we show how the PLCG-based language model can be further optimized by iterative parameter reestimation on unannotated training data. The precalculation of forward, inner and outer probabilities of states in the PLCG network provides an elegant crosscut to the computation of transition frequency expectations, which are needed in each iteration of the proposed reestimation procedure. The training algorithm enables model training on very large corpora. In our experiments, test set perplexity is close to saturation after three iterations, 5 to 16% lower than initially. We however observed no significant improvement of recognition accuracy after reestimation.
Citations: 12
Distributed speech recognition with codec parameters
Pub Date : 2001-11-27 DOI: 10.1109/ASRU.2001.1034604
sha Raj, Joshua Migdal, Rita Singh
Communication devices which perform distributed speech recognition (DSR) tasks currently transmit standardized coded parameters of speech signals. Recognition features are extracted on a remote server from signals reconstructed using these parameters. Since reconstruction losses degrade recognition performance, proposals are being considered to standardize DSR-codecs which derive recognition features, to be transmitted and used directly for recognition. However, such a codec must be embedded on the transmitting device, along with its current standard codec. Performing recognition using codec bitstreams avoids these complications: no additional feature-extraction mechanism is required on the device, and there are no reconstruction losses on the server. We propose an LDA-based method for extracting optimal feature sets from codec bitstreams and demonstrate that features so derived result in improved recognition performance for the LPC, GSM and CELP codecs. For GSM and CELP, we show that the performance is comparable to that with uncoded speech and standard DSR-codec features.
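The LDA-based feature extraction can be illustrated in its simplest two-class form: find the Fisher discriminant direction that best separates two classes of parameter vectors, then project onto it. The sketch below uses randomly generated stand-ins for codec parameters; the paper's actual multi-class, codec-specific setup is more involved:

```python
import numpy as np

def fisher_lda_direction(x0, x1):
    """Fisher discriminant direction for two classes of feature vectors."""
    m0, m1 = x0.mean(axis=0), x1.mean(axis=0)
    # Pooled within-class scatter, regularised for numerical stability.
    sw = np.cov(x0, rowvar=False) + np.cov(x1, rowvar=False)
    sw += 1e-6 * np.eye(sw.shape[0])
    w = np.linalg.solve(sw, m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
# Invented stand-ins for per-frame codec parameters of two phone classes.
class_a = rng.normal([0.0, 0.0, 0.0], 0.5, size=(200, 3))
class_b = rng.normal([1.5, 0.5, 0.0], 0.5, size=(200, 3))

w = fisher_lda_direction(class_a, class_b)
proj_a = class_a @ w   # 1-D discriminant features for each class
proj_b = class_b @ w
```

Stacking the leading discriminant directions gives a compact feature set extracted directly from the bitstream parameters, with no waveform reconstruction step.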
Citations: 10