Multi-Objective Multi-Task Learning on RNNLM for Speech Recognition

Minguang Song, Yunxin Zhao, Shaojun Wang
{"title":"Multi-Objective Multi-Task Learning on RNNLM for Speech Recognition","authors":"Minguang Song, Yunxin Zhao, Shaojun Wang","doi":"10.1109/SLT.2018.8639649","DOIUrl":null,"url":null,"abstract":"The cross entropy (CE) loss function is commonly adopted for neural network language model (NNLM) training. Although this criterion is largely successful, as evidenced by the quick advance of NNLM, minimizing CE only maximizes likelihood of training data. When training data is insufficient, the generalization power of the resulting LM is limited on test data. In this paper, we propose to integrate a pairwise ranking (PR) loss with the CE loss for multi-objective training on recurrent neural network language model (RNNLM). The PR loss emphasizes discrimination between target and non-target words and also reserves probabilities for low-frequency correct words, which complements the distribution learning role of the CE loss. Combining the two losses may therefore help improve the performance of RNNLM. In addition, we incorporate multi-task learning (MTL) into the proposed multi-objective learning to regularize the primary task of RNNLM by an auxiliary task of part-of-speech (POS) tagging. 
The proposed approach to RNNLM learning has been evaluated on two speech recognition tasks of WSJ and AMI with encouraging results achieved on word error rate reductions.","PeriodicalId":377307,"journal":{"name":"2018 IEEE Spoken Language Technology Workshop (SLT)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE Spoken Language Technology Workshop (SLT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SLT.2018.8639649","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

The cross entropy (CE) loss function is commonly adopted for neural network language model (NNLM) training. Although this criterion is largely successful, as evidenced by the rapid advance of NNLMs, minimizing CE only maximizes the likelihood of the training data. When training data is insufficient, the resulting LM generalizes poorly to test data. In this paper, we propose to integrate a pairwise ranking (PR) loss with the CE loss for multi-objective training of recurrent neural network language models (RNNLMs). The PR loss emphasizes discrimination between target and non-target words and also reserves probabilities for low-frequency correct words, which complements the distribution-learning role of the CE loss. Combining the two losses may therefore help improve the performance of RNNLMs. In addition, we incorporate multi-task learning (MTL) into the proposed multi-objective learning to regularize the primary task of RNNLM training with an auxiliary task of part-of-speech (POS) tagging. The proposed approach to RNNLM learning has been evaluated on two speech recognition tasks, WSJ and AMI, with encouraging word error rate reductions.
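The abstract does not give the exact form of the pairwise ranking loss, so the following is only a minimal sketch of the general idea: a cross-entropy term plus a hinge-style pairwise ranking term that penalizes non-target words whose scores come within a margin of the target word's score, combined with a weight. The function names, the weight `lam`, and the `margin` parameter are illustrative assumptions, not the paper's formulation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ce_loss(logits, target):
    """Cross entropy: negative log-probability of the target word."""
    return -math.log(softmax(logits)[target])

def pairwise_ranking_loss(logits, target, margin=1.0):
    """Hinge-style pairwise ranking term (illustrative): penalize every
    non-target word whose score is within `margin` of the target's score,
    averaged over the non-target words."""
    t = logits[target]
    penalties = [max(0.0, margin - (t - s))
                 for i, s in enumerate(logits) if i != target]
    return sum(penalties) / len(penalties)

def multi_objective_loss(logits, target, lam=0.5, margin=1.0):
    """CE plus a weighted PR term, as one possible multi-objective combination."""
    return ce_loss(logits, target) + lam * pairwise_ranking_loss(logits, target, margin)
```

When the target word already outscores all competitors by more than the margin, the PR term vanishes and the combined loss reduces to plain CE; otherwise the ranking term adds pressure to separate the target from confusable non-target words.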