Exploiting Information From Native Data for Non-Native Automatic Pronunciation Assessment

Binghuai Lin, Liyuan Wang
{"title":"Exploiting Information From Native Data for Non-Native Automatic Pronunciation Assessment","authors":"Binghuai Lin, Liyuan Wang","doi":"10.1109/SLT54892.2023.10022486","DOIUrl":null,"url":null,"abstract":"This paper proposes an end-to-end pronunciation assessment method to exploit the adequate native data and reduce the need for non-native data costly to label. To obtain discriminative acoustic representations at the phoneme level, the pretrained wav2vec 2.0 is re-trained with connectionist temporal classification (CTC) loss for phoneme recognition using native data. These acoustic representations are fused with phoneme representations derived from a phoneme encoder to obtain final pronunciation scores. An efficient fusion mechanism aligns each phoneme with acoustic frames based on attention, where all blank frames recognized by the CTC-based phoneme recognition are masked. Finally, the whole network is optimized by a multi-task learning framework combining CTC loss and mean square error loss between predicted and human scores. Extensive experiments demonstrate that it outperforms previous baselines in the Pearson correlation coefficient even with much fewer labeled non-native data.","PeriodicalId":352002,"journal":{"name":"2022 IEEE Spoken Language Technology Workshop (SLT)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Spoken Language Technology Workshop (SLT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SLT54892.2023.10022486","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

This paper proposes an end-to-end pronunciation assessment method that exploits abundant native data and reduces the need for non-native data, which is costly to label. To obtain discriminative acoustic representations at the phoneme level, a pretrained wav2vec 2.0 model is re-trained on native data with the connectionist temporal classification (CTC) loss for phoneme recognition. These acoustic representations are fused with phoneme representations derived from a phoneme encoder to produce the final pronunciation scores. An efficient fusion mechanism aligns each phoneme with acoustic frames through attention, in which all blank frames recognized by the CTC-based phoneme recognizer are masked out. Finally, the whole network is optimized in a multi-task learning framework that combines the CTC loss with a mean-square-error loss between predicted and human scores. Extensive experiments demonstrate that the method outperforms previous baselines in Pearson correlation coefficient, even with far fewer labeled non-native data.
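The two pieces the abstract singles out are the attention-based fusion, in which frames that the CTC recognizer labels as blank are masked before phonemes attend to the acoustic sequence, and the joint CTC + MSE objective. Below is a minimal PyTorch sketch of both, assuming wav2vec 2.0 frame outputs of dimension 768; the module names, the blank index, and the loss weight `alpha` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption, not the paper's released code) of CTC-blank-masked
# attention fusion and the joint CTC + MSE objective described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

BLANK_ID = 0  # assumed CTC blank index; phoneme IDs occupy 1..num_phonemes


class CTCMaskedFusionScorer(nn.Module):
    def __init__(self, num_phonemes, acoustic_dim=768, phone_dim=256):
        super().__init__()
        # Phoneme encoder (here a simple embedding) for the canonical phoneme sequence.
        self.phone_embed = nn.Embedding(num_phonemes + 1, phone_dim)
        # Project acoustic frames (e.g. wav2vec 2.0 outputs) into the phoneme space.
        self.acoustic_proj = nn.Linear(acoustic_dim, phone_dim)
        # CTC head re-trained for phoneme recognition on native data.
        self.ctc_head = nn.Linear(acoustic_dim, num_phonemes + 1)  # +1 for blank
        # Regression head that maps each fused phoneme vector to a score.
        self.score_head = nn.Linear(2 * phone_dim, 1)

    def forward(self, acoustic_frames, phone_ids):
        # acoustic_frames: (B, T, acoustic_dim); phone_ids: (B, L)
        ctc_logits = self.ctc_head(acoustic_frames)             # (B, T, V+1)
        # Frames whose most likely CTC label is blank are excluded from attention.
        blank_mask = ctc_logits.argmax(dim=-1) == BLANK_ID      # (B, T)

        queries = self.phone_embed(phone_ids)                   # (B, L, D)
        keys = self.acoustic_proj(acoustic_frames)               # (B, T, D)
        attn = torch.matmul(queries, keys.transpose(1, 2)) / keys.size(-1) ** 0.5
        attn = attn.masked_fill(blank_mask.unsqueeze(1), -1e9)  # (B, L, T)
        attn = F.softmax(attn, dim=-1)

        aligned = torch.matmul(attn, keys)                       # (B, L, D)
        fused = torch.cat([queries, aligned], dim=-1)            # (B, L, 2D)
        scores = torch.sigmoid(self.score_head(fused)).squeeze(-1)  # (B, L), in [0, 1]
        return scores, ctc_logits


def multitask_loss(scores, human_scores, ctc_logits, targets,
                   input_lengths, target_lengths, alpha=0.5):
    # Weighted sum of the MSE between predicted and human scores and the CTC loss.
    mse = F.mse_loss(scores, human_scores)
    log_probs = F.log_softmax(ctc_logits, dim=-1).transpose(0, 1)  # (T, B, V+1)
    ctc = F.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=BLANK_ID)
    return alpha * mse + (1 - alpha) * ctc
```

Masking blank frames before the softmax restricts each phoneme's attention to frames the recognizer considers phonetically informative, matching the alignment the abstract describes; the interpolation weight `alpha` between the two losses is an assumed hyper-parameter.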