Distilling Sequence-to-Sequence Voice Conversion Models for Streaming Conversion Applications

Kou Tanaka, H. Kameoka, Takuhiro Kaneko, Shogo Seki
2022 IEEE Spoken Language Technology Workshop (SLT), published 2023-01-09. DOI: 10.1109/SLT54892.2023.10023432
Citations: 2

Abstract

This paper describes a method for distilling a recurrent-based sequence-to-sequence (S2S) voice conversion (VC) model. Although recent VC models achieve increasingly high conversion quality, streaming conversion remains a challenge for practical applications. To achieve streaming VC, the conversion model needs a streamable structure, i.e., causal layers rather than non-causal ones. Motivated by this constraint and recent advances in S2S learning, we apply the teacher-student framework to recurrent-based S2S VC models. A major challenge is how to minimize the degradation caused by causal layers, which mask future input information. Experimental evaluations show that, except for male-to-female speaker conversion, our approach maintains the teacher model's performance in subjective evaluations despite the streamable student model structure. Audio samples can be accessed at http://www.kecl.ntt.co.jp/people/tanaka.ko/projects/dists2svc.
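The streamability constraint in the abstract hinges on the difference between causal and non-causal layers: a causal layer computes each output frame from past inputs only, so conversion can run as audio arrives, while a non-causal layer also looks at future frames. The following is a minimal sketch of this distinction (not the paper's model) using a plain 1-D convolution in numpy; the function name `conv1d` and the toy kernel are illustrative assumptions.

```python
import numpy as np

def conv1d(x, w, causal):
    """1-D convolution over signal x with kernel w.

    causal=True left-pads with zeros, so output at time t depends only
    on inputs at times <= t (streamable). causal=False centers the
    kernel, so output at time t also reads future inputs (not streamable).
    """
    k = len(w)
    if causal:
        pad = np.concatenate([np.zeros(k - 1), x])          # past context only
    else:
        half = (k - 1) // 2
        pad = np.concatenate([np.zeros(half), x, np.zeros(k - 1 - half)])
    return np.array([pad[t:t + k] @ w for t in range(len(x))])

x = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([0.5, 0.25, 0.25])  # toy kernel, size 3
y_causal = conv1d(x, w, causal=True)
y_noncausal = conv1d(x, w, causal=False)
```

Changing a future sample (say `x[3]`) leaves the earlier causal outputs untouched but alters the non-causal ones, which is exactly why a streaming student model must be built from causal layers and why distillation is needed to recover the quality lost by masking future context.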