Enriching Style Transfer in multi-scale control based personalized end-to-end speech synthesis

Zhongcai Lyu, Jie Zhu
DOI: 10.1109/ICIST55546.2022.9926908
Published in: 2022 12th International Conference on Information Science and Technology (ICIST)
Publication date: 2022-10-14
Citations: 0

Abstract

Personalized speech synthesis aims to transfer speech style with a few speech samples from the target speaker. However, pretrain and fine-tuning techniques are required to overcome the problem of poor performance for similarity and prosody in a data-limited condition. In this paper, a zero-shot style transfer framework based on multi-scale control is presented to handle the above problems. Firstly, speaker embedding is extracted from a single reference speech audio by a specially designed reference encoder, with which Speaker-Adaptive Linear Modulation (SALM) could generate the scale and bias vector to influence the encoder output, and consequently greatly enhance the adaptability to unseen speakers. Secondly, we propose a prosody module that includes a prosody extractor and prosody predictor, which can efficiently predict the prosody of the generated speech from the reference audio and text information and achieve phoneme-level prosody control, thus increasing the diversity of the synthesized speech. Using both objective and subjective metrics for evaluation, the experiments demonstrate that our model is capable of synthesizing speech of high naturalness and similarity of speech, with only a few or even a single piece of data from the target speaker.
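The abstract says SALM turns the speaker embedding into a scale and bias vector that modulate the encoder output — a FiLM-style affine conditioning. The sketch below illustrates that mechanism only; it is not the paper's implementation, and all dimensions, weights, and function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper does not specify them.
SPK_DIM, ENC_DIM, T = 64, 128, 20   # speaker emb, encoder hidden, time steps

# SALM sketched as FiLM-style conditioning: two affine maps turn the
# speaker embedding into a per-channel scale and bias for the encoder output.
W_scale = rng.standard_normal((SPK_DIM, ENC_DIM)) * 0.01
b_scale = np.ones(ENC_DIM)          # scale initialized near 1 (identity-like)
W_bias  = rng.standard_normal((SPK_DIM, ENC_DIM)) * 0.01
b_bias  = np.zeros(ENC_DIM)

def salm(encoder_out, spk_emb):
    """Modulate encoder output (T, ENC_DIM) with a speaker embedding."""
    scale = spk_emb @ W_scale + b_scale   # (ENC_DIM,)
    bias  = spk_emb @ W_bias  + b_bias    # (ENC_DIM,)
    return scale * encoder_out + bias     # broadcasts over the time axis

spk_emb = rng.standard_normal(SPK_DIM)       # from the reference encoder
enc_out = rng.standard_normal((T, ENC_DIM))  # phoneme encoder states
modulated = salm(enc_out, spk_emb)
print(modulated.shape)                       # (20, 128)
```

Because the speaker-dependent scale and bias are applied per channel across all time steps, a single reference utterance can shift the whole encoder representation toward the target speaker, which is the adaptability-to-unseen-speakers property the abstract claims.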