
Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020: Latest Publications

The Duke Entry for 2020 Blizzard Challenge
Pub Date : 2020-10-30 DOI: 10.21437/vcc_bc.2020-5
Zexin Cai, Ming Li
This paper presents the speech synthesis system built for the 2020 Blizzard Challenge by team ‘H’. The goal of the challenge is to build a synthesizer able to generate high-fidelity speech with a voice similar to the one in the provided data. Our system mainly draws on end-to-end neural networks. Specifically, we use an encoder-decoder prosody prediction network to insert prosodic annotations for a given sentence. We use the spectrogram predictor from Tacotron2 as the end-to-end phoneme-to-spectrogram generator, followed by the neural vocoder WaveRNN to convert predicted spectrograms to audio signals. Additionally, we apply fine-tuning strategies to improve TTS performance. Subjective evaluation of the synthesized audio was conducted with respect to naturalness, similarity, and intelligibility. Samples are available online for listening.
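The two-stage pipeline described above (phoneme sequence → predicted spectrogram → vocoded waveform) can be sketched at the interface level. Everything below is an illustrative toy under invented shapes and constants, not the Duke system's actual models:

```python
# Toy sketch of the two-stage TTS pipeline: an acoustic model maps phonemes to
# a mel spectrogram, and a vocoder maps the spectrogram to waveform samples.
# All shapes and numbers here are invented for illustration.

def acoustic_model(phonemes, n_mels=4, frames_per_phoneme=2):
    """Stand-in for the Tacotron2-style spectrogram predictor:
    each phoneme id deterministically yields a few mel frames."""
    mel = []
    for p in phonemes:
        for f in range(frames_per_phoneme):
            mel.append([(p * 31 + m * 7 + f) % 10 / 10.0 for m in range(n_mels)])
    return mel  # shape: [time, n_mels]

def vocoder(mel, hop=8):
    """Stand-in for WaveRNN: upsample each mel frame to `hop` samples."""
    wav = []
    for frame in mel:
        energy = sum(frame) / len(frame)
        wav.extend([energy] * hop)
    return wav

mel = acoustic_model([3, 1, 4])
wav = vocoder(mel)
print(len(mel), len(wav))  # 3 phonemes -> 6 frames -> 48 samples
```

The design point the abstract relies on is exactly this decoupled interface: the spectrogram predictor and the vocoder can be trained and fine-tuned independently as long as they agree on the mel representation.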
Citations: 0
Submission from SRCB for Voice Conversion Challenge 2020
Pub Date : 2020-10-30 DOI: 10.21437/vcc_bc.2020-18
Qiuyue Ma, Ruolan Liu, Xue Wen, Chunhui Lu, Xiao Chen
This paper presents the intra-lingual and cross-lingual voice conversion systems for the Voice Conversion Challenge 2020 (VCC 2020). Voice conversion (VC) modifies a source speaker’s speech so that the result sounds like a target speaker. This becomes particularly difficult when source and target speakers speak different languages. In this work we focus on building a voice conversion system that achieves consistent improvements in accent and intelligibility evaluations. Our voice conversion system consists of a speech representation module based on bilingual phoneme recognition, a neural-network-based speech generation module, and a neural vocoder. More concretely, we extract general phonetic content from the source speakers' speech in different languages, and improve sound quality by optimizing the speech synthesis module and adding a noise-suppression post-processing module to the vocoder. This framework ensures highly intelligible and natural speech, very close to human quality (MOS = 4.17, rank 2 in Task 1; MOS = 4.13, rank 2 in Task 2).
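The recognition-synthesis idea above can be sketched minimally: a speaker-independent phoneme recognizer strips speaker identity from the source utterance, and a generator re-renders the phoneme sequence with target-speaker characteristics. The lookup tables and the pitch-offset "voice" below are invented purely for illustration, not SRCB's actual modules:

```python
# Toy recognition-synthesis VC: recognizer output is speaker-independent,
# so the generator alone determines the target voice.

RECOGNIZER = {"hh": "HH", "eh": "EH", "l": "L", "ow": "OW"}  # acoustic unit -> phoneme

def recognize(source_units):
    """Map source-speaker acoustic units to speaker-independent phonemes."""
    return [RECOGNIZER[u] for u in source_units]

def generate(phonemes, target_voice):
    """Render phonemes with target-speaker traits (here, just a base pitch)."""
    base_pitch = {"target_A": 120.0, "target_B": 220.0}[target_voice]
    return [(ph, base_pitch) for ph in phonemes]

converted = generate(recognize(["hh", "eh", "l", "ow"]), "target_B")
```

Because the intermediate phoneme representation carries no speaker or language-specific detail, the same recognizer can serve both the intra-lingual and cross-lingual tasks, which is the decoupling the abstract exploits.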
Citations: 7
The Ximalaya TTS System for Blizzard Challenge 2020
Pub Date : 2020-10-30 DOI: 10.21437/vcc_bc.2020-10
Wendi He, Zhiba Su, Yang Sun
This paper describes the Ximalaya text-to-speech synthesis system built for the Blizzard Challenge 2020. The two tasks are to build expressive speech synthesizers based on the released 9.5-hour Mandarin corpus from a male native speaker and the 3-hour Shanghainese corpus from a female native speaker, respectively. Our architecture is a Tacotron2-based acoustic model with a WaveRNN vocoder. Several methods for preprocessing and checking the raw BC transcript are implemented. First, after polyphonic disambiguation and prosody prediction, the multi-task TTS front-end module transforms the text sequences into phoneme-level sequences with prosody labels. Then, we train a seq2seq multi-speaker acoustic model on the released corpus for Mel-spectrogram modeling. In addition, the neural vocoder WaveRNN [1], with minor improvements, generates high-quality audio for the submitted results. The identifier for our system is M, and the evaluation results of the listening tests show that our submitted system performed well on most of the criteria.
Citations: 0
Non-parallel Voice Conversion based on Hierarchical Latent Embedding Vector Quantized Variational Autoencoder
Pub Date : 2020-10-30 DOI: 10.21437/vcc_bc.2020-20
Tuan Vu Ho, M. Akagi
This paper proposes a hierarchical latent embedding structure for the Vector Quantized Variational Autoencoder (VQVAE) to improve the performance of non-parallel voice conversion (NPVC) models. Previous studies on NPVC based on the vanilla VQVAE use a single codebook to encode linguistic information at a fixed temporal scale. However, linguistic structure contains different semantic levels (e.g., phoneme, syllable, word) that span various temporal scales. Therefore, the converted speech may contain unnatural pronunciations that degrade its naturalness. To tackle this problem, we propose a hierarchical latent embedding structure comprising several vector quantization blocks operating at different temporal scales. When trained on a multi-speaker database, our proposed model can encode voice characteristics into a speaker embedding vector, which can be used in one-shot learning settings. Results from objective and subjective tests indicate that our proposed model outperforms the conventional VQVAE-based model in both intra-lingual and cross-lingual conversion tasks. The official results from the Voice Conversion Challenge 2020 reveal that our proposed model achieved the highest naturalness performance among autoencoder-based models in both tasks. Our implementation is available online.
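The core operation in each quantization block is a nearest-codebook lookup, and the hierarchy applies it at progressively coarser temporal scales. A minimal sketch, with toy codebooks and features invented for illustration (not the paper's learned values):

```python
# Toy hierarchical vector quantization: frame-level codes from a fine codebook,
# plus codes for temporally averaged frames from a coarse codebook.

def quantize(frames, codebook):
    """Replace each frame with the index of its nearest codebook vector (L2)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: dist(f, codebook[i]))
            for f in frames]

def downsample(frames, factor=2):
    """Average consecutive frames to obtain a coarser temporal scale."""
    return [[sum(col) / factor for col in zip(*frames[i:i + factor])]
            for i in range(0, len(frames), factor)]

frames = [[0.1, 0.2], [0.0, 0.3], [0.9, 0.8], [1.0, 0.7]]
fine_codebook   = [[0.0, 0.25], [1.0, 0.75]]
coarse_codebook = [[0.05, 0.25], [0.95, 0.75]]

fine_codes   = quantize(frames, fine_codebook)                # per-frame codes
coarse_codes = quantize(downsample(frames), coarse_codebook)  # half-rate codes
```

The fine codes track rapid (phoneme-like) variation while the coarse codes summarize slower (syllable- or word-like) structure, which is the multi-scale intuition the abstract describes.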
Citations: 12
The Sogou System for Blizzard Challenge 2020
Pub Date : 2020-10-30 DOI: 10.21437/vcc_bc.2020-8
Fanbo Meng, Ruimin Wang, Peng Fang, Shuangyuan Zou, Wenjun Duan, Ming Zhou, Kai Liu, Wei Chen
In this paper, we introduce the text-to-speech system submitted by the Sogou team to the Blizzard Challenge 2020. The goal of this year’s challenge is to build a natural Mandarin Chinese speech synthesis system from a 10-hour corpus recorded by a native Chinese male speaker. We discuss the major modules of the submitted system: (1) the front-end module that analyzes the pronunciation and prosody of text; (2) the FastSpeech-based sequence-to-sequence acoustic model that predicts acoustic features; and (3) the WaveRNN-based neural vocoder that reconstructs waveforms. Evaluation results provided by the challenge organizer are also discussed.
Citations: 1
NUS-HLT System for Blizzard Challenge 2020
Pub Date : 2020-10-30 DOI: 10.21437/vcc_bc.2020-7
Yi Zhou, Xiaohai Tian, Xuehao Zhou, Mingyang Zhang, Grandee Lee, Rui Liu, Berrak Sisman, Haizhou Li
The paper presents the NUS-HLT text-to-speech (TTS) system for the Blizzard Challenge 2020. The challenge has two tasks: Hub task 2020-MH1, to synthesize Mandarin Chinese given 9.5 hours of speech data from a male native speaker of Mandarin; and Spoke task 2020-SS1, to synthesize Shanghainese given 3 hours of speech data from a female native speaker of Shanghainese. Our submitted system combines word embeddings extracted from a pre-trained language model with an end-to-end TTS synthesizer to generate acoustic features from text input. The WaveRNN and WaveNet neural vocoders are used to generate speech waveforms from acoustic features in the MH1 and SS1 tasks, respectively. Evaluation results provided by the challenge organizers demonstrate the effectiveness of our submitted TTS system.
Citations: 0
The UFRJ Entry for the Voice Conversion Challenge 2020
Pub Date : 2020-10-30 DOI: 10.21437/vcc_bc.2020-29
Victor Costa, Igor M. Quintanilha, S. L. Netto, L. Biscainho
This paper presents our system submitted to Task 1 of the 2020 edition of the Voice Conversion Challenge (VCC), based on CycleGAN to convert mel-spectrograms and MelGAN to synthesize the converted speech. CycleGAN is a GAN-based morphing network that uses a cyclic reconstruction cost to allow training with non-parallel corpora. MelGAN is a GAN-based non-autoregressive neural vocoder that uses a multi-scale discriminator to efficiently capture the complexities of speech signals, achieving high-quality signals with extremely fast generation. In the VCC 2020 evaluation our system achieved mean opinion scores of 1.92 from English listeners and 1.81 from Japanese listeners, and average similarity scores of 2.51 from English listeners and 2.59 from Japanese listeners. The results suggest that using neural vocoders to represent converted speech is a problem that demands specific training strategies and the use of adaptation techniques.
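The cyclic reconstruction cost that lets CycleGAN train without parallel corpora can be written down directly: mapping a source feature through the forward generator and then the backward generator should approximately recover the original. The generators below are toy invertible affine maps, invented purely to make the loss computable; real CycleGAN generators are learned networks:

```python
# Toy cycle-consistency cost: with exact inverse generators the L1 cycle loss
# is zero; training pushes learned generators toward this property.

def G_XY(x):  # toy source-to-target generator
    return [2.0 * v + 1.0 for v in x]

def G_YX(y):  # toy target-to-source generator (exact inverse of G_XY)
    return [(v - 1.0) / 2.0 for v in y]

def cycle_loss(x):
    """L1 cycle-consistency cost ||G_YX(G_XY(x)) - x||_1."""
    x_cyc = G_YX(G_XY(x))
    return sum(abs(a - b) for a, b in zip(x_cyc, x))

loss = cycle_loss([0.5, -1.0, 3.0])  # exact inverses -> zero cycle loss
```

This is the term that substitutes for a paired-data reconstruction loss: no aligned target utterance is needed, only the requirement that the round trip preserves the source.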
Citations: 0
Submission from SCUT for Blizzard Challenge 2020
Pub Date : 2020-10-30 DOI: 10.21437/vcc_bc.2020-6
J. Zhong, Yitao Yang, S. Bu
In this paper, we describe the SCUT text-to-speech synthesis system for the Blizzard Challenge 2020, where the task is to build a voice from the provided Mandarin dataset. We begin with our system architecture, composed of an end-to-end structure that converts textual sequences into acoustic features and a WaveRNN vocoder that restores the waveform. We then introduce a BERT-based prosody prediction model that specifies the prosodic information of the content. The text processing module is adjusted to uniformly encode both Mandarin and English texts, and a two-stage training method is used to build a bilingual speech synthesis system. Meanwhile, we employ forward attention and guided attention mechanisms to accelerate the model’s convergence. Finally, the reasons for our system’s weak performance in the evaluation results are discussed.
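The guided attention mechanism mentioned above penalizes attention weights that stray far from the diagonal of the text-time/speech-time alignment matrix, nudging the model toward a roughly monotonic alignment and faster convergence. A minimal sketch of the penalty mask (the sizes and the width parameter `g` below are illustrative, not the SCUT system's settings):

```python
# Guided-attention penalty mask: W[n][t] = 1 - exp(-((n/N - t/T)^2) / (2 g^2)).
# Entries near the diagonal (n/N ~ t/T) are ~0, off-diagonal entries approach 1;
# multiplying this mask with the attention matrix and summing gives the penalty.
import math

def guided_attention_mask(N, T, g=0.2):
    """Build the N x T penalty mask; g controls the diagonal band's width."""
    return [[1.0 - math.exp(-((n / N - t / T) ** 2) / (2 * g * g))
             for t in range(T)] for n in range(N)]

W = guided_attention_mask(4, 4)
# diagonal entries (n/N == t/T) carry no penalty; far corners carry almost 1
```

During training the extra loss term is simply the elementwise product of this mask with the attention weights, so attention mass far from the diagonal is directly penalized.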
Citations: 0
The NLPR Speech Synthesis entry for Blizzard Challenge 2020
Pub Date : 2020-10-30 DOI: 10.21437/vcc_bc.2020-12
Tao Wang, J. Tao, Ruibo Fu, Zhengqi Wen, Chunyu Qiang
The paper describes the NLPR speech synthesis system entry for the Blizzard Challenge 2020. More than 9 hours of speech data from a news anchor and 3 hours of speech from a native Shanghainese speaker are adopted as training data for building this year’s systems. Our speech synthesis system is built on a multi-speaker end-to-end speech synthesis system. An LPCNet-based neural vocoder is adopted to improve quality. Different from our previous system, improvements to the data pruning and speaker adaptation strategies were made to increase the stability of our system. In this paper, the whole system structure, the data pruning method, and the duration control are introduced and discussed. In addition, this competition includes two tasks, Mandarin and Shanghainese, and we introduce the important parts of each task respectively. Finally, the results of the listening tests are presented.
Citations: 3