Speech Synthesis for Speaker Timbre Translation Across Languages

Jiangfeng Liu, Yongbin Guo, Jinbiao Chen, Zixu Wang, Aihua Mao

2022 4th International Conference on Control and Robotics (ICCR), 2022-12-02. DOI: 10.1109/ICCR55715.2022.10053890
We propose a neural-network-based cross-lingual TTS model. The model synthesizes speech across languages while translating the speaker's timbre: given a few seconds of untranscribed reference audio from a target speaker, it synthesizes new speech in that speaker's voice. The model consists of a separate speaker encoder, an STT Translator, a synthesizer, and a vocoder. We decouple speaker identity from speech content to build a speaker recognition network. Our synthesizer is built on the Tacotron model and comprises three parts: an encoder, an attention mechanism, and a decoder. The vocoder is implemented with two methods, WaveRNN and HiFi-GAN, and predicts the output waveform from the Mel spectrogram. We conducted experiments to evaluate the effectiveness of our approach and also analyzed how different training datasets affect the results.
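To make the four-stage architecture concrete, the following is a minimal sketch of how the components described in the abstract might be composed at inference time. The paper does not publish an implementation or API, so every class, method, and parameter name here is hypothetical, and each stage is stubbed with illustrative tensor shapes only.

```python
# Hypothetical sketch of the inference pipeline from the abstract:
# speaker encoder -> STT translator -> Tacotron-style synthesizer -> vocoder.
# All names and shapes are assumptions for illustration, not the authors' code.

import numpy as np


class SpeakerEncoder:
    """Maps a few seconds of untranscribed reference audio to a
    fixed-size speaker embedding, decoupling identity from content."""

    def __init__(self, embedding_dim: int = 256):
        self.embedding_dim = embedding_dim

    def encode(self, reference_audio: np.ndarray) -> np.ndarray:
        # Placeholder: a real encoder would run a recurrent/conv stack
        # over mel frames and normalize the resulting embedding.
        return np.zeros(self.embedding_dim)


class STTTranslator:
    """Transcribes the source speech and translates the text into the
    target language (the 'STT Translator' stage in the abstract)."""

    def translate(self, source_audio: np.ndarray, target_lang: str) -> str:
        return "translated text placeholder"


class TacotronSynthesizer:
    """Tacotron-style encoder/attention/decoder that predicts a mel
    spectrogram from text, conditioned on the speaker embedding."""

    def synthesize(self, text: str, speaker_embedding: np.ndarray) -> np.ndarray:
        n_frames, n_mels = 200, 80  # illustrative output shape only
        return np.zeros((n_frames, n_mels))


class Vocoder:
    """WaveRNN- or HiFi-GAN-based model that predicts the waveform
    from the mel spectrogram."""

    def __init__(self, kind: str = "hifigan", hop_length: int = 256):
        self.kind, self.hop_length = kind, hop_length

    def infer(self, mel: np.ndarray) -> np.ndarray:
        # One waveform segment of hop_length samples per mel frame.
        return np.zeros(mel.shape[0] * self.hop_length)


def cross_lingual_tts(source_audio: np.ndarray,
                      reference_audio: np.ndarray,
                      target_lang: str = "en") -> np.ndarray:
    """Compose the four stages: the reference audio fixes the timbre,
    the source audio supplies the content to be translated."""
    text = STTTranslator().translate(source_audio, target_lang)
    speaker_embedding = SpeakerEncoder().encode(reference_audio)
    mel = TacotronSynthesizer().synthesize(text, speaker_embedding)
    return Vocoder("hifigan").infer(mel)
```

The key design point this sketch reflects is that the speaker embedding is the only path by which the target speaker influences synthesis, which is what allows a few seconds of untranscribed audio, in any language, to control the timbre of the translated output.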