{"title":"Zero-Shot Voice Cloning Using Variational Embedding with Attention Mechanism","authors":"Jaeuk Lee, Jiye G. Kim, Joon‐Hyuk Chang","doi":"10.1109/IC-NIDC54101.2021.9660599","DOIUrl":null,"url":null,"abstract":"Many voice cloning studies based on multi-speaker text-to-speech (TTS) have been conducted. Among the techniques of voice cloning, we focus on zero-shot voice cloning. The most important aspect of zero-shot voice cloning is which speaker embedding is used. In this study, two types of speaker embeddings are used. One is extracted from the mel spectrogram using a speaker encoder and the other is stored in an embedding dictionary, such as a vector quantized-variational autoencoder (VQ-VAE). To extract embedding from the embedding dictionary, an attention mechanism is applied, which we call attention- V AE (AT - V AE). By employing the embedding extracted by the speaker encoder as a query in the attention mechanism, the attention weights are calculated in the embedding dictionary. This mechanism allows the extraction of speaker embedding, which represents unseen speakers. In addition, training is applied to make our model robust to unseen speakers. Through the training stage, our system has developed further. The performance of the proposed method was validated in terms of various metrics, and it was demonstrated that the proposed method enables voice cloning without adaptation training.","PeriodicalId":264468,"journal":{"name":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IC-NIDC54101.2021.9660599","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Many voice cloning studies based on multi-speaker text-to-speech (TTS) have been conducted. Among voice cloning techniques, we focus on zero-shot voice cloning. The most important aspect of zero-shot voice cloning is which speaker embedding is used. In this study, two types of speaker embeddings are used: one is extracted from the mel spectrogram by a speaker encoder, and the other is stored in an embedding dictionary, as in a vector quantized-variational autoencoder (VQ-VAE). To extract an embedding from the embedding dictionary, an attention mechanism is applied, which we call attention-VAE (AT-VAE). Using the embedding extracted by the speaker encoder as the query of the attention mechanism, attention weights over the embedding dictionary are calculated. This mechanism allows the extraction of a speaker embedding that represents unseen speakers. In addition, a training scheme is applied to make our model robust to unseen speakers, further improving the system. The performance of the proposed method was validated on various metrics, demonstrating that it enables voice cloning without adaptation training.
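The core of the AT-VAE mechanism described above is a soft attention lookup over a learned embedding dictionary, with the speaker encoder's output as the query. Below is a minimal PyTorch sketch of that idea; the class name, dimensions, and the use of scaled dot-product attention are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionDictionaryEmbedding(nn.Module):
    """Hypothetical sketch: derive a speaker embedding as an
    attention-weighted sum over a learned embedding dictionary
    (a VQ-VAE-style codebook), queried by a speaker-encoder vector."""

    def __init__(self, num_codes: int = 256, dim: int = 128):
        super().__init__()
        # Embedding dictionary: one learnable code vector per entry.
        self.dictionary = nn.Parameter(torch.randn(num_codes, dim))

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # query: (batch, dim) embedding extracted from the mel
        # spectrogram by the speaker encoder.
        scores = query @ self.dictionary.t()          # (batch, num_codes)
        scores = scores / self.dictionary.size(1) ** 0.5
        weights = F.softmax(scores, dim=-1)           # attention weights
        # Soft lookup: a weighted sum can interpolate between codes,
        # so embeddings for unseen speakers remain representable.
        return weights @ self.dictionary              # (batch, dim)


if __name__ == "__main__":
    att = AttentionDictionaryEmbedding()
    q = torch.randn(4, 128)        # stand-in for speaker-encoder output
    spk_emb = att(q)
    print(spk_emb.shape)           # torch.Size([4, 128])
```

Compared with the hard nearest-code lookup of a plain VQ-VAE, this soft attention readout is differentiable and not restricted to the discrete codebook entries, which is what lets the dictionary generalize to speakers never seen during training.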