Converting Anyone's Voice: End-to-End Expressive Voice Conversion with a Conditional Diffusion Model
Zongyang Du, Junchen Lu, Kun Zhou, Lakshmish Kaushik, Berrak Sisman
arXiv:2405.01730 (arXiv - CS - Sound), 2024-05-02
Abstract
Expressive voice conversion (VC) performs speaker identity conversion for emotional speakers by jointly converting speaker identity and emotional style. Emotional style modeling for arbitrary speakers in expressive VC has not been extensively explored, and previous approaches rely on vocoders for speech reconstruction, which makes speech quality heavily dependent on vocoder performance. A further challenge of expressive VC lies in modeling emotional prosody. To address these challenges, this paper proposes a fully end-to-end expressive VC framework based on a conditional denoising diffusion probabilistic model (DDPM). We use speech units derived from self-supervised speech models as content conditioning, together with deep features extracted from speech emotion recognition and speaker verification systems to model emotional style and speaker identity, respectively. Objective and subjective evaluations demonstrate the effectiveness of our framework. Code and samples are publicly available.
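
To make the conditioning scheme concrete, the sketch below implements one training step of a conditional DDPM denoiser that predicts the noise added to a mel-spectrogram, conditioned on the three streams named in the abstract: discrete content units, an emotion embedding, and a speaker embedding. This is a minimal illustrative sketch, not the authors' implementation: the module layout, the dimensions (80-bin mels, 256-dimensional embeddings), the cosine noise schedule, and the assumption that units and embeddings are pre-extracted are all assumptions made here for clarity.

import torch
import torch.nn as nn

class ConditionalDenoiser(nn.Module):
    """Hypothetical denoiser: predicts the noise added to a mel-spectrogram,
    conditioned on content units, an emotion embedding, and a speaker embedding."""
    def __init__(self, n_mels=80, n_units=100, d_emo=256, d_spk=256, d_model=256):
        super().__init__()
        self.unit_emb = nn.Embedding(n_units, d_model)      # discrete speech units -> content condition
        self.cond_proj = nn.Linear(d_emo + d_spk, d_model)  # fuse emotion + speaker embeddings
        self.time_emb = nn.Sequential(nn.Linear(1, d_model), nn.SiLU(), nn.Linear(d_model, d_model))
        self.net = nn.Sequential(
            nn.Linear(n_mels + d_model, d_model), nn.SiLU(),
            nn.Linear(d_model, n_mels),
        )

    def forward(self, x_t, t, units, emo, spk):
        # x_t: (B, T, n_mels) noisy mel; t: (B,) diffusion step in [0, 1]; units: (B, T) unit ids
        cond = self.unit_emb(units)                                        # frame-level content condition
        cond = cond + self.cond_proj(torch.cat([emo, spk], -1))[:, None]   # utterance-level style/identity
        cond = cond + self.time_emb(t[:, None])[:, None]                   # inject diffusion timestep
        return self.net(torch.cat([x_t, cond], dim=-1))                    # predicted noise, (B, T, n_mels)

# One DDPM training step with the standard epsilon-prediction objective (dummy data throughout).
model = ConditionalDenoiser()
x0 = torch.randn(4, 120, 80)             # clean mel targets
units = torch.randint(0, 100, (4, 120))  # content units, assumed pre-extracted from a self-supervised model
emo, spk = torch.randn(4, 256), torch.randn(4, 256)  # assumed SER / speaker-verification embeddings
t = torch.rand(4)                                    # continuous timestep in [0, 1]
alpha_bar = torch.cos(t * torch.pi / 2) ** 2         # an assumed cosine noise schedule
noise = torch.randn_like(x0)
x_t = alpha_bar.sqrt()[:, None, None] * x0 + (1 - alpha_bar).sqrt()[:, None, None] * noise
loss = nn.functional.mse_loss(model(x_t, t, units, emo, spk), noise)
loss.backward()

At inference, the same network would be run over the reverse diffusion chain, with units taken from the source utterance and the emotion and speaker embeddings taken from the target, which is what makes the conversion joint over identity and style.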