Xin Qi, Ruibo Fu, Zhengqi Wen, Tao Wang, Chunyu Qiang, Jianhua Tao, Chenxing Li, Yi Lu, Shuchen Shi, Zhiyong Wang, Xiaopeng Wang, Yuankun Xie, Yukun Liu, Xuefei Liu, Guanjun Li
DPI-TTS: Directional Patch Interaction for Fast-Converging and Style Temporal Modeling in Text-to-Speech
In recent years, speech diffusion models have advanced rapidly. Alongside the widely used U-Net architecture, transformer-based models such as the Diffusion Transformer (DiT) have also gained attention. However, current DiT speech models treat Mel spectrograms as general images, which overlooks the specific acoustic properties of speech. To address these limitations, we propose a method called Directional Patch Interaction for Text-to-Speech (DPI-TTS), which builds on DiT and achieves fast training without compromising accuracy. Notably, DPI-TTS employs a low-to-high frequency, frame-by-frame progressive inference approach that aligns more closely with acoustic properties, enhancing the naturalness of the generated speech. Additionally, we introduce a fine-grained style temporal modeling method that further improves speaker style similarity. Experimental results demonstrate that our method increases the training speed by nearly 2 times and significantly outperforms the baseline models.
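The abstract gives no implementation details, but the low-to-high-frequency, frame-by-frame progressive interaction can be pictured as a directional attention mask over spectrogram patches. The sketch below is a hypothetical reading, not the paper's actual method: each patch at frame t and frequency band f is allowed to attend only to patches at earlier-or-equal frames and lower-or-equal bands. Function and variable names are illustrative.

```python
import numpy as np

def directional_patch_mask(n_frames: int, n_bands: int) -> np.ndarray:
    """Boolean attention mask over a grid of Mel-spectrogram patches.

    Patch (t, f) may attend to patch (t', f') only when t' <= t and
    f' <= f, i.e. earlier frames and lower frequency bands -- one
    possible interpretation of a low-to-high-frequency, frame-by-frame
    directional interaction (the paper itself provides no code).
    """
    # Flatten the (frame, band) grid into a sequence of patches,
    # recording each patch's frame and band index.
    t = np.repeat(np.arange(n_frames), n_bands)  # frame index per patch
    f = np.tile(np.arange(n_bands), n_frames)    # band index per patch
    # mask[i, j] is True where patch i is allowed to attend to patch j.
    mask = (t[None, :] <= t[:, None]) & (f[None, :] <= f[:, None])
    return mask

mask = directional_patch_mask(3, 2)  # 3 frames x 2 bands = 6 patches
```

Such a mask would be applied additively (as -inf on disallowed positions) to the attention logits of a DiT-style transformer, so information flows only from earlier frames and lower bands toward later, higher ones.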