MacST: Multi-Accent Speech Synthesis via Text Transliteration for Accent Conversion

Sho Inoue, Shuai Wang, Wanxing Wang, Pengcheng Zhu, Mengxiao Bi, Haizhou Li
{"title":"MacST: Multi-Accent Speech Synthesis via Text Transliteration for Accent Conversion","authors":"Sho Inoue, Shuai Wang, Wanxing Wang, Pengcheng Zhu, Mengxiao Bi, Haizhou Li","doi":"arxiv-2409.09352","DOIUrl":null,"url":null,"abstract":"In accented voice conversion or accent conversion, we seek to convert the\naccent in speech from one another while preserving speaker identity and\nsemantic content. In this study, we formulate a novel method for creating\nmulti-accented speech samples, thus pairs of accented speech samples by the\nsame speaker, through text transliteration for training accent conversion\nsystems. We begin by generating transliterated text with Large Language Models\n(LLMs), which is then fed into multilingual TTS models to synthesize accented\nEnglish speech. As a reference system, we built a sequence-to-sequence model on\nthe synthetic parallel corpus for accent conversion. We validated the proposed\nmethod for both native and non-native English speakers. Subjective and\nobjective evaluations further validate our dataset's effectiveness in accent\nconversion studies.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":"27 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Sound","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09352","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In accented voice conversion, or accent conversion, we seek to convert the accent of speech from one to another while preserving speaker identity and semantic content. In this study, we formulate a novel method for creating multi-accented speech samples, i.e., pairs of accented speech samples from the same speaker, through text transliteration for training accent conversion systems. We begin by generating transliterated text with Large Language Models (LLMs), which is then fed into multilingual TTS models to synthesize accented English speech. As a reference system, we built a sequence-to-sequence model on the synthetic parallel corpus for accent conversion. We validated the proposed method for both native and non-native English speakers. Subjective and objective evaluations further confirm our dataset's effectiveness for accent conversion studies.
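The abstract describes a two-stage data-generation pipeline: an LLM transliterates English text into the script of an accent's source language, and a multilingual TTS model then reads that transliteration back, yielding accented English for the same synthetic speaker. The sketch below is a minimal, hypothetical illustration of that pipeline structure only; the function names, interfaces, and accent labels are assumptions for illustration, not the paper's actual models or prompts.

```python
# Hypothetical sketch of a MacST-style parallel-corpus builder.
# The callables passed in stand for (1) an LLM transliteration step and
# (2) a multilingual TTS step; their interfaces are assumed, not taken
# from the paper.

from typing import Callable, Dict, List


def build_parallel_corpus(
    sentences: List[str],
    transliterate: Callable[[str, str], str],  # (english_text, accent) -> transliterated text
    synthesize: Callable[[str, str], bytes],   # (text, language/accent) -> waveform bytes
    accents: List[str],
) -> Dict[str, Dict[str, bytes]]:
    """Create pairs of accented utterances with the same content and speaker.

    For each sentence we synthesize (a) the original English text as a
    reference and (b) an LLM-transliterated version rendered by a
    multilingual TTS, which produces accented English speech.
    """
    corpus: Dict[str, Dict[str, bytes]] = {}
    for text in sentences:
        entries = {"en": synthesize(text, "en")}       # unaccented reference rendering
        for accent in accents:
            translit = transliterate(text, accent)     # e.g. English words respelled in another script
            entries[accent] = synthesize(translit, accent)
        corpus[text] = entries                         # parallel samples: same content, different accents
    return corpus
```

In practice, `transliterate` would wrap an LLM prompt asking for the English sentence respelled in the target language's script, and `synthesize` would call a multilingual TTS model with a fixed speaker setting so that all renderings share the same voice; the resulting parallel pairs can then train a sequence-to-sequence accent conversion model, as the abstract describes.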