{"title":"Enhancing Multilingual Speech Generation and Recognition Abilities in LLMs with Constructed Code-switched Data","authors":"Jing Xu, Daxin Tan, Jiaqi Wang, Xiao Chen","doi":"arxiv-2409.10969","DOIUrl":null,"url":null,"abstract":"While large language models (LLMs) have been explored in the speech domain\nfor both generation and recognition tasks, their applications are predominantly\nconfined to the monolingual scenario, with limited exploration in multilingual\nand code-switched (CS) contexts. Additionally, speech generation and\nrecognition tasks are often handled separately, such as VALL-E and Qwen-Audio.\nIn this paper, we propose a MutltiLingual MultiTask (MLMT) model, integrating\nmultilingual speech generation and recognition tasks within the single LLM.\nFurthermore, we develop an effective data construction approach that splits and\nconcatenates words from different languages to equip LLMs with CS synthesis\nability without relying on CS data. The experimental results demonstrate that\nour model outperforms other baselines with a comparable data scale.\nFurthermore, our data construction approach not only equips LLMs with CS speech\nsynthesis capability with comparable speaker consistency and similarity to any\ngiven speaker, but also improves the performance of LLMs in multilingual speech\ngeneration and recognition tasks.","PeriodicalId":501284,"journal":{"name":"arXiv - EE - Audio and Speech Processing","volume":"19 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Audio and Speech Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10969","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
While large language models (LLMs) have been explored in the speech domain for both generation and recognition tasks, their applications are predominantly confined to monolingual scenarios, with limited exploration of multilingual and code-switched (CS) contexts. Additionally, speech generation and recognition are often handled by separate models, e.g., VALL-E for generation and Qwen-Audio for recognition. In this paper, we propose a MultiLingual MultiTask (MLMT) model that integrates multilingual speech generation and recognition within a single LLM. Furthermore, we develop an effective data construction approach that splits and concatenates words from different languages to equip LLMs with CS synthesis ability without relying on CS data. Experimental results demonstrate that our model outperforms other baselines trained on a comparable data scale. Moreover, our data construction approach not only equips LLMs with CS speech synthesis capability, with comparable speaker consistency and similarity to any given speaker, but also improves the performance of LLMs on multilingual speech generation and recognition tasks.
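
The split-and-concatenate construction lends itself to a short illustration. The following Python sketch is a minimal rendering of the idea, assuming word-level time alignments are available for each monolingual utterance; the Word record, the make_cs_sample function, and the sample utterances are hypothetical stand-ins for illustration, not the paper's actual pipeline.

# A minimal sketch of the word-level split-and-concatenate idea: build
# pseudo code-switched (CS) samples from monolingual corpora by cutting
# utterances at word boundaries and splicing spans across languages.
# All names here (Word, make_cs_sample, the corpora layout) are hypothetical.
import random
from dataclasses import dataclass

@dataclass
class Word:
    text: str      # the word as written
    start: float   # segment start time in the source audio (seconds)
    end: float     # segment end time in the source audio (seconds)
    utt_id: str    # source utterance ID, so the audio can be sliced later

def make_cs_sample(utt_a: list[Word], utt_b: list[Word],
                   rng: random.Random) -> list[Word]:
    """Split two monolingual utterances at random word boundaries and
    concatenate the halves, yielding one pseudo code-switched sequence."""
    cut_a = rng.randint(1, len(utt_a) - 1)  # keep at least one word per side
    cut_b = rng.randint(1, len(utt_b) - 1)
    return utt_a[:cut_a] + utt_b[cut_b:]

# Usage: pair an English utterance with a Mandarin one and splice them.
rng = random.Random(0)
en = [Word("good", 0.00, 0.35, "en_001"), Word("morning", 0.35, 0.90, "en_001")]
zh = [Word("你好", 0.00, 0.42, "zh_017"), Word("世界", 0.42, 0.95, "zh_017")]
cs = make_cs_sample(en, zh, rng)
print(" ".join(w.text for w in cs))  # e.g. "good 世界"

In a real pipeline, the audio would be sliced at the recorded word boundaries and the corresponding text tokens interleaved, so that each CS training pair still consists entirely of naturally recorded speech segments.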