Xuelong Geng, Tianyi Xu, Kun Wei, Bingshen Mu, Hongfei Xue, He Wang, Yangze Li, Pengcheng Guo, Yuhang Dai, Longhao Li, Mingchen Shao, Lei Xie
arXiv:2405.02132 · arXiv - CS - Sound · Published 2024-05-03
Citations: 0
Unveiling the Potential of LLM-Based ASR on Chinese Open-Source Datasets

Abstract
Large Language Models (LLMs) have demonstrated unparalleled effectiveness in
various NLP tasks, and integrating LLMs with automatic speech recognition (ASR)
is becoming a mainstream paradigm. Building on this momentum, we present an
in-depth examination of this paradigm on a large open-source Chinese dataset.
Specifically, we evaluate the impact of
various configurations of speech encoders, LLMs, and projector modules in the
context of the speech foundation encoder-LLM ASR paradigm. Furthermore, we
introduce a three-stage training approach, expressly developed to enhance the
model's ability to align auditory and textual information. The implementation
of this approach, alongside the strategic integration of ASR components,
enabled us to achieve state-of-the-art (SOTA) performance on the AISHELL-1, Test_Net, and
Test_Meeting test sets. Our analysis presents an empirical foundation for
future research in LLM-based ASR systems and offers insights into optimizing
performance using Chinese datasets. We will publicly release all scripts used
for data preparation, training, inference, and scoring, as well as pre-trained
models and training logs to promote reproducible research.
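In the speech foundation encoder-LLM paradigm described above, the projector bridges the speech encoder's frame-level output and the LLM's token embedding space. One common projector mechanic is to stack consecutive encoder frames, shortening the sequence the LLM must attend over before a linear map into its embedding dimension. The sketch below illustrates only that frame-stacking step with plain Python lists; the function name, downsampling factor, and shapes are illustrative assumptions, not the paper's exact configuration.

```python
def stack_frames(feats, downsample=4):
    """Group consecutive speech-encoder frames into 'super-frames'.

    feats: list of frame vectors (each a list of floats) from the speech
    encoder. Every `downsample` consecutive frames are concatenated into one
    longer vector, so an LLM downstream sees a sequence `downsample`x shorter.
    Trailing frames that do not fill a complete group are dropped.
    """
    t = len(feats) - len(feats) % downsample
    return [
        [x for frame in feats[i:i + downsample] for x in frame]
        for i in range(0, t, downsample)
    ]

# Hypothetical shapes: 80 encoder frames of dim 512 become 20 super-frames
# of dim 2048; a learned linear layer would then map 2048 -> LLM dim.
frames = [[0.0] * 512 for _ in range(80)]
super_frames = stack_frames(frames)
print(len(super_frames), len(super_frames[0]))  # 20 2048
```

In a full system, the concatenated vectors would pass through a small trainable network (e.g. a linear layer or MLP) whose output dimension matches the LLM's embeddings; the abstract's three-stage training would then progressively align these projected speech representations with text.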