M2R-Whisper: Multi-stage and Multi-scale Retrieval Augmentation for Enhancing Whisper

Jiaming Zhou, Shiwan Zhao, Jiabei He, Hui Wang, Wenjia Zeng, Yong Chen, Haoqin Sun, Aobo Kong, Yong Qin

arXiv - EE - Audio and Speech Processing, 2024-09-18 (arXiv:2409.11889)
State-of-the-art models like OpenAI's Whisper exhibit strong performance in
multilingual automatic speech recognition (ASR), but they still face challenges
in accurately recognizing diverse subdialects. In this paper, we propose
M2R-Whisper, a novel multi-stage and multi-scale retrieval augmentation
approach designed to enhance ASR performance in low-resource settings. Building
on the principles of in-context learning (ICL) and retrieval-augmented
techniques, our method employs sentence-level ICL in the pre-processing stage
to harness contextual information, while integrating token-level k-Nearest
Neighbors (kNN) retrieval as a post-processing step to further refine the final
output distribution. By synergistically combining sentence-level and
token-level retrieval strategies, M2R-Whisper effectively mitigates various
types of recognition errors. Experiments conducted on Mandarin and subdialect
datasets, including AISHELL-1 and KeSpeech, demonstrate substantial
improvements in ASR accuracy, all achieved without any parameter updates.
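The token-level kNN post-processing described in the abstract follows the general kNN-LM recipe: at each decoding step, the model's next-token distribution is interpolated with a distribution built from nearest-neighbor retrieval in a datastore of (hidden state, token) pairs. Below is a minimal NumPy sketch of that general technique. The function name, the interpolation weight `lam`, and the softmax temperature are illustrative assumptions, not the paper's exact implementation or hyperparameters:

```python
import numpy as np

def knn_augmented_distribution(hidden, model_probs, keys, values, vocab_size,
                               k=8, temperature=10.0, lam=0.5):
    """Interpolate a model's next-token distribution with a kNN distribution.

    hidden:      decoder hidden state at the current step, shape (d,)
    model_probs: model's next-token probabilities, shape (vocab_size,)
    keys:        datastore keys (cached hidden states), shape (n, d)
    values:      datastore values (token ids), shape (n,)
    """
    # Squared Euclidean distance from the query to every datastore key.
    dists = np.sum((keys - hidden) ** 2, axis=1)
    nn = np.argsort(dists)[:k]  # indices of the k nearest keys

    # Softmax over negative distances yields neighbor weights.
    logits = -dists[nn] / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    # Scatter neighbor weights onto their tokens to form p_kNN
    # (np.add.at accumulates when several neighbors share a token).
    knn_probs = np.zeros(vocab_size)
    np.add.at(knn_probs, values[nn], weights)

    # Final distribution: lam * p_kNN + (1 - lam) * p_model.
    return lam * knn_probs + (1.0 - lam) * model_probs
```

In practice the datastore is built offline by caching decoder hidden states and their ground-truth next tokens on the target-domain (e.g. subdialect) data, and an approximate index such as FAISS replaces the brute-force distance computation above.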