{"title":"Meta-Prompt: Boosting Whisper's Performance in Low-Resource Speech Recognition","authors":"Yaqi Chen;Tong Niu;Hao Zhang;Wenlin Zhang;Dan Qu","doi":"10.1109/LSP.2024.3484328","DOIUrl":null,"url":null,"abstract":"Recent advancements in large-scale pre-trained automatic speech recognition (ASR) foundation models (e.g., Whisper) have exhibited remarkable performance in speech processing tasks. A recently emerging paradigm, prompt tuning, offers a parameter-efficient approach for fine-tuning, which has proven to be effective in enhancing the adaptation of pre-trained models to downstream tasks. In this paper, we first explore the prompting method for low-resource speech recognition based on Whisper. Although effective, it poses a challenge in the few-shot scenario due to its high sensitivity to initialization. To address this problem, we propose a novel meta-prompt for low-resource speech recognition that leverages the benefits of meta-learning for fast learning. Moreover, we further present a lightweight version of meta-prompt that omits the learning of encoder-prompt, reducing computational and storage costs. Extensive experiments on FLEURS datasets demonstrate consistent improvements across eleven target languages, showing better generalizability. Notably, meta-prompt achieves similar performance with a 20%-shot compared to prompt tuning with a 50%-shot setting, suggesting excellent few-shot learning ability.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"31 ","pages":"3039-3043"},"PeriodicalIF":3.2000,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Letters","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10723801/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Large-scale pre-trained automatic speech recognition (ASR) foundation models (e.g., Whisper) have recently exhibited remarkable performance in speech processing tasks. Prompt tuning, a recently emerging paradigm, offers a parameter-efficient alternative to full fine-tuning and has proven effective in adapting pre-trained models to downstream tasks. In this paper, we first explore prompt tuning for Whisper-based low-resource speech recognition. Although effective, prompt tuning struggles in the few-shot scenario due to its high sensitivity to initialization. To address this problem, we propose a novel meta-prompt for low-resource speech recognition that leverages meta-learning for fast adaptation. We further present a lightweight version of meta-prompt that omits the learning of the encoder prompt, reducing computational and storage costs. Extensive experiments on the FLEURS dataset demonstrate consistent improvements across eleven target languages, indicating strong generalizability. Notably, meta-prompt in a 20%-shot setting matches the performance of prompt tuning in a 50%-shot setting, demonstrating excellent few-shot learning ability.
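To make the two ingredients of the abstract concrete, the sketch below shows (a) prompt tuning, where a small set of learnable soft-prompt embeddings is prepended to the input of a frozen backbone, and (b) a meta-learning loop that adapts a shared prompt initialization across languages so it transfers quickly to a new one. This is a minimal illustration under stated assumptions, not the authors' implementation: the toy Transformer stands in for Whisper, the helper names (`PromptedDecoderLM`, `task_loss`, `reptile_meta_step`) and hyperparameters are hypothetical, and a Reptile-style outer update is used as a simple stand-in for the paper's meta-learning formulation.

```python
# Hypothetical sketch: prompt tuning + meta-learned prompt initialization.
# The backbone, helper names, and hyperparameters are illustrative stand-ins,
# not the paper's code (which builds on Whisper).
import torch
import torch.nn as nn

class PromptedDecoderLM(nn.Module):
    """A frozen toy Transformer stands in for Whisper; only the soft prompt
    (prompt_len x d_model) is trainable, as in prompt tuning."""
    def __init__(self, vocab=100, d_model=64, prompt_len=8):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab)
        for p in self.parameters():              # freeze the whole backbone
            p.requires_grad = False
        # Defined after the freeze loop, so the prompt stays trainable.
        self.prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)

    def forward(self, tokens, prompt=None):
        prompt = self.prompt if prompt is None else prompt
        x = self.embed(tokens)                              # (B, T, d)
        p = prompt.unsqueeze(0).expand(x.size(0), -1, -1)   # (B, P, d)
        h = self.body(torch.cat([p, x], dim=1))             # prepend prompt
        return self.head(h[:, prompt.size(0):])             # drop prompt slots

def task_loss(model, prompt, tokens, labels):
    logits = model(tokens, prompt=prompt)
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), labels.reshape(-1))

def reptile_meta_step(model, tasks, inner_lr=1e-2, meta_lr=0.1, inner_steps=3):
    """One outer update: adapt a copy of the prompt on each task (language),
    then move the shared initialization toward the adapted prompts."""
    init = model.prompt.detach().clone()
    deltas = []
    for tokens, labels in tasks:
        prompt = init.clone().requires_grad_(True)
        opt = torch.optim.SGD([prompt], lr=inner_lr)
        for _ in range(inner_steps):
            opt.zero_grad()
            task_loss(model, prompt, tokens, labels).backward()
            opt.step()
        deltas.append(prompt.detach() - init)
    with torch.no_grad():                        # Reptile-style outer step
        model.prompt.copy_(init + meta_lr * torch.stack(deltas).mean(0))

# Toy usage: meta-train the prompt initialization over three "languages".
model = PromptedDecoderLM()
tasks = [(torch.randint(0, 100, (4, 12)), torch.randint(0, 100, (4, 12)))
         for _ in range(3)]
for _ in range(5):
    reptile_meta_step(model, tasks)
```

In this sketch only `model.prompt` receives gradients, which is what makes prompt tuning parameter-efficient, and the meta step moves that single tensor toward the per-language adapted prompts, yielding an initialization better suited to few-shot adaptation on a new language.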
Journal Introduction:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, as well as at several workshops organized by the Signal Processing Society.