An overview of high-resource automatic speech recognition methods and their empirical evaluation in low-resource environments

Speech Communication · IF 3.0 · CAS Tier 3 (Computer Science) · JCR Q2 (Acoustics) · Pub Date: 2025-02-01 · Epub Date: 2024-12-10 · DOI: 10.1016/j.specom.2024.103151
Kavan Fatehi , Mercedes Torres Torres , Ayse Kucukyilmaz
Journal: Speech Communication, Vol. 167, Article 103151
Full text: https://www.sciencedirect.com/science/article/pii/S0167639324001225
Citations: 0

Abstract

Deep learning methods for Automatic Speech Recognition (ASR) often rely on large-scale training datasets, which are typically unavailable in low-resource environments (LREs). This lack of sufficient and representative training data poses a significant challenge for applying ASR systems in specific domains categorized as LREs. In this paper, we provide a comprehensive overview and empirical analysis of state-of-the-art deep learning techniques for ASR, which are primarily designed for high-resource environments (HREs). Our aim is to explore their potential effectiveness in LRE settings. We focus on identifying key factors that influence the adaptation of HRE models to LRE tasks. To this end, we survey advanced deep learning models and conduct a comparative evaluation of their performance in LRE contexts. Additionally, we propose that pre-training ASR models on HRE datasets, followed by domain-specific fine-tuning on LRE data, can significantly enhance performance in data-scarce settings. Using LibriSpeech and WSJ as our HRE datasets, we evaluate these models on two LRE datasets: UASpeech for dysarthric speech and iCUBE, our novel human–robot interaction dataset. Our systematic experiments, involving varying dataset sizes for pre-training, demonstrate the efficacy of combining pre-training and fine-tuning strategies to improve recognition accuracy in LREs.
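The recognition accuracy the abstract refers to is conventionally reported as word error rate (WER) on benchmarks such as LibriSpeech, WSJ, and UASpeech. The abstract does not spell out the metric, but as a reference point, WER is the word-level Levenshtein (edit) distance between the reference transcript and the system hypothesis, normalized by the reference length. A minimal, self-contained sketch (the function name `wer` and the example sentences are illustrative, not from the paper):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance divided by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


# A perfect hypothesis scores 0.0; one dropped word out of six scores 1/6.
print(wer("the cat sat on the mat", "the cat sat on the mat"))  # 0.0
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

Lower WER means better recognition, so "improving recognition accuracy in LREs" corresponds to driving this value down on the low-resource test sets.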
Speech Communication — Engineering & Technology, Computer Science: Interdisciplinary Applications
CiteScore: 6.80 · Self-citation rate: 6.20% · Articles per year: 94 · Review time: 19.2 weeks
About the journal: Speech Communication is an interdisciplinary journal whose primary objective is to fulfil the need for the rapid dissemination and thorough discussion of basic and applied research results. The journal's primary objectives are: • to present a forum for the advancement of human and human-machine speech communication science; • to stimulate cross-fertilization between different fields of this domain; • to contribute towards the rapid and wide diffusion of scientifically sound contributions in this domain.