Low Resource Malay Dialect Automatic Speech Recognition Modeling Using Transfer Learning from a Standard Malay Model

Pertanika Journal of Science and Technology · IF 0.6 · Q3 (Multidisciplinary Sciences) · Pub Date: 2024-07-16 · DOI: 10.47836/pjst.32.4.06
Tien-Ping Tan, Lei Qin, Sarah Flora Samson Juan, Jasmina Yen Min Khaw
Citations: 0

Abstract

Approaches to automatic speech recognition (ASR) have transitioned from Hidden Markov Model (HMM)-based systems to deep neural networks. Deep neural network approaches can be developed quickly and perform better when large language resources are available. Nevertheless, dialect speech recognition remains challenging because of limited resources. Transfer learning approaches have been proposed to improve speech recognition for low-resource languages. In the first approach, a model is pre-trained on a large and diverse labeled dataset to learn acoustic and language patterns from the speech signal; the pre-trained model is then fine-tuned on a low-resource language dataset, updating its parameters with the new data. Fine-tuning is usually performed by freezing the pre-trained layers and training the remaining layers of the model on the low-resource language corpus. Another approach uses a pre-trained model to extract compact and meaningful features as input to the encoder. Pre-training in this approach usually relies on unsupervised learning methods applied to a large corpus of unlabeled data, which enables the model to learn general patterns and relationships in the input speech signals. This paper proposes a training recipe that uses transfer learning and Standard Malay models to improve automatic speech recognition for the Kelantan and Sarawak Malay dialects.
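The freeze-and-fine-tune idea in the first approach can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: a "model" is just a list of weight vectors with a frozen flag, and one SGD step skips the frozen (pre-trained) layers so only the dialect-specific layer is updated.

```python
# Illustrative sketch of fine-tuning with frozen pre-trained layers.
# All names (make_layer, sgd_step) are hypothetical, for exposition only.

def make_layer(weights, frozen=False):
    """A layer is a weight vector plus a flag marking it as pre-trained."""
    return {"weights": list(weights), "frozen": frozen}

def sgd_step(model, grads, lr=0.1):
    """Apply one SGD update, skipping frozen (pre-trained) layers."""
    for layer, grad in zip(model, grads):
        if layer["frozen"]:
            continue  # pre-trained layer: parameters stay fixed
        layer["weights"] = [w - lr * g for w, g in zip(layer["weights"], grad)]

# Lower layers stand in for a pre-trained Standard Malay acoustic model;
# only the final layer is adapted on the low-resource dialect corpus.
model = [
    make_layer([0.5, -0.2], frozen=True),   # pre-trained encoder layer
    make_layer([0.1, 0.3], frozen=True),    # pre-trained encoder layer
    make_layer([0.0, 0.0], frozen=False),   # dialect-specific output layer
]

grads = [[1.0, 1.0], [1.0, 1.0], [1.0, -1.0]]  # dummy gradients
sgd_step(model, grads)
```

After the step, the two frozen layers are unchanged while the output layer has moved against its gradient; in a real system the same effect is obtained by disabling gradient computation for the pre-trained layers before training on the dialect corpus.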
Source Journal
Pertanika Journal of Science and Technology
CiteScore: 1.50
Self-citation rate: 16.70%
Articles published: 178
Journal description: Pertanika Journal of Science and Technology aims to provide a forum for high-quality research in science and engineering. Areas relevant to the scope of the journal include: bioinformatics, bioscience, biotechnology and bio-molecular sciences, chemistry, computer science, ecology, engineering, engineering design, environmental control and management, mathematics and statistics, medicine and health sciences, nanotechnology, physics, safety and emergency management, and related fields of study.
Latest articles from this journal:
A Review on the Development of Microcarriers for Cell Culture Applications
The Compatibility of Cement Bonded Fibreboard Through Dimensional Stability Analysis: A Review
Bending Effects on Polyvinyl Alcohol Thin Film for Flexible Wearable Antenna Substrate
Mesh Optimisation for General 3D Printed Objects with Cusp-Height Triangulation Approach
The Riblet Short-Slot Coupler Using Substrate Integrated Waveguide (SIW) for High-frequency Applications