{"title":"CAST: Context-association architecture with simulated long-utterance training for mandarin speech recognition","authors":"Yue Ming, Boyang Lyu, Zerui Li","doi":"10.1016/j.specom.2023.102985","DOIUrl":null,"url":null,"abstract":"<div><p>End-to-end (E2E) models are widely used because they significantly improve the performance of automatic speech recognition (ASR). However, based on the limitations of existing hardware computing devices, previous studies mainly focus on short utterances. Typically, utterances used for ASR training do not last much longer than 15 s, and therefore the models often fail to generalize to longer utterances at inference time. To address the challenge of long-form speech recognition, we propose a novel Context-Association Architecture with Simulated Long-utterance Training (CAST), which consists of a Context-Association RNN-Transducer (CARNN-T) and a simulating long utterance training (SLUT) strategy. The CARNN-T obtains the sentence-level contextual information by paying attention to the cross-sentence historical utterances and adds it in the inference stage, which improves the robustness of long-form speech recognition. The SLUT strategy simulates long-form audio training by updating the recursive state, which can alleviate the length mismatch between training and testing utterances. Experiments on the test of the Aishell-1 and aidatatang_200zh synthetic corpora show that our model has the best recognition performer on long utterances with the character error rate (CER) of 12.0%/12.6%, respectively.</p></div>","PeriodicalId":49485,"journal":{"name":"Speech Communication","volume":"155 ","pages":"Article 102985"},"PeriodicalIF":2.4000,"publicationDate":"2023-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Speech Communication","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S016763932300119X","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ACOUSTICS","Score":null,"Total":0}
Abstract
End-to-end (E2E) models are widely used because they significantly improve the performance of automatic speech recognition (ASR). However, owing to the limitations of existing hardware, previous studies have mainly focused on short utterances. Typically, utterances used for ASR training last no longer than about 15 s, so models often fail to generalize to longer utterances at inference time. To address the challenge of long-form speech recognition, we propose a novel Context-Association architecture with Simulated long-utterance Training (CAST), which consists of a Context-Association RNN-Transducer (CARNN-T) and a Simulated Long-Utterance Training (SLUT) strategy. The CARNN-T obtains sentence-level contextual information by attending to cross-sentence historical utterances and incorporates it at the inference stage, which improves the robustness of long-form speech recognition. The SLUT strategy simulates long-form audio training by updating the recursive state, which alleviates the length mismatch between training and testing utterances. Experiments on the test sets of the Aishell-1 and aidatatang_200zh synthetic corpora show that our model achieves the best recognition performance on long utterances, with character error rates (CER) of 12.0% and 12.6%, respectively.
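The core of the SLUT idea described above is to carry a recurrent state across consecutive short segments so that training mimics long-form audio. The sketch below illustrates that general mechanism with a toy PyTorch LSTM encoder; the class and function names (`StatefulEncoder`, `simulated_long_utterance_pass`), the use of an LSTM, and all dimensions are illustrative assumptions, not the authors' CARNN-T implementation.

```python
# Illustrative sketch (not the paper's code): simulating long-utterance
# training by carrying a recurrent state across short segments, so that
# consecutive segments are processed as one long utterance.
import torch
import torch.nn as nn


class StatefulEncoder(nn.Module):
    """Toy recurrent encoder whose hidden state can be carried over."""

    def __init__(self, feat_dim: int = 80, hidden_dim: int = 256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)

    def forward(self, feats, state=None):
        # feats: (batch, frames, feat_dim); state: optional (h, c) from
        # the previous segment of the same simulated long utterance.
        out, state = self.lstm(feats, state)
        return out, state


def simulated_long_utterance_pass(encoder, segments):
    """Run short segments back-to-back, updating the recurrent state
    between them instead of resetting it, to mimic one long utterance."""
    state = None
    outputs = []
    for feats in segments:          # each: (batch, frames, feat_dim)
        out, state = encoder(feats, state)
        # Detach so gradients stay within each segment (truncated BPTT),
        # while the forward context still spans the whole long utterance.
        state = tuple(s.detach() for s in state)
        outputs.append(out)
    return torch.cat(outputs, dim=1), state


if __name__ == "__main__":
    enc = StatefulEncoder()
    # Three short segments standing in for one long utterance.
    segs = [torch.randn(2, 500, 80) for _ in range(3)]
    enc_out, final_state = simulated_long_utterance_pass(enc, segs)
    print(enc_out.shape)  # torch.Size([2, 1500, 256])
```

The key design point in this sketch is that the state is propagated forward but detached between segments, so memory use per step stays at short-utterance levels while the model is still exposed to long-form temporal context; how closely this matches the paper's actual SLUT update is an assumption.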
About the journal:
Speech Communication is an interdisciplinary journal whose primary objective is to fulfil the need for the rapid dissemination and thorough discussion of basic and applied research results.
The journal's primary objectives are:
• to present a forum for the advancement of human and human-machine speech communication science;
• to stimulate cross-fertilization between different fields of this domain;
• to contribute towards the rapid and wide diffusion of scientifically sound contributions in this domain.