Neural Chinese silent speech recognition with facial electromyography

IF 3.0 · CAS Region 3 (Computer Science) · Q2 (Acoustics) · Speech Communication, Vol. 171, Article 103230 · Pub Date: 2025-04-15 · DOI: 10.1016/j.specom.2025.103230
Liang Xie , Yakun Zhang , Hao Yuan , Meishan Zhang , Xingyu Zhang , Changyan Zheng , Ye Yan , Erwei Yin

Abstract

The majority of work in speech recognition is based on audible speech and has achieved great success. However, in several special scenarios the voice may be unavailable. Recently, Gaddy and Klein (2020) presented an initial study of silent speech analysis, aiming to voice silent speech from facial electromyography (EMG). In this work, we present the first study of neural silent speech recognition in Chinese, which goes one step further by converting silent facial EMG signals directly into text. We build a benchmark dataset and then introduce a neural end-to-end model for the task. The model is further optimized with two auxiliary tasks for better feature learning. In addition, we propose a systematic data augmentation strategy to improve model performance. Experimental results show that our final best model achieves a character error rate of 38.0% on a sentence-level silent speech recognition task. We also provide in-depth analysis to gain a comprehensive understanding of the task and the proposed models. Although our model achieves initial results, there is still a gap compared to the ideal level, warranting further attention and research.
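The reported 38.0% character error rate (CER) is the standard edit-distance metric for Chinese recognition tasks, where characters rather than words are the error units. A minimal sketch of how CER is computed (the function name and example strings are illustrative, not taken from the paper):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein distance between the hypothesis
    and reference character sequences, divided by the reference length."""
    ref, hyp = list(reference), list(hypothesis)
    # Dynamic-programming table for edit distance
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One substituted character out of six -> CER of 1/6
print(round(cer("无声语音识别", "无声语言识别"), 3))  # → 0.167
```

A CER of 38.0% thus means that, averaged over the test sentences, roughly 38 character edits are needed per 100 reference characters to recover the transcript.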
Source Journal

Speech Communication
Category: Engineering & Technology – Computer Science: Interdisciplinary Applications
CiteScore: 6.80
Self-citation rate: 6.20%
Articles per year: 94
Review time: 19.2 weeks
Journal description: Speech Communication is an interdisciplinary journal whose primary objective is to fulfil the need for the rapid dissemination and thorough discussion of basic and applied research results. The journal's primary objectives are: • to present a forum for the advancement of human and human-machine speech communication science; • to stimulate cross-fertilization between different fields of this domain; • to contribute towards the rapid and wide diffusion of scientifically sound contributions in this domain.