Schlieren imaging and video classification of alphabet pronunciations: exploiting phonetic flows for speech recognition and speech therapy.

IF 3.2 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Interdisciplinary Applications) | Visual Computing for Industry, Biomedicine and Art | Pub Date: 2024-05-22 | DOI: 10.1186/s42492-024-00163-w
Mohamed Talaat, Kian Barari, Xiuhua April Si, Jinxiang Xi
{"title":"Schlieren imaging and video classification of alphabet pronunciations: exploiting phonetic flows for speech recognition and speech therapy.","authors":"Mohamed Talaat, Kian Barari, Xiuhua April Si, Jinxiang Xi","doi":"10.1186/s42492-024-00163-w","DOIUrl":null,"url":null,"abstract":"<p><p>Speech is a highly coordinated process that requires precise control over vocal tract morphology/motion to produce intelligible sounds while simultaneously generating unique exhaled flow patterns. The schlieren imaging technique visualizes airflows with subtle density variations. It is hypothesized that speech flows captured by schlieren, when analyzed using a hybrid of convolutional neural network (CNN) and long short-term memory (LSTM) network, can recognize alphabet pronunciations, thus facilitating automatic speech recognition and speech disorder therapy. This study evaluates the feasibility of using a CNN-based video classification network to differentiate speech flows corresponding to the first four alphabets: /A/, /B/, /C/, and /D/. A schlieren optical system was developed, and the speech flows of alphabet pronunciations were recorded for two participants at an acquisition rate of 60 frames per second. A total of 640 video clips, each lasting 1 s, were utilized to train and test a hybrid CNN-LSTM network. Acoustic analyses of the recorded sounds were conducted to understand the phonetic differences among the four alphabets. The hybrid CNN-LSTM network was trained separately on four datasets of varying sizes (i.e., 20, 30, 40, 50 videos per alphabet), all achieving over 95% accuracy in classifying videos of the same participant. However, the network's performance declined when tested on speech flows from a different participant, with accuracy dropping to around 44%, indicating significant inter-participant variability in alphabet pronunciation. Retraining the network with videos from both participants improved accuracy to 93% on the second participant. Analysis of misclassified videos indicated that factors such as low video quality and disproportional head size affected accuracy. These results highlight the potential of CNN-assisted speech recognition and speech therapy using articulation flows, although challenges remain in expanding the alphabet set and participant cohort.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":null,"pages":null},"PeriodicalIF":3.2000,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11109036/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Visual Computing for Industry Biomedicine and Art","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1186/s42492-024-00163-w","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}

Abstract

Speech is a highly coordinated process that requires precise control over vocal tract morphology/motion to produce intelligible sounds while simultaneously generating unique exhaled flow patterns. The schlieren imaging technique visualizes airflows with subtle density variations. It is hypothesized that speech flows captured by schlieren imaging, when analyzed with a hybrid convolutional neural network (CNN) and long short-term memory (LSTM) network, can be used to recognize letter pronunciations, thus facilitating automatic speech recognition and speech disorder therapy. This study evaluates the feasibility of using a CNN-based video classification network to differentiate speech flows corresponding to the first four letters of the alphabet: /A/, /B/, /C/, and /D/. A schlieren optical system was developed, and the speech flows of letter pronunciations were recorded for two participants at an acquisition rate of 60 frames per second. A total of 640 video clips, each lasting 1 s, were used to train and test a hybrid CNN-LSTM network. Acoustic analyses of the recorded sounds were conducted to understand the phonetic differences among the four letters. The hybrid CNN-LSTM network was trained separately on four datasets of varying sizes (i.e., 20, 30, 40, and 50 videos per letter), all achieving over 95% accuracy in classifying videos of the same participant. However, the network's performance declined when tested on speech flows from a different participant, with accuracy dropping to around 44%, indicating significant inter-participant variability in letter pronunciation. Retraining the network with videos from both participants improved accuracy on the second participant to 93%. Analysis of misclassified videos indicated that factors such as low video quality and disproportionate head size affected accuracy. These results highlight the potential of CNN-assisted speech recognition and speech therapy using articulation flows, although challenges remain in expanding the letter set and participant cohort.
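The hybrid CNN-LSTM classifier described above (a per-frame CNN feature extractor whose frame features are aggregated over time by an LSTM, followed by a four-way classification head for /A/, /B/, /C/, and /D/) can be sketched in PyTorch as below. This is a minimal illustrative sketch: the layer sizes, input resolution, and backbone are assumptions, as the abstract does not report the exact architecture or training hyperparameters.

# Minimal sketch of a hybrid CNN-LSTM video classifier of the kind described
# in the abstract (PyTorch). All layer sizes and input shapes are assumptions.
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """Per-frame CNN features aggregated over time by an LSTM, then classified."""

    def __init__(self, num_classes: int = 4, feat_dim: int = 128, hidden: int = 64):
        super().__init__()
        # Small per-frame CNN encoder (assumed; any 2D backbone would work).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU(),
        )
        # LSTM aggregates the sequence of per-frame features.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, time, channels, height, width), e.g. 60 grayscale
        # schlieren frames per 1-s clip recorded at 60 fps.
        b, t, c, h, w = video.shape
        feats = self.cnn(video.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)   # last hidden state of the LSTM
        return self.head(h_n[-1])        # (batch, num_classes) logits

if __name__ == "__main__":
    # Four classes for /A/, /B/, /C/, /D/; a dummy batch of two 1-s clips.
    model = CNNLSTMClassifier(num_classes=4)
    clips = torch.randn(2, 60, 1, 112, 112)   # frame resolution is an assumption
    print(model(clips).shape)                  # torch.Size([2, 4])

In practice, each 1-s schlieren recording would be decoded into a fixed-length frame tensor of this form and the network trained with cross-entropy loss over the four letter classes; these training details are assumptions rather than values reported in the paper.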
