Neural network-based method for visual recognition of driver's voice commands using attention mechanism

Alexandr A. Axyonov, Elena V. Ryumina, Dmitry A. Ryumin, Denis V. Ivanko, Alexey Karpov
{"title":"基于神经网络的驾驶员语音指令视觉识别方法","authors":"Александр Александрович,  Аксёнов1, Елена Витальевна Рюмина, Дмитрий Александрович Рюмин, Денис Викторович Иванько, Алексей Анатольевич Карпов, Alexandr A. Axyonov, Elena V. Ryumina, Dmitry A. Ryumin, Denis V. Ivanko, Alexey Karpov","doi":"10.17586/2226-1494-2023-23-4-767-775","DOIUrl":null,"url":null,"abstract":"Visual speech recognition or automated lip-reading systems actively apply to speech-to-text translation. Video data proves to be useful in multimodal speech recognition systems, particularly when using acoustic data is difficult or not available at all. The main purpose of this study is to improve driver command recognition by analyzing visual information to reduce touch interaction with various vehicle systems (multimedia and navigation systems, phone calls, etc.) while driving. We propose a method of automated lip-reading the driver’s speech while driving based on a deep neural network of 3DResNet18 architecture. Using neural network architecture with bi-directional LSTM model and attention mechanism allows achieving higher recognition accuracy with a slight decrease in performance. Two different variants of neural network architectures for visual speech recognition are proposed and investigated. When using the first neural network architecture, the result of voice recognition of the driver was 77.68 %, which was lower by 5.78 % than when using the second one the accuracy of which was 83.46 %. Performance of the system which is determined by a real-time indicator RTF in the case of the first neural network architecture is equal to 0.076, and the second — RTF is 0.183 which is more than two times higher. The proposed method was tested on the data of multimodal corpus RUSAVIC recorded in the car. Results of the study can be used in systems of audio-visual speech recognition which is recommended in high noise conditions, for example, when driving a vehicle. In addition, the analysis performed allows us to choose the optimal neural network model of visual speech recognition for subsequent incorporation into the assistive system based on a mobile device.","PeriodicalId":21700,"journal":{"name":"Scientific and Technical Journal of Information Technologies, Mechanics and Optics","volume":"64 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Neural network-based method for visual recognition of driver's voice commands using attention mechanism\",\"authors\":\"Александр Александрович,  Аксёнов1, Елена Витальевна Рюмина, Дмитрий Александрович Рюмин, Денис Викторович Иванько, Алексей Анатольевич Карпов, Alexandr A. Axyonov, Elena V. Ryumina, Dmitry A. Ryumin, Denis V. Ivanko, Alexey Karpov\",\"doi\":\"10.17586/2226-1494-2023-23-4-767-775\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Visual speech recognition or automated lip-reading systems actively apply to speech-to-text translation. Video data proves to be useful in multimodal speech recognition systems, particularly when using acoustic data is difficult or not available at all. The main purpose of this study is to improve driver command recognition by analyzing visual information to reduce touch interaction with various vehicle systems (multimedia and navigation systems, phone calls, etc.) while driving. 
We propose a method of automated lip-reading the driver’s speech while driving based on a deep neural network of 3DResNet18 architecture. Using neural network architecture with bi-directional LSTM model and attention mechanism allows achieving higher recognition accuracy with a slight decrease in performance. Two different variants of neural network architectures for visual speech recognition are proposed and investigated. When using the first neural network architecture, the result of voice recognition of the driver was 77.68 %, which was lower by 5.78 % than when using the second one the accuracy of which was 83.46 %. Performance of the system which is determined by a real-time indicator RTF in the case of the first neural network architecture is equal to 0.076, and the second — RTF is 0.183 which is more than two times higher. The proposed method was tested on the data of multimodal corpus RUSAVIC recorded in the car. Results of the study can be used in systems of audio-visual speech recognition which is recommended in high noise conditions, for example, when driving a vehicle. In addition, the analysis performed allows us to choose the optimal neural network model of visual speech recognition for subsequent incorporation into the assistive system based on a mobile device.\",\"PeriodicalId\":21700,\"journal\":{\"name\":\"Scientific and Technical Journal of Information Technologies, Mechanics and Optics\",\"volume\":\"64 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Scientific and Technical Journal of Information Technologies, Mechanics and Optics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.17586/2226-1494-2023-23-4-767-775\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"Engineering\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Scientific and Technical Journal of Information Technologies, Mechanics and Optics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.17586/2226-1494-2023-23-4-767-775","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"Engineering","Score":null,"Total":0}
Citations: 0

Abstract

Visual speech recognition (automated lip-reading) systems are actively applied to speech-to-text tasks. Video data is useful in multimodal speech recognition systems, particularly when acoustic data is degraded or unavailable. The main purpose of this study is to improve driver command recognition by analyzing visual information, reducing touch interaction with various vehicle systems (multimedia and navigation systems, phone calls, etc.) while driving. We propose a method for automated lip-reading of the driver's speech while driving, based on a deep neural network with the 3D ResNet-18 architecture. Extending this architecture with a bidirectional LSTM and an attention mechanism achieves higher recognition accuracy at a slight cost in speed. Two variants of neural network architectures for visual speech recognition are proposed and investigated. The first architecture recognized the driver's voice commands with an accuracy of 77.68 %, which is 5.78 percentage points lower than the second, whose accuracy was 83.46 %. System performance, measured by the real-time factor (RTF), is 0.076 for the first architecture and 0.183 for the second, more than twice as high. The proposed method was tested on the multimodal RUSAVIC corpus recorded in a car. The results of the study can be used in audio-visual speech recognition systems, which are recommended in high-noise conditions, for example, while driving a vehicle. In addition, the analysis performed allows us to choose the optimal visual speech recognition model for subsequent incorporation into an assistive system based on a mobile device.
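To make the described pipeline concrete, below is a minimal sketch (not the authors' implementation) of a 3D ResNet-18 front-end followed by a bidirectional LSTM with additive attention, together with the conventional real-time-factor measurement. It assumes PyTorch with torchvision >= 0.13; the hidden size, command-vocabulary size, and clip geometry are illustrative assumptions, not values from the paper.

```python
import time

import torch
import torch.nn as nn
from torchvision.models.video import r3d_18  # 3D ResNet-18 video backbone


class VisualSpeechNet(nn.Module):
    """Sketch: 3D ResNet-18 front-end + BiLSTM + additive attention over frames."""

    def __init__(self, num_commands: int, hidden: int = 256):
        super().__init__()
        backbone = r3d_18(weights=None)
        # Keep the stem and residual stages; drop global average pooling and
        # the classifier so the temporal axis survives for the BiLSTM.
        self.frontend = nn.Sequential(*list(backbone.children())[:-2])
        self.lstm = nn.LSTM(512, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # per-frame attention scores
        self.head = nn.Linear(2 * hidden, num_commands)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (B, 3, T, H, W) video of the driver's mouth region
        feats = self.frontend(clips)            # (B, 512, T', H', W')
        feats = feats.mean(dim=(3, 4))          # pool space -> (B, 512, T')
        seq, _ = self.lstm(feats.permute(0, 2, 1))      # (B, T', 2*hidden)
        weights = torch.softmax(self.attn(seq), dim=1)  # (B, T', 1)
        context = (weights * seq).sum(dim=1)    # attention-weighted summary
        return self.head(context)               # command logits


def real_time_factor(model: nn.Module, clip: torch.Tensor, fps: float = 25.0) -> float:
    """RTF = wall-clock processing time / duration of the processed clip."""
    duration_s = clip.shape[2] / fps            # T frames at the given frame rate
    start = time.perf_counter()
    with torch.no_grad():
        model(clip)
    return (time.perf_counter() - start) / duration_s


model = VisualSpeechNet(num_commands=50).eval()  # vocabulary size is assumed
clip = torch.randn(1, 3, 16, 112, 112)           # 16 frames, 112x112 mouth crops
print(real_time_factor(model, clip))
```

An RTF below 1 means a clip is processed faster than it plays back, so both reported values (0.076 and 0.183) permit real-time use; the attention-based variant trades some speed for higher accuracy.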
Source journal: Scientific and Technical Journal of Information Technologies, Mechanics and Optics
CiteScore: 0.70
Self-citation rate: 0.00%
Publication volume: 102
Review time: 8 weeks