{"title":"Deep learning speech recognition for residential assistant robot","authors":"R. Jiménez-Moreno, Ricardo A. Castillo","doi":"10.11591/ijai.v12.i2.pp585-592","DOIUrl":null,"url":null,"abstract":"This work presents the design and validation of a voice assistant to command robotic tasks in a residential environment, as a support for people who require isolation or support due to body motor problems. The preprocessing of a database of 3600 audios of 8 different categories of words like “paper”, “glass” or “robot”, that allow to conform commands such as \"carry paper\" or \"bring medicine\", obtaining a matrix array of Mel frequencies and its derivatives, as inputs to a convolutional neural network that presents an accuracy of 96.9% in the discrimination of the categories. The command recognition tests involve recognizing groups of three words starting with \"robot\", for example, \"robot bring glass\", and allow identifying 8 different actions per voice command, with an accuracy of 88.75%.","PeriodicalId":52221,"journal":{"name":"IAES International Journal of Artificial Intelligence","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IAES International Journal of Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.11591/ijai.v12.i2.pp585-592","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Decision Sciences","Score":null,"Total":0}
Abstract
This work presents the design and validation of a voice assistant for commanding robotic tasks in a residential environment, intended to support people who require isolation or who have motor impairments. A database of 3,600 audio recordings covering 8 word categories, such as “paper”, “glass”, or “robot”, which can be combined into commands such as "carry paper" or "bring medicine", is preprocessed to obtain a matrix of Mel frequencies and their derivatives. These matrices serve as inputs to a convolutional neural network, which achieves an accuracy of 96.9% in discriminating among the categories. The command recognition tests involve recognizing three-word phrases beginning with "robot", for example "robot bring glass", and allow 8 different actions to be identified per voice command, with an accuracy of 88.75%.
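To illustrate the kind of preprocessing the abstract describes, the sketch below extracts Mel-frequency coefficients together with their first and second derivatives and stacks them into a matrix suitable as CNN input. This is a minimal illustration, not the authors' code: it assumes the librosa library, and the parameter choices (16 kHz sample rate, 13 coefficients) are assumptions rather than values reported in the paper.

```python
# Hedged sketch: Mel-frequency features plus derivatives for a CNN classifier.
# Library (librosa) and parameters (sr=16000, n_mfcc=13) are illustrative
# assumptions, not taken from the paper.
import numpy as np
import librosa

def mel_features_with_derivatives(wav_path, sr=16000, n_mfcc=13):
    """Return a (3, n_mfcc, n_frames) array: coefficients, delta, delta-delta."""
    y, sr = librosa.load(wav_path, sr=sr)                 # load and resample the word recording
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    delta = librosa.feature.delta(mfcc)                   # first derivative over time
    delta2 = librosa.feature.delta(mfcc, order=2)         # second derivative over time
    return np.stack([mfcc, delta, delta2])                # channels-first "image" for a CNN
```

Each word utterance processed this way becomes a small multi-channel image, so the 8-category discrimination step can be handled like an image-classification problem by a standard convolutional network.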