{"title":"Artificial robot navigation based on gesture and speech recognition","authors":"Ze Lei, Zhaohui Gan, Min Jiang, Ke Dong","doi":"10.1109/SPAC.2014.6982708","DOIUrl":null,"url":null,"abstract":"Human-computer interaction is a hot topic in artificial intelligence. Artificial navigation is an interesting application of human-computer interaction, which control the action of the target device by speech or gestures information. The main virtue of artificial navigation is that it can control target device within a distance without any remote control device. This technology can be used in the areas of robot navigation, vehicle navigation in the industrial site and virtual reality. This paper proposed an algorithm for robot navigation which combining gesture recognition with speech recognition. Firstly, use nine gesture instructions and nine voice commands to establish reference models. Secondly, extract real-time Skeleton information and the current speech messages by Kinect. Thirdly, evaluate the fitness of the current gesture and speech information of the reference model. Finally, deduce the navigation control instructions to command the robot's movement. 
Gestures and speech can compensate for each other's deficiencies, improve the recognition rate and robustness of the algorithm, reduce the computational complexity of algorithm, makes the human-computer interaction more simple, clear and natural.","PeriodicalId":326246,"journal":{"name":"Proceedings 2014 IEEE International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings 2014 IEEE International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SPAC.2014.6982708","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 12
Abstract
Human-computer interaction is a hot topic in artificial intelligence. Artificial navigation is an interesting application of human-computer interaction in which speech or gesture information controls the actions of a target device. Its main virtue is that it can control the target device from a distance without any remote-control hardware. The technology can be applied to robot navigation, vehicle navigation on industrial sites, and virtual reality. This paper proposes an algorithm for robot navigation that combines gesture recognition with speech recognition. First, nine gesture instructions and nine voice commands are used to establish reference models. Second, real-time skeleton information and the current speech messages are extracted with a Kinect sensor. Third, the fitness of the current gesture and speech information against the reference models is evaluated. Finally, the navigation control instructions that command the robot's movement are deduced. Gestures and speech compensate for each other's deficiencies, improving the recognition rate and robustness of the algorithm, reducing its computational complexity, and making the human-computer interaction simpler, clearer, and more natural.
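The four-step pipeline above can be sketched as a fitness-and-fusion loop. The sketch below is a minimal illustration, not the paper's implementation: the nine command names, the feature vectors, the cosine-similarity fitness measure, and the fusion weights are all assumptions, and live Kinect skeleton/speech capture is replaced by precomputed per-command scores.

```python
import math

# Nine navigation commands, mirroring the paper's setup of nine gesture
# instructions and nine voice commands (the names are illustrative).
COMMANDS = ["forward", "backward", "left", "right", "stop",
            "speed_up", "slow_down", "turn_around", "follow"]

def cosine_similarity(a, b):
    """Fitness of an observed feature vector against a reference template
    (a stand-in for whatever fitness measure the paper actually uses)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def fuse_modalities(gesture_scores, speech_scores,
                    w_gesture=0.5, w_speech=0.5, threshold=0.6):
    """Combine per-command fitness scores from both modalities and pick
    the command with the highest fused score; reject if below threshold,
    so a weak reading in one channel can be rescued by the other."""
    fused = {cmd: w_gesture * gesture_scores.get(cmd, 0.0)
                  + w_speech * speech_scores.get(cmd, 0.0)
             for cmd in COMMANDS}
    best = max(fused, key=fused.get)
    return best if fused[best] >= threshold else None
```

For example, a gesture fitness of 0.9 and a speech fitness of 0.8 for "forward" fuse to 0.85 and clear the threshold, whereas two weak single-modality readings are rejected rather than acted on; this is one simple way the two channels can compensate for each other's deficiencies.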