An Experimental Study of the Accuracy vs Inference Speed of RGB-D Object Recognition in Mobile Robotics
Ricardo Pereira, T. Barros, L. Garrote, Ana C. Lopes, U. Nunes
2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 2020. DOI: 10.1109/RO-MAN47096.2020.9223562
This paper presents a study of the accuracy and inference speed of RGB-D object detection and classification for mobile platform applications. The study is divided into three stages. In the first, eight state-of-the-art CNN-based object classifiers (AlexNet, VGG16/19, ResNet18/50/101, DenseNet, and MobileNetV2) are compared in terms of the accuracy they attain and their corresponding inference speeds on object classification tasks. The second stage exploits YOLOv3/YOLOv3-tiny networks as Region of Interest (RoI) generators. To obtain a real-time object recognition pipeline, the final stage unifies YOLOv3/YOLOv3-tiny with a CNN-based object classifier. The pipeline evaluates each object classifier paired with each RoI generator in terms of accuracy and frame rate. To evaluate the proposed study under the conditions in which real robotic platforms navigate, a non-object-centric RGB-D dataset was recorded at the Institute of Systems and Robotics (ISR) facilities using a camera on board the ISR-InterBot mobile platform. Experimental evaluations were also carried out on the Washington RGB-D and COCO datasets. Promising performance was achieved by the combination of the YOLOv3-tiny and ResNet18 networks on the embedded Nvidia Jetson TX2 hardware.
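The two-stage pipeline the abstract describes (a YOLO-style detector proposing regions of interest, then a CNN classifier labelling each crop, with frame rate measured over the whole loop) can be sketched as below. This is a minimal illustration of the pipeline structure only: `detect_rois` and `classify_crop` are hypothetical stubs standing in for YOLOv3-tiny and ResNet18, not the paper's actual networks or code.

```python
import time

# Hypothetical stand-ins for the paper's networks: YOLOv3-tiny as the
# RoI generator and ResNet18 as the classifier. Real implementations
# would run the trained models here; stubs keep the sketch runnable.
def detect_rois(frame):
    """Return candidate bounding boxes (x, y, w, h) for one frame."""
    return [(10, 10, 50, 50), (80, 20, 40, 40)]

def classify_crop(frame, roi):
    """Return a class label for the crop inside one RoI."""
    return "object"

def recognition_pipeline(frame):
    """Unified stage: the detector proposes RoIs, the classifier labels each."""
    return [(roi, classify_crop(frame, roi)) for roi in detect_rois(frame)]

def timed_pipeline(frames):
    """Run the pipeline over a frame sequence and report labels plus FPS."""
    start = time.perf_counter()
    results = [recognition_pipeline(f) for f in frames]
    elapsed = time.perf_counter() - start
    fps = len(frames) / elapsed if elapsed > 0 else float("inf")
    return results, fps
```

Measuring accuracy and FPS over the same loop, as `timed_pipeline` does, is what lets each classifier/RoI-generator pairing be compared on both axes at once.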