{"title":"基于模型的RGB水下6D姿态估计","authors":"Davide Sapienza;Elena Govi;Sara Aldhaheri;Marko Bertogna;Eloy Roura;Èric Pairet;Micaela Verucchi;Paola Ardón","doi":"10.1109/LRA.2023.3320028","DOIUrl":null,"url":null,"abstract":"Object pose estimation underwater allows an autonomous system to perform tracking and intervention tasks. Nonetheless, underwater target pose estimation is remarkably challenging due to, among many factors, limited visibility, light scattering, cluttered environments, and constantly varying water conditions. An approach is to employ sonar or laser sensing to acquire 3D data, however, the data is not clear and the sensors expensive. For this reason, the community has focused on extracting pose estimates from RGB input. In this work, we propose an approach that leverages 2D object detection to reliably compute 6D pose estimates in different underwater scenarios. We test our proposal with 4 objects with symmetrical shapes and poor texture spanning across \n<inline-formula><tex-math>$33{,}920$</tex-math></inline-formula>\n synthetic and 10 real scenes. All objects and scenes are made available in an open-source dataset that includes annotations for object detection and pose estimation. When benchmarking against similar end-to-end methodologies for 6D object pose estimation, our pipeline provides estimates that are \n<inline-formula><tex-math>$\\sim \\!8{\\%}$</tex-math></inline-formula>\n more accurate. We also demonstrate the real-world usability of our pose estimation pipeline on an underwater robotic manipulator in a reaching task.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"8 11","pages":"7535-7542"},"PeriodicalIF":4.6000,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Model-Based Underwater 6D Pose Estimation From RGB\",\"authors\":\"Davide Sapienza;Elena Govi;Sara Aldhaheri;Marko Bertogna;Eloy Roura;Èric Pairet;Micaela Verucchi;Paola Ardón\",\"doi\":\"10.1109/LRA.2023.3320028\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Object pose estimation underwater allows an autonomous system to perform tracking and intervention tasks. Nonetheless, underwater target pose estimation is remarkably challenging due to, among many factors, limited visibility, light scattering, cluttered environments, and constantly varying water conditions. An approach is to employ sonar or laser sensing to acquire 3D data, however, the data is not clear and the sensors expensive. For this reason, the community has focused on extracting pose estimates from RGB input. In this work, we propose an approach that leverages 2D object detection to reliably compute 6D pose estimates in different underwater scenarios. We test our proposal with 4 objects with symmetrical shapes and poor texture spanning across \\n<inline-formula><tex-math>$33{,}920$</tex-math></inline-formula>\\n synthetic and 10 real scenes. All objects and scenes are made available in an open-source dataset that includes annotations for object detection and pose estimation. When benchmarking against similar end-to-end methodologies for 6D object pose estimation, our pipeline provides estimates that are \\n<inline-formula><tex-math>$\\\\sim \\\\!8{\\\\%}$</tex-math></inline-formula>\\n more accurate. 
We also demonstrate the real-world usability of our pose estimation pipeline on an underwater robotic manipulator in a reaching task.\",\"PeriodicalId\":13241,\"journal\":{\"name\":\"IEEE Robotics and Automation Letters\",\"volume\":\"8 11\",\"pages\":\"7535-7542\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2023-09-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Robotics and Automation Letters\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10265120/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ROBOTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Robotics and Automation Letters","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10265120/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ROBOTICS","Score":null,"Total":0}
Model-Based Underwater 6D Pose Estimation From RGB
Object pose estimation underwater allows an autonomous system to perform tracking and intervention tasks. Nonetheless, underwater target pose estimation is remarkably challenging due to, among many other factors, limited visibility, light scattering, cluttered environments, and constantly varying water conditions. One approach is to employ sonar or laser sensing to acquire 3D data; however, such data is often noisy and the sensors are expensive. For this reason, the community has focused on extracting pose estimates from RGB input. In this work, we propose an approach that leverages 2D object detection to reliably compute 6D pose estimates in different underwater scenarios. We test our proposal with four objects with symmetrical shapes and poor texture, spanning $33{,}920$ synthetic and 10 real scenes. All objects and scenes are made available in an open-source dataset that includes annotations for object detection and pose estimation. When benchmarked against similar end-to-end methodologies for 6D object pose estimation, our pipeline provides estimates that are $\sim\!8\%$ more accurate. We also demonstrate the real-world usability of our pose estimation pipeline on an underwater robotic manipulator in a reaching task.
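The abstract does not spell out how 2D detections are lifted to 6D poses, but a common model-based recipe is to predict the image locations of known 3D model keypoints and recover the pose with a PnP solver. The sketch below is a minimal illustration of that generic recipe, not the authors' exact pipeline; all keypoint coordinates, intrinsics, and thresholds are hypothetical placeholders.

```python
# Hedged sketch of a generic model-based 6D pose recipe from RGB:
# a 2D detector predicts pixel locations of known 3D model keypoints,
# then RANSAC-PnP recovers rotation + translation. Not the paper's
# actual method; all values below are made up for illustration.
import numpy as np
import cv2

# 3D keypoints on the object model, in the object frame (meters).
# For a real object these would come from its CAD model.
model_points = np.array([
    [0.00, 0.00, 0.00],
    [0.10, 0.00, 0.00],
    [0.00, 0.10, 0.00],
    [0.00, 0.00, 0.10],
    [0.05, 0.05, 0.00],
    [0.05, 0.00, 0.05],
], dtype=np.float64)

# Matching 2D pixel locations, as a keypoint detector might output them.
image_points = np.array([
    [320.0, 240.0],
    [400.0, 238.0],
    [322.0, 160.0],
    [318.0, 230.0],
    [360.0, 200.0],
    [355.0, 235.0],
], dtype=np.float64)

# Pinhole intrinsics; underwater housings typically need a dedicated
# calibration because refraction at the port changes the effective focal length.
K = np.array([
    [800.0,   0.0, 320.0],
    [  0.0, 800.0, 240.0],
    [  0.0,   0.0,   1.0],
])
dist_coeffs = np.zeros(5)  # assume lens distortion is already corrected

# RANSAC-PnP tolerates outlier keypoints, which helps with the
# low-texture, symmetric objects this kind of pipeline targets.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    model_points, image_points, K, dist_coeffs,
    reprojectionError=4.0, flags=cv2.SOLVEPNP_ITERATIVE)

if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    print("Rotation:\n", R)
    print("Translation (m):", tvec.ravel())
```

With six or more correspondences the pose is over-determined, so the RANSAC inlier count also gives a cheap confidence signal for rejecting bad detections, which matters in turbid scenes where the detector is least reliable.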
About the journal:
The scope of this journal is to publish peer-reviewed articles that provide a timely and concise account of innovative research ideas and application results, reporting significant theoretical findings and application case studies in areas of robotics and automation.