{"title":"机器人抓取无纹理物体的三维检测与6D姿态估计","authors":"Jing Zhang, B. Yin, Xianpeng Xiao, Houyi Yang","doi":"10.1109/ICCRE51898.2021.9435702","DOIUrl":null,"url":null,"abstract":"Due to illumination variation under different lighting conditions, texture-less objects have posed significant challenges to visual object localization algorithms for robot grasping. We propose a method to determine the 6D pose of both textured and texture-less objects from a single RGB-D image with a Kinect. First, we apply hierarchical clustering strategy to pre-process the point cloud of a scene. Then, we achieve the 3D object detection by comparing the diameter between clustering point cloud and object model. Last, the rough pose of object is estimated through Hough voting and the estimation result is refined by ICP (Iterative Closest Point). Experimental results show that the accumulation error between the model and the corresponding point in the scene is less than 6mm and the attitude error is less than 1$.5^{\\mathrm{o}}$. The average detection accuracy rate of the proposed method reaches 97%, which can satisfy the grasping requirements of the manipulator. We also demonstrate that our approach has good performance in dynamic lighting conditions.","PeriodicalId":382619,"journal":{"name":"2021 6th International Conference on Control and Robotics Engineering (ICCRE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"3D Detection and 6D Pose Estimation of Texture-Less Objects for Robot Grasping\",\"authors\":\"Jing Zhang, B. Yin, Xianpeng Xiao, Houyi Yang\",\"doi\":\"10.1109/ICCRE51898.2021.9435702\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Due to illumination variation under different lighting conditions, texture-less objects have posed significant challenges to visual object localization algorithms for robot grasping. We propose a method to determine the 6D pose of both textured and texture-less objects from a single RGB-D image with a Kinect. First, we apply hierarchical clustering strategy to pre-process the point cloud of a scene. Then, we achieve the 3D object detection by comparing the diameter between clustering point cloud and object model. Last, the rough pose of object is estimated through Hough voting and the estimation result is refined by ICP (Iterative Closest Point). Experimental results show that the accumulation error between the model and the corresponding point in the scene is less than 6mm and the attitude error is less than 1$.5^{\\\\mathrm{o}}$. The average detection accuracy rate of the proposed method reaches 97%, which can satisfy the grasping requirements of the manipulator. 
We also demonstrate that our approach has good performance in dynamic lighting conditions.\",\"PeriodicalId\":382619,\"journal\":{\"name\":\"2021 6th International Conference on Control and Robotics Engineering (ICCRE)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-04-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 6th International Conference on Control and Robotics Engineering (ICCRE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCRE51898.2021.9435702\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 6th International Conference on Control and Robotics Engineering (ICCRE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCRE51898.2021.9435702","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
3D Detection and 6D Pose Estimation of Texture-Less Objects for Robot Grasping
Texture-less objects pose significant challenges to visual object localization algorithms for robot grasping, particularly under varying illumination. We propose a method to determine the 6D pose of both textured and texture-less objects from a single RGB-D image captured with a Kinect. First, we apply a hierarchical clustering strategy to pre-process the scene point cloud. Then, we perform 3D object detection by comparing the diameter of each cluster with that of the object model. Finally, a coarse object pose is estimated through Hough voting and refined by ICP (Iterative Closest Point). Experimental results show that the accumulated error between the model and its corresponding points in the scene is less than 6 mm and the attitude error is less than $1.5^{\circ}$. The average detection accuracy of the proposed method reaches 97%, which satisfies the grasping requirements of the manipulator. We also demonstrate that our approach performs well under dynamic lighting conditions.
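The pipeline described in the abstract (cluster the scene, filter clusters by diameter against the model, then refine a coarse pose with ICP) can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes Open3D and SciPy, uses single-linkage hierarchical clustering as a stand-in for the paper's clustering strategy, and replaces the Hough-voting coarse pose with a user-supplied (here, identity) initialization. All function names and thresholds below are illustrative choices, not values from the paper.

```python
# Sketch of a cluster -> diameter check -> ICP refinement pipeline.
# Assumptions: metric units (meters), small clusters, Open3D >= 0.10.
import numpy as np
import open3d as o3d
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist


def cluster_scene(scene_points, dist_threshold=0.01):
    """Split the scene point cloud into clusters via single-linkage clustering."""
    Z = linkage(scene_points, method="single")
    labels = fcluster(Z, t=dist_threshold, criterion="distance")
    return [scene_points[labels == k] for k in np.unique(labels)]


def diameter(points):
    """Maximum pairwise distance of a point set (O(n^2); acceptable for small clusters)."""
    return pdist(points).max() if len(points) > 1 else 0.0


def detect_candidates(clusters, model_points, rel_tol=0.2):
    """Keep clusters whose diameter is within rel_tol of the model diameter."""
    d_model = diameter(model_points)
    return [c for c in clusters if abs(diameter(c) - d_model) <= rel_tol * d_model]


def refine_pose_icp(model_points, cluster_points, init_pose=np.eye(4), max_dist=0.01):
    """Refine a coarse pose with point-to-point ICP; returns a 4x4 model-to-scene transform."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(model_points))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(cluster_points))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, init_pose,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```

A reasonable design note: because the diameter test uses only geometry, it works identically for textured and texture-less objects, which matches the paper's motivation; the coarse pose (Hough voting in the paper) only needs to be good enough for ICP to converge to the reported millimeter-level accuracy.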