{"title":"从彩色和深度图像进行无碰撞抓取检测","authors":"Dinh-Cuong Hoang;Anh-Nhat Nguyen;Chi-Minh Nguyen;An-Binh Phi;Quang-Tri Duong;Khanh-Duong Tran;Viet-Anh Trinh;Van-Duc Tran;Hai-Nam Pham;Phuc-Quan Ngo;Duy-Quang Vu;Thu-Uyen Nguyen;Van-Duc Vu;Duc-Thanh Tran;Van-Thiep Nguyen","doi":"10.1109/TAI.2024.3420848","DOIUrl":null,"url":null,"abstract":"Efficient and reliable grasp pose generation plays a crucial role in robotic manipulation tasks. The advancement of deep learning techniques applied to point cloud data has led to rapid progress in grasp detection. However, point cloud data has limitations: no appearance information and susceptibility to sensor noise. In contrast, color Red, Green, Blue (RGB) images offer high-resolution and intricate textural details, making them a valuable complement to the 3-D geometry offered by point clouds or depth (D) images. Nevertheless, the effective integration of appearance information to enhance point cloud-based grasp detection remains an open question. In this study, we extend the concepts of VoteGrasp \n<xref>[1]</xref>\n and introduce an innovative deep learning approach referred to as VoteGrasp Red, Green, Blue, Depth (RGBD). To build robustness to occlusion, the proposed model generates candidates by casting votes and accumulating evidence for feasible grasp configurations. This methodology revolves around fuzing votes extracted from images and point clouds. To further enhance the collaborative effect of merging appearance and geometry features, we introduce a context learning module. We exploit contextual information by encoding the dependency of objects in the scene into features to boost the performance of grasp generation. The contextual information enables our model to increase the likelihood that the generated grasps are collision-free. The efficacy of our model is verified through comprehensive evaluations on the demanding GraspNet-1Billion dataset, leading to a significant improvement of 9.3 in average precision (AP) over the existing state-of-the-art results. Additionally, we provide extensive analyses through ablation studies to elucidate the contributions of each design decision.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"5 11","pages":"5689-5698"},"PeriodicalIF":0.0000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Collision-Free Grasp Detection From Color and Depth Images\",\"authors\":\"Dinh-Cuong Hoang;Anh-Nhat Nguyen;Chi-Minh Nguyen;An-Binh Phi;Quang-Tri Duong;Khanh-Duong Tran;Viet-Anh Trinh;Van-Duc Tran;Hai-Nam Pham;Phuc-Quan Ngo;Duy-Quang Vu;Thu-Uyen Nguyen;Van-Duc Vu;Duc-Thanh Tran;Van-Thiep Nguyen\",\"doi\":\"10.1109/TAI.2024.3420848\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Efficient and reliable grasp pose generation plays a crucial role in robotic manipulation tasks. The advancement of deep learning techniques applied to point cloud data has led to rapid progress in grasp detection. However, point cloud data has limitations: no appearance information and susceptibility to sensor noise. In contrast, color Red, Green, Blue (RGB) images offer high-resolution and intricate textural details, making them a valuable complement to the 3-D geometry offered by point clouds or depth (D) images. Nevertheless, the effective integration of appearance information to enhance point cloud-based grasp detection remains an open question. 
In this study, we extend the concepts of VoteGrasp \\n<xref>[1]</xref>\\n and introduce an innovative deep learning approach referred to as VoteGrasp Red, Green, Blue, Depth (RGBD). To build robustness to occlusion, the proposed model generates candidates by casting votes and accumulating evidence for feasible grasp configurations. This methodology revolves around fuzing votes extracted from images and point clouds. To further enhance the collaborative effect of merging appearance and geometry features, we introduce a context learning module. We exploit contextual information by encoding the dependency of objects in the scene into features to boost the performance of grasp generation. The contextual information enables our model to increase the likelihood that the generated grasps are collision-free. The efficacy of our model is verified through comprehensive evaluations on the demanding GraspNet-1Billion dataset, leading to a significant improvement of 9.3 in average precision (AP) over the existing state-of-the-art results. Additionally, we provide extensive analyses through ablation studies to elucidate the contributions of each design decision.\",\"PeriodicalId\":73305,\"journal\":{\"name\":\"IEEE transactions on artificial intelligence\",\"volume\":\"5 11\",\"pages\":\"5689-5698\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on artificial intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10579461/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10579461/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Efficient and reliable grasp pose generation plays a crucial role in robotic manipulation tasks. The advancement of deep learning techniques applied to point cloud data has led to rapid progress in grasp detection. However, point cloud data has limitations: it lacks appearance information and is susceptible to sensor noise. In contrast, color red, green, blue (RGB) images offer high resolution and intricate textural details, making them a valuable complement to the 3-D geometry offered by point clouds or depth (D) images. Nevertheless, how to effectively integrate appearance information to enhance point cloud-based grasp detection remains an open question. In this study, we extend the concepts of VoteGrasp [1] and introduce an innovative deep learning approach referred to as VoteGrasp RGBD (red, green, blue, depth). To build robustness to occlusion, the proposed model generates candidates by casting votes and accumulating evidence for feasible grasp configurations. This methodology revolves around fusing votes extracted from images and point clouds. To further enhance the collaborative effect of merging appearance and geometry features, we introduce a context learning module. We exploit contextual information by encoding the dependencies of objects in the scene into features to boost the performance of grasp generation. This contextual information enables our model to increase the likelihood that the generated grasps are collision-free. The efficacy of our model is verified through comprehensive evaluations on the demanding GraspNet-1Billion dataset, leading to a significant improvement of 9.3 in average precision (AP) over existing state-of-the-art results. Additionally, we provide extensive analyses through ablation studies to elucidate the contribution of each design decision.
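The abstract outlines the core mechanism at a high level: seed points derived from the RGB image and from the point cloud each cast votes toward plausible grasp configurations, the votes from the two modalities are fused, and accumulated vote clusters become grasp candidates. The snippet below is a minimal NumPy sketch of that voting-and-fusion idea, not the authors' implementation; the function names, the concatenation-based fusion, and the greedy radius clustering are illustrative assumptions, and in the actual model the vote offsets would be predicted by learned network heads rather than drawn at random.

```python
# Hypothetical sketch of vote casting, cross-modal vote fusion, and vote
# clustering for grasp-candidate generation. Illustrative only; not the
# VoteGrasp RGBD implementation.
import numpy as np

def cast_votes(seeds, offsets):
    """Each seed point votes for a 3-D grasp center by adding a predicted offset."""
    return seeds + offsets

def fuse_votes(rgb_votes, cloud_votes):
    """Fuse votes from the appearance and geometry branches by concatenation."""
    return np.concatenate([rgb_votes, cloud_votes], axis=0)

def cluster_votes(votes, radius=0.05):
    """Greedily group nearby votes; each cluster mean is a grasp-center candidate."""
    remaining = votes.copy()
    candidates = []
    while len(remaining) > 0:
        anchor = remaining[0]
        mask = np.linalg.norm(remaining - anchor, axis=1) < radius
        candidates.append(remaining[mask].mean(axis=0))
        remaining = remaining[~mask]
    return candidates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seeds_rgb = rng.uniform(size=(128, 3))   # hypothetical seeds lifted from the RGB image
    seeds_pcd = rng.uniform(size=(256, 3))   # hypothetical seeds sampled from the point cloud
    # Learned voting heads would predict these offsets; small random values stand in here.
    rgb_votes = cast_votes(seeds_rgb, 0.01 * rng.standard_normal((128, 3)))
    pcd_votes = cast_votes(seeds_pcd, 0.01 * rng.standard_normal((256, 3)))
    candidates = cluster_votes(fuse_votes(rgb_votes, pcd_votes), radius=0.05)
    print(f"{len(candidates)} grasp-center candidates from {len(rgb_votes) + len(pcd_votes)} fused votes")
```

In this sketch the fusion step simply pools votes from both modalities before clustering; the paper's stated contribution is that appearance-derived and geometry-derived evidence reinforce each other when accumulated this way, with a separate context learning module (not shown here) further biasing candidates toward collision-free grasps.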