Title: SGSIN: Simultaneous Grasp and Suction Inference Network via Attention-Based Affordance Learning
Authors: Wenshuo Wang; Haiyue Zhu; Marcelo H. Ang
DOI: 10.1109/TIE.2024.3451150
Journal: IEEE Transactions on Industrial Electronics, vol. 72, no. 5, pp. 4990-5000
Publication Date: 2024-10-31
Impact Factor: 7.2 (JCR Q1, Automation & Control Systems)
URL: https://ieeexplore.ieee.org/document/10740485/
Citations: 0
Abstract
Universal handling of a wide and diverse range of objects is a grand and long-standing challenge for robot manipulation. While recent methods have achieved competitive results in grasping a specific set of objects, they still perform unsatisfactorily when faced with cluttered and diverse objects with different characteristics such as material, shape, and texture. One critical bottleneck stems from the limited applicability of a single gripper modality, which cannot handle complex real-world tasks. In this article, we propose a unified grasping inference framework for multimodality grasping, i.e., the simultaneous grasp and suction inference network (SGSIN). SGSIN utilizes the 3-D scene input to simultaneously predict feasible grasp poses for multiple modalities of grippers as well as to determine the optimal primitive based on a gripper selection module. SGSIN leverages a novel point-level affordance metric to indicate the potential probability of grasping success for each gripper in a unified manner. A novel backbone network is developed to extract strong 3-D feature representations, where a residual block with point and channel attentions is proposed. Our multimodality framework is trained and evaluated on the GraspNet-1B dataset, and its performance achieves the state of the art on both grasp and suction tasks. Furthermore, we evaluate the developed pipeline on real-world tasks in cluttered environments. The state-of-the-art performance demonstrates the effectiveness of our proposed framework.
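To illustrate the idea of selecting a gripper primitive from per-point affordance scores, below is a minimal sketch. It is not the paper's implementation: the function `select_primitive` and the example score arrays are hypothetical, assuming only that each gripper modality (parallel-jaw grasp vs. suction) is assigned a success-probability score per scene point, and that the selection module picks the modality whose best point scores highest.

```python
# Hedged sketch (hypothetical, not SGSIN's actual selection module):
# choose the gripper primitive whose best per-point affordance is highest.

def select_primitive(affordances):
    """affordances: dict mapping modality name -> list of per-point
    affordance scores in [0, 1].
    Returns (best_modality, best_point_index, best_score)."""
    best = None
    for modality, scores in affordances.items():
        idx = max(range(len(scores)), key=lambda i: scores[i])
        if best is None or scores[idx] > best[2]:
            best = (modality, idx, scores[idx])
    return best

# Example: suction wins because its peak point-level affordance is higher.
affs = {
    "grasp":   [0.10, 0.62, 0.55],
    "suction": [0.20, 0.15, 0.91],
}
print(select_primitive(affs))  # ('suction', 2, 0.91)
```

In the paper, such scores come from a learned point-level affordance metric over the 3-D scene; this sketch only shows how a unified score would let one selection rule cover both modalities.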
About the Journal:
Journal Name: IEEE Transactions on Industrial Electronics
Publication Frequency: Monthly
Scope:
The scope of IEEE Transactions on Industrial Electronics encompasses the following areas:
Applications of electronics, controls, and communications in industrial and manufacturing systems and processes.
Power electronics and drive control techniques.
System control and signal processing.
Fault detection and diagnosis.
Power systems.
Instrumentation, measurement, and testing.
Modeling and simulation.
Motion control.
Robotics.
Sensors and actuators.
Implementation of neural networks, fuzzy logic, and artificial intelligence in industrial systems.
Factory automation.
Communication and computer networks.