{"title":"Fusion-Perception-to-Action Transformer: Enhancing Robotic Manipulation With 3-D Visual Fusion Attention and Proprioception","authors":"Yangjun Liu;Sheng Liu;Binghan Chen;Zhi-Xin Yang;Sheng Xu","doi":"10.1109/TRO.2025.3539193","DOIUrl":null,"url":null,"abstract":"Most prior robot learning methods focus on image-based observations, limiting their capability in 3-D robotic manipulation. Voxel representation naturally delivers rich spatial features but remains underutilized. Specifically, current voxel-based methods struggle with fine-grained tasks, since precise actions are not fully achievable. However, humans can accomplish these tasks well using vision and proprioception. Inspired by this, this article proposed a novel Fusion-Perception-to-Action Transformer (FP2AT) with cross-layer feature aggregation to handle fine-grained manipulation in 3-D space. In particular, a multiscale 3-D visual fusion attention mechanism is devised to draw attention to local regions of interest and maintain awareness of global scenes, thereby boosting the capabilities of visual perception and action planning. Meanwhile, a 3-D visual mutual attention mechanism is designed and it can also enhance spatial perception. Besides, we further explore the potential of FP2AT by developing its coarse-to-fine version, which progressively refines the action space for more precise predictions. In addition, a proprioceptive encoder is developed to mimic the perception of body movements and contact, elevating the effectiveness of the FP2AT. Furthermore, a new metric, the average number of key actions (ANKA), is introduced to evaluate efficiency and planning capability. In various simulated and real-robot examples, our methods significantly outperform state-of-the-art 3-D-vision-based methods in success rate and ANKA metrics.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"1553-1567"},"PeriodicalIF":10.5000,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Robotics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10874177/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ROBOTICS","Score":null,"Total":0}
Abstract
Most prior robot learning methods rely on image-based observations, which limits their capability in 3-D robotic manipulation. Voxel representations naturally deliver rich spatial features but remain underutilized; in particular, current voxel-based methods struggle with fine-grained tasks because precise actions are not fully achievable. Humans, by contrast, accomplish such tasks well by combining vision with proprioception. Inspired by this, this article proposes a novel Fusion-Perception-to-Action Transformer (FP2AT) with cross-layer feature aggregation to handle fine-grained manipulation in 3-D space. In particular, a multiscale 3-D visual fusion attention mechanism is devised to focus on local regions of interest while maintaining awareness of the global scene, thereby boosting visual perception and action planning. A 3-D visual mutual attention mechanism is also designed to further enhance spatial perception. We then explore the potential of FP2AT by developing a coarse-to-fine variant that progressively refines the action space for more precise predictions. In addition, a proprioceptive encoder is developed to mimic the perception of body movements and contact, improving the effectiveness of FP2AT. Finally, a new metric, the average number of key actions (ANKA), is introduced to evaluate efficiency and planning capability. In various simulated and real-robot experiments, our methods significantly outperform state-of-the-art 3-D-vision-based methods in success rate and ANKA.
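The abstract describes the multiscale 3-D visual fusion attention only at a high level (local focus plus global awareness), so the sketch below is one minimal, hypothetical reading of that idea in PyTorch: fine voxel tokens from a local region of interest cross-attend to coarser tokens covering the whole scene. All module names, token shapes, and the residual fusion are illustrative assumptions, not the paper's actual FP2AT implementation.

```python
# Minimal sketch of a multiscale 3-D visual fusion attention block.
# Names, shapes, and structure are assumptions for illustration only;
# the paper's FP2AT architecture is not reproduced here.
import torch
import torch.nn as nn


class FusionAttention3D(nn.Module):
    """Fuses local region-of-interest voxel features with global scene
    features via cross-attention (local focus + global awareness)."""

    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, local_tokens: torch.Tensor, global_tokens: torch.Tensor) -> torch.Tensor:
        # local_tokens:  (B, N_local, dim)  -- fine voxel tokens from a region of interest
        # global_tokens: (B, N_global, dim) -- coarse tokens covering the whole scene
        fused, _ = self.cross_attn(
            query=local_tokens, key=global_tokens, value=global_tokens
        )
        # Residual fusion keeps local detail while injecting global context.
        return self.norm(local_tokens + fused)


if __name__ == "__main__":
    block = FusionAttention3D()
    local = torch.randn(2, 64, 128)   # e.g., tokens from a 4x4x4 fine voxel patch
    scene = torch.randn(2, 512, 128)  # e.g., tokens from an 8x8x8 coarse scene grid
    print(block(local, scene).shape)  # torch.Size([2, 64, 128])
```

Using the local tokens as queries and the global tokens as keys/values is one natural way to realize "attention to local regions of interest while maintaining awareness of global scenes"; the paper may well combine scales differently.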
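Similarly, the abstract introduces ANKA (average number of key actions) only as an efficiency and planning metric, so a plausible minimal computation is shown below. It assumes each evaluated episode reports a count of executed key actions; whether failed episodes are included in the average is also an assumption, since the abstract does not specify the protocol.

```python
# Hypothetical ANKA computation; episode handling is an assumption.
def anka(key_action_counts: list[int]) -> float:
    """Average number of key actions over evaluated episodes (lower is better)."""
    return sum(key_action_counts) / len(key_action_counts)


print(anka([4, 5, 3, 6]))  # 4.5
```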
About the Journal:
The IEEE Transactions on Robotics (T-RO) is dedicated to publishing fundamental papers covering all facets of robotics, drawing on interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, and beyond. From industrial applications to service and personal assistants, and from surgical operations to space, underwater, and remote exploration, robots and intelligent machines play pivotal roles across various domains, including entertainment, safety, search and rescue, military applications, agriculture, and intelligent vehicles.
Special emphasis is placed on intelligent machines and systems designed for unstructured environments, where a significant portion of the environment remains unknown and beyond direct sensing or control.