Data-Informed Residual Reinforcement Learning for High-Dimensional Robotic Tracking Control

Authors: Cong Li; Fangzhou Liu; Yongchao Wang; Martin Buss
DOI: 10.1109/TMECH.2024.3412275
Journal: IEEE/ASME Transactions on Mechatronics, vol. 30, no. 3, pp. 1681-1691
Publication date: 2024-09-23 (Journal Article)
Impact factor: 7.3; JCR: Q1 (Automation & Control Systems); Region: 1 (Engineering & Technology)
URL: https://ieeexplore.ieee.org/document/10689563/
Citations: 0
Abstract
The learning inefficiency of reinforcement learning (RL) from scratch hinders its practical application to continuous robotic tracking control, especially for high-dimensional robots. This article proposes a data-informed residual reinforcement learning (DR-RL)-based robotic tracking control scheme applicable to robots with high dimensionality. The proposed DR-RL methodology outperforms common RL methods in sample efficiency and scalability. Specifically, we first decouple the original robot into low-dimensional robotic subsystems, and then utilize one-step backward data to construct incremental subsystems that are equivalent model-free representations of the decoupled robotic subsystems. The formulated incremental subsystems allow for parallel learning to relieve the computation load and provide mathematical descriptions of robotic movements for theoretical analysis. Then, we apply DR-RL to learn the tracking control policy, a combination of an incremental base policy and an incremental residual policy, under a parallel learning architecture. The incremental residual policy uses the guidance from the incremental base policy as its learning initialization and further learns from interactions with environments to endow the tracking control policy with adaptability to dynamically changing environments. Our proposed DR-RL-based tracking control scheme is developed with rigorous theoretical analysis of system stability and weight convergence. The effectiveness of our proposed method is validated numerically on a 7-DoF KUKA iiwa robot manipulator and experimentally on a 3-DoF robot manipulator on which counterpart RL methods fail.
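The core residual-policy idea described in the abstract, a tracking action formed by summing a fixed base policy with a residual correction whose weights are adapted online, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class name, the proportional base policy, the feature choice, and the learning-rate update rule are all assumptions for illustration.

```python
import numpy as np


class ResidualTrackingController:
    """Illustrative sketch of a residual tracking policy: total action =
    base policy + learned residual. All gains, features, and update rules
    here are assumed for illustration, not taken from the paper."""

    def __init__(self, kp: float = 2.0):
        self.kp = kp              # base-policy proportional gain (assumed)
        self.w = np.zeros(2)      # residual-policy weights, adapted online

    def _features(self, error: np.ndarray) -> np.ndarray:
        # Hand-picked basis features of the tracking error (assumed).
        return np.array([error[0], error[0] ** 3])

    def base_action(self, error: np.ndarray) -> np.ndarray:
        # Base policy: a simple proportional tracking term that supplies
        # the initial guidance for the residual learner.
        return -self.kp * error

    def residual_action(self, error: np.ndarray) -> np.ndarray:
        # Residual policy: a linear function of the error features.
        return np.array([self.w @ self._features(error)])

    def action(self, error: np.ndarray) -> np.ndarray:
        # Total tracking policy = base policy + residual policy.
        return self.base_action(error) + self.residual_action(error)

    def update_residual(self, error: np.ndarray, signal: float,
                        lr: float = 0.01) -> None:
        # Gradient-style weight update driven by a scalar learning signal
        # obtained from interaction with the environment.
        self.w += lr * signal * self._features(error)
```

With zero residual weights the controller reduces to the base policy, so learning starts from the base policy's behavior and only the correction term is adapted; this is what gives residual schemes their sample-efficiency advantage over learning the whole policy from scratch.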
Journal Introduction:
IEEE/ASME Transactions on Mechatronics publishes high-quality technical papers on technological advances in mechatronics. A primary purpose of the IEEE/ASME Transactions on Mechatronics is to provide an archival publication that encompasses both theory and practice. Papers published in the IEEE/ASME Transactions on Mechatronics disclose significant new knowledge needed to implement intelligent mechatronics systems, from analysis and design through simulation and hardware and software implementation. The Transactions also contains a letters section dedicated to rapid publication of short correspondence items concerning new research results.