{"title":"Impedance Learning-Based Adaptive Force Tracking for Robot on Unknown Terrains","authors":"Yanghong Li;Li Zheng;Yahao Wang;Erbao Dong;Shiwu Zhang","doi":"10.1109/TRO.2025.3530345","DOIUrl":null,"url":null,"abstract":"Aiming at the robust force tracking challenge for robots in continuous contact with uncertain environments, a novel adaptive variable impedance control policy based on deep reinforcement learning (DRL) is proposed in this article. The policy includes a neural network feedforward controller and a variable impedance feedback controller. Based on the DRL algorithm, the iterative network feedforward controller explores and prelearns the optimal policy for impedance tuning in simulation scenarios with randomly generated terrain. The converged results are then used as feedforward inputs in the variable impedance feedback controller to improve the force-tracking performance of the robot during contact. A simplified dynamic contact model between the robot and the uncertain environment called the “couch model,” which satisfies the Lipschiz continuity condition, is developed to provide boundary conditions for the safe transfer of capabilities learned in simulation to real robots. Unlike the exhaustive example that relies on the completeness of the learning samples, this article gives theoretical proofs of the stability and convergence of the proposed control policy via Lyapunov’s theorem and contraction mapping principle. The control method proposed in this article is more interpretable and shows higher sample utilization efficiency and generalization ability in simulations and experiments.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"1404-1420"},"PeriodicalIF":9.4000,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Robotics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10842469/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ROBOTICS","Score":null,"Total":0}
引用次数: 0
Abstract
Aiming at the robust force-tracking challenge for robots in continuous contact with uncertain environments, this article proposes a novel adaptive variable impedance control policy based on deep reinforcement learning (DRL). The policy consists of a neural network feedforward controller and a variable impedance feedback controller. Using the DRL algorithm, the network feedforward controller iteratively explores and prelearns the optimal impedance-tuning policy in simulation scenarios with randomly generated terrain. The converged results are then used as feedforward inputs to the variable impedance feedback controller to improve the force-tracking performance of the robot during contact. A simplified dynamic contact model between the robot and the uncertain environment, called the “couch model,” which satisfies the Lipschitz continuity condition, is developed to provide boundary conditions for safely transferring the capabilities learned in simulation to real robots. Unlike exhaustive sampling-based approaches that rely on the completeness of the learning samples, this article gives theoretical proofs of the stability and convergence of the proposed control policy via Lyapunov’s theorem and the contraction mapping principle. The proposed control method is more interpretable and shows higher sample-utilization efficiency and generalization ability in simulations and experiments.
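To make the architecture described above concrete, the sketch below shows a generic one-dimensional admittance-form variable impedance force-tracking loop in Python. It is an illustration only, not the authors' implementation: the function `feedforward_impedance` is a hypothetical placeholder standing in for the paper's pretrained DRL feedforward policy, and all names and numerical values (`admittance_step`, `M`, `B0`, `k_env`, `f_des`, the toy tanh rule) are assumptions chosen for the example.

```python
import numpy as np

def feedforward_impedance(force_error):
    """Placeholder for a pretrained DRL feedforward policy (hypothetical).

    Returns an additive damping adjustment from the current force error;
    the tanh rule here is a toy stand-in, not the learned policy."""
    return 40.0 * np.tanh(abs(force_error) / 10.0)

def admittance_step(x, xd, f_ext, f_des, dt=1e-3, M=1.0, B0=60.0):
    """One step of a 1-D variable-impedance (admittance-form) force-tracking law.

    Target dynamics:  M * xdd + B * xd = f_des - f_ext
    x     : commanded position along the contact normal (larger = deeper contact)
    f_ext : measured contact force, f_des : desired contact force
    At equilibrium (xd = xdd = 0) the contact force equals f_des.
    """
    B = B0 + feedforward_impedance(f_des - f_ext)  # variable damping term
    xdd = ((f_des - f_ext) - B * xd) / M           # solve the target dynamics
    xd_next = xd + xdd * dt                        # semi-implicit Euler update
    x_next = x + xd_next * dt
    return x_next, xd_next

if __name__ == "__main__":
    # Toy environment: linear spring of stiffness unknown to the controller.
    k_env, f_des = 8000.0, 15.0
    x, xd = 0.0, 0.0
    for _ in range(5000):
        f_ext = k_env * max(x, 0.0)   # contact force from penetration depth
        x, xd = admittance_step(x, xd, f_ext, f_des)
    print(f"steady-state force: {k_env * x:.2f} N (target {f_des} N)")
```

In this simplified form only the damping is modulated, so the commanded position converges until the measured contact force matches the desired force regardless of the environment stiffness; the paper's full scheme additionally learns the impedance adjustments offline with DRL on randomly generated terrain and provides the stability and convergence guarantees summarized in the abstract.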
Journal Introduction:
The IEEE Transactions on Robotics (T-RO) is dedicated to publishing fundamental papers covering all facets of robotics, drawing on interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, and beyond. From industrial applications to service and personal assistants, surgical operations to space, underwater, and remote exploration, robots and intelligent machines play pivotal roles across various domains, including entertainment, safety, search and rescue, military applications, agriculture, and intelligent vehicles.
Special emphasis is placed on intelligent machines and systems designed for unstructured environments, where a significant portion of the environment remains unknown and beyond direct sensing or control.