Path-Following Control of Unmanned Underwater Vehicle Based on an Improved TD3 Deep Reinforcement Learning

IEEE Transactions on Control Systems Technology | IF 4.9 | JCR Q1 (Automation & Control Systems) | CAS Tier 2 (Computer Science) | Pub Date: 2024-03-27 | DOI: 10.1109/TCST.2024.3377876
Yexin Fan;Hongyang Dong;Xiaowei Zhao;Petr Denissenko
{"title":"基于改进型 TD3 深度强化学习的无人潜航器路径跟踪控制","authors":"Yexin Fan;Hongyang Dong;Xiaowei Zhao;Petr Denissenko","doi":"10.1109/TCST.2024.3377876","DOIUrl":null,"url":null,"abstract":"This work proposes an innovative path-following control method, anchored in deep reinforcement learning (DRL), for unmanned underwater vehicles (UUVs). This approach is driven by several new designs, all of which aim to enhance learning efficiency and effectiveness and achieve high-performance UUV control. Specifically, a novel experience replay strategy is designed and integrated within the twin-delayed deep deterministic policy gradient algorithm (TD3). It distinguishes the significance of stored transitions by making a trade-off between rewards and temporal-difference (TD) errors, thus enabling the UUV agent to explore optimal control policies more efficiently. Another major challenge within this control problem arises from action oscillations associated with DRL policies. This issue leads to excessive system wear on actuators and makes real-time application difficult. To mitigate this challenge, a newly improved regularization method is proposed, which provides a moderate level of smoothness to the control policy. Furthermore, a dynamic reward function featuring adaptive constraints is designed to avoid unproductive exploration and expedite learning convergence speed further. Simulation results show that our method garners higher rewards in fewer training episodes compared with mainstream DRL-based control approaches (e.g., deep deterministic policy gradient (DDPG) and vanilla TD3) in UUV applications. Moreover, it can adapt to varying path configurations amid uncertainties and disturbances, all while ensuring high tracking accuracy. Simulation and experimental studies are conducted to verify the effectiveness.","PeriodicalId":13103,"journal":{"name":"IEEE Transactions on Control Systems Technology","volume":null,"pages":null},"PeriodicalIF":4.9000,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Path-Following Control of Unmanned Underwater Vehicle Based on an Improved TD3 Deep Reinforcement Learning\",\"authors\":\"Yexin Fan;Hongyang Dong;Xiaowei Zhao;Petr Denissenko\",\"doi\":\"10.1109/TCST.2024.3377876\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This work proposes an innovative path-following control method, anchored in deep reinforcement learning (DRL), for unmanned underwater vehicles (UUVs). This approach is driven by several new designs, all of which aim to enhance learning efficiency and effectiveness and achieve high-performance UUV control. Specifically, a novel experience replay strategy is designed and integrated within the twin-delayed deep deterministic policy gradient algorithm (TD3). It distinguishes the significance of stored transitions by making a trade-off between rewards and temporal-difference (TD) errors, thus enabling the UUV agent to explore optimal control policies more efficiently. Another major challenge within this control problem arises from action oscillations associated with DRL policies. This issue leads to excessive system wear on actuators and makes real-time application difficult. To mitigate this challenge, a newly improved regularization method is proposed, which provides a moderate level of smoothness to the control policy. 
Furthermore, a dynamic reward function featuring adaptive constraints is designed to avoid unproductive exploration and expedite learning convergence speed further. Simulation results show that our method garners higher rewards in fewer training episodes compared with mainstream DRL-based control approaches (e.g., deep deterministic policy gradient (DDPG) and vanilla TD3) in UUV applications. Moreover, it can adapt to varying path configurations amid uncertainties and disturbances, all while ensuring high tracking accuracy. Simulation and experimental studies are conducted to verify the effectiveness.\",\"PeriodicalId\":13103,\"journal\":{\"name\":\"IEEE Transactions on Control Systems Technology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.9000,\"publicationDate\":\"2024-03-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Control Systems Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10480708/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Control Systems Technology","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10480708/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

This work proposes an innovative path-following control method, anchored in deep reinforcement learning (DRL), for unmanned underwater vehicles (UUVs). This approach is driven by several new designs, all of which aim to enhance learning efficiency and effectiveness and achieve high-performance UUV control. Specifically, a novel experience replay strategy is designed and integrated within the twin-delayed deep deterministic policy gradient algorithm (TD3). It distinguishes the significance of stored transitions by making a trade-off between rewards and temporal-difference (TD) errors, thus enabling the UUV agent to explore optimal control policies more efficiently. Another major challenge within this control problem arises from action oscillations associated with DRL policies. This issue leads to excessive wear on actuators and makes real-time application difficult. To mitigate this challenge, a newly improved regularization method is proposed, which provides a moderate level of smoothness to the control policy. Furthermore, a dynamic reward function featuring adaptive constraints is designed to avoid unproductive exploration and further expedite learning convergence. Simulation results show that our method garners higher rewards in fewer training episodes compared with mainstream DRL-based control approaches (e.g., deep deterministic policy gradient (DDPG) and vanilla TD3) in UUV applications. Moreover, it can adapt to varying path configurations amid uncertainties and disturbances, all while ensuring high tracking accuracy. Simulation and experimental studies are conducted to verify the effectiveness.
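The replay design described above can be made concrete with a short sketch. The paper's exact priority formula is not given in the abstract, so the Python buffer below is only an assumed, minimal illustration: each stored transition receives a priority that blends its reward with its absolute TD error, and sampling is biased toward high-priority transitions in the spirit of prioritized experience replay. The class name RewardTDReplayBuffer, the trade-off weight eta, and the reward squashing are illustrative assumptions, not the authors' implementation.

import numpy as np

class RewardTDReplayBuffer:
    """Minimal sketch of a replay buffer whose sampling priority trades off
    reward against TD error. Illustrative only: the paper's exact formula is
    not given in the abstract, so this convex combination (weighted by `eta`)
    is an assumption."""

    def __init__(self, capacity, eta=0.5, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.eta = eta        # weight on the TD-error term vs. the reward term
        self.alpha = alpha    # how strongly priorities skew the sampling
        self.eps = eps        # keeps every priority strictly positive
        self.storage = []     # (state, action, reward, next_state, done) tuples
        self.priorities = []
        self.pos = 0          # circular write pointer

    def add(self, transition, reward, td_error):
        # Squash the reward to (0, 1) so priorities stay positive even for
        # negative rewards, then blend it with the absolute TD error.
        reward_term = 1.0 / (1.0 + np.exp(-reward))
        priority = (self.eta * abs(td_error)
                    + (1.0 - self.eta) * reward_term + self.eps) ** self.alpha
        if len(self.storage) < self.capacity:
            self.storage.append(transition)
            self.priorities.append(priority)
        else:
            self.storage[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sample indices with probability proportional to priority, so
        # high-reward or high-TD-error transitions are replayed more often.
        probs = np.asarray(self.priorities)
        probs = probs / probs.sum()
        idx = np.random.choice(len(self.storage), size=batch_size, p=probs)
        return [self.storage[i] for i in idx], idx

In a TD3 training loop, the returned indices would typically be used to refresh the stored priorities with freshly computed TD errors after each update.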
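The abstract also flags action oscillation as a practical obstacle and addresses it with an improved regularization method. As a generic illustration only, not the paper's exact regularizer, the sketch below adds a temporal-smoothness penalty to a TD3-style actor loss in PyTorch: the policy is penalized for producing very different actions in consecutive states, which is one common way to smooth a deterministic control policy. The function name actor_loss_with_smoothness and the weight lam are assumptions.

import torch.nn.functional as F

def actor_loss_with_smoothness(actor, critic, state, next_state, lam=0.1):
    # Standard deterministic policy-gradient objective: maximize Q(s, pi(s)),
    # implemented here as minimizing its negative.
    action = actor(state)
    dpg_loss = -critic(state, action).mean()

    # Assumed temporal-smoothness penalty (not the paper's exact formulation):
    # discourage large changes in the action between consecutive states to
    # suppress the oscillatory control signals described in the abstract.
    next_action = actor(next_state)
    smoothness_penalty = F.mse_loss(action, next_action)

    return dpg_loss + lam * smoothness_penalty

In practice, lam would be tuned so that the smoothing term damps actuator chattering without degrading tracking accuracy.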
Source Journal
IEEE Transactions on Control Systems Technology
Category: Engineering & Technology - Engineering: Electrical & Electronic
CiteScore: 10.70
Self-citation rate: 2.10%
Annual publications: 218
Review time: 6.7 months
Journal Description: The IEEE Transactions on Control Systems Technology publishes high-quality technical papers on technological advances in control engineering. The word technology is from the Greek technologia. The modern meaning is a scientific method to achieve a practical purpose. Control Systems Technology includes all aspects of control engineering needed to implement practical control systems, from analysis and design, through simulation and hardware. A primary purpose of the IEEE Transactions on Control Systems Technology is to have an archival publication which will bridge the gap between theory and practice. Papers published in the Transactions disclose significant new knowledge, exploratory developments, or practical applications in all aspects of technology needed to implement control systems, from analysis and design through simulation and hardware.
Latest Articles in This Journal
Predictive Control for Autonomous Driving With Uncertain, Multimodal Predictions
High-Speed Interception Multicopter Control by Image-Based Visual Servoing
Real-Time Mixed-Integer Quadratic Programming for Vehicle Decision-Making and Motion Planning
Hierarchical Control for Vehicle Repositioning in Autonomous Mobility-on-Demand Systems
Sharable Clothoid-Based Continuous Motion Planning for Connected Automated Vehicles