Robust Reinforcement Learning Control Framework for a Quadrotor Unmanned Aerial Vehicle Using Critic Neural Network

Advanced Intelligent Systems (Weinheim an der Bergstrasse, Germany) · IF 6.1 · Q1, Automation & Control Systems
Publication date: 2025-02-02 · DOI: 10.1002/aisy.202400427
Yu Cai, Yefeng Yang, Tao Huang, Boyang Li
{"title":"Robust Reinforcement Learning Control Framework for a Quadrotor Unmanned Aerial Vehicle Using Critic Neural Network","authors":"Yu Cai,&nbsp;Yefeng Yang,&nbsp;Tao Huang,&nbsp;Boyang Li","doi":"10.1002/aisy.202400427","DOIUrl":null,"url":null,"abstract":"<p>This article introduces a novel robust reinforcement learning (RL) control scheme for a quadrotor unmanned aerial vehicle (QUAV) under external disturbances and model uncertainties. First, the translational and rotational motions of the QUAV are decoupled and trained separately to mitigate the computational complexity of the controller design and training process. Then, the proximal policy optimization algorithm with a dual-critic structure is proposed to address the overestimation issue and accelerate the convergence speed of RL controllers. Furthermore, a novel reward function and a robust compensator employing a switch value function are proposed to address model uncertainties and external disturbances. At last, simulation results and comparisons demonstrate the effectiveness and robustness of the proposed RL control framework.</p>","PeriodicalId":93858,"journal":{"name":"Advanced intelligent systems (Weinheim an der Bergstrasse, Germany)","volume":"7 3","pages":""},"PeriodicalIF":6.1000,"publicationDate":"2025-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aisy.202400427","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advanced intelligent systems (Weinheim an der Bergstrasse, Germany)","FirstCategoryId":"1085","ListUrlMain":"https://advanced.onlinelibrary.wiley.com/doi/10.1002/aisy.202400427","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

This article introduces a novel robust reinforcement learning (RL) control scheme for a quadrotor unmanned aerial vehicle (QUAV) subject to external disturbances and model uncertainties. First, the translational and rotational motions of the QUAV are decoupled and trained separately to reduce the computational complexity of the controller design and training process. Then, a proximal policy optimization (PPO) algorithm with a dual-critic structure is proposed to address the value-overestimation issue and accelerate the convergence of the RL controllers. Furthermore, a novel reward function and a robust compensator employing a switch value function are proposed to handle model uncertainties and external disturbances. Finally, simulation results and comparisons demonstrate the effectiveness and robustness of the proposed RL control framework.
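
To make the dual-critic idea concrete, the sketch below shows one plausible reading of it: two independently parameterized state-value critics whose more conservative (minimum) estimate is used when computing PPO advantage targets, analogous to clipped double-Q learning. The network sizes, function names, state dimension, and the min-based combination are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: PPO-style advantage estimation with two critics,
# assuming the "dual-critic structure" combats overestimation by taking
# the minimum of the two value estimates (an assumption; the paper may
# combine the critics differently). All names and sizes are illustrative.
import torch
import torch.nn as nn


class ValueCritic(nn.Module):
    """Small state-value network V(s) for one QUAV sub-controller."""

    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state).squeeze(-1)


def conservative_values(critic_a: ValueCritic, critic_b: ValueCritic,
                        states: torch.Tensor) -> torch.Tensor:
    """Use the smaller of the two value estimates for each state."""
    return torch.min(critic_a(states), critic_b(states))


def gae_advantages(rewards: torch.Tensor, values: torch.Tensor,
                   gamma: float = 0.99, lam: float = 0.95) -> torch.Tensor:
    """Generalized advantage estimation over one finite rollout
    (the terminal state bootstraps to zero)."""
    advantages = torch.zeros_like(rewards)
    last_adv = torch.tensor(0.0)
    for t in reversed(range(len(rewards))):
        next_value = values[t + 1] if t + 1 < len(rewards) else torch.tensor(0.0)
        delta = rewards[t] + gamma * next_value - values[t]
        last_adv = delta + gamma * lam * last_adv
        advantages[t] = last_adv
    return advantages


# Usage example: advantages for a short rollout of, say, the translational
# sub-agent (16 steps, 12-dimensional state are made-up numbers).
states = torch.randn(16, 12)
rewards = torch.randn(16)
critic_a, critic_b = ValueCritic(12), ValueCritic(12)
with torch.no_grad():
    values = conservative_values(critic_a, critic_b, states)
adv = gae_advantages(rewards, values)
```

In this reading, the minimum operation plays the same role as the clipped double-Q trick in off-policy actor-critic methods: a pessimistic value target that suppresses the upward bias a single learned critic can accumulate under noisy quadrotor dynamics.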
