Reinforcement learning method based on sample regularization and adaptive learning rate for AGV path planning

IF 5.5 · CAS Tier 2, Computer Science · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Neurocomputing · Pub Date: 2024-11-06 · DOI: 10.1016/j.neucom.2024.128820
Jun Nie, Guihua Zhang, Xiao Lu, Haixia Wang, Chunyang Sheng, Lijie Sun
{"title":"基于样本正则化和自适应学习率的强化学习法,用于 AGV 路径规划","authors":"Jun Nie ,&nbsp;Guihua Zhang ,&nbsp;Xiao Lu ,&nbsp;Haixia Wang ,&nbsp;Chunyang Sheng ,&nbsp;Lijie Sun","doi":"10.1016/j.neucom.2024.128820","DOIUrl":null,"url":null,"abstract":"<div><div>This paper proposes the proximal policy optimization (PPO) method based on sample regularization (SR) and adaptive learning rate (ALR) to address the issues of limited exploration ability and slow convergence speed in Autonomous Guided Vehicle (AGV) path planning using reinforcement learning algorithms in dynamic environments. Firstly, the regularization term based on empirical samples is designed to solve the bias and imbalance issues of training samples, and the sample regularization is added to the objective function to improve the policy selectivity of the PPO algorithm, thereby increasing the AGV’s exploration ability during the training process in the working environment. Secondly, the Fisher information matrix of the Kullback-Leibler (KL) divergence approximation and the KL divergence constraint term are exploited to design the policy update mechanism based on the dynamically adjustable adaptive learning rate throughout training. The method considers the geometric structure of the parameter space and the change of the policy gradient, aiming to optimize parameter update direction and enhance convergence speed and stability of the algorithm. Finally, the AGV path planning scheme based on reinforcement learning is established for simulation verification and comparations in two-dimensional raster map and Gazebo 3D simulation environment. 
Simulation results verify the feasibility and superiority of the proposed method applied to the AGV path planning problem.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"614 ","pages":"Article 128820"},"PeriodicalIF":5.5000,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Reinforcement learning method based on sample regularization and adaptive learning rate for AGV path planning\",\"authors\":\"Jun Nie ,&nbsp;Guihua Zhang ,&nbsp;Xiao Lu ,&nbsp;Haixia Wang ,&nbsp;Chunyang Sheng ,&nbsp;Lijie Sun\",\"doi\":\"10.1016/j.neucom.2024.128820\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>This paper proposes the proximal policy optimization (PPO) method based on sample regularization (SR) and adaptive learning rate (ALR) to address the issues of limited exploration ability and slow convergence speed in Autonomous Guided Vehicle (AGV) path planning using reinforcement learning algorithms in dynamic environments. Firstly, the regularization term based on empirical samples is designed to solve the bias and imbalance issues of training samples, and the sample regularization is added to the objective function to improve the policy selectivity of the PPO algorithm, thereby increasing the AGV’s exploration ability during the training process in the working environment. Secondly, the Fisher information matrix of the Kullback-Leibler (KL) divergence approximation and the KL divergence constraint term are exploited to design the policy update mechanism based on the dynamically adjustable adaptive learning rate throughout training. The method considers the geometric structure of the parameter space and the change of the policy gradient, aiming to optimize parameter update direction and enhance convergence speed and stability of the algorithm. 
Finally, the AGV path planning scheme based on reinforcement learning is established for simulation verification and comparations in two-dimensional raster map and Gazebo 3D simulation environment. Simulation results verify the feasibility and superiority of the proposed method applied to the AGV path planning problem.</div></div>\",\"PeriodicalId\":19268,\"journal\":{\"name\":\"Neurocomputing\",\"volume\":\"614 \",\"pages\":\"Article 128820\"},\"PeriodicalIF\":5.5000,\"publicationDate\":\"2024-11-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neurocomputing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0925231224015911\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231224015911","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

This paper proposes a proximal policy optimization (PPO) method based on sample regularization (SR) and an adaptive learning rate (ALR) to address the limited exploration ability and slow convergence speed of reinforcement learning algorithms applied to Autonomous Guided Vehicle (AGV) path planning in dynamic environments. First, a regularization term based on empirical samples is designed to mitigate the bias and imbalance of the training samples; adding this sample regularization to the objective function improves the policy selectivity of the PPO algorithm and thereby increases the AGV's exploration ability during training in the working environment. Second, the Fisher information matrix approximation of the Kullback-Leibler (KL) divergence and a KL divergence constraint term are exploited to design a policy update mechanism whose learning rate is dynamically adjusted throughout training. The method takes into account the geometric structure of the parameter space and changes in the policy gradient, optimizing the parameter update direction and enhancing the convergence speed and stability of the algorithm. Finally, a reinforcement-learning-based AGV path planning scheme is established for simulation verification and comparison on a two-dimensional raster map and in the Gazebo 3D simulation environment. Simulation results verify the feasibility and superiority of the proposed method on the AGV path planning problem.
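The two ingredients described in the abstract — a regularization term added to the PPO objective to encourage exploration, and a KL-divergence-driven adaptive learning rate — can be pictured with a rough, self-contained sketch. The paper's exact formulation is not reproduced here; the function names, the entropy-style stand-in for the sample regularizer, and the threshold constants below are all illustrative assumptions:

```python
import numpy as np

def ppo_objective(ratio, advantage, clip_eps=0.2):
    """Standard PPO clipped surrogate objective (to be maximized).

    ratio     : pi_new(a|s) / pi_old(a|s) for each sample
    advantage : estimated advantage for each sample
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    return np.minimum(unclipped, clipped).mean()

def sample_regularizer(probs, lam=0.01):
    """Hypothetical exploration bonus standing in for the paper's
    sample-regularization term: an entropy penalty that rewards
    policies which do not over-concentrate on a few actions."""
    probs = np.clip(probs, 1e-8, 1.0)
    entropy = -(probs * np.log(probs)).sum(axis=-1).mean()
    return lam * entropy

def adaptive_lr(base_lr, kl, kl_target=0.01):
    """Shrink the learning rate when the measured KL divergence between
    the old and new policies overshoots a target, and grow it when the
    update is overly timid (thresholds are illustrative)."""
    if kl > 2.0 * kl_target:
        return base_lr / 1.5
    if kl < 0.5 * kl_target:
        return base_lr * 1.5
    return base_lr
```

In an actual trainer, the regularizer would be added to the clipped surrogate before taking the gradient step, and `adaptive_lr` would be called after each policy update using the KL divergence measured between the old and new policies.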
Source journal: Neurocomputing (Engineering/Technology — Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Articles per year: 1382
Review time: 70 days
About the journal: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.