HiPPO: Enhancing proximal policy optimization with highlight replay
Shutong Zhang, Xing Chen, Zhaogeng Liu, Hechang Chen, Yi Chang
Pattern Recognition, Volume 162, Article 111408 (published 2025-01-31). DOI: 10.1016/j.patcog.2025.111408. URL: https://www.sciencedirect.com/science/article/pii/S0031320325000688
Citations: 0
Abstract
Sample efficiency remains a central challenge for policy gradient methods in reinforcement learning. The success of experience replay demonstrates the value of leveraging historical experiences, typically through off-policy techniques that help approximate policy learning algorithms reuse as many interaction samples as possible while keeping the approximate policy aligned with the target objective. However, an inaccurate approximation can harm the actual optimization, producing current experiences that are poorer than past ones. We propose Highlight Replay Enhanced Proximal Policy Optimization (HiPPO) to address this challenge. Specifically, HiPPO optimizes by highlighting policies and introducing a penalty reward function for constrained optimization, which relaxes the policy-similarity constraint and improves adaptability to historical experiences. Empirical studies show that HiPPO outperforms state-of-the-art algorithms on MuJoCo continuous control tasks in both performance and learning speed. An in-depth analysis of the experimental results validates the effectiveness of the highlight replay and penalty reward function employed in the proposed method.
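The abstract does not give HiPPO's exact objective, but the general idea it describes can be sketched: a PPO-style clipped surrogate on fresh on-policy data, combined with an extra term that learns from replayed high-return ("highlight") samples under a soft penalty rather than a hard similarity clip. The sketch below is a minimal illustration under those assumptions; the function names, the quadratic divergence penalty, and the coefficient `penalty_coef` are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only -- NOT the paper's formulation.
# Standard PPO clipped loss plus a hypothetical penalty-based term
# on replayed "highlight" samples (assumed high-advantage experiences).
import torch

def ppo_clip_loss(log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Standard PPO clipped surrogate objective (returned as a loss to minimize)."""
    ratio = torch.exp(log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

def highlight_penalty_loss(log_probs, replay_log_probs, replay_advantages,
                           penalty_coef=0.5):
    """Hypothetical term for replayed highlight samples: reuse their stored
    advantages, but replace the hard clip with a quadratic penalty on the
    importance ratio so the similarity constraint is only softly enforced."""
    ratio = torch.exp(log_probs - replay_log_probs)
    surrogate = ratio * replay_advantages
    divergence_penalty = penalty_coef * (ratio - 1.0).pow(2)
    return -(surrogate - divergence_penalty).mean()

# Toy usage with random tensors standing in for minibatch statistics.
torch.manual_seed(0)
cur_logp = torch.randn(64, requires_grad=True)          # current-policy log-probs
old_logp = cur_logp.detach() + 0.05 * torch.randn(64)   # behavior-policy log-probs
adv = torch.randn(64)                                    # fresh on-policy advantages

hl_logp = torch.randn(32, requires_grad=True)            # log-probs on replayed states
hl_old_logp = hl_logp.detach() + 0.1 * torch.randn(32)   # stored behavior log-probs
hl_adv = torch.rand(32)                                   # highlight samples: positive advantages

total_loss = (ppo_clip_loss(cur_logp, old_logp, adv)
              + highlight_penalty_loss(hl_logp, hl_old_logp, hl_adv))
total_loss.backward()
print(float(total_loss))
```

In this reading, the penalty term plays the role the abstract attributes to the "penalty reward function": it lets the update exploit replayed high-return experiences while only softly discouraging divergence from the policy that generated them.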
About the journal:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.