Embedding expert demonstrations into clustering buffer for effective deep reinforcement learning

IF 2.7 | CAS Tier 3 (Engineering & Technology) | JCR Q2 (COMPUTER SCIENCE, INFORMATION SYSTEMS) | Frontiers of Information Technology & Electronic Engineering | Pub Date: 2023-12-07 | DOI: 10.1631/fitee.2300084
Shihmin Wang, Binqi Zhao, Zhengfeng Zhang, Junping Zhang, Jian Pu
{"title":"Embedding expert demonstrations into clustering buffer for effective deep reinforcement learning","authors":"Shihmin Wang, Binqi Zhao, Zhengfeng Zhang, Junping Zhang, Jian Pu","doi":"10.1631/fitee.2300084","DOIUrl":null,"url":null,"abstract":"<p>As one of the most fundamental topics in reinforcement learning (RL), sample efficiency is essential to the deployment of deep RL algorithms. Unlike most existing exploration methods that sample an action from different types of posterior distributions, we focus on the policy sampling process and propose an efficient selective sampling approach to improve sample efficiency by modeling the internal hierarchy of the environment. Specifically, we first employ clustering methods in the policy sampling process to generate an action candidate set. Then we introduce a clustering buffer for modeling the internal hierarchy, which consists of on-policy data, off-policy data, and expert data to evaluate actions from the clusters in the action candidate set in the exploration stage. In this way, our approach is able to take advantage of the supervision information in the expert demonstration data. Experiments on six different continuous locomotion environments demonstrate superior reinforcement learning performance and faster convergence of selective sampling. In particular, on the LGSVL task, our method can reduce the number of convergence steps by 46.7% and the convergence time by 28.5%. Furthermore, our code is open-source for reproducibility. The code is available at https://github.com/Shihwin/SelectiveSampling.</p>","PeriodicalId":12608,"journal":{"name":"Frontiers of Information Technology & Electronic Engineering","volume":"20 1","pages":""},"PeriodicalIF":2.7000,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers of Information Technology & Electronic Engineering","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1631/fitee.2300084","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

As one of the most fundamental topics in reinforcement learning (RL), sample efficiency is essential to the deployment of deep RL algorithms. Unlike most existing exploration methods, which sample an action from different types of posterior distributions, we focus on the policy sampling process and propose an efficient selective sampling approach that improves sample efficiency by modeling the internal hierarchy of the environment. Specifically, we first employ clustering methods in the policy sampling process to generate an action candidate set. We then introduce a clustering buffer for modeling the internal hierarchy, which consists of on-policy data, off-policy data, and expert data, and use it to evaluate actions from the clusters in the candidate set during the exploration stage. In this way, our approach can take advantage of the supervision information in the expert demonstration data. Experiments on six different continuous locomotion environments show that selective sampling achieves superior reinforcement learning performance and faster convergence. In particular, on the LGSVL task, our method reduces the number of convergence steps by 46.7% and the convergence time by 28.5%. Furthermore, our code is open source for reproducibility and is available at https://github.com/Shihwin/SelectiveSampling.
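The abstract describes the mechanism only at a high level. As a rough illustration, the Python sketch below shows one plausible reading of the pipeline: the policy proposes a batch of actions, k-means groups them into an action candidate set, and a buffer mixing on-policy, off-policy, and expert transitions scores the cluster centers so that nearby expert demonstrations can steer the choice. Every name here (ClusteringBuffer, score, select_action) and the similarity-weighted scoring rule are hypothetical, not the authors' implementation; the real code is in the repository linked above.

# Minimal illustrative sketch (not the authors' implementation). It mimics the
# abstract's two ideas: clustering sampled actions into a candidate set, and
# scoring candidates against a buffer that mixes on-policy, off-policy, and
# expert data. All class and function names here are hypothetical.
import numpy as np
from sklearn.cluster import KMeans


class ClusteringBuffer:
    """Toy buffer holding transitions from three sources."""

    def __init__(self):
        self.data = {"on_policy": [], "off_policy": [], "expert": []}

    def add(self, source, state, action, reward):
        self.data[source].append((state, action, reward))

    def score(self, state, action):
        # Hypothetical scoring rule: similarity-weighted average reward over
        # all stored transitions, so expert demonstrations near (state, action)
        # pull its score up. The paper's actual evaluation rule may differ.
        total, weight = 0.0, 1e-8
        for transitions in self.data.values():
            for s, a, r in transitions:
                w = np.exp(-np.linalg.norm(np.concatenate([s - state, a - action])))
                total += w * r
                weight += w
        return total / weight


def select_action(sample_action, state, buffer, n_samples=64, n_clusters=4):
    """Cluster sampled actions and return the best-scoring cluster center."""
    actions = np.stack([sample_action(state) for _ in range(n_samples)])
    centers = KMeans(n_clusters=n_clusters, n_init=10).fit(actions).cluster_centers_
    scores = [buffer.score(state, c) for c in centers]
    return centers[int(np.argmax(scores))]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    buf = ClusteringBuffer()
    buf.add("expert", np.zeros(3), np.ones(2), 1.0)  # one expert transition
    action = select_action(lambda s: rng.normal(size=2), np.zeros(3), buf)
    print(action)

In this toy version the expert demonstrations influence selection only through the similarity-weighted score; the paper's clustering buffer presumably exploits the supervision signal in the demonstrations more directly.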

Source journal
Frontiers of Information Technology & Electronic Engineering
Categories: COMPUTER SCIENCE, INFORMATION SYSTEMS; COMPUTER SCIENCE, SOFTWARE ENGINEERING
CiteScore: 6.00
Self-citation rate: 10.00%
Annual publications: 1372
Journal description: Frontiers of Information Technology & Electronic Engineering (ISSN 2095-9184, monthly), formerly known as Journal of Zhejiang University SCIENCE C (Computers & Electronics) (2010-2014), is an international peer-reviewed journal launched by the Chinese Academy of Engineering (CAE) and Zhejiang University and co-published by Springer and Zhejiang University Press. FITEE aims to publish the latest applications, principles, and algorithms in the broad area of Electrical and Electronic Engineering, including but not limited to Computer Science, Information Sciences, Control, Automation, and Telecommunications. Article types include research articles, review articles, science letters, perspectives, and new technical notes and methods.