Uncertainty Estimation based Intrinsic Reward For Efficient Reinforcement Learning

Chao Chen, Tianjiao Wan, Peichang Shi, Bo Ding, Zijian Gao, Dawei Feng
DOI: 10.1109/JCC56315.2022.00008
Published in: 2022 IEEE International Conference on Joint Cloud Computing (JCC), August 2022
Citations: 1

Abstract

In reinforcement learning, the extrinsic reward is a core driver of the learning process, yet it can be very sparse or entirely absent. In response, researchers have proposed intrinsic rewards, for example encouraging the agent to visit novel states via prediction error. However, deep prediction models can produce over-confident, miscalibrated predictions. To mitigate the impact of inaccurate predictions, previous work applied deep ensembles and achieved superior results, at the cost of increased computation and storage. In this paper, inspired by uncertainty estimation, we leverage Monte Carlo Dropout to generate an intrinsic reward from the perspective of predictive uncertainty, with the goal of reducing the demand for computing resources while retaining strong performance. Using this simple yet effective approach, we conduct extensive experiments across a variety of benchmark environments. The results suggest that our method achieves competitive final scores and runs faster, while requiring far fewer computing resources and less storage space.
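The abstract does not give implementation details, but the core idea it names (Monte Carlo Dropout as a cheap substitute for a deep ensemble) can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' code: it keeps dropout active at inference time, runs several stochastic forward passes of a toy prediction model, and uses the variance across passes as the intrinsic reward. All names, network sizes, and the dropout rate are illustrative assumptions.

```python
import numpy as np

class MCDropoutPredictor:
    """Tiny MLP forward model with Monte Carlo Dropout.

    Unlike standard inference, dropout stays ON so that repeated
    forward passes give different predictions; their spread is a
    proxy for epistemic (model) uncertainty.
    """

    def __init__(self, in_dim, hidden_dim, out_dim, p_drop=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden_dim))
        self.W2 = rng.normal(0.0, 0.1, (hidden_dim, out_dim))
        self.p_drop = p_drop
        self.rng = rng

    def stochastic_forward(self, state):
        h = np.maximum(state @ self.W1, 0.0)             # ReLU hidden layer
        mask = self.rng.random(h.shape) > self.p_drop    # fresh dropout mask each call
        h = h * mask / (1.0 - self.p_drop)               # inverted dropout scaling
        return h @ self.W2

def intrinsic_reward(model, state, n_samples=10):
    """Mean predictive variance over n stochastic passes.

    One model sampled n times stands in for an ensemble of n models,
    which is where the compute/storage savings come from.
    """
    preds = np.stack([model.stochastic_forward(state) for _ in range(n_samples)])
    return float(preds.var(axis=0).mean())

model = MCDropoutPredictor(in_dim=4, hidden_dim=16, out_dim=4, seed=1)
state = np.ones(4)
r_int = intrinsic_reward(model, state, n_samples=20)
```

In a full agent, such a bonus would typically be mixed into the training signal as something like `r = r_ext + beta * r_int` (the weighting scheme here is an assumption, not taken from the paper).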