The Role of Deep Learning Regularizations on Actors in Offline RL

Denis Tarasov, Anja Surina, Caglar Gulcehre
{"title":"深度学习正则化在离线 RL 中对行动者的作用","authors":"Denis Tarasov, Anja Surina, Caglar Gulcehre","doi":"arxiv-2409.07606","DOIUrl":null,"url":null,"abstract":"Deep learning regularization techniques, such as \\emph{dropout}, \\emph{layer\nnormalization}, or \\emph{weight decay}, are widely adopted in the construction\nof modern artificial neural networks, often resulting in more robust training\nprocesses and improved generalization capabilities. However, in the domain of\n\\emph{Reinforcement Learning} (RL), the application of these techniques has\nbeen limited, usually applied to value function estimators\n\\citep{hiraoka2021dropout, smith2022walk}, and may result in detrimental\neffects. This issue is even more pronounced in offline RL settings, which bear\ngreater similarity to supervised learning but have received less attention.\nRecent work in continuous offline RL has demonstrated that while we can build\nsufficiently powerful critic networks, the generalization of actor networks\nremains a bottleneck. In this study, we empirically show that applying standard\nregularization techniques to actor networks in offline RL actor-critic\nalgorithms yields improvements of 6\\% on average across two algorithms and\nthree different continuous D4RL domains.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The Role of Deep Learning Regularizations on Actors in Offline RL\",\"authors\":\"Denis Tarasov, Anja Surina, Caglar Gulcehre\",\"doi\":\"arxiv-2409.07606\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning regularization techniques, such as \\\\emph{dropout}, \\\\emph{layer\\nnormalization}, or \\\\emph{weight decay}, are widely adopted in the construction\\nof modern artificial neural networks, often resulting in more robust training\\nprocesses and improved generalization capabilities. However, in the domain of\\n\\\\emph{Reinforcement Learning} (RL), the application of these techniques has\\nbeen limited, usually applied to value function estimators\\n\\\\citep{hiraoka2021dropout, smith2022walk}, and may result in detrimental\\neffects. This issue is even more pronounced in offline RL settings, which bear\\ngreater similarity to supervised learning but have received less attention.\\nRecent work in continuous offline RL has demonstrated that while we can build\\nsufficiently powerful critic networks, the generalization of actor networks\\nremains a bottleneck. 
In this study, we empirically show that applying standard\\nregularization techniques to actor networks in offline RL actor-critic\\nalgorithms yields improvements of 6\\\\% on average across two algorithms and\\nthree different continuous D4RL domains.\",\"PeriodicalId\":501301,\"journal\":{\"name\":\"arXiv - CS - Machine Learning\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.07606\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07606","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Deep learning regularization techniques, such as dropout, layer normalization, or weight decay, are widely adopted in the construction of modern artificial neural networks, often resulting in more robust training processes and improved generalization capabilities. However, in the domain of Reinforcement Learning (RL), the application of these techniques has been limited, usually applied to value function estimators \citep{hiraoka2021dropout, smith2022walk}, and may result in detrimental effects. This issue is even more pronounced in offline RL settings, which bear greater similarity to supervised learning but have received less attention. Recent work in continuous offline RL has demonstrated that while we can build sufficiently powerful critic networks, the generalization of actor networks remains a bottleneck. In this study, we empirically show that applying standard regularization techniques to actor networks in offline RL actor-critic algorithms yields improvements of 6% on average across two algorithms and three different continuous D4RL domains.
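
Purely as an illustration of the idea described above (not the architecture or hyperparameters used in the paper), the sketch below shows how the three regularizers named in the abstract, dropout, layer normalization, and weight decay, can be attached to an actor (policy) network in a typical continuous-control actor-critic setup. The layer sizes, dropout rate, and weight-decay coefficient are placeholder values.

```python
# Illustrative sketch only: a regularized actor network for continuous control.
# Layer sizes, dropout rate, and weight-decay coefficient are placeholders,
# not the hyperparameters reported in the paper.
import torch
import torch.nn as nn

class RegularizedActor(nn.Module):
    def __init__(self, state_dim: int, action_dim: int,
                 hidden_dim: int = 256, dropout: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim),
            nn.LayerNorm(hidden_dim),   # layer normalization
            nn.ReLU(),
            nn.Dropout(dropout),        # dropout
            nn.Linear(hidden_dim, hidden_dim),
            nn.LayerNorm(hidden_dim),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, action_dim),
            nn.Tanh(),                  # actions bounded to [-1, 1]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# Weight decay is applied through the optimizer rather than the module itself.
actor = RegularizedActor(state_dim=17, action_dim=6)  # dims chosen arbitrarily
optimizer = torch.optim.AdamW(actor.parameters(), lr=3e-4, weight_decay=1e-4)
```

Note that dropout and layer normalization live inside the network definition, while weight decay enters through the optimizer (AdamW here), which is the usual way these three regularizers are combined in practice.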