A Hybrid Online Off-Policy Reinforcement Learning Agent Framework Supported by Transformers.

International Journal of Neural Systems · Pub Date: 2023-12-01 · Epub Date: 2023-10-20 · DOI: 10.1142/S012906572350065X
Enrique Adrian Villarrubia-Martin, Luis Rodriguez-Benitez, Luis Jimenez-Linares, David Muñoz-Valero, Jun Liu
{"title":"A Hybrid Online Off-Policy Reinforcement Learning Agent Framework Supported by Transformers.","authors":"Enrique Adrian Villarrubia-Martin, Luis Rodriguez-Benitez, Luis Jimenez-Linares, David Muñoz-Valero, Jun Liu","doi":"10.1142/S012906572350065X","DOIUrl":null,"url":null,"abstract":"<p><p>Reinforcement learning (RL) is a powerful technique that allows agents to learn optimal decision-making policies through interactions with an environment. However, traditional RL algorithms suffer from several limitations such as the need for large amounts of data and long-term credit assignment, i.e. the problem of determining which actions actually produce a certain reward. Recently, Transformers have shown their capacity to address these constraints in this area of learning in an offline setting. This paper proposes a framework that uses Transformers to enhance the training of online off-policy RL agents and address the challenges described above through self-attention. The proposal introduces a hybrid agent with a mixed policy that combines an online off-policy agent with an offline Transformer agent using the Decision Transformer architecture. By sequentially exchanging the experience replay buffer between the agents, the agent's learning training efficiency is improved in the first iterations and so is the training of Transformer-based RL agents in situations with limited data availability or unknown environments.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2350065"},"PeriodicalIF":0.0000,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of neural systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1142/S012906572350065X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/10/20 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Reinforcement learning (RL) is a powerful technique that allows agents to learn optimal decision-making policies through interaction with an environment. However, traditional RL algorithms suffer from several limitations, such as the need for large amounts of data and the problem of long-term credit assignment, i.e. determining which actions actually produced a given reward. Recently, Transformers have shown that they can address these constraints in the offline RL setting. This paper proposes a framework that uses Transformers to improve the training of online off-policy RL agents and to address the challenges described above through self-attention. The proposal introduces a hybrid agent with a mixed policy that combines an online off-policy agent with an offline Transformer agent based on the Decision Transformer architecture. By sequentially exchanging the experience replay buffer between the two agents, the framework improves the learning efficiency of the online agent in the first iterations and, in turn, the training of Transformer-based RL agents under limited data availability or in unknown environments.

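The abstract describes the mechanism only at a high level: an online off-policy agent and an offline Decision Transformer agent take turns using a shared experience replay buffer. The sketch below illustrates just that exchange loop. It is not the authors' implementation; every class and method name (`ReplayBuffer`, `online_agent.act`, `dt_agent.fit`, `mix_policy`, the `env` interface, the swap schedule) is a hypothetical placeholder assumed for illustration.

```python
from collections import deque
import random


class ReplayBuffer:
    """Shared experience replay buffer exchanged between the two agents (illustrative)."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        # transition = (state, action, reward, next_state, done)
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform sampling for the online off-policy update.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

    def as_trajectories(self):
        # Group flat transitions into episodes so a Decision Transformer-style
        # model can consume them as (return-to-go, state, action) sequences.
        episodes, current = [], []
        for state, action, reward, next_state, done in self.buffer:
            current.append((state, action, reward))
            if done:
                episodes.append(current)
                current = []
        return episodes


def train_hybrid(env, online_agent, dt_agent, buffer, iterations=100, swap_every=10):
    """Alternate between the online off-policy agent and the offline
    Transformer agent, sharing one replay buffer (hypothetical interfaces)."""
    for it in range(iterations):
        # Phase 1: the online agent interacts with the environment and
        # fills the shared buffer with fresh transitions.
        state, done = env.reset(), False
        while not done:
            action = online_agent.act(state)
            next_state, reward, done = env.step(action)
            buffer.add((state, action, reward, next_state, done))
            state = next_state
        online_agent.update(buffer.sample(batch_size=256))

        # Phase 2: periodically hand the buffer to the Transformer agent,
        # which trains offline on the collected trajectories; its policy is
        # then mixed back into the online agent's policy.
        if (it + 1) % swap_every == 0:
            dt_agent.fit(buffer.as_trajectories())
            online_agent.mix_policy(dt_agent)
```

The point mirrored from the abstract is that a single buffer serves both learners: the online agent populates it through interaction, while the Transformer agent consumes it offline as trajectories, so each side compensates for the other's weakness in the early iterations or when data is scarce.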