Windows deep transformer Q-networks: an extended variance reduction architecture for partially observable reinforcement learning

Applied Intelligence | Pub Date: 2024-11-27 | DOI: 10.1007/s10489-024-05867-3 | IF 3.5 | JCR Q2 (Computer Science, Artificial Intelligence) | CAS Tier 2 (Computer Science)
Zijian Wang, Bin Wang, Hongbo Dou, Zhongyuan Liu
{"title":"Windows deep transformer Q-networks: an extended variance reduction architecture for partially observable reinforcement learning","authors":"Zijian Wang,&nbsp;Bin Wang,&nbsp;Hongbo Dou,&nbsp;Zhongyuan Liu","doi":"10.1007/s10489-024-05867-3","DOIUrl":null,"url":null,"abstract":"<p>Partial Observability Markov Desicion Process (POMDP) is always worth studying in reinforcement learning (RL) due to its universality in the real world. Compared with Markov Decision Processes (MDP), agents in POMDP cannot fully receive information from the environment, which is an obstacle to traditional RL algorithms. One solution is to establishes a sequence-to-sequence model. As the core of deep Q-networks, Transformer has achieved certain outperformed results in dealing with partial observability problems. Nevertheless, deep Q-network has the issue of over-estimation of Q-value, which leads to unstable input data quality in Transformer. With the accumulation of deviation fast, model performance may decline drastically, resulting in severe errors that are fatal to policy learning. In this paper, we note that the previous Q-value overestimation mitigation model is not suitable for Deep Transformer Q-Networks (DTQN) framework, for DTQN is a sequence-to-sequence model, not merely a value optimization model in traditional RL. Therefore, we propose Windows DTQN, based on the reduction of Q-value variance via the synergistic effect of shallow and deep windows. In particular, Windows DTQN ensembles the historical Q-networks through the shallow windows, and estimates the uncertainty of the Q-networks through the deep windows for weight allocation. Our experiments conducted on gridverse environments demonstrate that our model achieves better results than the current mainstream DQN algorithms in POMDP. Compared to DTQN, Windows DTQN increases the average success rate by 5.1% and the average return by 1.11.</p>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 1","pages":""},"PeriodicalIF":3.5000,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Intelligence","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10489-024-05867-3","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

The Partially Observable Markov Decision Process (POMDP) remains worth studying in reinforcement learning (RL) because of its universality in the real world. Compared with Markov Decision Processes (MDPs), agents in a POMDP cannot fully receive information from the environment, which is an obstacle to traditional RL algorithms. One solution is to establish a sequence-to-sequence model. As the core of deep Q-networks, the Transformer has achieved promising results in dealing with partial observability problems. Nevertheless, deep Q-networks suffer from overestimation of Q-values, which leads to unstable input data quality for the Transformer. As this deviation accumulates quickly, model performance may decline drastically, resulting in severe errors that are fatal to policy learning. In this paper, we note that previous Q-value overestimation mitigation models are not suitable for the Deep Transformer Q-Network (DTQN) framework, because DTQN is a sequence-to-sequence model, not merely a value-optimization model as in traditional RL. We therefore propose Windows DTQN, which reduces Q-value variance through the synergistic effect of shallow and deep windows. Specifically, Windows DTQN ensembles historical Q-networks through the shallow windows and estimates the uncertainty of the Q-networks through the deep windows for weight allocation. Our experiments on gridverse environments demonstrate that the model achieves better results than current mainstream DQN algorithms in POMDPs. Compared to DTQN, Windows DTQN increases the average success rate by 5.1% and the average return by 1.11.
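The full paper is not reproduced here, but the mechanism the abstract describes can be sketched in code. The following is a minimal, hypothetical PyTorch sketch of the shallow/deep-window idea as we read it: recent Q-network snapshots form a shallow-window ensemble, while a longer deep window of snapshots supplies an uncertainty signal used to allocate the ensemble weights. The class and parameter names (`WindowedQEnsemble`, `shallow_len`, `deep_len`) and the exact weighting rule are our assumptions, not the authors' implementation.

```python
import copy
from collections import deque

import torch
import torch.nn as nn


class WindowedQEnsemble:
    """Illustrative sketch only: shallow window = recent Q-network snapshots
    to ensemble; deep window = longer history used to estimate uncertainty
    and allocate ensemble weights. Not the authors' implementation."""

    def __init__(self, q_net: nn.Module, shallow_len: int = 3, deep_len: int = 10):
        self.q_net = q_net
        self.shallow = deque(maxlen=shallow_len)  # recent snapshots (ensembled)
        self.deep = deque(maxlen=deep_len)        # longer history (uncertainty)

    def snapshot(self) -> None:
        """Freeze and store a copy of the current Q-network after an update."""
        frozen = copy.deepcopy(self.q_net).eval()
        for p in frozen.parameters():
            p.requires_grad_(False)
        self.shallow.append(frozen)
        self.deep.append(frozen)

    @torch.no_grad()
    def q_values(self, obs_seq: torch.Tensor) -> torch.Tensor:
        """Uncertainty-weighted ensemble Q-estimate for an observation sequence.

        obs_seq has shape (batch, seq_len, obs_dim), since DTQN-style models
        condition on a history of observations rather than a single state.
        """
        if not self.shallow:                      # no snapshots stored yet
            return self.q_net(obs_seq)

        # Deep window: the mean prediction over a longer history serves as a
        # consensus; disagreement with it is treated as uncertainty (assumed).
        deep_qs = torch.stack([net(obs_seq) for net in self.deep])        # (D, B, A)
        deep_mean = deep_qs.mean(dim=0)                                   # (B, A)

        # Shallow window: weight each recent snapshot by how closely it
        # agrees with the deep-window consensus (softmax over negative
        # mean squared deviation; this weighting rule is an assumption).
        shallow_qs = torch.stack([net(obs_seq) for net in self.shallow])  # (S, B, A)
        dev = ((shallow_qs - deep_mean) ** 2).mean(dim=(1, 2))            # (S,)
        weights = torch.softmax(-dev, dim=0)                              # (S,)
        return (weights.view(-1, 1, 1) * shallow_qs).sum(dim=0)          # (B, A)
```

In a training loop, `snapshot()` would be called after each target-network update and `q_values()` would replace the single-network estimate when computing bootstrap targets. Averaging several slightly different estimators in this way reduces the variance of the max-Q target, which is the overestimation mechanism the abstract points to.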

Source journal: Applied Intelligence (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 6.60
Self-citation rate: 20.80%
Publication volume: 1361
Review time: 5.9 months
Journal introduction: With a focus on research in artificial intelligence and neural networks, this journal addresses issues involving solutions of real-life manufacturing, defense, management, government and industrial problems which are too complex to be solved through conventional approaches and require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance. The journal presents new and original research and technological developments, addressing real and complex issues applicable to difficult problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.
Latest articles in this journal:
- Leverage visual attention and multimodal scene graph for visual navigation
- Rethinking urban region representation: adaptive soft-thresholding and attentive graph learning for robust data fusion
- Kernel Transposed Projection Envelope Linear Discriminant Analysis Mode
- DGKD: Depth-Guided knowledge distillation network for monocular 3D object detection
- Group consensus decision method for probabilistic language complex project scheme based on cloud model and Q-learning algorithm