Investigation of Maximization Bias in Sarsa Variants

Ganesh Tata, Eric Austin
DOI: 10.1109/SSCI50451.2021.9660081
Published in: 2021 IEEE Symposium Series on Computational Intelligence (SSCI)
Publication date: 2021-12-05
Citations: 1

Abstract

The overestimation of action values caused by randomness in rewards can harm the ability to learn and the performance of reinforcement learning agents. This maximization bias has been well established and studied in the off-policy Q-learning algorithm. However, it has been studied far less for on-policy algorithms such as Sarsa and its variants. We conduct a thorough empirical analysis of Sarsa, Expected Sarsa, and n-step Sarsa. We find that the on-policy Sarsa variants suffer from less maximization bias than off-policy Q-learning in several test environments. We show how the choice of hyper-parameters impacts the severity of the bias. A decaying learning rate schedule results in more maximization bias than a fixed learning rate. Larger learning rates lead to larger overestimation. A larger exploration parameter leads to worse bias in Q-learning but less bias in the on-policy algorithms. We also show that a larger variance in rewards leads to more bias in both Q-learning and Sarsa, but Sarsa is less affected than Q-learning.
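The maximization bias the abstract describes can be illustrated with a minimal sketch (not the paper's experiments): a single state with several actions whose true values are all zero and whose rewards are noisy. Taking the maximum over noisy sample estimates, as the Q-learning target max_a Q(s', a) does, yields a positive bias; an expectation over an ε-greedy policy, as in the Expected Sarsa target, weights the non-greedy actions too and reduces the bias. The action count, sample count, and ε below are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# One state, n_actions actions, every true action value is 0,
# rewards drawn from N(0, 1). Each Q(a) is estimated from a few
# noisy samples, so the estimates scatter around 0.
n_actions, n_samples, n_runs, eps = 10, 5, 10_000, 0.3

max_estimates, expected_estimates = [], []
for _ in range(n_runs):
    # Sample-mean estimate of each action's value (true value is 0).
    q = rng.normal(0.0, 1.0, size=(n_actions, n_samples)).mean(axis=1)

    # Q-learning-style target: max over the noisy estimates.
    max_estimates.append(q.max())

    # Expected Sarsa-style target: expectation of Q under an
    # eps-greedy policy built from the same estimates. The bias is
    # smaller (and shrinks further as eps grows), though still positive.
    pi = np.full(n_actions, eps / n_actions)
    pi[q.argmax()] += 1.0 - eps
    expected_estimates.append((pi * q).sum())

print(f"mean of max_a Q(a):  {np.mean(max_estimates):+.3f}")   # positive bias
print(f"mean of E_pi[Q(a)]:  {np.mean(expected_estimates):+.3f}")
```

Both averages should come out above the true value of 0, with the ε-greedy expectation noticeably closer to it, consistent with the paper's finding that a larger exploration parameter reduces bias in the on-policy algorithms while worsening it for the pure max operator.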