Act-Then-Measure: Reinforcement Learning for Partially Observable Environments with Active Measuring

Merlijn Krale, T. D. Simão, N. Jansen
{"title":"Act-Then-Measure: Reinforcement Learning for Partially Observable Environments with Active Measuring","authors":"Merlijn Krale, T. D. Simão, N. Jansen","doi":"10.48550/arXiv.2303.08271","DOIUrl":null,"url":null,"abstract":"We study Markov decision processes (MDPs), where agents control when and how they gather information, as formalized by action-contingent noiselessly observable MDPs (ACNO-MPDs). In these models, actions have two components: a control action that influences how the environment changes and a measurement action that affects the agent's observation. To solve ACNO-MDPs, we introduce the act-then-measure (ATM) heuristic, which assumes that we can ignore future state uncertainty when choosing control actions. To decide whether or not to measure, we introduce the concept of measuring value. We show how following this heuristic may lead to shorter policy computation times and prove a bound on the performance loss it incurs. We develop a reinforcement learning algorithm based on the ATM heuristic, using a Dyna-Q variant adapted for partially observable domains, and showcase its superior performance compared to prior methods on a number of partially-observable environments.","PeriodicalId":239898,"journal":{"name":"International Conference on Automated Planning and Scheduling","volume":"160 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Automated Planning and Scheduling","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2303.08271","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

We study Markov decision processes (MDPs), where agents control when and how they gather information, as formalized by action-contingent noiselessly observable MDPs (ACNO-MDPs). In these models, actions have two components: a control action that influences how the environment changes and a measurement action that affects the agent's observation. To solve ACNO-MDPs, we introduce the act-then-measure (ATM) heuristic, which assumes that we can ignore future state uncertainty when choosing control actions. To decide whether or not to measure, we introduce the concept of measuring value. We show how following this heuristic may lead to shorter policy computation times and prove a bound on the performance loss it incurs. We develop a reinforcement learning algorithm based on the ATM heuristic, using a Dyna-Q variant adapted for partially observable domains, and showcase its superior performance compared to prior methods on a number of partially observable environments.
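
To make the two-part decision concrete, below is a minimal Python sketch of one act-then-measure step, assuming tabular Q-values and a discrete belief over states. The function name `atm_step`, the `measure_cost` parameter, and the one-step estimate of measuring value are illustrative assumptions for this sketch, not the paper's exact definitions.

```python
import numpy as np

def atm_step(b, Q, measure_cost):
    """One hypothetical act-then-measure step.

    b: belief over states, shape (S,); Q: tabular action values, shape (S, A).
    Returns the chosen control action and whether to measure.
    """
    # Act: ignore future state uncertainty and pick the control action
    # that maximizes the belief-weighted Q-value.
    q_under_belief = b @ Q                      # shape (A,)
    a = int(np.argmax(q_under_belief))

    # Measure: a one-step proxy for the measuring value, i.e. how much
    # better we could act if the state were known exactly (per-state best
    # action) versus committing to the single belief-based action above.
    value_if_known = b @ Q.max(axis=1)
    measuring_value = value_if_known - q_under_belief[a]

    # Pay for a measurement only when its estimated value exceeds its cost.
    return a, bool(measuring_value > measure_cost)

# Tiny usage example: an uninformative belief over two states whose best
# actions disagree, so knowing the state is worth 0.5 and exceeds the cost.
b = np.array([0.5, 0.5])
Q = np.array([[1.0, 0.0],
              [0.0, 1.0]])
a, measure = atm_step(b, Q, measure_cost=0.1)   # measure == True
```

The sketch mirrors the design choice in the abstract: the control action is selected as if the current belief could be acted on directly, with no planning over future state uncertainty, while the measurement decision is a separate cost-benefit test. Decoupling the two is what makes the shorter policy computation times plausible.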