Exploration Policies for On-the-Fly Controller Synthesis: A Reinforcement Learning Approach

Tomás Delgado, Marco Sánchez Sorondo, V. Braberman, Sebastián Uchitel
{"title":"Exploration Policies for On-the-Fly Controller Synthesis: A Reinforcement Learning Approach","authors":"Tom'as Delgado, Marco S'anchez Sorondo, V. Braberman, Sebastián Uchitel","doi":"10.1609/icaps.v33i1.27238","DOIUrl":null,"url":null,"abstract":"Controller synthesis is in essence a case of model-based planning for non-deterministic environments in which plans (actually “strategies”) are meant to preserve system goals indefinitely. In the case of supervisory control environments are specified as the parallel composition of state machines and valid strategies are required to be “non-blocking” (i.e., always enabling the environment to reach certain marked states) in addition to safe (i.e., keep the system within a safe zone). Recently, On-the-fly Directed Controller Synthesis techniques were proposed to avoid the exploration of the entire -and exponentially large- environment space, at the cost of non-maximal permissiveness, to either find a strategy or conclude that there is none. The incremental exploration of the plant is currently guided by a domain-independent human-designed heuristic.\nIn this work, we propose a new method for obtaining heuristics based on Reinforcement Learning (RL). The synthesis algorithm is thus framed as an RL task with an unbounded action space and a modified version of DQN is used. With a simple and general set of features that abstracts both states and actions, we show that it is possible to learn heuristics on small versions of a problem that generalize to the larger instances, effectively doing zero-shot policy transfer. Our agents learn from scratch in a highly partially observable RL task and outperform the existing heuristic overall, in instances unseen during training.","PeriodicalId":239898,"journal":{"name":"International Conference on Automated Planning and Scheduling","volume":"101 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Automated Planning and Scheduling","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/icaps.v33i1.27238","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Controller synthesis is in essence a case of model-based planning for non-deterministic environments in which plans (actually “strategies”) are meant to preserve system goals indefinitely. In the case of supervisory control, environments are specified as the parallel composition of state machines, and valid strategies are required to be “non-blocking” (i.e., always enabling the environment to reach certain marked states) in addition to safe (i.e., keeping the system within a safe zone). Recently, On-the-fly Directed Controller Synthesis techniques were proposed to either find a strategy or conclude that there is none while avoiding exploration of the entire (and exponentially large) environment space, at the cost of non-maximal permissiveness. The incremental exploration of the plant is currently guided by a domain-independent, human-designed heuristic. In this work, we propose a new method for obtaining such heuristics based on Reinforcement Learning (RL). The synthesis algorithm is framed as an RL task with an unbounded action space, and a modified version of DQN is used. With a simple and general set of features that abstracts both states and actions, we show that it is possible to learn heuristics on small versions of a problem that generalize to larger instances, effectively performing zero-shot policy transfer. Our agents learn from scratch in a highly partially observable RL task and outperform the existing heuristic overall on instances unseen during training.
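The abstract's key mechanism is that each exploration step scores every currently available frontier transition of the partially explored plant through a fixed-size feature vector, so the learned value function is independent of how many actions exist. Below is a minimal sketch of that selection loop, not the authors' implementation: the network architecture, the `frontier_feats` representation, and the function names are illustrative assumptions.

```python
import random

import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Scores one (state, action) feature vector at a time, so the action
    set may grow without bound as the plant is explored on the fly.
    (Hypothetical sketch; not the paper's exact architecture.)"""

    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (n_candidates, n_features) -> (n_candidates,) Q-values
        return self.net(feats).squeeze(-1)


def choose_expansion(qnet: QNetwork, frontier_feats: list, epsilon: float = 0.1) -> int:
    """Epsilon-greedy pick among however many frontier transitions are
    currently available; returns the index of the transition to expand."""
    if random.random() < epsilon:
        return random.randrange(len(frontier_feats))
    with torch.no_grad():
        q = qnet(torch.stack(frontier_feats))  # stack per-transition features
    return int(q.argmax().item())
```

Because each candidate is scored from its own feature vector rather than indexed into a fixed output layer, the same trained network can be applied unchanged to larger problem instances, which is what makes the zero-shot policy transfer described in the abstract possible.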