Metric Interval Temporal Logic based Reinforcement Learning with Runtime Monitoring and Self-Correction
Zhenyu Lin, J. Baras
2020 American Control Conference (ACC), July 2020
DOI: 10.23919/acc45564.2020.9147506
Citations: 1
Abstract
In this paper we present a modular Q-learning framework that addresses the robot task planning, runtime monitoring, and self-correction problem. The task is specified in metric interval temporal logic (MITL) with finite time constraints. We first construct a runtime monitor automaton using three-valued LTL (LTL3), and then build sub-task MITL monitors by decomposing and augmenting this monitor automaton. During the learning phase, a modular Q-learning approach is proposed in which each module learns a different sub-task. During runtime, the sub-task MITL monitors observe the execution and guide the agent toward self-correction if an error occurs. Our experiments show that, under our framework, the robot learns a feasible execution sequence that satisfies the given MITL specifications within the finite time constraints. When the runtime environment differs from the learning environment and the originally learned action would violate the specifications, the robotic agent is able to self-correct and accomplish the task whenever doing so is still possible.