Guaranteeing Control Requirements via Reward Shaping in Reinforcement Learning

Authors: Francesco De Lellis; Marco Coraggio; Giovanni Russo; Mirco Musolesi; Mario di Bernardo
DOI: 10.1109/TCST.2024.3393210
Journal: IEEE Transactions on Control Systems Technology, vol. 32, no. 6, pp. 2102-2113
Published: 2024-03-17
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10534075
Abstract: In addressing control problems such as regulation and tracking through reinforcement learning (RL), it is often necessary to guarantee, before deployment, that the learned policy meets essential performance and stability criteria, such as a desired settling time and steady-state error. Motivated by this, we present a set of results and a systematic reward-shaping procedure that: 1) ensures the optimal policy generates trajectories that align with specified control requirements and 2) makes it possible to assess whether any given policy satisfies them. We validate our approach through comprehensive numerical experiments in two representative environments from OpenAI Gym: the Pendulum swing-up problem and the Lunar Lander. Using both tabular and deep RL methods, our experiments consistently confirm the efficacy of the proposed framework, highlighting its effectiveness in ensuring that policies adhere to the prescribed control requirements.
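To make the reward-shaping idea concrete, here is a minimal sketch (not the authors' exact formulation) of how a control requirement can be encoded in the reward: the shaped reward pays a bonus whenever the state lies inside a goal region representing a steady-state-error bound, so that any policy earning high return must reach that region and stay in it, which is what a settling-time requirement demands. The environment choice, the function name shaped_reward, the thresholds, and the bonus weight below are illustrative assumptions, written against the Gymnasium API.

```python
import math

import gymnasium as gym
import numpy as np


def shaped_reward(obs: np.ndarray, base_reward: float, goal_bonus: float = 10.0,
                  angle_tol: float = 0.1, speed_tol: float = 0.5) -> float:
    """Illustrative shaped reward: add a bonus inside the goal region.

    The goal region (|theta| <= angle_tol, |theta_dot| <= speed_tol) encodes a
    steady-state-error bound; a large enough bonus makes remaining in the
    region dominate the return, so high-return policies must settle there.
    All thresholds and weights here are illustrative, not the paper's values.
    """
    cos_theta, sin_theta, theta_dot = obs  # Pendulum-v1 observation layout
    theta = math.atan2(sin_theta, cos_theta)  # angle measured from upright
    in_goal = abs(theta) <= angle_tol and abs(theta_dot) <= speed_tol
    return base_reward + (goal_bonus if in_goal else 0.0)


env = gym.make("Pendulum-v1")
obs, _ = env.reset(seed=0)
total = 0.0
for _ in range(200):
    action = env.action_space.sample()  # stand-in for a learned policy
    obs, reward, terminated, truncated, _ = env.step(action)
    total += shaped_reward(obs, float(reward))
    if terminated or truncated:
        break
env.close()
print(f"shaped return under a random policy: {total:.1f}")
```

Under a scheme of this kind, the shaped return separates policies that meet the requirement from those that do not: a policy that settles inside the goal region collects the bonus at every remaining step, so comparing a policy's return against the corresponding threshold gives the kind of post hoc check the abstract describes.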
Journal description:
The IEEE Transactions on Control Systems Technology publishes high-quality technical papers on technological advances in control engineering. The word "technology" derives from the Greek technologia; its modern meaning is a scientific method for achieving a practical purpose. Control systems technology encompasses all aspects of control engineering needed to implement practical control systems, from analysis and design through simulation and hardware. A primary purpose of the Transactions is to serve as an archival publication that bridges the gap between theory and practice. Papers published in the Transactions disclose significant new knowledge, exploratory developments, or practical applications in all aspects of the technology needed to implement control systems.