This paper presents a fully risk-aware model predictive control (MPC) framework for chance-constrained discrete-time linear control systems subject to process noise. Conditional value-at-risk (CVaR), a popular coherent risk measure, is incorporated in both the constraints and the cost function of the MPC framework. This allows the system to navigate the entire spectrum of risk assessments, from worst-case to risk-neutral, ensuring both constraint satisfaction and performance optimization in stochastic environments. The recursive feasibility and risk-aware exponential stability of the resulting risk-aware MPC are established through rigorous theoretical analysis under a disturbance feedback policy parameterization. Finally, two numerical examples are given to illustrate the efficacy of the proposed method.
{"title":"Risk-Aware Stochastic MPC for Chance-Constrained Linear Systems","authors":"Pouria Tooranjipour;Bahare Kiumarsi;Hamidreza Modares","doi":"10.1109/OJCSYS.2024.3421372","DOIUrl":"https://doi.org/10.1109/OJCSYS.2024.3421372","url":null,"abstract":"This paper presents a fully risk-aware model predictive control (MPC) framework for chance-constrained discrete-time linear control systems with process noise. Conditional value-at-risk (CVaR) as a popular coherent risk measure is incorporated in both the constraints and the cost function of the MPC framework. This allows the system to navigate the entire spectrum of risk assessments, from worst-case to risk-neutral scenarios, ensuring both constraint satisfaction and performance optimization in stochastic environments. The recursive feasibility and risk-aware exponential stability of the resulting risk-aware MPC are demonstrated through rigorous theoretical analysis by considering the disturbance feedback policy parameterization. In the end, two numerical examples are given to elucidate the efficacy of the proposed method.","PeriodicalId":73299,"journal":{"name":"IEEE open journal of control systems","volume":"3 ","pages":"282-294"},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10578318","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141631005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-26. DOI: 10.1109/OJCSYS.2024.3419642
René A. Carmona;Claire Zeng
Recently, a deep-learning algorithm referred to as the Deep Galerkin Method (DGM) has gained considerable attention among those trying to solve finite-horizon Mean Field Games numerically, even though its performance appears to degrade significantly as the horizon grows. On the other hand, it has been proven that some specific classes of Mean Field Games enjoy a form of the turnpike property identified over seven decades ago by economists. The gist of this phenomenon is that the solution of an optimal control problem over a long time interval spends most of its time near the stationary solution of the ergodic version of the corresponding infinite-horizon optimization problem. After reviewing the implementation of DGM for finite-horizon Mean Field Games, we introduce a “turnpike-accelerated” version that incorporates the turnpike estimates into the loss function to be optimized, and we perform a comparative numerical analysis to show the advantages of this accelerated version over the baseline DGM algorithm. We demonstrate the approach on some of the Mean Field Game models with local couplings known to have the turnpike property, as well as on a new class of linear-quadratic models for which we derive explicit turnpike estimates.
{"title":"Leveraging the Turnpike Effect for Mean Field Games Numerics","authors":"René A. Carmona;Claire Zeng","doi":"10.1109/OJCSYS.2024.3419642","DOIUrl":"https://doi.org/10.1109/OJCSYS.2024.3419642","url":null,"abstract":"Recently, a deep-learning algorithm referred to as Deep Galerkin Method (DGM), has gained a lot of attention among those trying to solve numerically Mean Field Games with finite horizon, even if the performance seems to be decreasing significantly with increasing horizon. On the other hand, it has been proven that some specific classes of Mean Field Games enjoy some form of the turnpike property identified over seven decades ago by economists. The gist of this phenomenon is a proof that the solution of an optimal control problem over a long time interval spends most of its time near the stationary solution of the ergodic version of the corresponding infinite horizon optimization problem. After reviewing the implementation of DGM for finite horizon Mean Field Games, we introduce a “turnpike-accelerated” version that incorporates the turnpike estimates in the loss function to be optimized, and we perform a comparative numerical analysis to show the advantages of this accelerated version over the baseline DGM algorithm. We demonstrate on some of the Mean Field Game models with local-couplings known to have the turnpike property, as well as a new class of linear-quadratic models for which we derive explicit turnpike estimates.","PeriodicalId":73299,"journal":{"name":"IEEE open journal of control systems","volume":"3 ","pages":"389-404"},"PeriodicalIF":0.0,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10572276","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142376852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-24. DOI: 10.1109/OJCSYS.2024.3418306
Lunet Yifru;Ali Baheri
Reinforcement learning (RL) has revolutionized decision-making across a wide range of domains over the past few decades. Yet, deploying RL policies in real-world scenarios presents the crucial challenge of ensuring safety. Traditional safe RL approaches have predominantly focused on incorporating predefined safety constraints into the policy learning process. However, this reliance on predefined safety constraints poses limitations in dynamic and unpredictable real-world settings where such constraints may not be available or sufficiently adaptable. To bridge this gap, we propose a novel approach that concurrently learns a safe RL control policy and identifies the unknown safety constraint parameters of a given environment. Starting from a parametric signal temporal logic (pSTL) safety specification and a small initial labeled dataset, we frame the problem as a bilevel optimization task that integrates constrained policy optimization, using a Lagrangian variant of the twin delayed deep deterministic policy gradient (TD3) algorithm, with Bayesian optimization of the parameters of the given pSTL safety specification. Through comprehensive case studies, we validate the efficacy of this approach under varying forms of environmental constraints, consistently yielding safe RL policies with high returns. Furthermore, our findings indicate successful learning of STL safety constraint parameters that exhibit a high degree of conformity with the true environmental safety constraints. The performance of our model closely mirrors that of an ideal scenario with complete prior knowledge of the safety constraints, demonstrating its proficiency in accurately identifying environmental safety constraints and learning safe policies that adhere to them. A Python implementation of the algorithm can be found at https://github.com/SAILRIT/Concurrent-Learning-of-Control-Policy-and-Unknown-Constraints-in-Reinforcement-Learning.git
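The sketch below is not the paper's released implementation (see the linked repository for that); it is a minimal, self-contained illustration of the bilevel structure described above, with the Lagrangian TD3 inner loop and the Bayesian-optimization outer loop replaced by simple stand-ins. All names (pstl_satisfied, train_constrained_policy, constraint_fit) and the 1-D toy dynamics are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def pstl_satisfied(trajectory, threshold):
    # Toy pSTL predicate: "the state always stays below `threshold`".
    return bool(np.all(trajectory < threshold))

def train_constrained_policy(threshold, n_rollouts=20):
    # Stand-in for the inner loop (constrained policy optimization under the
    # candidate constraint); a real implementation would train Lagrangian TD3.
    return [np.cumsum(rng.normal(0.0, 0.1, size=100)) for _ in range(n_rollouts)]

def constraint_fit(threshold, labeled_data):
    # Outer objective: fraction of labeled trajectories whose safe/unsafe label
    # agrees with the candidate pSTL parameter (higher is better).
    return float(np.mean([pstl_satisfied(t, threshold) == lab for t, lab in labeled_data]))

# Small initial labeled dataset, labeled by the true (unknown to the learner) constraint.
true_threshold = 1.0
labeled = []
for _ in range(30):
    traj = np.cumsum(rng.normal(0.0, 0.1, size=100))
    labeled.append((traj, pstl_satisfied(traj, true_threshold)))

best_theta, best_fit = None, -np.inf
for theta in np.linspace(0.2, 2.0, 19):         # grid search stands in for Bayesian optimization
    rollouts = train_constrained_policy(theta)  # in the full method these rollouts would be
    fit = constraint_fit(theta, labeled)        # labeled and appended to the dataset
    if fit > best_fit:
        best_theta, best_fit = theta, fit

print(f"estimated threshold {best_theta:.2f} (true {true_threshold}); label agreement {best_fit:.2f}")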