Pub Date: 2026-01-14 | DOI: 10.1109/OJCSYS.2026.3654320
{"title":"2025 Index IEEE Open Journal of Control Systems","authors":"","doi":"10.1109/OJCSYS.2026.3654320","DOIUrl":"https://doi.org/10.1109/OJCSYS.2026.3654320","url":null,"abstract":"","PeriodicalId":73299,"journal":{"name":"IEEE open journal of control systems","volume":"4 ","pages":"629-639"},"PeriodicalIF":0.0,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11353212","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-17 | DOI: 10.1109/OJCSYS.2025.3628513
{"title":"IEEE Control Systems Society Publication Information","authors":"","doi":"10.1109/OJCSYS.2025.3628513","DOIUrl":"https://doi.org/10.1109/OJCSYS.2025.3628513","url":null,"abstract":"","PeriodicalId":73299,"journal":{"name":"IEEE open journal of control systems","volume":"4 ","pages":"C3-C3"},"PeriodicalIF":0.0,"publicationDate":"2025-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11303147","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145778370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-12 | DOI: 10.1109/OJCSYS.2025.3642596
MirSaleh Bahavarnia;Ahmad F. Taha
This paper introduces a method for efficiently updating a nominal stabilizing static output feedback (SOF) controller in perturbed linear systems. As operating points and state-space matrices change in dynamic systems, updating the SOF controller becomes necessary. Traditional methods address such changes by re-solving for the updated SOF gain, which is often (i) computationally expensive due to the NP-hard nature of the problem or (ii) infeasible due to the limitations of its semi-definite programming relaxations. To overcome this, we leverage the concept of minimum destabilizing real perturbation (MDRP) to formulate a norm minimization problem that yields fast, reliable controller updates. This approach accommodates a variety of known perturbations, including abrupt changes, model inaccuracies, and equilibrium-dependent linearizations. We remark that the application of our proposed approach is limited to the class of SOF controllers in perturbed linear systems. We also introduce geometric metrics to quantify the proximity to instability and rigorously define stability-guaranteed regions. Extensive numerical simulations validate the efficiency and robustness of the proposed method and corroborate that, although we utilize a heuristic optimization method to compute the MDRP, it performs well in practice compared to an existing approximation method in the literature, namely the hybrid expansion-contraction (HEC) method. We demonstrate the results on the SOF control of multi-machine power networks with changing operating points, showing that the computed quick updates produce solutions comparable to the traditional SOF ones while requiring orders of magnitude less computational time.
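As a concrete illustration of the setting (not the authors' MDRP method), the sketch below builds a toy SOF closed loop A + BKC and measures its proximity to instability via the spectral abscissa; all matrices and the gain are assumed for the example.

```python
import numpy as np

# Illustrative SOF setting: closed loop dx/dt = (A + B K C) x is stable
# iff its spectral abscissa (largest eigenvalue real part) is negative.
def spectral_abscissa(M):
    """Largest real part among the eigenvalues of M."""
    return float(np.max(np.real(np.linalg.eigvals(M))))

A = np.array([[0.0, 1.0],
              [2.0, -1.0]])   # open-loop unstable (eigenvalues 1 and -2)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])    # only the first state is measured
K = np.array([[-6.0]])        # SOF gain, u = K y

Acl = A + B @ K @ C
margin = -spectral_abscissa(Acl)   # distance of the eigenvalues from the imaginary axis
print(spectral_abscissa(A) > 0, margin > 0)
```

A perturbation to A, B, or C shrinks this margin; the paper's MDRP-based update reacts before it reaches zero.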
{"title":"Quick Updates for the Perturbed Static Output Feedback Control Problem in Linear Systems With Applications to Power Systems","authors":"MirSaleh Bahavarnia;Ahmad F. Taha","doi":"10.1109/OJCSYS.2025.3642596","DOIUrl":"https://doi.org/10.1109/OJCSYS.2025.3642596","url":null,"abstract":"This paper introduces a method for efficiently updating a nominal stabilizing static output feedback (SOF) controller in perturbed linear systems. As operating points and state-space matrices change in dynamic systems, accommodating updates to the SOF controller are necessary. Traditional methods address such changes by re-solving for the updated SOF gain, which is often (<italic>i</i>) computationally expensive due to the NP-hard nature of the problem or (<italic>ii</i>) infeasible due to the limitations of its semi-definite programming relaxations. To overcome this, we leverage the concept of <italic>minimum destabilizing real perturbation</i> (MDRP) to formulate a norm minimization problem that yields fast, reliable controller updates. This approach accommodates a variety of known perturbations, including abrupt changes, model inaccuracies, and equilibrium-dependent linearizations. We remark that the application of our proposed approach is limited to the class of SOF controllers in perturbed linear systems. We also introduce geometric metrics to quantify the proximity to instability and rigorously define stability-guaranteed regions. Extensive numerical simulations validate the efficiency and robustness of the proposed method. Moreover, such extensive numerical simulations corroborate that although we utilize a heuristic optimization method to compute the MDRP, it performs quite well in practice compared to an existing approximation method in the literature, namely the hybrid expansion-contraction (HEC) method. 
We demonstrate the results on the SOF control of multi-machine power networks with changing operating points, and demonstrate that the computed quick updates produce comparable solutions to the traditional SOF ones, while requiring orders of magnitude less computational time.","PeriodicalId":73299,"journal":{"name":"IEEE open journal of control systems","volume":"5 ","pages":"49-63"},"PeriodicalIF":0.0,"publicationDate":"2025-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11298551","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145886598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-08 | DOI: 10.1109/OJCSYS.2025.3641369
Aiping Zhong;Baike She;Philip E. Paré
This work introduces a physics-informed neural networks (PINNs)-based model predictive control (MPC) framework for susceptible-infected-recovered ($SIR$) spreading models. Existing studies in MPC design for epidemic control often assume either 1) measurable states of the dynamics, where the parameters are learned, or 2) known parameters of the model, where the states are learned. In this work, we address the joint real-time estimation of states and parameters within the MPC framework using only noisy infected states, under the assumption that 1) only the recovery rate is known, or 2) only the basic reproduction number is known. Under the first assumption, we propose MPC-PINNs and two novel PINNs algorithms, all of which are integrated into the MPC framework. First, we introduce MPC-PINNs, which are designed for the $SIR$ model with control. We then propose log-scaled PINNs (MPC-LS-PINNs), which incorporate a log-scaled loss function to improve robustness against noise. Next, we present split-integral PINNs (MPC-SI-PINNs), which leverage integral operators and state coupling in the neural network training process to effectively reconstruct the complete epidemic state information. Building upon these methods, we further extend our framework for the second assumption. We establish the necessary conditions and extend our PINNs algorithms, where MPC-SI-PINNs are simplified as split-PINNs (MPC-S-PINNs). By incorporating these algorithms into the MPC framework, we simultaneously estimate the epidemic states and parameters while generating optimal control strategies. Comparative experiments against an extended Kalman filter, ideal MPC, and different neural network structures, together with validation on real COVID-19 data from Italy, demonstrate the effectiveness of the proposed methods under different settings.
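For readers unfamiliar with the underlying dynamics, here is a minimal discrete-time SIR simulation with a transmission-reducing control input u; the parameter values and Euler discretization are illustrative assumptions, not the paper's model.

```python
# Minimal discrete-time SIR model with a control u in [0, 1] that scales
# down the transmission rate (all parameter values assumed for illustration).
def sir_step(s, i, r, beta, gamma, u, dt=1.0):
    new_inf = (1.0 - u) * beta * s * i * dt   # new infections this step
    rec = gamma * i * dt                      # recoveries this step
    return s - new_inf, i + new_inf - rec, r + rec

s, i, r = 0.99, 0.01, 0.0
beta, gamma = 0.3, 0.1        # basic reproduction number beta/gamma = 3
for _ in range(100):
    s, i, r = sir_step(s, i, r, beta, gamma, u=0.5)
print(round(s + i + r, 6))    # total population is conserved -> 1.0
```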
{"title":"A Physics-Informed Neural Networks-Based Model Predictive Control Framework for $SIR$ Epidemics","authors":"Aiping Zhong;Baike She;Philip E. Paré","doi":"10.1109/OJCSYS.2025.3641369","DOIUrl":"https://doi.org/10.1109/OJCSYS.2025.3641369","url":null,"abstract":"This work introduces a physics-informed neural networks (PINNs)-based model predictive control (MPC) framework for susceptible-infected-recovered (<inline-formula><tex-math>$SIR$</tex-math></inline-formula>) spreading models. Existing studies in MPC design for epidemic control often assume either 1) measurable states of the dynamics, where the parameters are learned, or 2) known parameters of the model, where the states are learned. In this work, we address the joint real-time estimation of states and parameters within the MPC framework using only noisy infected states, under the assumption that 1) only the recovery rate is known, or 2) only the basic reproduction number is known. Under the first assumption, we propose MPC-PINNs and two novel PINNs algorithms, all of which are integrated into the MPC framework. First, we introduce MPC-PINNs, which are designed for the <inline-formula><tex-math>$SIR$</tex-math></inline-formula> model with control. We then propose log-scaled PINNs (MPC-LS-PINNs), which incorporate a log-scaled loss function to improve robustness against noise. Next, we present split-integral PINNs (MPC-SI-PINNs), which leverage integral operators and state coupling in the neural network training process to effectively reconstruct the complete epidemic state information. Building upon these methods, we further extend our framework for the second assumption. We establish the necessary conditions and extend our PINNs algorithms, where MPC-SI-PINNs are simplified as split-PINNs (MPC-S-PINNs). By incorporating these algorithms into the MPC framework, we simultaneously estimate the epidemic states and parameters while generating optimal control strategies. 
Comparative experiments against an extended Kalman filter, ideal MPC, and different neural network structures, together with validation on real COVID-19 data from Italy, demonstrate the effectiveness of the proposed methods under different settings.","PeriodicalId":73299,"journal":{"name":"IEEE open journal of control systems","volume":"5 ","pages":"31-48"},"PeriodicalIF":0.0,"publicationDate":"2025-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11282991","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145830942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-24 | DOI: 10.1109/OJCSYS.2025.3636033
Ainur Zhaikhan;Ali H. Sayed
This study proposes the use of a social learning method to estimate a global state within a multi-agent off-policy actor-critic algorithm for reinforcement learning (RL) operating in a partially observable environment. We assume that the network of agents operates in a fully decentralized manner, possessing the capability to exchange variables with their immediate neighbors. The proposed design methodology is supported by an analysis demonstrating that the difference between final outcomes, obtained when the global state is fully observed versus estimated through the social learning method, is $\varepsilon$-bounded when an appropriate number of iterations of social learning updates are implemented. Unlike many existing dec-POMDP-based RL approaches, the proposed algorithm is suitable for model-free multi-agent reinforcement learning as it does not require knowledge of a transition model. Furthermore, experimental results illustrate the efficacy of the algorithm and demonstrate its superiority over the current state-of-the-art methods.
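The neighbor-to-neighbor exchange underlying social learning can be illustrated with a plain consensus-averaging sketch, a simplification of the actual belief updates; the ring topology and weights are assumed for the example.

```python
import numpy as np

# Each agent holds a noisy local estimate of a shared quantity and
# repeatedly averages with its immediate neighbors on a ring graph.
rng = np.random.default_rng(0)
n_agents, state = 6, 2.0
est = state + rng.normal(0.0, 0.5, n_agents)   # noisy local observations

W = np.zeros((n_agents, n_agents))             # doubly stochastic ring weights
for a in range(n_agents):
    W[a, a] = 0.5
    W[a, (a - 1) % n_agents] = 0.25
    W[a, (a + 1) % n_agents] = 0.25

target = est.mean()          # consensus limit of the averaging updates
for _ in range(200):
    est = W @ est            # one round of exchanges with immediate neighbors
print(np.allclose(est, target))   # all agents agree after enough rounds
```

The paper's analysis is in this spirit: after enough exchange rounds, every agent's estimate is within a bounded distance of the fully observed quantity.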
{"title":"Multi-Agent Off-Policy Actor-Critic Reinforcement Learning for Partially Observable Environments","authors":"Ainur Zhaikhan;Ali H. Sayed","doi":"10.1109/OJCSYS.2025.3636033","DOIUrl":"https://doi.org/10.1109/OJCSYS.2025.3636033","url":null,"abstract":"This study proposes the use of a social learning method to estimate a global state within a multi-agent off-policy actor-critic algorithm for reinforcement learning (RL) operating in a partially observable environment. We assume that the network of agents operates in a fully-decentralized manner, possessing the capability to exchange variables with their immediate neighbors. The proposed design methodology is supported by an analysis demonstrating that the difference between final outcomes, obtained when the global state is fully observed versus estimated through the social learning method, is <inline-formula><tex-math>$varepsilon$</tex-math></inline-formula>-bounded when an appropriate number of iterations of social learning updates are implemented. Unlike many existing dec-POMDP-based RL approaches, the proposed algorithm is suitable for model-free multi-agent reinforcement learning as it does not require knowledge of a transition model. 
Furthermore, experimental results illustrate the efficacy of the algorithm and demonstrate its superiority over the current state-of-the-art methods.","PeriodicalId":73299,"journal":{"name":"IEEE open journal of control systems","volume":"5 ","pages":"1-15"},"PeriodicalIF":0.0,"publicationDate":"2025-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11266921","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-21 | DOI: 10.1109/OJCSYS.2025.3635449
Rajan K. Aggarwal;J. Christian Gerdes
Autonomous vehicle control in low-friction environments should be capable of using all of the available traction at the road to accomplish maneuvering objectives. In these environments, however, the limit of traction is difficult to estimate, which challenges standard motion planning techniques. In this paper, we introduce a trajectory optimization framework that natively incorporates the complex and nonlinear effects of friction uncertainty into the planning process to improve both the performance and robustness of maneuvering at high accelerations. The core approach of the method is to explicitly consider a range of possible dynamics models, inclusive of their closed-loop behavior, simultaneously in the optimization. We illustrate this method through a racing example, where the minimum-time objective facilitates intuitive performance and robustness metrics (lap time and tracking error limits), all while necessitating vehicle maneuvering through nonlinear and friction-sensitive regions of the state space. Experiments on an autonomous VW Golf GTI on a challenging winter ice track demonstrate the efficacy of this approach.
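A toy version of planning against a set of friction models rather than a single point estimate: pick the corner speed that is feasible for every friction value in an assumed uncertainty set (friction-circle limit v²/r ≤ μg; all values illustrative).

```python
import math

# Robust speed selection over a set of candidate friction coefficients.
g, radius = 9.81, 30.0                 # m/s^2, m (illustrative corner)
mu_set = [0.15, 0.25, 0.35]            # assumed ice/snow friction candidates

def max_corner_speed(mu):
    """Largest speed whose lateral acceleration v^2/r fits inside mu*g."""
    return math.sqrt(mu * g * radius)

robust_v = min(max_corner_speed(mu) for mu in mu_set)   # feasible for all models
nominal_v = max_corner_speed(sum(mu_set) / len(mu_set)) # single-model plan
print(robust_v < nominal_v)            # robustness costs some lap time
```

The paper's framework makes this trade explicit inside a full trajectory optimization, including the closed-loop behavior of each candidate model.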
{"title":"Friction-Robust Autonomous Racing Using Trajectory Optimization Over Multiple Models","authors":"Rajan K. Aggarwal;J. Christian Gerdes","doi":"10.1109/OJCSYS.2025.3635449","DOIUrl":"https://doi.org/10.1109/OJCSYS.2025.3635449","url":null,"abstract":"Autonomous vehicle control in low-friction environments should be capable of using all of the available traction at the road to accomplish maneuvering objectives. In these environments, however, the limit of traction is difficult to estimate, which challenges standard motion planning techniques. In this paper, we introduce a trajectory optimization framework that natively incorporates the complex and nonlinear effects of friction uncertainty into the planning process to improve both the performance and robustness of maneuvering at high accelerations. The core approach of the method is to explicitly consider a range of possible dynamics models, <italic>inclusive</i> of their closed-loop behavior, simultaneously in the optimization. We illustrate this method through a racing example, where the minimum-time objective facilitates intuitive performance and robustness metrics (lap time and tracking error limits), all while necessitating vehicle maneuvering through nonlinear and friction-sensitive regions of the state space. 
Experiments on an autonomous VW Golf GTI on a challenging winter ice track demonstrate the efficacy of this approach.","PeriodicalId":73299,"journal":{"name":"IEEE open journal of control systems","volume":"5 ","pages":"16-30"},"PeriodicalIF":0.0,"publicationDate":"2025-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11264289","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145778225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-24 | DOI: 10.1109/OJCSYS.2025.3625465
Josefine B. Graebener;Apurva S. Badithela;Denizalp Goktas;Wyatt Ubellacker;Eric V. Mazumdar;Aaron D. Ames;Richard M. Murray
Designing tests for autonomous systems is challenging due to their complexity. This work proposes a flow-based approach for reactive test synthesis from temporal logic specifications, enabling the synthesis of test environments consisting of static and reactive obstacles. These specifications describe desired test behavior, including system requirements as well as a test objective not revealed to the system. The synthesized test strategy places restrictions on system actions in closed-loop with system behavior, accomplishing the test objective while ensuring realizability of the system’s objective without aiding it (a general-sum setting). Automata theory and flow networks are leveraged to formulate a mixed-integer linear program (MILP) for test synthesis. For a dynamic test agent, the agent strategy is synthesized for a generalized reactivity of rank 1 (GR(1)) specification constructed from the MILP solution. This flow-based, reactive test synthesis is conducted offline and is agnostic to the system controller. Finally, the resulting test strategy is demonstrated in simulation and hardware experiments on a pair of quadrupedal robots for a variety of specifications.
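The flow-network view can be made concrete with a tiny max-flow computation; the 4-node graph below is an assumed example, and Ford-Fulkerson stands in for the paper's MILP-based synthesis.

```python
from collections import deque

# Tiny max-flow via BFS augmenting paths (Edmonds-Karp), routing "test
# executions" from a source state to a sink state through a small graph.
def max_flow(cap, s, t):
    n = len(cap)
    res = [row[:] for row in cap]          # residual capacities
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:       # BFS for an augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and res[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow                    # no augmenting path left
        v, bottleneck = t, float("inf")    # find the path's bottleneck
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, res[u][v])
            v = u
        v = t                              # push flow, update residuals
        while v != s:
            u = parent[v]
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
            v = u
        flow += bottleneck

cap = [[0, 3, 2, 0],   # node 0: source
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]   # node 3: sink
print(max_flow(cap, 0, 3))  # → 4 (min cut: the two edges into the sink)
```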
{"title":"Flow-Based Synthesis of Reactive Tests for Discrete Decision-Making Systems With Temporal Logic Specifications","authors":"Josefine B. Graebener;Apurva S. Badithela;Denizalp Goktas;Wyatt Ubellacker;Eric V. Mazumdar;Aaron D. Ames;Richard M. Murray","doi":"10.1109/OJCSYS.2025.3625465","DOIUrl":"https://doi.org/10.1109/OJCSYS.2025.3625465","url":null,"abstract":"Designing tests for autonomous systems is challenging due to their complexity. This work proposes a flow-based approach for reactive test synthesis from temporal logic specifications, enabling the synthesis of test environments consisting of static and reactive obstacles, and. These specifications describe desired test behavior, including system requirements as well as a test objective not revealed to the system. The synthesized test strategy places restrictions on system actions in closed-loop with system behavior, accomplishing the test objective while ensuring realizability of the system’s objective without aiding it (a general-sum setting). Automata theory and flow networks are leveraged to formulate a mixed-linear program (MILP) for test synthesis. For a dynamic test agent, the agent strategy is synthesized for a generalized reactivity of rank 1 (GR(1)) specification constructed from the MILP solution. This flow-based, reactive test synthesis is conducted offline and is agnostic to the system controller. 
Finally, the resulting test strategy is demonstrated in simulation and hardware experiments on a pair of quadrupedal robots for a variety of specifications.","PeriodicalId":73299,"journal":{"name":"IEEE open journal of control systems","volume":"4 ","pages":"597-617"},"PeriodicalIF":0.0,"publicationDate":"2025-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11217209","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-24 | DOI: 10.1109/OJCSYS.2025.3625364
Jeremy D. Watson
Predictive control can be either data-based (e.g., data-enabled predictive control, or DeePC) or model-based (model predictive control). In this paper we aim to bridge the gap between the two by investigating the case where only a partial model is available, i.e., incorporating model knowledge into DeePC. In our formulation, the partial knowledge takes the form of known state and output equations that are a subset of the complete model equations. We formulate an approach that takes advantage of partial model knowledge, which we call hybrid data-enabled predictive control (HDeePC). We prove feasible set equivalence and equivalent closed-loop behavior in the noiseless, LTI case. As we show, this has potential advantages over a purely data-based approach in terms of computational expense and robustness to noise in some cases. Furthermore, this allows applications to certain linear time-varying and nonlinear systems. Finally, a number of case studies, including the control of an energy storage system in a microgrid, a triple-mass system, and a larger power system, illustrate the potential of HDeePC.
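The data object at the heart of DeePC is a block-Hankel matrix of recorded trajectories; below is a minimal sketch of its construction and rank check, with random input data assumed (in Willems' fundamental lemma setting, a persistently exciting input yields a full-row-rank Hankel matrix).

```python
import numpy as np

# Build a Hankel matrix whose columns are sliding windows of a data sequence.
def hankel(u, depth):
    """Stack length-`depth` windows of the sequence u as columns."""
    cols = len(u) - depth + 1
    return np.column_stack([u[j:j + depth] for j in range(cols)])

rng = np.random.default_rng(1)
u = rng.normal(size=40)            # recorded input: persistently exciting w.h.p.
H = hankel(u, depth=5)
print(H.shape, np.linalg.matrix_rank(H))   # (5, 36), full row rank 5
```

HDeePC would keep such data matrices only for the unknown part of the dynamics, replacing the rest with the known state and output equations.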
{"title":"Hybrid Data-Enabled Predictive Control: Incorporating Model Knowledge Into the DeePC","authors":"Jeremy D. Watson","doi":"10.1109/OJCSYS.2025.3625364","DOIUrl":"https://doi.org/10.1109/OJCSYS.2025.3625364","url":null,"abstract":"Predictive control can either be data-based (e.g. data-enabled predictive control, or DeePC) or model-based (model predictive control). In this paper we aim to bridge the gap between the two by investigating the case where only a partial model is available, i.e. incorporating model knowledge into DeePC. In our formulation, the partial knowledge takes the form of known state and output equations that are a subset of the complete model equations. We formulate an approach to take advantage of partial model knowledge which we call hybrid data-enabled predictive control (HDeePC). We prove feasible set equivalence and equivalent closed-loop behavior in the noiseless, LTI case. As we show, this has potential advantages over a purely data-based approach in terms of computational expense and robustness to noise in some cases. Furthermore, this allows applications to certain linear time-varying and nonlinear systems. 
Finally, a number of case studies, including the control of an energy storage system in a microgrid, a triple-mass system, and a larger power system, illustrate the potential of HDeePC.","PeriodicalId":73299,"journal":{"name":"IEEE open journal of control systems","volume":"4 ","pages":"549-561"},"PeriodicalIF":0.0,"publicationDate":"2025-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11217168","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145510198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-23 | DOI: 10.1109/OJCSYS.2025.3624615
Rupam Kalyan Chakraborty;Geethu Joseph;Chandra R. Murthy
Sparsity constraints on the control inputs of a linear dynamical system naturally arise in several practical applications such as networked control, computer vision, seismic signal processing, and cyber-physical systems. In this work, we consider the problem of jointly estimating the states and sparse inputs of such systems from low-dimensional (compressive) measurements. Due to the low-dimensional measurements, conventional Kalman filtering and smoothing algorithms fail to accurately estimate the states and inputs. We present a Bayesian approach that exploits the input sparsity to significantly improve estimation accuracy. Sparsity in the input estimates is promoted by using different prior distributions on the input. We investigate two main approaches: regularizer-based maximum a posteriori estimation and Bayesian learning-based estimation. We also extend the approaches to handle control inputs with common support and analyze the time and memory complexities of the presented algorithms. Finally, using numerical simulations, we show that our algorithms outperform the state-of-the-art methods in terms of accuracy and time/memory complexities, especially in the low-dimensional measurement regime.
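As a simplified stand-in for the paper's Bayesian estimators, the sketch below promotes input sparsity with an l1 penalty solved by ISTA on assumed compressive measurements (no dynamics, a single static snapshot).

```python
import numpy as np

# Recover a sparse vector from low-dimensional measurements y = A x by
# minimizing 0.5*||A x - y||^2 + lam*||x||_1 with ISTA (proximal gradient).
def ista(A, y, lam=0.05, iters=500):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - y)          # gradient step on the quadratic
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(25, 40)) / np.sqrt(25)       # compressive: 25 rows < 40 unknowns
x_true = np.zeros(40)
x_true[[3, 17, 28]] = [1.0, -1.5, 2.0]            # 3-sparse "input"
x_hat = ista(A, A @ x_true)
print(sorted(np.argsort(np.abs(x_hat))[-3:]))     # largest entries should sit on the support
```

The paper replaces this generic l1 penalty with sparsity-promoting priors inside a Kalman-style smoother, which also tracks the system state over time.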
{"title":"State and Sparse Input Estimation in Linear Dynamical Systems Using Low-Dimensional Measurements","authors":"Rupam Kalyan Chakraborty;Geethu Joseph;Chandra R. Murthy","doi":"10.1109/OJCSYS.2025.3624615","DOIUrl":"https://doi.org/10.1109/OJCSYS.2025.3624615","url":null,"abstract":"Sparsity constraints on the control inputs of a linear dynamical system naturally arise in several practical applications such as networked control, computer vision, seismic signal processing, and cyber-physical systems. In this work, we consider the problem of jointly estimating the states and sparse inputs of such systems from low-dimensional (compressive) measurements. Due to the low-dimensional measurements, conventional Kalman filtering and smoothing algorithms fail to accurately estimate the states and inputs. We present a Bayesian approach that exploits the input sparsity to significantly improve estimation accuracy. Sparsity in the input estimates is promoted by using different prior distributions on the input. We investigate two main approaches: regularizer-based maximum a posteriori estimation and Bayesian learning-based estimation. We also extend the approaches to handle control inputs with common support and analyze the time and memory complexities of the presented algorithms. 
Finally, using numerical simulations, we show that our algorithms outperform the state-of-the-art methods in terms of accuracy and time/memory complexities, especially in the low-dimensional measurement regime.","PeriodicalId":73299,"journal":{"name":"IEEE open journal of control systems","volume":"4 ","pages":"581-596"},"PeriodicalIF":0.0,"publicationDate":"2025-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11215643","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145560628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-10 | DOI: 10.1109/OJCSYS.2025.3620209
Fernando Castañeda;Jason J. Choi;Wonsuhk Jung;Bike Zhang;Claire J. Tomlin;Koushil Sreenath
Learning-based control has demonstrated great promise for handling complex tasks in various applications. However, ensuring system safety under uncertain dynamics remains a significant challenge. Control Barrier Functions (CBFs) offer mathematical tools for enforcing safety constraints given known system dynamics, yet their guarantees can be lost in the presence of model errors. In this article, we present a framework that combines model-based safety methods with data-driven techniques to guarantee safety recursively for systems with uncertain dynamics. We build upon our previous work, where Gaussian Process (GP) regression was utilized to quantify uncertainty in model-based CBF constraints, resulting in a second-order cone program (SOCP) controller. When the SOCP is feasible at a state, it provides a pointwise probabilistic safety guarantee. A critical innovation we develop further in this work is an event-triggered online data collection algorithm that actively and safely gathers data to provide the recursive feasibility of the SOCP-based controller. By continuously assessing the sufficiency of data based on the feasibility measure of the SOCP, our method triggers safe exploratory actions when necessary to reduce the uncertainty in critical control directions. This approach ensures that a feasible, safety-preserving control input is always available, thereby establishing forward invariance of the safe set with high probability, even in previously unexplored regions. We validate the proposed framework through two numerical simulation experiments.
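The nominal CBF filter that the paper robustifies can be written in closed form for a scalar single integrator; the dynamics, barrier, and class-K gain below are all assumed for illustration.

```python
# Single integrator x' = u with safe set h(x) = x >= 0. The CBF condition
# h'(x) >= -alpha * h(x) reduces to u >= -alpha * x, so the safety-filter
# QP  min ||u - u_des||^2  s.t.  u >= -alpha * x  is a one-sided projection.
def cbf_filter(x, u_des, alpha=1.0):
    return max(u_des, -alpha * x)    # clip the desired input to the safe half-line

x = 0.5                              # state inside the safe set
print(cbf_filter(x, u_des=1.0))      # safe input passes through unchanged: 1.0
print(cbf_filter(x, u_des=-2.0))     # unsafe input clipped to -alpha*x = -0.5
```

With uncertain dynamics, the constraint gains a GP-based uncertainty term and the QP becomes the paper's SOCP, whose feasibility the event-triggered data collection maintains.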
{"title":"Recursively Feasible Probabilistic Safe Online Learning With Control Barrier Functions","authors":"Fernando Castañeda;Jason J. Choi;Wonsuhk Jung;Bike Zhang;Claire J. Tomlin;Koushil Sreenath","doi":"10.1109/OJCSYS.2025.3620209","DOIUrl":"https://doi.org/10.1109/OJCSYS.2025.3620209","url":null,"abstract":"Learning-based control has demonstrated great promise for handling complex tasks in various applications. However, ensuring system safety under uncertain dynamics remains a significant challenge. Control Barrier Functions (CBFs) offer mathematical tools for enforcing safety constraints given known system dynamics, yet their guarantees can be lost in the presence of model errors. In this article, we present a framework that combines model-based safety methods with data-driven techniques to guarantee safety recursively for systems with uncertain dynamics. We build upon our previous work, where Gaussian Process (GP) regression was utilized to quantify uncertainty in model-based CBF constraints, resulting in a second-order cone program (SOCP) controller. When the SOCP is feasible at a state, it provides a pointwise probabilistic safety guarantee. A critical innovation we develop further in this work is an event-triggered online data collection algorithm that actively and safely gathers data to provide the recursive feasibility of the SOCP-based controller. By continuously assessing the sufficiency of data based on the feasibility measure of the SOCP, our method triggers safe exploratory actions when necessary to reduce the uncertainty in critical control directions. This approach ensures that a feasible, safety-preserving control input is always available, thereby establishing forward invariance of the safe set with high probability, even in previously unexplored regions. 
We validate the proposed framework through two numerical simulation experiments.","PeriodicalId":73299,"journal":{"name":"IEEE open journal of control systems","volume":"4 ","pages":"531-548"},"PeriodicalIF":0.0,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11199905","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}