Erratum to "Learning to Boost the Performance of Stable Nonlinear Systems"
Luca Furieri; Clara Lucía Galimberti; Giancarlo Ferrari-Trecate
IEEE Open Journal of Control Systems, vol. 4, p. 53. Published: 2025-02-03. DOI: 10.1109/OJCSYS.2025.3529361

This addresses errors in [1]. Due to a production error, Figs. 4, 5, 6, 8, and 9 are not rendering correctly in the article PDF. The correct figures are as follows.
Figure 4. Mountains—Closed-loop trajectories before training (left) and after training (middle and right) over 100 randomly sampled initial conditions marked with $\circ$. Snapshots taken at time instants $\tau_i$. Colored (gray) lines show the trajectories in $[0, \tau_i]$ ($[\tau_i, \infty)$). Colored balls (and their radii) represent the agents (and their sizes for collision avoidance).
Figure 5. Mountains—Closed-loop trajectories after 25%, 50%, and 75% of the total training run whose final closed-loop trajectories are shown in Fig. 4. Even though the performance can be further optimized, stability is guaranteed at all stages.
Figure 6. Mountains—Closed-loop trajectories after training. (Left and middle) Controller tested on a system with mass uncertainty (-10% and +10%, respectively). (Right) Trained controller with safety promotion through (45). Training initial conditions marked with $\circ$. Snapshots taken at time instants $\tau_i$. Colored (gray) lines show the trajectories in $[0, \tau_i]$ ($[\tau_i, \infty)$). Colored balls (and their radii) represent the agents (and their sizes for collision avoidance).
Figure 8. Mountains—Closed-loop trajectories when using the online policy given by (48). Snapshots of three trajectories starting at different test initial conditions.
Figure 9. Mountains—Three different closed-loop trajectories after training a REN controller without $\mathcal{L}_2$ stability guarantees, over 100 randomly sampled initial conditions marked with $\circ$. Colored (gray) lines show the trajectories in (after) the training time interval.
Generalizing Robust Control Barrier Functions From a Controller Design Perspective
Anil Alan; Tamas G. Molnar; Aaron D. Ames; Gábor Orosz
IEEE Open Journal of Control Systems, vol. 4, pp. 54-69. Published: 2025-01-13. DOI: 10.1109/OJCSYS.2025.3529364

While control barrier functions provide a powerful tool for endowing controllers with formal safety guarantees, robust control barrier functions (RCBFs) extend these guarantees to systems with model inaccuracies. This paper presents a generalized RCBF framework that unifies and extends existing notions of RCBFs for a broad class of model uncertainties. The main results are conditions for robust safety through generalized RCBFs. We apply these generalized principles to more specific design examples: a worst-case design, an estimation-based design, and a tunable version of the latter. These examples are shown to perform increasingly close to an oracle design with ideal model information. The theoretical contributions are demonstrated on a practical example of a pendulum with unknown periodic excitation. Using numerical simulations, the design examples are compared based on a performance metric that quantifies their closeness to the oracle design.
2024 Index IEEE Open Journal of Control Systems Vol. 3
IEEE Open Journal of Control Systems, vol. 3, pp. 514-523. Published: 2025-01-10. DOI: 10.1109/OJCSYS.2025.3528596
IEEE Control Systems Society Publication Information
IEEE Open Journal of Control Systems, vol. 3, p. C3. Published: 2025-01-07. DOI: 10.1109/OJCSYS.2024.3360366
IEEE Open Journal of Control Systems Publication Information
IEEE Open Journal of Control Systems, vol. 3, p. C2. Published: 2025-01-07. DOI: 10.1109/OJCSYS.2024.3360362
Dynamic Watermarking for Finite Markov Decision Processes
Jiacheng Tang; Jiguo Song; Abhishek Gupta
IEEE Open Journal of Control Systems, vol. 4, pp. 41-52. Published: 2025-01-06. DOI: 10.1109/OJCSYS.2025.3526003

Dynamic watermarking is an active intrusion detection technique that can potentially detect replay attacks, spoofing attacks, and deception attacks in the feedback channel of control systems. In this paper, we develop a novel dynamic watermarking algorithm for finite-state, finite-action Markov decision processes. We derive a lower bound on the mean time between false alarms and an upper bound on the mean delay between the time an attack occurs and the time it is detected. We further compute the sensitivity of the control system's performance as a function of the watermark. We demonstrate the effectiveness of the proposed dynamic watermarking algorithm by detecting a spoofing attack in a sensor network system.
Initial Undershoot in Discrete-Time Input–Output Hammerstein Systems
Hyung Jun Kim; Mohammadreza Kamaldar; Dennis S. Bernstein
IEEE Open Journal of Control Systems, vol. 4, pp. 30-40. Published: 2025-01-06. DOI: 10.1109/OJCSYS.2025.3525983

This paper considers initial undershoot in the step response of discrete-time, input-output Hammerstein (DIH) systems, which have linear unforced dynamics and nonlinear zero dynamics (ZD). Initial undershoot occurs when the step response initially moves in the direction opposite to that of the asymptotic response. For DIH systems, the paper investigates the relationship among the existence of initial undershoot, the step height, the height-dependent delay, and the stability of the ZD. For linear time-invariant systems, the height-dependent delay specializes to the relative degree. The main result provides conditions under which, for all sufficiently small step heights, initial undershoot in the step response of a DIH system implies instability of the ZD. Several examples of DIH systems illustrate these results.
Quantization Effects on Zero-Dynamics Attacks to Closed-Loop Sampled-Data Control Systems
Xile Kang; Hideaki Ishii
IEEE Open Journal of Control Systems, vol. 4, pp. 18-29. Published: 2024-11-28. DOI: 10.1109/OJCSYS.2024.3508396

This paper addresses cyber-security issues of closed-loop networked control systems from the perspective of quantized sampled-data systems. Quantization of the control input adds quantization error at the plant input, resulting in some variation of the plant output. On the other hand, sampling can introduce non-minimum-phase zeros in discretized systems. We consider zero-dynamics attacks, a class of false data injection attacks that exploit such unstable zeros. Although non-quantized zero-dynamics attacks are undetectable from the plant output, quantized attacks may be revealed by larger output variation. In our setting, the attack signal passes through the same uniform quantizer used for the control input. We evaluate the stealthiness of the attack in the closed-loop setting by quantifying the output variation. Specifically, we characterize the cases of static and dynamic quantization of the attack signal, while the control input remains statically quantized. We then demonstrate that the attacker can reduce this output variation with a modified approach that compensates for the quantization error of the attack signal inside the attack dynamics. We provide numerical examples to illustrate the effectiveness of the proposed approaches, and we show that observing the quantized control input through a mirroring model can reveal zero-dynamics attacks.
This paper investigates the system identification problem for linear discrete-time systems under adversaries and analyzes two lasso-type estimators. We examine non-asymptotic properties of these estimators in two separate scenarios, corresponding to deterministic and stochastic models for the attack times. We prove that when the system is stable and attacks are injected periodically, the sample complexity for exact recovery of the system dynamics is linear in the state dimension. When adversarial attacks occur at each time instant with probability $p$