Pub Date: 2026-01-12 | DOI: 10.1109/LCSYS.2026.3652578
A Trade-Off Between Time and Energy of Stochastic Higher-Order Networks on Multi-Directed Hypergraphs
Sasa Chen; Dan Liu; Binrui Wang; Jun Mao; Xiaohang Li
IEEE Control Systems Letters, vol. 9, pp. 3143-3148
This letter focuses on the trade-off between control time and energy consumption in achieving finite-time synchronization of stochastic higher-order networks. Unlike existing results that either consider only pairwise interactions between nodes or restrict higher-order interactions to 1-directed hyperedges, multi-directed hypergraphs are employed to better describe group interactions in real networks. A switching control strategy is proposed to estimate the control time and energy consumption in achieving finite-time synchronization of higher-order networks. In addition, to discuss the trade-off between control time and energy consumption, an evaluation index function with adjustable weights is constructed to analyze the optimal control gains in finite time. Finally, an example of higher-order complex networks composed of Chua's circuits is provided to validate the effectiveness of the proposed results.
Pub Date: 2026-01-12 | DOI: 10.1109/LCSYS.2026.3653302
Stochastic Coordinate Descent via Regret Minimization
Yankai Lin; Xiaopuwen Wang
IEEE Control Systems Letters, vol. 9, pp. 3155-3160
In this letter, we investigate the performance of stochastic coordinate descent algorithms for convex optimization problems from the novel perspective of regret minimization. Specifically, we consider a stochastic coordinate selection rule that might vary over time, whereas existing results commonly focus on stochastic selection rules that remain fixed. Moreover, unlike most existing results, which assume that the expectation of algorithm updates follows the exact negative gradient direction, our framework is applicable to any stochastic gradient descent (SGD)-like algorithm with estimated gradients that satisfy our main assumption, making our approach more general than existing results. We show that the updates of the stochastic algorithm to solve a standard convex optimization problem also serve as an online algorithm for an online convex optimization problem that has time-varying cost functions at each time step. We prove that, under relatively weak assumptions, the online algorithm achieves sublinear regret against the solution of the problem, thereby ensuring the average performance of the algorithm within a finite time window. Finally, we verify our theoretical results by solving a linear regression problem using the proposed algorithm.
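To make the setting concrete, here is a toy sketch of stochastic coordinate descent with a time-varying selection rule on a separable quadratic, tracking average regret against the minimizer. The specific selection rule (a cycling bias over a uniform distribution) and stepsize are illustrative assumptions, not the letter's construction.

```python
import random

def coord_descent(c, a, T=2000, eta=0.3, seed=0):
    """Stochastic coordinate descent on f(x) = sum_i c_i * (x_i - a_i)^2,
    with a coordinate-selection distribution that varies over time."""
    rng = random.Random(seed)
    n = len(c)
    x = [0.0] * n
    f = lambda v: sum(ci * (vi - ai) ** 2 for ci, vi, ai in zip(c, v, a))
    f_star = 0.0  # the minimum, attained at x = a
    regret = 0.0
    for t in range(1, T + 1):
        # time-varying rule: bias toward coordinate t % n, but every
        # coordinate keeps positive probability at every step
        weights = [2.0 if i == t % n else 1.0 for i in range(n)]
        j = rng.choices(range(n), weights=weights)[0]
        grad_j = 2.0 * c[j] * (x[j] - a[j])
        x[j] -= eta * grad_j
        regret += f(x) - f_star
    return x, regret / T

x, avg_regret = coord_descent(c=[1.0, 2.0, 0.5], a=[1.0, -1.0, 2.0])
```

A sublinear regret bound, as proved in the letter under its main assumption, implies exactly that this average regret vanishes as T grows.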
Pub Date: 2026-01-12 | DOI: 10.1109/LCSYS.2026.3652898
Closing the Loop Inside Neural Networks: Causality-Guided Layer Adaptation for Fault Recovery Control
Mahdi Taheri; Soon-Jo Chung; Fred Y. Hadaegh
IEEE Control Systems Letters, vol. 9, pp. 3137-3142
This letter studies the problem of real-time fault recovery control for nonlinear control-affine systems subject to actuator loss-of-effectiveness faults and external disturbances. We develop a two-stage framework that combines causal inference with selective online adaptation to achieve an effective learning-based recovery control method. In the offline phase, we introduce a causal layer attribution technique based on the average causal effect (ACE) to evaluate the relative importance of each layer in a pretrained deep neural network (DNN) controller that compensates for faults. This provides a principled approach to select the most causally influential layer for fault recovery control in the sense of ACE, and goes beyond the widely used last-layer adaptation approach. In the online phase, we deploy a Lyapunov-based gradient update to adapt only the ACE-selected layer, circumventing the need for full-network or last-layer-only updates. The proposed adaptive controller guarantees uniform ultimate boundedness (UUB) with exponential convergence of the closed-loop system in the presence of actuator faults and external disturbances. Compared to conventional adaptive DNN controllers with full-network adaptation, our methodology incurs reduced computational overhead in the online phase. To demonstrate the effectiveness of our proposed methodology, a case study is provided on a 3-axis attitude control system of a spacecraft with four reaction wheels.
Pub Date: 2026-01-06 | DOI: 10.1109/LCSYS.2026.3651656
Encoding High-Level Knowledge in Offline Multi-Agent Reinforcement Learning Using Reward Machines
Shayan Meshkat Alsadat; Zhe Xu
IEEE Control Systems Letters, vol. 9, pp. 3149-3154
Offline reinforcement learning (RL) learns policies from fixed-size datasets without interacting with the environment, while multi-agent reinforcement learning (MARL) faces challenges from large joint state-action spaces and agent interdependencies. Most offline MARL methods apply regularizations that ignore system-wide dependencies, risking extrapolation errors. We propose Automata-Guided Multi-Agent Offline RL with Reward Machine (AGMORL), a novel framework extending automata learning to offline MARL with reward machines. AGMORL uses a deterministic finite automaton to learn the reward machine from a dataset, capturing team dynamics and agent interactions, while guiding individual policies to avoid out-of-distribution actions by encoding high-level knowledge from the dataset. Unlike other methods, it avoids secondary components such as generative models. We provide convergence guarantees to an optimal policy and show AGMORL outperforms state-of-the-art offline MARL methods.
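A reward machine is a finite automaton over high-level events whose transitions carry rewards. As a minimal sketch (the states, events, and rewards below are invented for illustration; AGMORL learns such a machine from the offline dataset rather than hand-coding it):

```python
# Hypothetical reward machine for a pickup-and-deliver task.
# States: u0 (start), u1 (item picked up), u2 (delivered).
# Each transition maps (state, event) -> (next state, reward).
TRANSITIONS = {
    ("u0", "pickup"): ("u1", 0.0),
    ("u1", "deliver"): ("u2", 1.0),
}

def run_reward_machine(events, start="u0"):
    """Replay a trace of labeled high-level events through the machine,
    accumulating reward; events with no matching transition leave the
    automaton state unchanged."""
    u, total = start, 0.0
    for e in events:
        u, r = TRANSITIONS.get((u, e), (u, 0.0))
        total += r
    return u, total

state, reward = run_reward_machine(["move", "pickup", "move", "deliver"])
```

The automaton state acts as extra memory that exposes team-level task progress to each agent's policy.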
Pub Date: 2025-12-31 | DOI: 10.1109/LCSYS.2025.3650099
Formation Tracking for Nonlinear Systems via Prescribed-Time Noncooperative Game
Huan Li; Qian Dong
IEEE Control Systems Letters, vol. 9, pp. 3107-3112
This letter investigates a prescribed-time non-cooperative game control scheme for formation tracking of nonlinear multi-agent systems. Within the framework of non-cooperative game theory, followers pursue self-interested objectives, which may conflict both among themselves and with the collective formation goal, leading to a compromised formation geometry determined by the dynamic Nash equilibrium. Meanwhile, a prescribed-time control scheme is proposed such that the states of the formation system converge to the dynamic Nash equilibrium of the formation tracking problem. Furthermore, by integrating a robust term and leveraging the Lipschitz condition, the proposed controller effectively handles the uncertainties in the system. The effectiveness of the method is further confirmed by numerical examples.
Pub Date: 2025-12-31 | DOI: 10.1109/LCSYS.2025.3650105
A Delay-Free Adaptive Stepsize for the Incremental Aggregated Gradient Method
Zhicheng Deng; Xuyang Wu; Jie Lu
IEEE Control Systems Letters, vol. 9, pp. 3095-3100
In most existing asynchronous methods, the stepsize depends on an upper bound on the delays and decreases as this bound increases. However, since the upper bound is usually unknown and large, the resulting stepsizes are not only difficult to determine in practice but also overly conservative, which leads to slow convergence. To address this issue, we propose an adaptive stepsize strategy for a typical asynchronous optimization method, the Incremental Aggregated Gradient (IAG) method. Unlike existing methods in the literature, our stepsize does not rely on any delay information and is less conservative, which leads to easier stepsize determination and faster convergence. Under standard assumptions, we provide the convergence rate of IAG with the proposed stepsize. Numerical experiments demonstrate the superior performance of our stepsize over alternative stepsize strategies.
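For readers unfamiliar with IAG, the sketch below shows its defining structure on a scalar least-squares problem: stored per-component gradients are refreshed one at a time, so each step descends along an aggregate of possibly stale gradients. A fixed stepsize is used here purely for illustration; the letter's contribution, a delay-free adaptive stepsize rule, is not reproduced.

```python
def iag(a, b, T=1000, eta=0.05):
    """Incremental Aggregated Gradient on
    f(x) = (1/n) * sum_i 0.5 * (a_i * x - b_i)^2.

    Only one stored gradient is refreshed per iteration (cyclically),
    so the aggregate mixes fresh and stale information, which is the
    delayed data an adaptive stepsize must cope with.
    """
    n = len(a)
    x = 0.0
    # initial stored gradients, all evaluated at x = 0
    g = [ai * (ai * x - bi) for ai, bi in zip(a, b)]
    for t in range(T):
        i = t % n                        # component refreshed this step
        g[i] = a[i] * (a[i] * x - b[i])
        x -= eta * sum(g) / n            # aggregated (partly stale) gradient
    return x

x = iag(a=[1.0, 2.0, 3.0], b=[1.0, 1.0, 1.0])
```

The minimizer here is x* = (sum a_i b_i) / (sum a_i^2) = 6/14, and the iterate approaches it despite the staleness.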
Pub Date: 2025-12-26 | DOI: 10.1109/LCSYS.2025.3648637
Safe Online Control-Informed Learning
Tianyu Zhou; Zihao Liang; Zehui Lu; Shaoshuai Mou
IEEE Control Systems Letters, vol. 9, pp. 3083-3088
This letter proposes a Safe Online Control-Informed Learning framework for safety-critical autonomous systems. The framework unifies optimal control, parameter estimation, and safety constraints into an online learning process. It employs an extended Kalman filter to incrementally update system parameters in real time, enabling robust and data-efficient adaptation under uncertainty. A softplus barrier function enforces constraint satisfaction during learning and control while eliminating the dependence on high-quality initial guesses. Theoretical analysis establishes convergence and safety guarantees, and the framework's effectiveness is demonstrated on cart-pole and robot-arm systems.
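The softplus function gives a smooth, everywhere-differentiable stand-in for a hard constraint penalty. One plausible form (the letter's exact barrier construction may differ; the sharpness parameter beta is an assumption here) for a constraint h(x) >= 0:

```python
import math

def softplus(z, beta=10.0):
    """Smooth approximation of max(0, z):
    (1/beta) * log(1 + exp(beta * z)), in a numerically stable form."""
    return max(z, 0.0) + math.log1p(math.exp(-abs(beta * z))) / beta

def barrier_penalty(h_value, beta=10.0):
    """Smooth penalty for the constraint h(x) >= 0: near zero when
    h >> 0, growing roughly linearly as h becomes negative."""
    return softplus(-h_value, beta)
```

Because the penalty is finite and smooth everywhere, gradient-based updates remain well defined even from infeasible initial guesses, unlike a log-barrier that diverges at the constraint boundary.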
Pub Date: 2025-12-26 | DOI: 10.1109/LCSYS.2025.3648776
Safe Control Synthesis for Neural Network Control Systems via Constrained Zonotopes
Hang Zhang; Xiangru Xu
IEEE Control Systems Letters, vol. 9, pp. 3071-3076
This letter addresses the safe control synthesis problem for neural network control systems subject to bounded unknown disturbances and known exogenous inputs. A forward reachability analysis method is developed to over-approximate the system's forward reachable sets using constrained zonotopes, where the control sequence appears linearly in both the zonotope center and the right-hand side of the associated equality constraints. Based on these over-approximations, a quadratically constrained program and its convexification are formulated to synthesize control sequences that guarantee safety. A numerical example demonstrates the effectiveness of the proposed approach.
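A constrained zonotope is the set {c + G*xi : ||xi||_inf <= 1, A*xi = b}. The standard identity behind reachability propagation is that an affine map x -> R*x + t transforms the center and generators while leaving the generator constraints untouched; the sketch below shows only this building block, not the letter's full pipeline through the neural network controller.

```python
def matvec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def affine_map(R, t, cz):
    """Image of the constrained zonotope {c + G*xi : |xi|<=1, A*xi=b}
    under x -> R*x + t: center and generators transform, while the
    equality constraints (A, b) on xi are unchanged."""
    c, G, A, b = cz
    new_center = [ci + ti for ci, ti in zip(matvec(R, c), t)]
    return (new_center, matmul(R, G), A, b)

# unit square at the origin, restricted by the constraint xi1 - xi2 = 0
cz = ([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, -1.0]], [0.0])
out = affine_map([[2.0, 0.0], [0.0, 1.0]], [1.0, 0.0], cz)
```

In the letter's setting, the control sequence enters such representations linearly through the center and the constraint right-hand side, which is what makes the synthesis program tractable.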
Pub Date: 2025-12-26 | DOI: 10.1109/LCSYS.2025.3648432
Liveness, Reachability, and Reversibility of Signal Interpreted Petri Nets
Andreas Köhler; Ping Zhang
IEEE Control Systems Letters, vol. 9, pp. 3131-3136
This letter proposes a set of novel sufficient conditions that guarantee three behavioral properties of signal interpreted Petri nets (SIPNs), namely liveness, reachability, and reversibility. SIPNs provide a modeling formalism for representing the control algorithm of discrete manufacturing systems. The liveness, reachability, and reversibility properties ensure, respectively, that the desired control actions remain perpetually executable, that the system states are reachable, and that the system can always return to its initial state. The sufficient conditions are derived based on the Petri net state equation and the enabling rules of the transitions in SIPNs. Moreover, it is shown how reachability and reversibility can be computationally verified via an integer linear programming problem. The computational complexity of verifying the properties is polynomial in the number of markings of the SIPN when the reachable set is already available.
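The Petri net state equation referenced above is M' = M0 + C*sigma, where C is the |places| x |transitions| incidence matrix and sigma counts transition firings. A minimal sketch (note that solvability of the state equation is only a necessary condition for reachability; in SIPNs the transition enabling rules additionally depend on input signals, which is what the letter's conditions account for):

```python
def state_equation(M0, C, sigma):
    """Marking predicted by the Petri net state equation M' = M0 + C*sigma.

    M0: initial marking (tokens per place), C: incidence matrix with one
    row per place and one column per transition, sigma: firing counts.
    """
    return [m0 + sum(C[p][t] * sigma[t] for t in range(len(sigma)))
            for p, m0 in enumerate(M0)]

# two places, one transition that moves a token from p0 to p1
C = [[-1], [1]]
M = state_equation(M0=[1, 0], C=C, sigma=[1])
```

The letter's ILP-based verification searches for a nonnegative integer sigma satisfying this equation (plus the SIPN enabling constraints) for a target marking M'.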
Pub Date: 2025-12-26 | DOI: 10.1109/LCSYS.2025.3649131
Data-Driven Virtual Setpoint P-Type Control With Prescribed Performance Function
Guojie Li; Ping Zhou
IEEE Control Systems Letters, vol. 9, pp. 3089-3094
In this letter, a prescribed performance function-based data-driven virtual setpoint P-type controller (PPF-DDVSPC) is proposed for single-input single-output (SISO) systems with nonlinear nonaffine dynamics. First, the original model with error constraint is converted into an unconstrained form using the prescribed performance function and error transformation technique. A virtual setpoint updating law, nested within the outer layer of the traditional P-type controller, is developed based on the newly defined unconstrained variable to limit the tracking error. Then, the unconstrained model and virtual setpoint law are converted into equivalent linear data models through the dynamic linearization technique. The unknown pseudo-partial derivatives in the two models are estimated using a modified projection algorithm. Finally, the P-type controller with prescribed performance is obtained by replacing the actual setpoint signal with the resulting virtual setpoint law. The bounded-input bounded-output (BIBO) stability of the system is demonstrated via the contraction mapping principle, which ensures that the constraint conditions are satisfied. The effectiveness and robustness of the PPF-DDVSPC method are validated through a data-driven simulation of the blast furnace ironmaking process.
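The prescribed performance function and error transformation mentioned above commonly take the exponential form below with a symmetric funnel; this is one standard choice (the letter's exact PPF, funnel shape, and transformation may differ, and the decay parameters here are assumptions).

```python
import math

def ppf(t, rho0=2.0, rho_inf=0.1, ell=1.0):
    """Exponentially decaying prescribed performance function:
    rho(t) = (rho0 - rho_inf) * exp(-ell * t) + rho_inf,
    shrinking from rho0 to the steady-state bound rho_inf."""
    return (rho0 - rho_inf) * math.exp(-ell * t) + rho_inf

def transform(e, rho):
    """Map a constrained tracking error -rho < e < rho to an
    unconstrained variable; it diverges as e approaches the funnel
    boundary, which is what keeps the error inside the funnel."""
    return math.atanh(e / rho)
```

Keeping the transformed variable bounded (as the BIBO analysis does) then guarantees the original error stays strictly inside the shrinking funnel.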