Pub Date: 2026-03-01 | Epub Date: 2026-02-09 | DOI: 10.1016/j.ifacsc.2026.100386
Giovanni Campanile, Vittoria Martinelli, Davide Salzano, Davide Fiore
We present an analysis of a genetic feedback control strategy that enables engineered microorganisms to self-regulate their population density, leveraging a quorum sensing mechanism for the production of a growth inhibitor protein whose activation is regulated by an embedded antithetic controller. Through mathematical modeling and steady-state analysis, we provide design guidelines for tuning the reference parameter and critical rates, such as the dilution and inhibitor production rates, to regulate density at steady state. We validate the performance and robustness of the proposed control architecture via realistic agent-based simulations in BSim, which accurately replicate the growth environment and capture key features such as spatial constraints and cell growth; these simulations show that the architecture guarantees robust regulation of the cell density.
Title: "Regulating population density through antithetic feedback control of cell growth" (IFAC Journal of Systems and Control, vol. 35, Article 100386)
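As a hedged illustration of the core mechanism, the sketch below simulates a generic antithetic integral feedback motif (in the style of Briat, Gupta and Khammash) closing the loop around a first-order process. The process model, all rate constants, and the simulation horizon are illustrative assumptions, not the authors' quorum-sensing growth model; in the paper's setting the regulated output is the population density and the actuation is the growth inhibitor.

```python
# Illustrative sketch only: a generic antithetic integral feedback motif
# regulating a first-order process, NOT the paper's quorum-sensing model.
mu, theta, eta = 2.0, 1.0, 50.0   # reference production, sensing, annihilation rates
b, g = 1.0, 1.0                   # actuation gain and degradation of the output y

dt, T = 1e-3, 200.0
y, z1, z2 = 0.0, 0.0, 0.0
for _ in range(int(T / dt)):      # forward-Euler integration
    dy  = b * z1 - g * y              # controlled process
    dz1 = mu - eta * z1 * z2          # controller species 1
    dz2 = theta * y - eta * z1 * z2   # controller species 2
    y, z1, z2 = y + dt * dy, z1 + dt * dz1, z2 + dt * dz2

# At steady state, dz1 - dz2 = mu - theta*y = 0 forces y* = mu/theta = 2.0,
# independently of the process parameters b and g.
print(round(y, 3))
```

The annihilation reaction makes the pair (z1, z2) compute the integral of the reference error, which is why the ratio mu/theta acts as the set-point, mirroring how the reference parameter fixes the density in the paper's design guidelines.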
Pub Date: 2026-03-01 | Epub Date: 2025-12-27 | DOI: 10.1016/j.ifacsc.2025.100357
Mohd Faizan, Mahdi Boukerdja, Anne Lise Gehin, Belkacem Ould Bouamama, Sumit Sood
Energy system resilience refers to the ability of a system to operate effectively during disruptive events. Such disruptions occur when control mechanisms fail due to actuator saturation triggered by faults or attacks with unpredictable behaviour. Maintaining resilience relies on recovery control strategies; however, these strategies are often delayed, leading to severe performance degradation. A novel indicator, Remaining Time to Recovery (RTTR), is introduced in this work to address the delay in recovery control implementation and to enable anticipatory recovery control strategies. An innovative method for the online estimation of RTTR is proposed, based on a hybrid approach that combines Bond Graph (BG) modelling and Machine Learning (ML). The BG reference model interacts with system measurements and instantly estimates power losses caused by faults or attacks before the system’s performance is impacted. The ML layer, using linear regression (LR), processes the estimated power-loss data to derive a prediction model of power-loss evolution that is updated in real time. RTTR is then predicted from the onset of the power loss and its predicted evolution over time. The proposed methodology is validated on a two-tank system using real-time Hardware-in-the-Loop (HIL) simulation with a Speedgoat target machine. HIL simulations in different scenarios demonstrate the reliability and accuracy of the proposed approach.
Title: "Online estimation of remaining time to recovery to enhance resilience using bond graph based power loss estimation" (IFAC Journal of Systems and Control, vol. 35, Article 100357)
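The regression core of the RTTR idea can be sketched as follows. The linear power-loss profile, the critical threshold, and the function name are our illustrative assumptions: in the paper, the loss samples come from the Bond Graph reference model and the regression is updated online rather than fit once.

```python
import numpy as np

# Illustrative sketch of the RTTR regression step. Assumption: power loss
# grows roughly linearly after a fault; here the samples are synthetic,
# whereas the paper obtains them from the Bond Graph reference model.
def estimate_rttr(t_samples, p_loss, p_critical):
    """Fit p_loss ~ a*t + b and extrapolate the time remaining until the
    loss reaches the critical level that ends safe operation."""
    a, b = np.polyfit(t_samples, p_loss, 1)   # updated online in the paper
    if a <= 0:                                # loss not growing: no deadline
        return np.inf
    t_cross = (p_critical - b) / a            # predicted crossing time
    return max(t_cross - t_samples[-1], 0.0)  # remaining time from "now"

# Synthetic fault: loss ramps at 0.5 kW/s, critical level 10 kW.
t = np.linspace(0.0, 4.0, 21)
rttr = estimate_rttr(t, 0.5 * t, 10.0)
print(rttr)   # crossing predicted at t = 20 s, so 16 s remain
```

Because the fit is refreshed with each new loss sample, the RTTR estimate tracks changes in the fault's severity, which is what makes anticipatory recovery control possible.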
Pub Date: 2026-03-01 | Epub Date: 2026-01-08 | DOI: 10.1016/j.ifacsc.2026.100362
Junhua Zheng, Zhiqiang Ge, Li Sun
While deep learning has achieved significant success in the past years, it suffers from several serious shortcomings. In particular, its performance can degrade severely when the labeled training dataset is small, a situation that remains common in industrial applications despite the age of big data. In this paper, a semi-supervised deep model is proposed for predictive learning and data analytics, built upon the recently developed lightweight deep partial least squares (PLS) model structure. Specifically, a simple self-training strategy is used as the driving force to formulate the semi-supervised deep PLS model, which places no restriction on the model structure and is thus flexible for predictive learning. In addition, to reduce the uncertainty of the self-training process, i.e., prediction error accumulation, different random seeds are introduced for model training, and the resulting models are combined through an ensemble learning strategy. As a result, the predictive model becomes more stable and robust to the uncertainties introduced by both the unlabeled data and the semi-supervised learning process. A real industrial example is provided for performance evaluation of the proposed method.
Title: "Ensemble self-training deep partial least squares models for stable semi-supervised predictive learning and data analytics" (IFAC Journal of Systems and Control, vol. 35, Article 100362)
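The ensemble self-training loop described above can be sketched minimally as follows. A plain least-squares linear regressor stands in for the deep PLS base model, the random seeds drive a bootstrap resample of the labeled set (the paper varies seeds inside model training instead), and the noiseless synthetic data is purely illustrative.

```python
import numpy as np

# Illustrative sketch of ensemble self-training with a linear regressor
# standing in for the deep PLS base model. Data, seeds, and iteration
# counts are our assumptions, not the paper's industrial case study.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5])           # noiseless ground truth
Xl, yl, Xu = X[:20], y[:20], X[20:]          # small labeled set + unlabeled

def fit(Xa, ya):
    w, *_ = np.linalg.lstsq(Xa, ya, rcond=None)
    return w

preds = []
for seed in range(5):                        # one ensemble member per seed
    r = np.random.default_rng(seed)
    idx = r.integers(0, len(Xl), len(Xl))    # seed-dependent bootstrap
    w = fit(Xl[idx], yl[idx])
    for _ in range(3):                       # self-training iterations
        pseudo = Xu @ w                      # pseudo-label the unlabeled data
        w = fit(np.vstack([Xl[idx], Xu]), np.concatenate([yl[idx], pseudo]))
    preds.append(Xu @ w)

y_hat = np.mean(preds, axis=0)               # ensemble average
err = float(np.max(np.abs(y_hat - y[20:])))
print(err)
```

Averaging over seed-dependent members is what damps the pseudo-label error accumulation that a single self-training chain would otherwise suffer from.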
Pub Date: 2026-03-01 | Epub Date: 2026-01-21 | DOI: 10.1016/j.ifacsc.2026.100366
Thomas Banker, Nathan P. Lawrence, Ali Mesbah
A major challenge in reinforcement learning (RL) is guaranteeing an agent’s closed-loop stability under unknown, possibly sparse, reward functions. While model-free RL adapts flexibly to a variety of systems and rewards, model-based strategies such as optimization-based control naturally accommodate prior system models to provide guarantees on safety and stability. However, these models may not be representative of the true global performance objective, resulting in suboptimal policies. In this paper, we present a policy search RL approach that decouples the stability requirement from the global performance objective. The key idea is to use an optimization-based policy structure as an effective stabilizing parameterization with which the agent can learn to maximize an unknown reward in a model-free fashion. Specifically, the agent employs a predictive control architecture and implicitly learns a stabilizing terminal cost, which is constructed through fixed-point iterations of the discrete algebraic Riccati equation. By implicitly differentiating this fixed point, derivatives of the stability condition inform the policy gradients. The proposed approach is shown to design high-performance, stabilizing policies for various sparse, global performance objectives. Furthermore, it can account for uncertainty in the dynamics using the stochastic discrete algebraic Riccati equation to promote robust stability. This work demonstrates a principled policy search RL approach that integrates prior models and system observations in an agent’s design, towards safe and reliable decision-making under uncertainty.
Title: "Stability-constrained policy optimization under unknown rewards" (IFAC Journal of Systems and Control, vol. 35, Article 100366)
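The fixed-point iteration of the discrete algebraic Riccati equation that underlies the terminal-cost construction can be sketched as follows. The system matrices are toy values of our choosing; the sketch shows only the value iteration and the resulting stabilizing gain, not the paper's implicit differentiation or policy search.

```python
import numpy as np

# Illustrative sketch: value iteration of the discrete algebraic Riccati
# equation (DARE), the fixed-point that the paper's terminal cost is built
# from. A, B, Q, R are toy values, not the paper's benchmarks.
A = np.array([[1.1, 0.2], [0.0, 0.9]])   # open-loop unstable mode at 1.1
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

P = np.eye(2)
for _ in range(500):                      # iterate P <- f(P) to the fixed point
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # LQR gain for current P
    P = Q + A.T @ P @ (A - B @ K)         # Riccati recursion
residual = np.linalg.norm(
    A.T @ P @ A - P
    - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    + Q
)
print(residual)                           # ~0 at the DARE fixed point
```

At convergence, P is a valid terminal cost and K places the closed-loop eigenvalues of A − BK strictly inside the unit circle, which is the stability certificate the policy parameterization preserves during learning.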
Pub Date: 2026-03-01 | Epub Date: 2026-02-04 | DOI: 10.1016/j.ifacsc.2026.100384
Fritz A. Engeln, Jan-Willem van Wingerden, Timm Faulwasser
By the fundamental lemma of Willems et al., the behavior of a linear time-invariant system can be characterized entirely by measured input–output data that spans the vector space of all possible trajectories of the system. However, useful a priori knowledge of the system is usually neglected. We propose a novel method for incorporating prior knowledge, specifically known pole and zero locations, into a data-driven representation by constructing filters that pre-process the measured input–output data. To this end, a physics-informed data-driven predictor is introduced, in which trajectories are obtained as linear combinations of the columns of a filtered block-Hankel matrix. We explicitly derive the output prediction error and show how leveraging prior knowledge reduces the impact of future noise realizations on output predictions and improves the accuracy of the initial state inferred from past data.
Title: "Data-driven modeling with prior system knowledge" (IFAC Journal of Systems and Control, vol. 35, Article 100384)
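The unfiltered baseline that the paper builds on, a block-Hankel predictor in the spirit of Willems' fundamental lemma, can be sketched as follows. The SISO system, window sizes, and data length are illustrative assumptions, and the proposed prior-knowledge filters are deliberately omitted.

```python
import numpy as np

# Illustrative sketch: data-driven prediction via the fundamental lemma,
# without the paper's prior-knowledge filters. Noiseless SISO toy system.
rng = np.random.default_rng(1)
a, b = 0.8, 1.0                              # y(k+1) = a*y(k) + b*u(k)
u = rng.normal(size=300)                     # persistently exciting input
y = np.zeros(301)
for k in range(300):
    y[k + 1] = a * y[k] + b * u[k]
y = y[:300]

L = 6                                        # window length: past + future
def hank(w):                                 # block-Hankel of depth L
    return np.column_stack([w[i:i + L] for i in range(len(w) - L + 1)])
Hu, Hy = hank(u), hank(y)

# 2 "past" steps pin down the initial state; 4 "future" steps are predicted.
Tp = 2
u_p, y_p = u[100:100 + Tp], y[100:100 + Tp]          # measured past
u_f = u[100 + Tp:100 + L]                            # planned future input
lhs = np.vstack([Hu[:Tp], Hy[:Tp], Hu[Tp:]])
rhs = np.concatenate([u_p, y_p, u_f])
g, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)        # trajectory coefficients
y_f = Hy[Tp:] @ g                                    # predicted future output
err = float(np.max(np.abs(y_f - y[100 + Tp:100 + L])))
print(err)
```

The paper's contribution amounts to pre-filtering u and y before forming Hu and Hy so that known poles and zeros are enforced, shrinking the effect of noise on the same least-squares prediction.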
The Warburg effect describes the preference of highly proliferating cells (such as cancer cells) for aerobic glycolysis and lactate production despite oxygen availability. In a recent paper, Jaiswal and Singh (2024) proposed that this behavior arises from a negative feedback loop linking cytoplasmic NADH levels and cell proliferation. Their model integrates glycolysis, oxidative phosphorylation, and pyruvate-to-lactate conversion to explain how the NADH/NAD+ ratio governs proliferation and quiescence. Here, we present a qualitative behavior analysis, showing how quiescent and non-quiescent equilibria arise according to the model parameters. The corresponding bifurcation diagrams provide new biological insights into cellular behavior and pave the way for further investigation of the cellular machinery underlying the Warburg effect.
Title: "Qualitative behavior analysis of a model underlying the Warburg effect", by Pasquale Palumbo, Susanna Brotti, Raghvendra Singh (IFAC Journal of Systems and Control, vol. 35, Article 100387; DOI: 10.1016/j.ifacsc.2026.100387; Pub Date: 2026-03-01)
Pub Date: 2026-03-01 | Epub Date: 2025-12-15 | DOI: 10.1016/j.ifacsc.2025.100354
Shuvo Dev, Mehedi Hassan, Naruttam Kumar Roy, Rabiul Islam
This study examines the design of a resilient control strategy for an IEEE 8-bus power system with renewable integration, using the Linear Quadratic Regulator (LQR), Linear Quadratic Gaussian (LQG), Sector-Bounded LQG (SBLQG), and Norm-Bounded LQG (NBLQG) techniques. The major goal of this study is to increase the power system’s resilience to model errors while preserving acceptable performance indicators. To evaluate the efficacy of each control strategy, a thorough comparison is carried out using pole-zero plots, Bode plots, time-domain specifications, robustness analysis, and statistical analysis. According to the pole-zero analysis, all control strategies place poles in the left half-plane; the SBLQG and NBLQG strategies yield the most leftward pole placements, indicating better stability. The Bode plot analysis shows that the gain margin and phase margin consistently rise with each approach, while the gain and phase crossover frequencies also increase slightly. The controllers’ enhanced robustness is evident in gain margin increases of 9.63% for LQG, 55.29% for SBLQG, and 86.79% for NBLQG relative to LQR. In terms of time-domain performance, rise time, peak time, and settling time all decrease, while the percentage overshoot progressively diminishes in the order LQR, LQG, SBLQG, NBLQG. Relative to LQR, settling time decreases by 24.73% for LQG, 93.23% for SBLQG, and 98.06% for NBLQG, further highlighting their enhanced performance. The largest negative Cohen’s d values are observed in the comparison between LQR and NBLQG, with −24.4618 for gain margin (GM) and −18.9984 for phase margin (PM), indicating a significant performance disparity. The results show that NBLQG is the most robust control strategy, with a modest settling time.
This research contributes to the field by illustrating how robust control methods, particularly NBLQG, effectively mitigate the impact of model uncertainties, thereby enhancing power system stability and performance in the presence of inaccuracies.
Title: "Optimal and robust control techniques for stability enhancement in a renewable integrated power system" (IFAC Journal of Systems and Control, vol. 35, Article 100354)
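To make the gain-margin comparisons above concrete, the sketch below reads a gain margin off a loop transfer function numerically. The transfer function L(s) = 1/(s(s+1)(s+2)) is a textbook stand-in of our choosing, not the paper's power-system model.

```python
import numpy as np

# Illustrative sketch: computing a gain margin numerically from a loop
# transfer function. L(s) = 1/(s(s+1)(s+2)) is a textbook example, not
# the IEEE 8-bus model studied in the paper.
w = np.linspace(0.01, 10.0, 200001)          # frequency grid, rad/s
s = 1j * w
Ljw = 1.0 / (s * (s + 1.0) * (s + 2.0))      # loop frequency response
phase = np.unwrap(np.angle(Ljw))             # continuous phase curve

# Phase crossover: first frequency where the phase reaches -180 degrees.
idx = np.argmax(phase <= -np.pi)
gm = 1.0 / abs(Ljw[idx])                     # gain margin (absolute)
gm_db = 20.0 * np.log10(gm)                  # gain margin in dB
print(round(gm, 2), round(gm_db, 2))         # analytically: 6.0, 15.56 dB
```

For this loop the phase crossover sits at ω = √2 rad/s, where |L| = 1/6, so the gain margin is 6 (about 15.56 dB); percentage comparisons like those reported in the study are ratios of such numbers across controllers.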
Pub Date: 2026-03-01 | Epub Date: 2026-01-29 | DOI: 10.1016/j.ifacsc.2026.100371
Sarasij Banerjee, Eric Hekler, Daniel E. Rivera
This paper presents a methodology for optimizing “plant-friendly” multisine input signals to identify nonlinear dynamic systems under time-domain input and output constraints, without requiring a global parametric model a priori. The goal is to construct an informative dataset for open-loop, data-driven identification while satisfying operational requirements. A weighted optimization framework is proposed to minimize the output crest factor predicted by a data-driven model, with penalties for violating input and output constraints. Model-on-Demand (MoD) estimation is employed to simulate outputs using prior data, effectively predicting nonlinear responses without global modeling. This MoD-based formulation enables evaluating output crest factors and output-constraint compliance with modest modeling effort. The resulting non-smooth, non-convex problem is solved using the Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm, which perturbs the multisine phase vector to achieve the desired performance efficiently. The method also supports the concept of identification test monitoring, as illustrated in this paper. Within the identification test loops, each optimized excitation is applied to gather new estimation data, iteratively refining the MoD-based output predictions and improving constraint satisfaction. The method’s effectiveness is demonstrated through a safety-critical case study on a Susceptible-Infected-Recovered (SIR) epidemiological network, showing that the optimized excitation yields highly informative data for identification while keeping the infection spread within safe limits.
Title: "Multisine input signal design for constrained, “plant-friendly” system identification of nonlinear systems" (IFAC Journal of Systems and Control, vol. 35, Article 100371)
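The SPSA phase-optimization step can be sketched minimally as follows. As a deliberate simplification, the sketch minimizes the crest factor of the input multisine itself; the paper instead minimizes an output crest factor predicted by the Model-on-Demand estimator, which is omitted here, and all gains and iteration counts are our assumptions.

```python
import numpy as np

# Illustrative sketch: SPSA over multisine phases, minimizing the INPUT
# crest factor (the paper minimizes a MoD-predicted OUTPUT crest factor).
rng = np.random.default_rng(0)
N, ns = 10, 512                              # harmonics, samples per period
t = np.arange(ns) / ns

def crest(phases):
    sig = sum(np.cos(2 * np.pi * (k + 1) * t + phases[k]) for k in range(N))
    return np.max(np.abs(sig)) / np.sqrt(np.mean(sig ** 2))

phases = np.zeros(N)                         # worst case: all cosines align
init_cf = crest(phases)                      # 10/sqrt(5) ~ 4.47
best, best_cf = phases.copy(), init_cf
for k in range(300):                         # SPSA iterations
    a_k, c_k = 0.2 / (k + 1) ** 0.602, 0.1 / (k + 1) ** 0.101
    delta = rng.choice([-1.0, 1.0], size=N)  # simultaneous perturbation
    grad = delta * (crest(phases + c_k * delta)
                    - crest(phases - c_k * delta)) / (2 * c_k)
    phases -= a_k * grad                     # two evaluations per gradient
    if crest(phases) < best_cf:              # keep best-so-far phase vector
        best, best_cf = phases.copy(), crest(phases)
print(round(init_cf, 2), round(best_cf, 2))
```

Only two cost evaluations per iteration are needed regardless of the phase-vector dimension, which is what makes SPSA attractive for the non-smooth, non-convex crest-factor objective.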
Pub Date: 2026-03-01 | Epub Date: 2026-01-23 | DOI: 10.1016/j.ifacsc.2026.100367
Mohamed Arnouss, Yezekael Hayel, Karam Allali
Economic savings achieved through targeted isolation avoid additional disease burdens and effectively address the disease-economy trade-offs in epidemic control. In this study, we use phase-space analysis to derive the explicit solution of the optimal control problem that minimizes the infection peak under a budget limitation. The resulting optimal policy is an adaptive control in which the isolation rate dynamically adjusts according to the current epidemic state. We show that the targeted isolation control policy achieves the same infection peak as transmission reduction policies under equivalent budgets, while avoiding broad socio-economic disruptions. Additionally, we show through numerical simulations that the control resolves the epidemic faster and reduces total infections. This demonstrates that targeted isolation can strike a balance between public health and economic stability, offering actionable insights for public health decisions.
Title: "Adaptive optimal resource allocation for isolation interventions: Flattening the curve" (IFAC Journal of Systems and Control, vol. 35, Article 100367)
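The peak-flattening effect of isolation can be illustrated with a minimal SIR simulation in which isolation simply adds a removal term u·I. All parameters and the constant isolation rate below are illustrative; the paper derives a state-dependent optimal policy rather than the fixed rate used here.

```python
# Illustrative sketch: SIR dynamics where isolation adds removal at rate u,
# compared against the uncontrolled epidemic. Parameters are toy values;
# the paper's optimal policy adapts u to the epidemic state.
def sir_peak(u, beta=0.4, gamma=0.1, days=300, dt=0.01):
    s, i = 0.99, 0.01                         # susceptible, infected fractions
    peak = i
    for _ in range(int(days / dt)):           # forward-Euler integration
        ds = -beta * s * i
        di = beta * s * i - (gamma + u) * i   # isolation boosts removal
        s, i = s + dt * ds, i + dt * di
        peak = max(peak, i)
    return peak

peak_free = sir_peak(0.0)     # no isolation
peak_iso = sir_peak(0.1)      # isolate at rate 0.1 per day
print(round(peak_free, 3), round(peak_iso, 3))
```

Doubling the effective removal rate roughly halves the basic reproduction number, which is why even a modest isolation effort visibly flattens the infection peak.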
Pub Date: 2026-03-01 · Epub Date: 2025-12-03 · DOI: 10.1016/j.ifacsc.2025.100351
Lui Holder-Pearson , J. Geoffrey Chase , Yeong Shiong Chiew , Geoffrey Shaw , Bernard Lambermont , Thomas Desaive
Acute respiratory distress and respiratory disease often require that patients be treated with mechanical ventilation (MV), placing extreme demand on intensive care units (ICUs). This burden can be unsustainably high in some periods, particularly during pandemics such as COVID-19. In low-resource regions and countries, the result can be inequity, a problem addressable via simple technological innovation. Ventilator sharing across two or more patients has been proposed but strongly discouraged because it could not accommodate different patient needs and hindered individual patient monitoring. However, all of these approaches ventilated patients in parallel, with every patient breathing at the same time.
A simple switching valve enables series breathing, one patient after the other. External, low-cost, and reusable sensor arrays enable individual monitoring, while low-cost adjustable pressure-reducing valves allow pressure to be fully customised across two patients. This study uses an experimental test lung to demonstrate and validate the ability of such a system to balance ventilation across two simulated patients with very different lung compliances.
A method is presented to achieve equal tidal volumes in two lungs with differing compliances of 0.10 L cmH2O−1 and 0.05 L cmH2O−1. This goal requires driving and end-expiratory pressures of at least 20 cmH2O, which are relatively high clinically. The approach prioritises safety, ensuring the more compliant lung is not over-ventilated during the process, reducing the risk of ventilator-induced lung injury (VILI). The system is compatible with different ventilators and can be cost-effectively fabricated in low-resource settings. Strategies addressing key safety concerns, such as cross-contamination, sterilisation, and ventilator configuration, are also presented.
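The need for different driving pressures follows from a single-compartment lung model, where tidal volume is compliance times driving pressure (V_T = C · ΔP). The sketch below uses the two compliances from the study; the linear model and the target tidal volume of 0.5 L are illustrative assumptions, not values from the paper.

```python
# Back-of-envelope check: equal tidal volumes in two lungs of different
# compliance require different driving pressures (single-compartment model).

def driving_pressure(v_tidal_L, compliance_L_per_cmH2O):
    """Driving pressure (cmH2O) needed for a target tidal volume (L)."""
    return v_tidal_L / compliance_L_per_cmH2O

C_high, C_low = 0.10, 0.05   # L/cmH2O, the two compliances in the study
V_target = 0.5               # L, an assumed adult tidal volume

dP_high = driving_pressure(V_target, C_high)   # more compliant lung
dP_low = driving_pressure(V_target, C_low)     # stiffer lung needs 2x pressure
print(f"driving pressure, compliant lung: {dP_high:.1f} cmH2O")
print(f"driving pressure, stiff lung:     {dP_low:.1f} cmH2O")
```

With a 2:1 compliance ratio, the stiffer lung needs twice the driving pressure of the compliant one, which is why per-patient pressure-reducing valves, rather than a single shared pressure setting, are essential to equalise tidal volumes.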
{"title":"Experimental Validation of the ACTIV Multi-Patient Mechanical Ventilation System","authors":"Lui Holder-Pearson , J. Geoffrey Chase , Yeong Shiong Chiew , Geoffrey Shaw , Bernard Lambermont , Thomas Desaive","doi":"10.1016/j.ifacsc.2025.100351","DOIUrl":"10.1016/j.ifacsc.2025.100351","url":null,"abstract":"<div><div>Acute respiratory distress and respiratory disease often require patients be treated with mechanical ventilation (MV) and thus place extreme demand on intensive care units (ICUs). This burden can be unsustainably high in some periods, and particularly during pandemics, such as Covid-19. In low resource regions and countries, the result can be inequity, a problem addressable via simple technological innovation. Ventilator sharing over two or more patients has been proposed but strongly discouraged because it could not treat different patient needs and hindered individual patient monitoring. However, all these approaches ventilated patients in-parallel, each breathing at the same time.</div><div>A simple switching valve enables series breathing, one patient after the other. External, low-cost, and reusable sensor arrays enable individual monitoring, while low-cost adjustable pressure reducing valves allow pressure to be fully customised across two patients. This study uses an experimental test lung to experimentally demonstrate and validate the ability of such a system to balance ventilation across 2 simulated patients with very different lung compliances.</div><div>A method is presented to achieve equal tidal volumes in two lungs with differing compliances of 0.10 L cmH <sub>2</sub>O<sup>−1</sup> and 0.05 L cmH <sub>2</sub>O<sup>−1</sup>. This goal requires driving and end-expiratory pressures of at least 20 cmH <sub>2</sub>O, which are clinically relatively high. The approach prioritises safety, ensuring more compliant lung is not over-ventilated during the process, reducing the risk of ventilator-induced lung injury (VILI). The system is compatible with different ventilators, and cost-effectively fabricated in low-resource settings. Strategies addressing key safety concerns, such as cross-contamination, sterilisation, and ventilator configuration, are also presented.</div></div>","PeriodicalId":29926,"journal":{"name":"IFAC Journal of Systems and Control","volume":"35 ","pages":"Article 100351"},"PeriodicalIF":1.8,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145697832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}