Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem, for which several past works have proposed convex relaxations as a promising alternative. However, even for reasonably sized neural networks, these relaxations are not tractable, and so must be replaced by even weaker relaxations in practice. In this work, we propose a novel operator splitting method that can directly solve a convex relaxation of the problem to high accuracy, by splitting it into smaller sub-problems that often have analytical solutions. The method is modular, scales to very large problem instances, and comprises operations that are amenable to fast parallelization with GPU acceleration. We demonstrate our method in bounding the worst-case performance of large convolutional networks in image classification and reinforcement learning settings, and in reachability analysis of neural network dynamical systems.
DeepSplit: Scalable Verification of Deep Neural Networks via Operator Splitting. Shaoru Chen; Eric Wong; J. Zico Kolter; Mahyar Fazlyab. IEEE Open Journal of Control Systems, vol. 1, pp. 126–140, 2022-06-30. DOI: 10.1109/OJCSYS.2022.3187429
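As a hedged illustration of the splitting idea (a generic ADMM sketch, not the DeepSplit algorithm itself), consider box-constrained least squares: the problem splits into two sub-problems whose updates are both analytical, a pre-factored linear solve and a clip.

```python
import numpy as np

def admm_box_ls(A, b, lo, hi, rho=1.0, iters=1000):
    """Generic ADMM sketch: minimize ||Ax - b||^2 subject to lo <= x <= hi.
    Split as f(x) = ||Ax - b||^2 plus the box indicator g(z), with the
    consensus constraint x = z. Each sub-problem has an analytical
    solution: a linear solve for x and a projection (clip) for z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = A.T @ A + rho * np.eye(n)      # x-update system matrix
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # analytic x-update
        z = np.clip(x + u, lo, hi)                   # analytic projection
        u += x - z                                   # scaled dual update
    return z
```

The same pattern, alternating analytical sub-problem solutions with a cheap dual update, is what lets operator splitting solvers scale to very large problem instances.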
Pub Date: 2022-06-28. DOI: 10.1109/OJCSYS.2022.3186838
Ján Drgoňa;Aaron Tuor;Soumya Vasisht;Draguna Vrabie
In this paper, we provide sufficient conditions for dissipativity and local asymptotic stability of discrete-time dynamical systems parametrized by deep neural networks. We leverage the representation of neural networks as pointwise affine maps, thus exposing their local linear operators and making them accessible to classical system analysis and design methods. This allows us to “crack open the black box” of the neural dynamical system’s behavior by evaluating its dissipativity and estimating its stationary points and state-space partitioning. We relate the norms of these local linear operators to the energy stored in the dissipative system, with supply rates represented by their aggregate bias terms. Empirically, we analyze the variance in dynamical behavior and eigenvalue spectra of these local linear operators with varying weight factorizations, activation functions, bias terms, and depths.
Dissipative Deep Neural Dynamical Systems. Ján Drgoňa; Aaron Tuor; Soumya Vasisht; Draguna Vrabie. IEEE Open Journal of Control Systems, vol. 1, pp. 100–112, 2022-06-28. DOI: 10.1109/OJCSYS.2022.3186838
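A minimal numpy sketch of the pointwise affine representation, assuming ReLU activations (a simplification of the paper's more general setting): on the activation region containing a point x, the network equals a single affine map f(y) = A y + b, and A is the local linear operator whose norm and spectrum the analysis examines.

```python
import numpy as np

def relu_net(weights, biases, x):
    """Forward pass of a fully connected ReLU network."""
    h = x
    for W, c in zip(weights[:-1], biases[:-1]):
        h = np.maximum(W @ h + c, 0.0)
    return weights[-1] @ h + biases[-1]

def local_affine(weights, biases, x):
    """Local linear operator A and offset b at x: each ReLU is frozen
    to the 0/1 activation pattern it takes at x, so the network
    collapses to the affine map f(y) = A y + b on that region."""
    A, b, h = np.eye(x.shape[0]), np.zeros(x.shape[0]), x
    for W, c in zip(weights[:-1], biases[:-1]):
        pre = W @ h + c
        m = (pre > 0).astype(x.dtype)   # activation pattern at x
        A = m[:, None] * (W @ A)        # D W A, with D = diag(m)
        b = m * (W @ b + c)             # D (W b + c)
        h = m * pre
    return weights[-1] @ A, weights[-1] @ b + biases[-1]
```

At a stationary point of a neural dynamical system, the eigenvalues of this local A are exactly what a classical local stability test would examine.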
In this article, a discrete fractional order adaptive law (DFOAL) is designed based on the Caputo fractional difference to perform parameter estimation of structured uncertainties. A rigorous stability analysis of the DFOAL parameter estimation method is provided. The DFOAL is then modified to improve estimator performance: under certain conditions, the modified law provides asymptotic convergence to the true parameter values even when the regressor is not persistently exciting. A method that allows for practical implementation of the DFOAL and the modified DFOAL is developed. Finally, the modified DFOAL is used to identify the plant parameters in an indirect adaptive control law for a class of nonlinear discrete-time systems with structured uncertainty.
A Discrete Fractional Order Adaptive Law for Parameter Estimation and Adaptive Control. Mohamed Aburakhis; Raúl Ordóñez; Ouboti Djaneye-Boundjou. IEEE Open Journal of Control Systems, vol. 1, pp. 113–125, 2022-06-21. DOI: 10.1109/OJCSYS.2022.3185002
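The paper's Caputo fractional difference is more technical; as an illustrative (and simpler) relative, the Grünwald-Letnikov fractional difference below shows the key feature fractional order laws exploit: an order-alpha difference weights the entire past of a signal, and reduces to the ordinary first difference at alpha = 1.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights (-1)^k * binom(alpha, k), computed
    with the recurrence c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def frac_diff(x, alpha):
    """Order-alpha fractional difference of the sequence x: each output
    sample is a weighted sum over the whole past of x."""
    c = gl_weights(alpha, len(x))
    return np.array([c[:n + 1] @ x[n::-1] for n in range(len(x))])
```

Note that for alpha = 1 the weights collapse to (1, -1, 0, 0, ...), recovering the classical difference, while non-integer orders give slowly decaying memory.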
Pub Date: 2022-06-17. DOI: 10.1109/OJCSYS.2022.3181584
Abed AlRahman Al Makdah;Vishaal Krishnan;Fabio Pasqualetti
In this work, we propose a framework in which we use a Lipschitz-constrained loss minimization scheme to learn feedback control policies with guarantees on closed-loop stability, adversarial robustness, and generalization. These policies are learned directly from expert demonstrations, contained in a dataset of state-control input pairs, without any prior knowledge of the task and system model. Our analysis exploits the Lipschitz property of the learned policies to obtain closed-loop guarantees on stability, adversarial robustness, and generalization over scenarios unexplored by the expert. In particular, first, we establish robust closed-loop stability under the learned control policy, where we provide guarantees that the closed-loop trajectory under the learned policy stays within a bounded region around the expert trajectory and converges asymptotically to a bounded region around the origin. Second, we derive bounds on the closed-loop regret with respect to the expert policy and on the deterioration of the closed-loop performance under bounded (adversarial) disturbances to the state measurements. These bounds provide certificates for closed-loop performance and adversarial robustness for learned policies. Third, we derive a (probabilistic) bound on generalization error for the learned policies. Numerical results validate our analysis and demonstrate the effectiveness of our robust feedback policy learning framework. Finally, our results point to a tradeoff between nominal closed-loop performance and adversarial robustness, suggesting that improvements in nominal closed-loop performance can only be made at the expense of robustness to adversarial perturbations.
Learning Lipschitz Feedback Policies From Expert Demonstrations: Closed-Loop Guarantees, Robustness and Generalization. Abed AlRahman Al Makdah; Vishaal Krishnan; Fabio Pasqualetti. IEEE Open Journal of Control Systems, vol. 1, pp. 85–99, 2022-06-17. DOI: 10.1109/OJCSYS.2022.3181584
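One building block such a scheme needs is a way to impose, and to bound, the Lipschitz constant of a learned policy. A hedged sketch (per-layer spectral-norm projection, not the paper's loss-minimization scheme):

```python
import numpy as np

def spectral_clip(W, budget):
    """Project a weight matrix onto the spectral-norm ball of the given
    radius: one simple way to enforce a per-layer Lipschitz budget."""
    s = np.linalg.norm(W, 2)            # largest singular value
    return W if s <= budget else W * (budget / s)

def lipschitz_bound(weights):
    """Product of layer spectral norms: a (generally loose) upper bound
    on the Lipschitz constant of a ReLU network with these weights,
    since ReLU itself is 1-Lipschitz."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))
```

If every layer of a depth-d policy network is clipped to a budget of L**(1/d), the product bound certifies the overall policy is L-Lipschitz, the kind of certificate a closed-loop analysis can then consume.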
Pub Date: 2022-06-08. DOI: 10.1109/OJCSYS.2022.3181111
Michael R. P. Ragazzon;Saverio Messineo;Jan Tommy Gravdahl;David M. Harcombe;Michael G. Ruppert
Effective demodulation of amplitude and phase is a requirement in a wide array of applications. Recent efforts have increased demodulation performance; in particular, the Lyapunov demodulator allows bandwidths up to the carrier frequency of the signal. However, being inherently restricted to first-order filtering of the input signal, it is highly sensitive to frequency components outside its passband region. This makes it unsuitable for certain applications such as multifrequency atomic force microscopy (AFM). In this article, the structure of the Lyapunov demodulator is transformed to an equivalent form and generalized by exploiting the internal model principle. The resulting generalized Lyapunov demodulator structure allows for arbitrary filtering order and is easy to implement, requiring only a bandpass filter, a single integrator, and two nonlinear transformations. The generalized Lyapunov demodulator is implemented experimentally on a field-programmable gate array (FPGA). It is then used for imaging in an AFM and benchmarked against the standard Lyapunov demodulator and the widely used lock-in amplifier. The lock-in amplifier achieves strong noise attenuation and off-mode rejection at low bandwidths, whereas the standard Lyapunov demodulator is shown to be effective at high bandwidths. We demonstrate that the proposed demodulator combines the best of the two state-of-the-art demodulators, achieving high bandwidth, large off-mode rejection, and excellent noise attenuation simultaneously.
The Generalized Lyapunov Demodulator: High-Bandwidth, Low-Noise Amplitude and Phase Estimation. Michael R. P. Ragazzon; Saverio Messineo; Jan Tommy Gravdahl; David M. Harcombe; Michael G. Ruppert. IEEE Open Journal of Control Systems, vol. 1, pp. 69–84, 2022-06-08. DOI: 10.1109/OJCSYS.2022.3181111
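For reference, the lock-in amplifier baseline mentioned above is easy to sketch (a textbook implementation, not the paper's Lyapunov demodulator): mix the input with quadrature references at the carrier frequency and average away the double-frequency terms.

```python
import numpy as np

def lockin_demod(y, fc, fs):
    """Textbook lock-in amplitude/phase estimate for a tone
    y[n] = A*cos(2*pi*fc*n/fs + phi), averaging the mixed signals
    over the full record (assumed to span whole carrier periods)."""
    t = np.arange(len(y)) / fs
    i_mix = 2.0 * y * np.cos(2.0 * np.pi * fc * t)   # A*cos(phi) + 2fc term
    q_mix = -2.0 * y * np.sin(2.0 * np.pi * fc * t)  # A*sin(phi) + 2fc term
    amp = np.hypot(i_mix.mean(), q_mix.mean())       # averaging kills 2fc term
    phase = np.arctan2(q_mix.mean(), i_mix.mean())
    return amp, phase
```

The averaging window is what limits the lock-in's bandwidth: a shorter window tracks faster amplitude changes but rejects less of the double-frequency component and out-of-band noise, the tradeoff the benchmark in the paper quantifies.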
Pub Date: 2022-06-03. DOI: 10.1109/OJCSYS.2022.3180002
Joey Reinders;Bram Hunnekens;Nathan van de Wouw;Tom Oomen
Mechanical ventilators facilitate breathing for patients who cannot breathe (sufficiently) on their own. The aim of this paper is to estimate relevant lung parameters and the spontaneous breathing effort of a ventilated patient, which help keep track of the patient’s clinical condition. A key challenge is that estimation using the available sensors for typical model structures results in a non-identifiable parametrization. A sparse optimization algorithm to estimate the lung parameters and the patient effort, without interfering with the patient’s treatment, using an $\ell_1$