{"title":"级联抛物型多项式方程边界稳定的神经算子近似值","authors":"Kaijing Lv, Junmin Wang, Yuandong Cao","doi":"10.1002/acs.3902","DOIUrl":null,"url":null,"abstract":"This article proposes a novel method to accelerate the boundary feedback control design of cascaded parabolic difference equations (PDEs) through DeepONet. The backstepping method has been widely used in boundary control problems of PDE systems, but solving the backstepping kernel function can be time‐consuming. To address this, a neural operator (NO) learning scheme is leveraged for accelerating the control design of cascaded parabolic PDEs. DeepONet, a class of deep neural networks designed for approximating nonlinear operators, has shown potential for approximating PDE backstepping designs in recent studies. Specifically, we focus on approximating gain kernel PDEs for two cascaded parabolic PDEs. We utilize neural operators to map only two kernel functions, while the other two are computed using the analytical solution, thus simplifying the training process. We establish the continuity and boundedness of the kernels, and demonstrate the existence of arbitrarily close DeepONet approximations to the kernel PDEs. Furthermore, we demonstrate that the DeepONet approximation gain kernels ensure stability when replacing the exact backstepping gain kernels. Notably, DeepONet operator exhibits computation speeds two orders of magnitude faster than PDE solvers for such gain functions, and their theoretically proven stabilizing capability is validated through simulations.","PeriodicalId":50347,"journal":{"name":"International Journal of Adaptive Control and Signal Processing","volume":"20 1","pages":""},"PeriodicalIF":3.9000,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Neural Operator Approximations for Boundary Stabilization of Cascaded Parabolic PDEs\",\"authors\":\"Kaijing Lv, Junmin Wang, Yuandong Cao\",\"doi\":\"10.1002/acs.3902\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This article proposes a novel method to accelerate the boundary feedback control design of cascaded parabolic difference equations (PDEs) through DeepONet. The backstepping method has been widely used in boundary control problems of PDE systems, but solving the backstepping kernel function can be time‐consuming. To address this, a neural operator (NO) learning scheme is leveraged for accelerating the control design of cascaded parabolic PDEs. DeepONet, a class of deep neural networks designed for approximating nonlinear operators, has shown potential for approximating PDE backstepping designs in recent studies. Specifically, we focus on approximating gain kernel PDEs for two cascaded parabolic PDEs. We utilize neural operators to map only two kernel functions, while the other two are computed using the analytical solution, thus simplifying the training process. We establish the continuity and boundedness of the kernels, and demonstrate the existence of arbitrarily close DeepONet approximations to the kernel PDEs. Furthermore, we demonstrate that the DeepONet approximation gain kernels ensure stability when replacing the exact backstepping gain kernels. 
Notably, DeepONet operator exhibits computation speeds two orders of magnitude faster than PDE solvers for such gain functions, and their theoretically proven stabilizing capability is validated through simulations.\",\"PeriodicalId\":50347,\"journal\":{\"name\":\"International Journal of Adaptive Control and Signal Processing\",\"volume\":\"20 1\",\"pages\":\"\"},\"PeriodicalIF\":3.9000,\"publicationDate\":\"2024-09-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Adaptive Control and Signal Processing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1002/acs.3902\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Adaptive Control and Signal Processing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1002/acs.3902","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Neural Operator Approximations for Boundary Stabilization of Cascaded Parabolic PDEs
This article proposes a novel method to accelerate the boundary feedback control design of cascaded parabolic partial differential equations (PDEs) through DeepONet. The backstepping method has been widely used in boundary control problems of PDE systems, but solving the backstepping kernel function can be time-consuming. To address this, a neural operator (NO) learning scheme is leveraged to accelerate the control design of cascaded parabolic PDEs. DeepONet, a class of deep neural networks designed for approximating nonlinear operators, has shown potential for approximating PDE backstepping designs in recent studies. Specifically, we focus on approximating the gain kernel PDEs for two cascaded parabolic PDEs. We use neural operators to map only two kernel functions, while the other two are computed from the analytical solution, thus simplifying the training process. We establish the continuity and boundedness of the kernels, and demonstrate the existence of arbitrarily close DeepONet approximations to the kernel PDEs. Furthermore, we demonstrate that the DeepONet-approximated gain kernels preserve stability when they replace the exact backstepping gain kernels. Notably, the DeepONet operator computes such gain functions two orders of magnitude faster than PDE solvers, and its theoretically proven stabilizing capability is validated through simulations.
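The abstract does not give implementation details, but a DeepONet generally pairs a branch network (encoding the input function, e.g. sampled plant coefficients) with a trunk network (encoding the evaluation point in the kernel's domain), combining the two embeddings by an inner product. The following is a minimal sketch of such an operator network for kernel approximation, assuming PyTorch; the class and variable names, network widths, and 1-D sensor discretization are illustrative assumptions, not the authors' implementation.

```python
# Minimal DeepONet-style sketch for learning a backstepping gain kernel.
# All names and sizes are hypothetical placeholders for illustration.
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Simple fully connected network used for both branch and trunk."""
    def __init__(self, sizes):
        super().__init__()
        layers = []
        for i in range(len(sizes) - 1):
            layers.append(nn.Linear(sizes[i], sizes[i + 1]))
            if i < len(sizes) - 2:
                layers.append(nn.Tanh())
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class KernelDeepONet(nn.Module):
    """Maps sampled plant coefficients (branch input) and a point (x, y)
    in the kernel's domain (trunk input) to an approximate kernel value k(x, y)."""
    def __init__(self, n_sensors, width=128, p=64):
        super().__init__()
        self.branch = MLP([n_sensors, width, width, p])  # encodes the coefficient function
        self.trunk = MLP([2, width, width, p])           # encodes the evaluation point
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, coeff_samples, xy):
        # coeff_samples: (batch, n_sensors), xy: (batch, 2)
        b = self.branch(coeff_samples)
        t = self.trunk(xy)
        # Dot product of branch and trunk embeddings plus a learned bias.
        return (b * t).sum(dim=-1, keepdim=True) + self.bias

# Supervised training step against kernels precomputed by a PDE solver (placeholder data).
model = KernelDeepONet(n_sensors=101)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
coeffs = torch.randn(32, 101)   # placeholder: sampled reaction coefficients
points = torch.rand(32, 2)      # placeholder: (x, y) evaluation points
targets = torch.randn(32, 1)    # placeholder: solver-computed kernel values
loss = nn.functional.mse_loss(model(coeffs, points), targets)
loss.backward()
opt.step()
```

Once trained offline on solver-generated kernel data, such a network can evaluate the gain kernel for a new plant coefficient in a single forward pass, which is the source of the speedup reported in the abstract.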
Journal Introduction:
The International Journal of Adaptive Control and Signal Processing is concerned with the design, synthesis and application of estimators or controllers where adaptive features are needed to cope with uncertainties. Papers on signal processing should also have some relevance to adaptive systems. The journal's focus is on model-based control design approaches rather than heuristic or rule-based control design methods. All papers are expected to include significant novel material.
Both the theory and application of adaptive systems and system identification are areas of interest. Papers on applications can include problems in the implementation of algorithms for real-time signal processing and control. The stability, convergence, robustness and numerical aspects of adaptive algorithms are also suitable topics. The related subjects of controller tuning, filtering, networks and switching theory are also of interest. Principal areas to be addressed include:
Auto-Tuning, Self-Tuning and Model Reference Adaptive Controllers
Nonlinear, Robust and Intelligent Adaptive Controllers
Linear and Nonlinear Multivariable System Identification and Estimation
Identification of Linear Parameter Varying, Distributed and Hybrid Systems
Multiple Model Adaptive Control
Adaptive Signal Processing Theory and Algorithms
Adaptation in Multi-Agent Systems
Condition Monitoring Systems
Fault Detection and Isolation Methods
Fault-Tolerant Control (system supervision and diagnosis)
Learning Systems and Adaptive Modelling
Real Time Algorithms for Adaptive Signal Processing and Control
Adaptive Signal Processing and Control Applications
Adaptive Cloud Architectures and Networking
Adaptive Mechanisms for Internet of Things
Adaptive Sliding Mode Control.