Pub Date: 2024-08-21 | DOI: 10.1007/s10957-024-02488-1 | Journal of Optimization Theory and Applications
Stochastic Augmented Lagrangian Method in Riemannian Shape Manifolds
Caroline Geiersbach, Tim Suchan, Kathrin Welker
In this paper, we present a stochastic augmented Lagrangian approach on (possibly infinite-dimensional) Riemannian manifolds for solving stochastic optimization problems with a finite number of deterministic constraints. We investigate the convergence of the method, which combines stochastic approximation with random stopping and an iterative procedure for updating the Lagrange multipliers. The algorithm is applied to a multi-shape optimization problem with geometric constraints and demonstrated numerically.
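The multiplier-update loop of such a method can be illustrated in a finite-dimensional Euclidean toy setting. This is only a sketch: the paper works on Riemannian shape manifolds with a random stopping rule, which this example omits, and the test problem, function names, and parameters below are all invented for illustration.

```python
import numpy as np

def stochastic_augmented_lagrangian(sample_noise, g, grad_g, x0,
                                    n_outer=20, n_inner=200,
                                    step=0.05, mu=10.0, seed=0):
    """Toy Euclidean sketch: min E[0.5*||x - xi||^2] s.t. g(x) = 0,
    where xi is mean-zero noise drawn by sample_noise(rng)."""
    rng = np.random.default_rng(seed)
    x, lam = np.asarray(x0, dtype=float), 0.0
    for _ in range(n_outer):
        for _ in range(n_inner):            # stochastic inner minimization
            xi = sample_noise(rng)
            grad_f = x - xi                 # stochastic gradient of the loss
            x = x - step * (grad_f + (lam + mu * g(x)) * grad_g(x))
        lam = lam + mu * g(x)               # augmented Lagrangian multiplier update
    return x, lam
```

For the constraint g(x) = x1 + x2 - 1 and mean-zero noise, the iterates settle near (0.5, 0.5) with multiplier near -0.5, which is the KKT solution of the toy problem.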
Pub Date: 2024-08-12 | DOI: 10.1007/s10957-024-02507-1 | Journal of Optimization Theory and Applications
Regularized and Structured Tensor Total Least Squares Methods with Applications
Feiyang Han, Yimin Wei, Pengpeng Xie
Total least squares (TLS), also known as errors-in-variables modeling in statistical analysis, is an effective method for solving linear equations when noise is present not only in the observation data but also in the mapping operator. Tikhonov regularization is widely used for ill-posed problems, and the structure of the mapping operator plays a crucial role in solving the TLS problem. Tensor operators offer advantages in characterizing certain models, which motivates building the corresponding theory for tensor TLS. This paper proposes tensor regularized TLS and structured tensor TLS methods for solving ill-conditioned and structured tensor equations, respectively, based on the tensor-tensor product. Properties of these approaches and algorithms for computing their solutions are presented and proved. Applications to image and video deblurring are explored, and numerical examples illustrate the effectiveness of our methods compared with existing ones.
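As a point of reference, the classical matrix TLS solution (the matrix analogue of the tensor setting studied here) is obtained from the SVD of the augmented matrix [A b]. The function below is the standard textbook construction, not the paper's tensor algorithm, and the example data are invented.

```python
import numpy as np

def tls_solve(A, b):
    """Classical matrix TLS: minimize ||[dA db]||_F s.t. (A+dA) x = b+db.
    The solution comes from the right singular vector of [A b]
    associated with the smallest singular value."""
    Z = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1]                      # smallest singular direction
    if abs(v[-1]) < 1e-12:
        raise ValueError("nongeneric problem: last component of v is zero")
    return -v[:-1] / v[-1]
```

On noise-free consistent data, [A b] is rank-deficient and TLS recovers the exact solution.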
Pub Date: 2024-08-12 | DOI: 10.1007/s10957-024-02503-5 | Journal of Optimization Theory and Applications
Continuous Equality Knapsack with Probit-Style Objectives
Jamie Fravel, Robert Hildebrand, Laurel Travis
We study continuous equality knapsack problems with uniformly separable, non-convex objective functions that are continuous, antisymmetric about a point, and have concave and convex regions. For example, this model captures a simple allocation problem in which the goal is to optimize an expected value whose objective is a sum of cumulative distribution functions of identically distributed normal random variables (i.e., a sum of inverse probit functions). We prove structural results for this model under general assumptions and provide two efficient optimization algorithms: one running in linear time and one running in a constant number of operations after preprocessing of the objective function.
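For intuition only: in the fully symmetric special case (identical normal-CDF terms, budget at least the sum of the inflection points), stationarity in the concave region forces equal surpluses, so the optimum is an equal split of the surplus. The sketch below assumes that special case; it is far simpler than the paper's general algorithms and all names are invented.

```python
import math

def phi_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def equal_surplus_allocation(m, B):
    """Symmetric special case: maximize sum_i Phi(x_i - m_i) subject to
    sum_i x_i = B, assuming B >= sum(m) so the optimum stays in the
    concave region x_i >= m_i; stationarity then forces equal surpluses."""
    t = (B - sum(m)) / len(m)
    if t < 0:
        raise ValueError("sketch assumes the budget covers all inflection points")
    return [mi + t for mi in m]
```

Any feasible reallocation of the surplus away from the equal split lowers the objective, since the CDF is concave to the right of its inflection point.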
Pub Date: 2024-08-06 | DOI: 10.1007/s10957-024-02498-z | Journal of Optimization Theory and Applications
Optimal Actuator Location of the Norm Optimal Controls for Degenerate Parabolic Equations
Yuanhang Liu, Weijia Wu, Donghui Yang
This paper investigates the optimal actuator location for achieving minimum-norm controls in the context of approximate controllability for degenerate parabolic equations. We propose a formulation of the optimization problem that encompasses both the actuator location and its associated minimum-norm control. Specifically, we transform the problem into a two-person zero-sum game, resulting in four equivalent formulations. Finally, we establish the crucial result that the solution to the relaxed optimization problem serves as an optimal actuator location for the classical problem.
Pub Date: 2024-08-05 | DOI: 10.1007/s10957-024-02504-4 | Journal of Optimization Theory and Applications
Most Iterations of Projections Converge
Daylen K. Thimm
Consider three closed linear subspaces \(C_1\), \(C_2\), and \(C_3\) of a Hilbert space \(H\) and the orthogonal projections \(P_1\), \(P_2\), and \(P_3\) onto them. Halperin showed that a point in \(C_1 \cap C_2 \cap C_3\) can be found by iteratively projecting any point \(x_0 \in H\) onto the three sets in a periodic fashion; the limit point is then the projection of \(x_0\) onto \(C_1 \cap C_2 \cap C_3\). A non-periodic projection order, however, may lead to a non-convergent sequence of projections, as shown by Kopecká, Müller, and Paszkiewicz. This raises the question of how many projection orders in \(\{1,2,3\}^{\mathbb{N}}\) are “well behaved” in the sense that they lead to a convergent sequence of projections. Melo, da Cruz Neto, and de Brito provided a necessary and sufficient condition under which the sequence converges and showed that the “well behaved” projection orders form a large subset in the sense of having full product measure. We show that the set of “well behaved” projection orders is also large from a topological viewpoint: it contains a dense \(G_\delta\) subset with respect to the product topology. Furthermore, we analyze why the proof for the measure-theoretic case cannot be directly adapted to the topological setting.
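Halperin's periodic case is easy to reproduce numerically. The sketch below (with invented example data) cycles projections over three planes in R^3 that share only the x-axis; the iterates converge to the projection of the starting point onto that one-dimensional intersection.

```python
import numpy as np

def proj(Q):
    """Orthogonal projection onto the column span of the orthonormal matrix Q."""
    return lambda v: Q @ (Q.T @ v)

e1, e2, e3 = np.eye(3)
# three planes through the x-axis; their intersection is span{e1}
d1 = e2
d2 = 0.5 * e2 + (np.sqrt(3) / 2) * e3
d3 = -0.5 * e2 + (np.sqrt(3) / 2) * e3
P1, P2, P3 = (proj(np.column_stack([e1, d])) for d in (d1, d2, d3))

x0 = np.array([1.0, 2.0, 3.0])
x = x0.copy()
for _ in range(100):                 # periodic order 1, 2, 3, 1, 2, 3, ...
    for P in (P1, P2, P3):
        x = P(x)
# by Halperin's theorem the limit is the projection of x0 onto span{e1}
```

Each projection shrinks the component orthogonal to the intersection by a fixed factor here (the planes meet at 60 degrees), so convergence is geometric.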
Pub Date: 2024-08-04 | DOI: 10.1007/s10957-024-02491-6 | Journal of Optimization Theory and Applications
Specifying and Solving Robust Empirical Risk Minimization Problems Using CVXPY
Eric Luxenberg, Dhruv Malik, Yuanzhi Li, Aarti Singh, Stephen Boyd
We consider robust empirical risk minimization (ERM), where model parameters are chosen to minimize the worst-case empirical loss when each data point varies over a given convex uncertainty set. In some simple cases, such problems can be expressed in analytical form. In general, the problem can be made tractable via dualization, which turns a min-max problem into a min-min problem; however, dualization requires expertise and is tedious and error-prone. We demonstrate how CVXPY can be used to automate this dualization procedure in a user-friendly manner. Our framework allows practitioners to specify and solve robust ERM problems with a general class of convex losses, capturing many standard regression and classification problems. Users can easily specify any complex uncertainty set that is representable via disciplined convex programming (DCP) constraints.
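One of the "simple cases" with an analytical form is least squares with a per-row l2-ball perturbation: the worst-case loss has the closed form (|a@x - b| + rho*||x||)^2. The numpy check below verifies that form against the attaining perturbation. This only illustrates the idea behind dualization; it is not the paper's CVXPY interface, and the data are invented.

```python
import numpy as np

def worst_case_sq_loss(a, b, x, rho):
    """Closed form of sup over ||u||_2 <= rho of ((a + u) @ x - b)**2."""
    r = a @ x - b
    return (abs(r) + rho * np.linalg.norm(x)) ** 2

rng = np.random.default_rng(1)
a, x = rng.normal(size=4), rng.normal(size=4)
b, rho = 0.3, 0.5
# the maximizer aligns u with x and with the sign of the residual
u_star = rho * np.sign(a @ x - b) * x / np.linalg.norm(x)
attained = ((a + u_star) @ x - b) ** 2
```

Here `attained` equals the closed form, and no other perturbation in the ball exceeds it.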
Pub Date: 2024-08-02 | DOI: 10.1007/s10957-024-02501-7 | Journal of Optimization Theory and Applications
Convergence Analysis of a New Forward-Reflected-Backward Algorithm for Four Operators Without Cocoercivity
Yu Cao, Yuanheng Wang, Habib ur Rehman, Yekini Shehu, Jen-Chih Yao
In this paper, we propose a new splitting algorithm for finding a zero of a monotone inclusion problem featuring the sum of three maximal monotone operators and a Lipschitz continuous monotone operator in Hilbert spaces. We prove that the sequence of iterates generated by the proposed algorithm converges weakly to a zero of the inclusion problem under mild conditions on the iterative parameters. Several splitting algorithms in the literature are recovered as special cases of our algorithm. Another attractive feature is that only one forward evaluation of the Lipschitz continuous monotone operator is needed per iteration. Numerical results are given to support the theoretical analysis.
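For flavor, here is the two-operator forward-reflected-backward iteration of Malitsky and Tam applied to a small lasso problem, a simpler relative of the four-operator scheme analyzed in the paper. The problem data are invented; note the single forward evaluation per iteration (the previous evaluation is reused in the reflection term).

```python
import numpy as np

def soft(v, t):
    """Resolvent (prox) of t * ||.||_1: soft thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def frb_lasso(M, y, lam, gamma, iters=2000):
    """Malitsky-Tam forward-reflected-backward for 0 in A(x) + B(x) with
    A = subdifferential of lam*||x||_1 and B(x) = M.T @ (M @ x - y);
    requires gamma < 1/(2L) with L = ||M||_2**2."""
    B = lambda z: M.T @ (M @ z - y)
    x_prev = np.zeros(M.shape[1])
    x = x_prev.copy()
    for _ in range(iters):
        x_next = soft(x - 2 * gamma * B(x) + gamma * B(x_prev), gamma * lam)
        x_prev, x = x, x_next
    return x
```

The limit satisfies the prox fixed-point condition x = prox(x - gamma*B(x)), i.e., it is a zero of A + B.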
Pub Date: 2024-08-02 | DOI: 10.1007/s10957-024-02502-6 | Journal of Optimization Theory and Applications
Variance Reduction Techniques for Stochastic Proximal Point Algorithms
Cheik Traoré, Vassilis Apidopoulos, Saverio Salzo, Silvia Villa
In the context of finite-sum minimization, variance reduction techniques are widely used to improve the performance of state-of-the-art stochastic gradient methods, and both their practical impact and their theoretical properties are well understood. Stochastic proximal point algorithms have been studied as an alternative to stochastic gradient algorithms because they are more stable with respect to the choice of the step size; however, their variance-reduced versions are not as well studied as the gradient ones. In this work, we propose the first unified study of variance reduction techniques for stochastic proximal point algorithms. We introduce a generic stochastic proximal-based algorithm that can be specialized to give the proximal version of SVRG, SAGA, and some of their variants. For this algorithm, in the smooth setting, we provide several convergence rates for the iterates and the objective function values that are faster than those of the vanilla stochastic proximal point algorithm. More specifically, for convex functions we prove a sublinear convergence rate of O(1/k), and under the Polyak-Łojasiewicz condition we obtain linear convergence rates. Finally, our numerical experiments demonstrate the advantages of the proximal variance reduction methods over their gradient counterparts in terms of stability with respect to the choice of the step size in most cases, especially for difficult problems.
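A minimal variance-reduced example in the gradient (rather than proximal point) family: Prox-SVRG on a small lasso instance, shown only to illustrate the snapshot/correction mechanism that the paper transfers to proximal point algorithms. All data and parameters are invented.

```python
import numpy as np

def soft(v, t):
    """Soft thresholding, the prox of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_svrg(A, b, lam, eta, epochs=200, seed=0):
    """Prox-SVRG for (1/n) * sum_i 0.5*(a_i @ x - b_i)**2 + lam*||x||_1:
    each epoch stores the full gradient at a snapshot and corrects the
    stochastic component gradients with it, reducing their variance."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        snap = x.copy()
        full = A.T @ (A @ snap - b) / n       # full gradient at the snapshot
        for _ in range(5 * n):
            i = rng.integers(n)
            gi = A[i] * (A[i] @ x - b[i])     # component gradient at x
            gs = A[i] * (A[i] @ snap - b[i])  # same component at the snapshot
            x = soft(x - eta * (gi - gs + full), eta * lam)
    return x
```

As the iterates approach the optimum the correction term drives the gradient noise to zero, which is what permits a constant step size.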
Pub Date: 2024-08-02 | DOI: 10.1007/s10957-024-02506-2 | Journal of Optimization Theory and Applications
A Method for Multi-Leader–Multi-Follower Games by Smoothing the Followers’ Response Function
Atsushi Hori, Daisuke Tsuyuguchi, Ellen H. Fukuda
The multi-leader–multi-follower game (MLMFG) involves two or more leaders and followers and generalizes both the Stackelberg game and the single-leader–multi-follower game. Although the MLMFG covers a wide range of real-world applications, research on it is still sparse; notably, fundamental solution methods for this class of problems remain insufficiently established. A prevailing approach is to recast the MLMFG as an equilibrium problem with equilibrium constraints (EPEC) and solve it with an EPEC solver. However, interpreting the solution to the EPEC in the context of the MLMFG can be complicated, because the followers’ strategies appear as decision variables shared among all leaders: each leader can change them unilaterally in the model, even though these variables are ultimately controlled by the followers. To address this issue, we introduce a response function for the followers’ noncooperative game, viewed as a function of the leaders’ strategies. This allows the MLMFG to be solved as a single-level differentiable variational inequality by applying a smoothing scheme to the followers’ response function. We also show that the sequence of solutions to the smoothed variational inequality converges to a stationary equilibrium of the MLMFG. Finally, we illustrate the behavior of the smoothing method with numerical experiments.
Pub Date: 2024-08-01 | DOI: 10.1007/s10957-024-02499-y | Journal of Optimization Theory and Applications
Pontryagin’s Maximum Principle for a State-Constrained System of Douglis-Nirenberg Type
Alexey S. Matveev, Dmitrii V. Sugak
This article is concerned with optimal control problems for plants described by systems of high-order nonlinear PDEs (whose linear approximation is elliptic in the sense of Douglis-Nirenberg), with special attention given to a particular case: the standard stationary system of nonlinear Navier–Stokes equations. The objective is to establish an analog of Pontryagin’s maximum principle. The major challenge stems from the presence of infinitely many pointwise constraints on the system’s state, imposed at every point of a given continuum set of independent variables. Necessary conditions for optimality in the form of an “abstract” maximum principle are first obtained for a general optimal control problem couched in the language of functional analysis. This result is targeted at a wide class of problems, with the idea of absorbing into its proof much of the technical work needed to derive optimality conditions, so that handling a particular problem essentially requires only an interpretation of this general result. The applicability of this approach is demonstrated by obtaining the aforementioned analog of Pontryagin’s maximum principle for a state-constrained system of high-order elliptic equations and the Navier–Stokes equations.