Title: Simultaneous Diagonalization Under Weak Regularity and a Characterization
Pub Date : 2024-09-17 | DOI: 10.1007/s10957-024-02526-y
Fabián Flores-Bazán, Felipe Opazo
We analyze when two real matrices satisfy the simultaneous diagonalization (SD via congruence) property and develop sufficient conditions expressed in a different way from those that have appeared in recent years. These conditions are established from a different perspective and, in any case, supplement and clarify similar results published elsewhere. Following the point of view of a previous work, we offer necessary and sufficient conditions for SD that differ in nature from those in Jiang and Li (SIAM J Optim 26:1649–1668, 2016): roughly speaking, our approach is more geometric and requires computing images and kernels of matrices, whereas that of Jiang and Li requires computing determinants and canonical forms. The two-dimensional case is analyzed in particular detail, providing characterizations that are more precise than those available in higher dimension and that complement those given earlier by the authors. In addition, we establish the connection between our characterization of SD and the one provided in Jiang and Li (SIAM J Optim 26:1649–1668, 2016).
{"title":"Simultaneous Diagonalization Under Weak Regularity and a Characterization","authors":"Fabián Flores-Bazán, Felipe Opazo","doi":"10.1007/s10957-024-02526-y","DOIUrl":"https://doi.org/10.1007/s10957-024-02526-y","url":null,"abstract":"<p>We analyze the fulfillment of the simultaneous diagonalization (SD via congruence) property for any two real matrices, and develop sufficient conditions expressed in different way to those appeared in the last few years. These conditions are established under a different perspective, and in any case, they supplement and clarify other similar results published elsewhere. Following our point of view reflected in a previous work, we offer some necessary and sufficient conditions, different in nature to those in Jiang and Li (SIAM J Optim 26:1649–1668, 2016), for SD: roughly speaking our approach is more geometric and needs to compute images and kernels of matrices; whereas that in Jiang and Li (SIAM J Optim 26:1649–1668, 2016) requires to compute determinant and canonical forms. The bidimensional situation is particularly analyzed, providing new more precise characterizations than those in higher dimension and joint those given earlier by the authors. In addition, we also establish the connection of our characterization of SD with that provided in Jiang and Li (SIAM J Optim 26:1649–1668, 2016).</p>","PeriodicalId":50100,"journal":{"name":"Journal of Optimization Theory and Applications","volume":"41 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142260879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Seeking Consensus on Subspaces in Federated Principal Component Analysis
Pub Date : 2024-09-14 | DOI: 10.1007/s10957-024-02523-1
Lei Wang, Xin Liu, Yin Zhang
In this paper, we develop an algorithm for federated principal component analysis (PCA) with emphasis on both communication efficiency and data privacy. Generally speaking, federated PCA algorithms based on direct adaptations of classic iterative methods, such as simultaneous subspace iterations, are unable to preserve data privacy, while algorithms based on variable splitting and consensus seeking, such as the alternating direction method of multipliers (ADMM), lack communication efficiency. In this work, we propose a novel consensus-seeking formulation that equalizes the subspaces spanned by the splitting variables instead of the variables themselves, thus greatly relaxing feasibility restrictions and allowing much faster convergence. We then develop an ADMM-like algorithm with several special features that make it practically efficient, including a low-rank multiplier formula and techniques for treating the subproblems. We establish that the proposed algorithm protects data privacy better than classic methods adapted to the federated PCA setting. We also derive convergence results, including a worst-case complexity estimate, for the proposed ADMM-like algorithm in the presence of nonlinear equality constraints. Extensive empirical results show that the new algorithm, while enhancing data privacy, requires far fewer rounds of communication than existing peer algorithms for federated PCA.
{"title":"Seeking Consensus on Subspaces in Federated Principal Component Analysis","authors":"Lei Wang, Xin Liu, Yin Zhang","doi":"10.1007/s10957-024-02523-1","DOIUrl":"https://doi.org/10.1007/s10957-024-02523-1","url":null,"abstract":"<p>In this paper, we develop an algorithm for federated principal component analysis (PCA) with emphases on both communication efficiency and data privacy. Generally speaking, federated PCA algorithms based on direct adaptations of classic iterative methods, such as simultaneous subspace iterations, are unable to preserve data privacy, while algorithms based on variable-splitting and consensus-seeking, such as alternating direction methods of multipliers (ADMM), lack in communication-efficiency. In this work, we propose a novel consensus-seeking formulation by equalizing subspaces spanned by splitting variables instead of equalizing variables themselves, thus greatly relaxing feasibility restrictions and allowing much faster convergence. Then we develop an ADMM-like algorithm with several special features to make it practically efficient, including a low-rank multiplier formula and techniques for treating subproblems. We establish that the proposed algorithm can better protect data privacy than classic methods adapted to the federated PCA setting. We derive convergence results, including a worst-case complexity estimate, for the proposed ADMM-like algorithm in the presence of the nonlinear equality constraints. Extensive empirical results are presented to show that the new algorithm, while enhancing data privacy, requires far fewer rounds of communication than existing peer algorithms for federated PCA.</p>","PeriodicalId":50100,"journal":{"name":"Journal of Optimization Theory and Applications","volume":"18 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142260878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A Multilevel Method for Self-Concordant Minimization
Pub Date : 2024-09-14 | DOI: 10.1007/s10957-024-02509-z
Nick Tsipinakis, Panos Parpas
The analysis of second-order optimization methods based on sub-sampling, randomization, or sketching has two serious shortcomings compared to the conventional Newton method. The first is that the iterates have been shown to be scale-invariant only under specific assumptions on the problem structure. The second is that the fast convergence rates of second-order methods have only been established under assumptions on the input data. In this paper, we propose a randomized Newton method for self-concordant functions that addresses both shortcomings. We propose a Self-concordant Iterative-minimization-Galerkin-based Multilevel Algorithm (SIGMA) and establish its super-linear convergence rate using the theory of self-concordant functions. Our analysis is based on the connections between multigrid optimization methods and the role of coarse-grained or reduced-order models in the computation of search directions. We take advantage of the insights from the analysis to significantly improve the performance of second-order methods in machine learning applications. We report encouraging initial experiments suggesting that SIGMA outperforms other state-of-the-art sub-sampled/sketched Newton methods on both medium- and large-scale problems.
{"title":"A Multilevel Method for Self-Concordant Minimization","authors":"Nick Tsipinakis, Panos Parpas","doi":"10.1007/s10957-024-02509-z","DOIUrl":"https://doi.org/10.1007/s10957-024-02509-z","url":null,"abstract":"<p>The analysis of second-order optimization methods based either on sub-sampling, randomization or sketching has two serious shortcomings compared to the conventional Newton method. The first shortcoming is that the analysis of the iterates has only been shown to be scale-invariant only under specific assumptions on the problem structure. The second shortfall is that the fast convergence rates of second-order methods have only been established by making assumptions regarding the input data. In this paper, we propose a randomized Newton method for self-concordant functions to address both shortfalls. We propose a Self-concordant Iterative-minimization-Galerkin-based Multilevel Algorithm (SIGMA) and establish its super-linear convergence rate using the theory of self-concordant functions. Our analysis is based on the connections between multigrid optimization methods, and the role of coarse-grained or reduced-order models in the computation of search directions. We take advantage of the insights from the analysis to significantly improve the performance of second-order methods in machine learning applications. We report encouraging initial experiments that suggest SIGMA outperforms other state-of-the-art sub-sampled/sketched Newton methods for both medium and large-scale problems.</p>","PeriodicalId":50100,"journal":{"name":"Journal of Optimization Theory and Applications","volume":"87 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A Descent Method for Nonsmooth Multiobjective Optimization in Hilbert Spaces
Pub Date : 2024-09-12 | DOI: 10.1007/s10957-024-02520-4
Konstantin Sonntag, Bennet Gebken, Georg Müller, Sebastian Peitz, Stefan Volkwein
The efficient optimization method for locally Lipschitz continuous multiobjective optimization problems from Gebken and Peitz (J Optim Theory Appl 188:696–723, 2021) is extended from finite-dimensional problems to general Hilbert spaces. The method iteratively computes Pareto critical points, where in each iteration, an approximation of the Clarke subdifferential is computed in an efficient manner and then used to compute a common descent direction for all objective functions. To prove convergence, we present some new optimality results for nonsmooth multiobjective optimization problems in Hilbert spaces. Using these, we can show that every accumulation point of the sequence generated by our algorithm is Pareto critical under common assumptions. Computational efficiency for finding Pareto critical points is numerically demonstrated for multiobjective optimal control of an obstacle problem.
{"title":"A Descent Method for Nonsmooth Multiobjective Optimization in Hilbert Spaces","authors":"Konstantin Sonntag, Bennet Gebken, Georg Müller, Sebastian Peitz, Stefan Volkwein","doi":"10.1007/s10957-024-02520-4","DOIUrl":"https://doi.org/10.1007/s10957-024-02520-4","url":null,"abstract":"<p>The efficient optimization method for locally Lipschitz continuous multiobjective optimization problems from Gebken and Peitz (J Optim Theory Appl 188:696–723, 2021) is extended from finite-dimensional problems to general Hilbert spaces. The method iteratively computes Pareto critical points, where in each iteration, an approximation of the Clarke subdifferential is computed in an efficient manner and then used to compute a common descent direction for all objective functions. To prove convergence, we present some new optimality results for nonsmooth multiobjective optimization problems in Hilbert spaces. Using these, we can show that every accumulation point of the sequence generated by our algorithm is Pareto critical under common assumptions. Computational efficiency for finding Pareto critical points is numerically demonstrated for multiobjective optimal control of an obstacle problem.</p>","PeriodicalId":50100,"journal":{"name":"Journal of Optimization Theory and Applications","volume":"46 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142186658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Expected Residual Minimization Formulation for Stochastic Absolute Value Equations
Pub Date : 2024-09-12 | DOI: 10.1007/s10957-024-02527-x
Jingyong Tang, Jinchuan Zhou
In this paper we investigate a class of stochastic absolute value equations (SAVE). After establishing the relationship between the stochastic linear complementarity problem and SAVE, we study the expected residual minimization (ERM) formulation for SAVE and its Monte Carlo sample average approximation. In particular, we show that the ERM problem and its sample average approximation have optimal solutions under an $R_0$-pair condition, and that the optimal value of the sample average approximation converges uniformly at an exponential rate. Furthermore, we prove that the solutions to the ERM problem are robust for SAVE. For a class of SAVE problems, we use their special structure to construct a smooth residual and further study the convergence of the stationary points. Finally, a smoothing gradient method that combines sampling and smoothing techniques is proposed for solving SAVE. Numerical experiments demonstrate the effectiveness of the method.
{"title":"Expected Residual Minimization Formulation for Stochastic Absolute Value Equations","authors":"Jingyong Tang, Jinchuan Zhou","doi":"10.1007/s10957-024-02527-x","DOIUrl":"https://doi.org/10.1007/s10957-024-02527-x","url":null,"abstract":"<p>In this paper we investigate a class of stochastic absolute value equations (SAVE). After establishing the relationship between the stochastic linear complementarity problem and SAVE, we study the expected residual minimization (ERM) formulation for SAVE and its Monte Carlo sample average approximation. In particular, we show that the ERM problem and its sample average approximation have optimal solutions under the condition of <span>(R_0)</span> pair, and the optimal value of the sample average approximation has uniform exponential convergence. Furthermore, we prove that the solutions to the ERM problem are robust for SAVE. For a class of SAVE problems, we use its special structure to construct a smooth residual and further study the convergence of the stationary points. Finally, a smoothing gradient method is proposed by simultaneously considering sample sampling and smooth techniques for solving SAVE. Numerical experiments exhibit the effectiveness of the method.</p>","PeriodicalId":50100,"journal":{"name":"Journal of Optimization Theory and Applications","volume":"7 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142186893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A Weak Maximum Principle for Discrete Optimal Control Problems with Mixed Constraints
Pub Date : 2024-09-10 | DOI: 10.1007/s10957-024-02524-0
Roberto Andreani, John Frank Matos Ascona, Valeriano Antunes de Oliveira
In this study, first-order necessary optimality conditions, in the form of a weak maximum principle, are derived for discrete optimal control problems with mixed equality and inequality constraints. Such conditions are achieved by using the Dubovitskii–Milyutin formalism approach. Nondegenerate conditions are obtained under the constant rank of the subspace component (CRSC) constraint qualification, which is an important generalization of both the Mangasarian–Fromovitz and constant rank constraint qualifications. Beyond its theoretical significance, CRSC has practical importance because it is closely related to the formulation of optimization algorithms. In addition, an instance of a discrete optimal control problem is presented in which CRSC holds while other stronger regularity conditions do not.
{"title":"A Weak Maximum Principle for Discrete Optimal Control Problems with Mixed Constraints","authors":"Roberto Andreani, John Frank Matos Ascona, Valeriano Antunes de Oliveira","doi":"10.1007/s10957-024-02524-0","DOIUrl":"https://doi.org/10.1007/s10957-024-02524-0","url":null,"abstract":"<p>In this study, first-order necessary optimality conditions, in the form of a weak maximum principle, are derived for discrete optimal control problems with mixed equality and inequality constraints. Such conditions are achieved by using the Dubovitskii–Milyutin formalism approach. Nondegenerate conditions are obtained under the constant rank of the subspace component (CRSC) constraint qualification, which is an important generalization of both the Mangasarian–Fromovitz and constant rank constraint qualifications. Beyond its theoretical significance, CRSC has practical importance because it is closely related to the formulation of optimization algorithms. In addition, an instance of a discrete optimal control problem is presented in which CRSC holds while other stronger regularity conditions do not.</p>","PeriodicalId":50100,"journal":{"name":"Journal of Optimization Theory and Applications","volume":"25 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142186894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Gradient Descent Provably Escapes Saddle Points in the Training of Shallow ReLU Networks
Pub Date : 2024-09-10 | DOI: 10.1007/s10957-024-02513-3
Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek
Dynamical systems theory has recently been applied in optimization to prove that gradient descent algorithms bypass so-called strict saddle points of the loss function. However, in many modern machine learning applications, the required regularity conditions are not satisfied. In this paper, we prove a variant of the relevant dynamical systems result, a center-stable manifold theorem, in which we relax some of the regularity requirements. We explore its relevance for various machine learning tasks, with a particular focus on shallow rectified linear unit (ReLU) and leaky ReLU networks with scalar input. Building on a detailed examination of critical points of the square integral loss function for shallow ReLU and leaky ReLU networks relative to an affine target function, we show that gradient descent circumvents most saddle points. Furthermore, we prove convergence to global minima under favourable initialization conditions, quantified by an explicit threshold on the limiting loss.
{"title":"Gradient Descent Provably Escapes Saddle Points in the Training of Shallow ReLU Networks","authors":"Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek","doi":"10.1007/s10957-024-02513-3","DOIUrl":"https://doi.org/10.1007/s10957-024-02513-3","url":null,"abstract":"<p>Dynamical systems theory has recently been applied in optimization to prove that gradient descent algorithms bypass so-called strict saddle points of the loss function. However, in many modern machine learning applications, the required regularity conditions are not satisfied. In this paper, we prove a variant of the relevant dynamical systems result, a center-stable manifold theorem, in which we relax some of the regularity requirements. We explore its relevance for various machine learning tasks, with a particular focus on shallow rectified linear unit (ReLU) and leaky ReLU networks with scalar input. Building on a detailed examination of critical points of the square integral loss function for shallow ReLU and leaky ReLU networks relative to an affine target function, we show that gradient descent circumvents most saddle points. Furthermore, we prove convergence to global minima under favourable initialization conditions, quantified by an explicit threshold on the limiting loss.</p>","PeriodicalId":50100,"journal":{"name":"Journal of Optimization Theory and Applications","volume":"58 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142186676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Optimized Fractional-Order Type-2 Fuzzy PID Attitude Controller for Fixed-Wing Aircraft
Pub Date : 2024-09-09 | DOI: 10.1007/s10957-024-02512-4
Wenfan Wang, Jun Zhang, Ruili Jiao
This paper addresses the design of an attitude controller for a fixed-wing unmanned aerial vehicle. To handle the complexity of the coupled nonlinear model of a fixed-wing aircraft, it introduces a Fractional-Order Type-2 Fuzzy PID (FOTFPID) controller. The adoption of interval-valued type-2 fuzzy sets, as an extension of conventional fuzzy sets, endows decision makers with the ability to assign membership and non-membership values as intervals, a capability that facilitates more resilient decision-making. The Bat optimization algorithm is employed to fine-tune the membership functions, scaling factors, and primary controller parameters, with the aim of minimizing the integrated absolute error index. Numerical simulations demonstrate the effectiveness of the proposed controllers in comparison with classical PID controllers while the aircraft system is subjected to various disturbance conditions.
{"title":"Optimized Fractional-Order Type-2 Fuzzy PID Attitude Controller for Fixed-Wing Aircraft","authors":"Wenfan Wang, Jun Zhang, Ruili Jiao","doi":"10.1007/s10957-024-02512-4","DOIUrl":"https://doi.org/10.1007/s10957-024-02512-4","url":null,"abstract":"<p>This paper addresses the design of attitude controller for a fixed-wing unmanned aerial vehicle. To address the complexity of the coupled nonlinear model of a fixed-wing aircraft, this paper introduces a Fractional-Order Type-2 Fuzzy PID (FOTFPID) controller. The adoption of interval valued type-2 fuzzy sets, as an extension of conventional fuzzy sets, has endowed decision makers with the ability to assign membership and non-membership values as intervals. This enhanced capability facilitates more resilient decision-making processes. The Bat optimization algorithm is also employed to fine-tune the membership functions, scaling factors, and primary controller parameters, aiming to minimize the integrated absolute error index. Numerical simulations are conducted to demonstrate effectiveness of the proposed controllers in comparison to classical PID controllers, while subjecting the aircraft system to various disturbance conditions.</p>","PeriodicalId":50100,"journal":{"name":"Journal of Optimization Theory and Applications","volume":"32 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142186659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Convergence-Accelerated Fixed-Time Dynamical Methods for Absolute Value Equations
Pub Date : 2024-09-09 | DOI: 10.1007/s10957-024-02525-z
Xu Zhang, Cailian Li, Longcheng Zhang, Yaling Hu, Zheng Peng
Two new accelerated fixed-time stable dynamic systems are proposed for solving absolute value equations (AVEs): $Ax - |x| - b = 0$. Under some mild conditions, the equilibrium point of the proposed dynamic systems is completely equivalent to the solution of the AVEs under consideration. Meanwhile, we introduce a new, relatively tighter global error bound for the AVEs. Leveraging this bound, we establish the global fixed-time stability of each proposed method and provide a conservative settling-time estimate for it. Compared with some existing state-of-the-art dynamical methods, preliminary numerical experiments show the effectiveness of our methods in solving the AVEs.
Title: Metric Subregularity and $\omega(\cdot)$-Normal Regularity Properties
Pub Date : 2024-09-04 | DOI: 10.1007/s10957-024-02476-5
Florent Nacry, Vo Anh Thuong Nguyen, Juliette Venel
In this paper, we establish through an openness condition the metric subregularity of a multimapping with normal $\omega(\cdot)$-regularity of either the graph or the values. Various preservation results for prox-regular and subsmooth sets are also provided.
{"title":"Metric Subregularity and $$omega (cdot )$$ -Normal Regularity Properties","authors":"Florent Nacry, Vo Anh Thuong Nguyen, Juliette Venel","doi":"10.1007/s10957-024-02476-5","DOIUrl":"https://doi.org/10.1007/s10957-024-02476-5","url":null,"abstract":"<p>In this paper, we establish through an openness condition the metric subregularity of a multimapping with normal <span>(omega (cdot ))</span>-regularity of either the graph or values. Various preservation results for prox-regular and subsmooth sets are also provided.\u0000</p>","PeriodicalId":50100,"journal":{"name":"Journal of Optimization Theory and Applications","volume":"67 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142186661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}