Pub Date: 2024-01-03 | DOI: 10.1007/s10898-023-01350-4
Abstract
A new exact projective penalty method is proposed for the equivalent reduction of constrained optimization problems to nonsmooth unconstrained ones. In the method, the original objective function is extended to infeasible points by summing its value at the projection of the infeasible point onto the feasible set with the distance to that projection. Besides Euclidean projections, a pointed projection in the direction of some fixed interior feasible point can also be used. Equivalence means that the local and global minima of the two problems coincide. Nonconvex sets with multivalued Euclidean projections are admitted, and the objective function may be lower semicontinuous. The particular case of convex problems is included. The resulting unconstrained or box-constrained problem is solved by a version of the branch-and-bound method combined with local optimization. In principle, any local optimizer can be used within the branch-and-bound scheme; in the numerical experiments, the sequential quadratic programming method was used successfully. Thus the proposed exact penalty method does not assume that the objective function exists outside the feasible region and does not require the selection of a penalty coefficient.
Title: The exact projective penalty method for constrained optimization
Journal: Journal of Global Optimization (IF 1.8)
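The penalty construction described above can be sketched in a few lines. For illustration only, the feasible set is assumed to be a simple box, whose Euclidean projection is a componentwise clip; the method itself admits general, even nonconvex, feasible sets.

```python
import math

def projective_penalty(f, project):
    """Extend f to infeasible points: evaluate f at the projection of x
    onto the feasible set and add the distance to that projection."""
    def F(x):
        p = project(x)
        return f(p) + math.dist(x, p)
    return F

# Illustrative feasible set: the box [0, 1]^2 (projection = componentwise clip).
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 0.5) ** 2
project_box = lambda x: [min(1.0, max(0.0, xi)) for xi in x]
F = projective_penalty(f, project_box)
```

At feasible points F coincides with f; at infeasible points the added distance term makes moving back toward the feasible set always profitable, which is the source of the exactness without a penalty coefficient.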
Pub Date: 2024-01-02 | DOI: 10.1007/s10898-023-01336-2
Abstract
While constrained multiobjective optimization is generally very difficult, there is a special case in which such problems can be solved with a simple, elegant branch-and-bound algorithm: when the objective and constraint functions are Lipschitz continuous with known Lipschitz constants. Given these Lipschitz constants, one can compute lower bounds on the functions over subregions of the search space. This allows one to iteratively partition the search space into rectangles, deleting those rectangles that, based on the lower bounds, contain only points that are provably infeasible or provably dominated by previously sampled points. As the algorithm proceeds, the rectangles that have not been deleted provide a tight covering of the Pareto set in the input space. Unfortunately, this elegant algorithm cannot be applied to black-box optimization, as the Lipschitz constants are unknown. In this paper, we show how one can heuristically extend the branch-and-bound algorithm to black-box problem functions using an approach similar to that of the well-known DIRECT global optimization algorithm. We call the resulting method “simDIRECT.” Initial experience with simDIRECT on test problems suggests that it performs similarly to, or better than, multiobjective evolutionary algorithms when solving problems with small numbers of variables (up to 12) and a limited number of runs (up to 600).
Title: Constrained multiobjective optimization of expensive black-box functions using a heuristic branch-and-bound approach
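The Lipschitz lower-bounding step at the heart of the branch-and-bound scheme is simple to state: over a rectangle, an L-Lipschitz function cannot fall more than L times the half-diagonal below its value at the center. A minimal sketch (illustrative, not the paper's simDIRECT code):

```python
import math

def lipschitz_lower_bound(f, center, half_widths, L):
    """Valid lower bound for an L-Lipschitz f over the rectangle
    center +/- half_widths: no point of the rectangle lies farther from
    the center than the half-diagonal, so f >= f(center) - L*half_diag."""
    half_diag = math.sqrt(sum(h * h for h in half_widths))
    return f(center) - L * half_diag

# f(x) = x[0] is 1-Lipschitz; over [0, 1] the bound is exact (min = 0):
lb = lipschitz_lower_bound(lambda x: x[0], [0.5], [0.5], 1.0)
# A rectangle is deleted when such bounds show every point is infeasible
# (lower bound of a constraint exceeds its threshold) or dominated by a
# previously sampled point in every objective.
```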
Pub Date: 2024-01-02 | DOI: 10.1007/s10898-023-01352-2
Daniela Lera, Maria Chiara Nasso, Mikhail Posypkin, Yaroslav D. Sergeyev
In this paper, the problem of approximating and visualizing the solution set of systems of nonlinear inequalities is considered. It is assumed that the left-hand sides of the inequalities can be multiextremal and non-differentiable, so traditional local methods using gradients cannot be applied. Problems of this kind arise in many scientific applications, in particular in finding the working spaces of robots, where it is necessary to determine not one but all the solutions of the system of nonlinear inequalities. Global optimization algorithms can serve as an inspiration for developing methods for this problem. In this article, two new methods using two different approximations of Peano–Hilbert space-filling curves, actively used in global optimization, are proposed. Convergence conditions of the new methods are established. Numerical experiments on problems of finding the working spaces of several robots show a promising performance of the new algorithms.
Title: Determining solution set of nonlinear inequalities using space-filling curves for finding working spaces of planar robots
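The methods rely on computable approximations of Peano–Hilbert space-filling curves. As a point of reference (the classic bit-manipulation construction, not the authors' specific approximations), a 1-D index can be mapped to a point on a 2-D Hilbert curve as follows:

```python
def hilbert_d2xy(order, d):
    """Map an index d in [0, 4**order) to a cell (x, y) of the 2**order
    x 2**order grid visited by a 2-D Hilbert curve of the given order."""
    x = y = 0
    t = d
    s = 1
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate/reflect the quadrant to preserve adjacency
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

Consecutive indices map to adjacent grid cells, which is exactly the locality property exploited when a multidimensional search domain is reduced to a one-dimensional one.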
Pub Date: 2024-01-02 | DOI: 10.1007/s10898-023-01344-2
Abstract
We discuss two key problems related to learning and optimization of neural networks: the computation of the adversarial attack for adversarial robustness and approximate optimization of complex functions. We show that both problems can be cast as instances of DC-programming. We give an explicit decomposition of the corresponding functions as differences of convex functions (DC) and report the results of experiments demonstrating the effectiveness of the DCA algorithm applied to these problems.
Title: DC-programming for neural network optimizations
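The DCA iteration linearizes the concave part of a DC decomposition f = g - h at the current point and minimizes the resulting convex surrogate. A one-dimensional sketch on the toy function f(x) = x^4 - x^2, with the split g(x) = x^4 and h(x) = x^2 (an illustrative example, not one of the paper's neural-network decompositions):

```python
def dca_step(grad_h, argmin_linearized, x):
    """One DCA iteration for f = g - h: replace -h by its linearization
    at x and minimize the convex surrogate g(.) - grad_h(x) * (.)."""
    return argmin_linearized(grad_h(x))

# argmin_x x**4 - v*x solves 4*x**3 = v, i.e. x = sign(v)*(|v|/4)**(1/3).
grad_h = lambda x: 2.0 * x
argmin_lin = lambda v: (1 if v >= 0 else -1) * (abs(v) / 4.0) ** (1.0 / 3.0)

x = 1.0
for _ in range(100):
    x = dca_step(grad_h, argmin_lin, x)
# x approaches 1/sqrt(2), a global minimizer of f(x) = x**4 - x**2
```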
Pub Date: 2023-12-16 | DOI: 10.1007/s10898-023-01347-z
Quoc Tran-Dinh
We develop two “Nesterov’s accelerated” variants of the well-known extragradient method to approximate a solution of a co-hypomonotone inclusion constituted by the sum of two operators, where one is Lipschitz continuous and the other is possibly multivalued. The first scheme can be viewed as an accelerated variant of Tseng’s forward-backward-forward splitting (FBFS) method, while the second is a Nesterov’s accelerated variant of the “past” FBFS scheme, which requires only one evaluation of the Lipschitz operator and one resolvent of the multivalued mapping per iteration. Under appropriate conditions on the parameters, we prove that both algorithms achieve $\mathcal{O}(1/k)$ last-iterate convergence rates on the residual norm, where $k$ is the iteration counter. Our results can be viewed as alternatives to a recent class of Halpern-type methods for root-finding problems. For comparison, we also provide a new convergence analysis of two recent extra-anchored gradient-type methods for solving co-hypomonotone inclusions.
Title: Extragradient-type methods with $\mathcal{O}(1/k)$ last-iterate convergence rates for co-hypomonotone inclusions
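For reference, the base (non-accelerated) extragradient step that these variants build on evaluates the operator at a probe point before taking the real step. This sketch is the plain Euclidean, unconstrained version, not the accelerated FBFS schemes of the paper:

```python
def extragradient_step(F, x, gamma):
    """One unconstrained extragradient step: probe with F at x, then take
    the actual step using F evaluated at the probe point x_half."""
    x_half = [xi - gamma * fi for xi, fi in zip(x, F(x))]
    return [xi - gamma * fi for xi, fi in zip(x, F(x_half))]

# Example: the monotone rotation operator F(x, y) = (y, -x), whose unique
# zero is the origin; plain forward steps on it spiral outward, while the
# extragradient iterates contract toward the origin.
F = lambda z: (z[1], -z[0])
z = [1.0, 1.0]
for _ in range(100):
    z = extragradient_step(F, z, 0.5)
```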
Pub Date: 2023-12-16 | DOI: 10.1007/s10898-023-01348-y
Xianfu Wang, Ziyuan Wang
We propose a Bregman inertial forward-reflected-backward (BiFRB) method for nonconvex composite problems. Assuming the generalized concave Kurdyka-Łojasiewicz property, we obtain sequential convergence of BiFRB, as well as convergence rates on both the function value and actual sequence. One distinguishing feature in our analysis is that we utilize a careful treatment of merit function parameters, circumventing the usual restrictive assumption on the inertial parameters. We also present formulae for the Bregman subproblem, supplementing not only BiFRB but also the work of Boţ-Csetnek-László and Boţ-Csetnek. Numerical simulations are conducted to evaluate the performance of our proposed algorithm.
Title: A Bregman inertial forward-reflected-backward method for nonconvex minimization
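The underlying forward-reflected-backward step, shown here in its plain Euclidean form without the paper's inertial terms or Bregman distances, uses a reflected operator evaluation 2F(x_k) - F(x_{k-1}) so that only one new evaluation of F is needed per iteration:

```python
def frb_step(F, prox, x, x_prev, gamma):
    """One Euclidean forward-reflected-backward step:
    x_new = prox_{gamma*g}( x - gamma * (2*F(x) - F(x_prev)) )."""
    return prox(x - gamma * (2.0 * F(x) - F(x_prev)))

# Example: minimize x**2/2 over [1, inf): F(x) = x, prox = projection onto
# [1, inf). The solution is x = 1. Step size 0.4 < 1/(2L) with L = 1.
F = lambda x: x
prox = lambda x: max(1.0, x)
x_prev, x = 3.0, 3.0
for _ in range(50):
    x_prev, x = x, frb_step(F, prox, x, x_prev, 0.4)
```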
Pub Date: 2023-12-14 | DOI: 10.1007/s10898-023-01343-3
Fusheng Bai, Dongchi Zou, Yutao Wei
Many practical problems involve the optimization of computationally expensive blackbox functions. The cost of each function evaluation severely limits the number of true objective evaluations allowed for finding a good solution. In this paper, we propose a clustering-based surrogate-assisted evolutionary algorithm, in which a clustering-based local search technique is embedded into a radial basis function surrogate-assisted evolutionary framework to obtain sample points that might be close to local solutions of the actual optimization problem. The algorithm generates sample points cyclically: in each cycle, the cluster centers of the final population produced by the differential evolution iterations on the surrogate model are taken as new sample points, and these are added to the initial population for the differential evolution iterations of the next cycle. In this way, exploration and exploitation are better balanced during the search. To verify its effectiveness, the present algorithm is compared with four state-of-the-art surrogate-assisted evolutionary algorithms on 24 synthetic test problems and one application problem.
Title: A surrogate-assisted evolutionary algorithm with clustering-based sampling for high-dimensional expensive blackbox optimization
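The radial basis function surrogate at the core of such frameworks can be sketched with a Gaussian kernel; this is an illustrative choice, as the paper's exact kernel and any polynomial tail are not specified here:

```python
import numpy as np

def rbf_fit(X, y, phi=lambda r: np.exp(-r ** 2)):
    """Fit an RBF interpolant s(x) = sum_j w_j * phi(||x - x_j||); the
    Gaussian kernel yields a positive definite system for distinct points."""
    X = np.asarray(X, dtype=float)
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    w = np.linalg.solve(phi(r), np.asarray(y, dtype=float))
    return lambda x: float(phi(np.linalg.norm(X - np.asarray(x, dtype=float), axis=-1)) @ w)

# Interpolate the expensive function f(x) = ||x||^2 from five samples;
# the cheap surrogate s then stands in for f during the evolutionary search.
X = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]]
y = [sum(v * v for v in row) for row in X]
s = rbf_fit(X, y)
```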
Pub Date: 2023-12-14 | DOI: 10.1007/s10898-023-01345-1
Ksenia Bestuzheva, Antonia Chmiela, Benjamin Müller, Felipe Serrano, Stefan Vigerske, Fabian Wegscheider
For over 10 years, the constraint integer programming framework SCIP has been extended by capabilities for the solution of convex and nonconvex mixed-integer nonlinear programs (MINLPs). With the recently published version 8.0, these capabilities have been largely reworked and extended. This paper discusses the motivations for recent changes and provides an overview of features that are particular to MINLP solving in SCIP. Further, difficulties in benchmarking global MINLP solvers are discussed and a comparison with several state-of-the-art global MINLP solvers is provided.
Title: Global optimization of mixed-integer nonlinear programs with SCIP 8
Pub Date: 2023-12-11 | DOI: 10.1007/s10898-023-01346-0
Zhen-Ping Yang, Yong Zhao, Gui-Hua Lin
In this paper, we propose a variable sample-size optimistic mirror descent algorithm under the Bregman distance for a class of stochastic mixed variational inequalities. Unlike conventional variable sample-size extragradient algorithms, which evaluate the expected mapping twice at each iteration, our algorithm requires only one evaluation of the expected mapping and hence significantly reduces the computational load. In the monotone case, the proposed algorithm achieves an $\mathcal{O}(1/t)$ ergodic convergence rate in terms of the expected restricted gap function and, under the strongly generalized monotonicity condition, a locally linear convergence rate in the Bregman distance between iterates and solutions when the sample size increases geometrically. Furthermore, we derive some results on stochastic local stability under the generalized monotonicity condition. Numerical experiments indicate that the proposed algorithm compares favorably with some existing methods.
Title: Variable sample-size optimistic mirror descent algorithm for stochastic mixed variational inequalities
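The single-call structure can be sketched in Euclidean geometry (the Bregman/mirror machinery and the variable sample sizes of the paper are omitted): each iteration extrapolates using the stored previous operator value and evaluates the mapping only once at the new extrapolated point:

```python
def omd_step(F, w, F_prev, gamma):
    """One Euclidean optimistic (past-extragradient) step: extrapolate
    with the stored evaluation F_prev, then update the base point with
    the single new evaluation of F. Returns (w_new, F_new)."""
    x = [wi - gamma * fi for wi, fi in zip(w, F_prev)]
    F_new = F(x)
    w_new = [wi - gamma * fi for wi, fi in zip(w, F_new)]
    return w_new, F_new

# Example: the monotone rotation operator F(x, y) = (y, -x), zero at the
# origin; step size 0.3 is below the usual 1/(3L) threshold for L = 1.
F = lambda z: (z[1], -z[0])
w = [1.0, 1.0]
F_prev = F(w)
for _ in range(200):
    w, F_prev = omd_step(F, w, F_prev, 0.3)
```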
Pub Date: 2023-11-24 | DOI: 10.1007/s10898-023-01341-5
E. L. Dias Júnior, P. J. S. Santos, A. Soubeyran, J. C. O. Souza
This paper has two parts. In the mathematical part, we present two inexact versions of the proximal point method for solving quasi-equilibrium problems (QEP) in Hilbert spaces. Under mild assumptions, we prove that the methods find a solution to the quasi-equilibrium problem using either an approximate computation of each iteration or a perturbation of the regularized bifunction. In the behavioral part, we justify the choice of the new perturbation with the help of the main example driving quasi-equilibrium problems: the Cournot duopoly model, which founded game theory. This requires exhibiting a new QEP reformulation of the Cournot model that is more intuitive and rigorous, and it leads directly to the formulation of our perturbation function. Some numerical experiments show the performance of the proposed methods.
Title: On inexact versions of a quasi-equilibrium problem: a Cournot duopoly perspective
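The Cournot duopoly that motivates the behavioral part has a closed-form best response under the textbook specification of linear inverse demand p = a - b(q1 + q2) and constant marginal cost c; the parameter values below are illustrative assumptions, not from the paper:

```python
def best_response(a, b, c, q_other):
    """Cournot best response: maximize q*(a - b*(q + q_other)) - c*q,
    which gives q = max(0, (a - c - b*q_other) / (2*b))."""
    return max(0.0, (a - c - b * q_other) / (2.0 * b))

a, b, c = 10.0, 1.0, 1.0
q1 = q2 = 0.0
for _ in range(60):
    q1, q2 = best_response(a, b, c, q2), best_response(a, b, c, q1)
# both quantities approach the Nash equilibrium (a - c) / (3 * b)
```

Simultaneous best-response iteration contracts at rate 1/2 here, so both firms converge to the equilibrium quantity (a - c)/(3b) = 3.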