Improved randomized approaches to the location of a conservative hyperplane
Pub Date: 2024-07-06 | DOI: 10.1007/s11590-024-02136-7
Xiaosong Ding, Jun Ma, Xiuming Li, Xi Chen
This paper presents improved approaches to the combinatorial challenges that arise in the search for conservative cuts in disjoint bilinear programming. We introduce a new randomized approach that leverages the active constraint information within a hyperplane containing the given local solution. It restricts the search process to a single dimension and mitigates the impact of growing degeneracy on the computational load. The use of recursion further refines our strategy by systematically reducing the number of adjacent vertices available for exchange. Extensive computational experiments validate that these approaches can significantly enhance computational efficiency, reducing solution times to the order of \(10^{-3}\) s, particularly for problems with high dimensions and degrees of degeneracy.
{"title":"Improved randomized approaches to the location of a conservative hyperplane","authors":"Xiaosong Ding, Jun Ma, Xiuming Li, Xi Chen","doi":"10.1007/s11590-024-02136-7","DOIUrl":"https://doi.org/10.1007/s11590-024-02136-7","url":null,"abstract":"<p>This paper presents improved approaches to the treatment of combinatorial challenges associated with the search process for conservative cuts arising in disjoint bilinear programming. We introduce a new randomized approach that leverages the active constraint information within a hyperplane containing the given local solution. It can restrict the search process to only one dimension and mitigate the impact of growing degeneracy imposed on computational loads. The utilization of recursion further refines our strategy by systematically reducing the number of adjacent vertices available for exchange. Extensive computational experiments validate that these approaches can significantly enhance computational efficiency to the scale of <span>(10^{-3})</span> s, particularly for those problems with high dimensions and degrees of degeneracy.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"175 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141572386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The modified second APG method for a class of nonconvex nonsmooth problems
Pub Date: 2024-07-06 | DOI: 10.1007/s11590-024-02132-x
Kexin Ren, Chunguang Liu, Lumiao Wang
In this paper, we consider the modified second accelerated proximal gradient algorithm (APG\(_{s}\)) introduced in Lin and Liu (Optim Lett 13(4):805–824, 2019), discuss the behaviour of this method in more general settings, and prove its convergence properties under weaker assumptions. Finally, numerical experiments are performed to support our theoretical results.
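For orientation, the sketch below is a minimal FISTA-style accelerated proximal gradient loop in Python applied to a LASSO instance. It shows the classical convex template that APG\(_{s}\)-type methods build on, not the authors' modified scheme; the stepsize and momentum rules are standard textbook choices rather than those of the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def apg(grad_f, prox_g, x0, step, n_iter=500):
    """Generic accelerated proximal gradient (FISTA-style) iteration."""
    x_prev = x0.copy()
    y = x0.copy()
    t_prev = 1.0
    for _ in range(n_iter):
        x = prox_g(y - step * grad_f(y), step)          # proximal gradient step
        t = (1.0 + np.sqrt(1.0 + 4.0 * t_prev**2)) / 2.0
        y = x + ((t_prev - 1.0) / t) * (x - x_prev)     # momentum extrapolation
        x_prev, t_prev = x, t
    return x_prev

# Example: LASSO  min 0.5*||Ax - b||^2 + lam*||x||_1  (synthetic data)
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
grad = lambda x: A.T @ (A @ x - b)
prox = lambda v, s: soft_threshold(v, lam * s)
x_star = apg(grad, prox, np.zeros(100), 1.0 / L)
```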
{"title":"The modified second APG method for a class of nonconvex nonsmooth problems","authors":"Kexin Ren, Chunguang Liu, Lumiao Wang","doi":"10.1007/s11590-024-02132-x","DOIUrl":"https://doi.org/10.1007/s11590-024-02132-x","url":null,"abstract":"<p>In this paper, we consider <i> the modified second accelerated proximal gradient algorithm</i> (APG<span>(_{s})</span>) introduced in Lin and Liu (Optim Lett 13(4), 805–824, 2019), discuss the behaviour of this method on more general cases, prove the convergence properties under weaker assumptions. Finally, numerical experiments are performed to support our theoretical results.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"28 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141572385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A limited memory subspace minimization conjugate gradient algorithm for unconstrained optimization
Pub Date: 2024-07-02 | DOI: 10.1007/s11590-024-02131-y
Zexian Liu, Yu-Hong Dai, Hongwei Liu
Subspace minimization conjugate gradient (SMCG) methods are a class of quite efficient iterative methods for unconstrained optimization. Orthogonality is an important property of the linear conjugate gradient method; it is, however, observed that the orthogonality of the gradients in the linear conjugate gradient method is often lost, which usually causes slow convergence. Based on SMCG_BB (Liu and Liu in J Optim Theory Appl 180(3):879–906, 2019), we combine the subspace minimization conjugate gradient method with the limited memory technique and present a limited memory subspace minimization conjugate gradient algorithm for unconstrained optimization. The proposed method includes two types of iterations: the SMCG iteration and the quasi-Newton (QN) iteration. In the SMCG iteration, the search direction is determined by solving a quadratic approximation problem, in which the important parameter is estimated based on some properties of the objective function at the current iterate. In the QN iteration, a modified quasi-Newton method in the subspace is proposed to improve the orthogonality. Additionally, a modified strategy for choosing the initial stepsize is employed. The global convergence of the proposed method is established under weak conditions. Numerical results indicate that, for the tested functions in the CUTEr library, the proposed method improves greatly over SMCG_BB, is comparable to the latest limited memory conjugate gradient software package CG_DESCENT (6.8) (Hager and Zhang in SIAM J Optim 23(4):2150–2168, 2013), and is also superior to the well-known limited memory BFGS (L-BFGS) method.
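As background, here is a minimal nonlinear conjugate gradient loop in Python with a Fletcher–Reeves update and Armijo backtracking. It illustrates the generic CG iteration that SMCG-type methods refine; it is emphatically not the proposed limited memory algorithm, and the restart safeguard and test function are illustrative choices.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, n_iter=200, c=1e-4):
    """Plain nonlinear CG with Fletcher-Reeves beta and Armijo backtracking."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        # Armijo backtracking line search along the descent direction d
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + c * alpha * (g @ d):
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        if g_new @ d >= 0:                  # safeguard: restart if not descent
            d = -g_new
        x, g = x_new, g_new
    return x

# Example: 2-D Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
print(nonlinear_cg(f, grad, np.array([-1.2, 1.0])))
```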
{"title":"A limited memory subspace minimization conjugate gradient algorithm for unconstrained optimization","authors":"Zexian Liu, Yu-Hong Dai, Hongwei Liu","doi":"10.1007/s11590-024-02131-y","DOIUrl":"https://doi.org/10.1007/s11590-024-02131-y","url":null,"abstract":"<p>Subspace minimization conjugate gradient (SMCG) methods are a class of quite efficient iterative methods for unconstrained optimization. The orthogonality is an important property of linear conjugate gradient method. It is however observed that the orthogonality of the gradients in linear conjugate gradient method is often lost, which usually causes slow convergence. Based on SMCG<span>(_)</span>BB (Liu and Liu in J Optim Theory Appl 180(3):879–906, 2019), we combine subspace minimization conjugate gradient method with the limited memory technique and present a limited memory subspace minimization conjugate gradient algorithm for unconstrained optimization. The proposed method includes two types of iterations: SMCG iteration and quasi-Newton (QN) iteration. In the SMCG iteration, the search direction is determined by solving a quadratic approximation problem, in which the important parameter is estimated based on some properties of the objective function at the current iterative point. In the QN iteration, a modified quasi-Newton method in the subspace is proposed to improve the orthogonality. Additionally, a modified strategy for choosing the initial stepsize is exploited. The global convergence of the proposed method is established under weak conditions. Some numerical results indicate that, for the tested functions in the CUTEr library, the proposed method has a great improvement over SMCG<span>(_)</span>BB, and it is comparable to the latest limited memory conjugate gradient software package CG<span>(_)</span>DESCENT (6.8) (Hager and Zhang in SIAM J Optim 23(4):2150–2168, 2013) and is also superior to the famous limited memory BFGS (L-BFGS) method.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"21 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An accelerated Lyapunov function for Polyak’s Heavy-ball on convex quadratics
Pub Date: 2024-06-25 | DOI: 10.1007/s11590-024-02119-8
Antonio Orvieto
In 1964, Polyak showed that the Heavy-ball method, the simplest momentum technique, accelerates convergence on strongly convex problems in the vicinity of the solution. While Nesterov later developed a globally accelerated version, Polyak’s original algorithm remains simpler and more widely used in applications such as deep learning. Despite this popularity, the question of whether Heavy-ball is also globally accelerated has not been fully answered yet, and no convincing counterexample has been provided. This is largely due to the difficulty of finding an effective Lyapunov function: indeed, most proofs of Heavy-ball acceleration in the strongly convex quadratic setting rely on eigenvalue arguments. Our work adopts a different approach: studying momentum through the lens of quadratic invariants of simple harmonic oscillators. By utilizing the modified Hamiltonian of Störmer–Verlet integrators, we are able to construct a Lyapunov function that demonstrates an \(O(1/k^2)\) rate for Heavy-ball in the case of convex quadratic problems. Our novel proof technique, though restricted to linear regression, is found empirically to work well also on non-quadratic convex problems, and thus provides insight into the structure of Lyapunov functions to be used in the general convex case. As such, our paper makes a promising first step towards potentially proving the acceleration of Polyak’s momentum method, and we hope it inspires further research around this question.
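A minimal sketch of Polyak’s Heavy-ball iteration on a strongly convex quadratic, using the classical tuning \(\alpha = 4/(\sqrt{L}+\sqrt{\mu})^2\) and \(\beta = ((\sqrt{L}-\sqrt{\mu})/(\sqrt{L}+\sqrt{\mu}))^2\); the test problem and tuning are standard textbook choices, not taken from the paper.

```python
import numpy as np

def heavy_ball(A, b, x0, n_iter=300):
    """Polyak's Heavy-ball on the quadratic f(x) = 0.5 x^T A x - b^T x,
    with the classical tuning for eigenvalues in [mu, L]."""
    eigs = np.linalg.eigvalsh(A)
    mu, L = eigs[0], eigs[-1]
    alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2                           # stepsize
    beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2   # momentum
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(n_iter):
        grad = A @ x - b
        # simultaneous update: new x uses the previous iterate for momentum
        x, x_prev = x - alpha * grad + beta * (x - x_prev), x
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
A = M @ M.T + np.eye(20)            # symmetric positive definite
b = rng.standard_normal(20)
x = heavy_ball(A, b, np.zeros(20))
print(np.linalg.norm(A @ x - b))    # residual should be tiny
```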
{"title":"An accelerated lyapunov function for Polyak’s Heavy-ball on convex quadratics","authors":"Antonio Orvieto","doi":"10.1007/s11590-024-02119-8","DOIUrl":"https://doi.org/10.1007/s11590-024-02119-8","url":null,"abstract":"<p>In 1964, Polyak showed that the Heavy-ball method, the simplest momentum technique, accelerates convergence of strongly-convex problems in the vicinity of the solution. While Nesterov later developed a globally accelerated version, Polyak’s original algorithm remains simpler and more widely used in applications such as deep learning. Despite this popularity, the question of whether Heavy-ball is also globally accelerated or not has not been fully answered yet, and no convincing counterexample has been provided. This is largely due to the difficulty in finding an effective Lyapunov function: indeed, most proofs of Heavy-ball acceleration in the strongly-convex quadratic setting rely on eigenvalue arguments. Our work adopts a different approach: studying momentum through the lens of quadratic invariants of simple harmonic oscillators. By utilizing the modified Hamiltonian of Störmer-Verlet integrators, we are able to construct a Lyapunov function that demonstrates an <span>(O(1/k^2))</span> rate for Heavy-ball in the case of convex quadratic problems. Our novel proof technique, though restricted to linear regression, is found to work well empirically also on non-quadratic convex problems, and thus provides insights on the structure of Lyapunov functions to be used in the general convex case. As such, our paper makes a promising first step towards potentially proving the acceleration of Polyak’s momentum method and we hope it inspires further research around this question.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"24 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Benchmark-based deviation and drawdown measures in portfolio optimization
Pub Date: 2024-06-25 | DOI: 10.1007/s11590-024-02124-x
Michael Zabarankin, Bogdan Grechuk, Dawei Hao
Understanding and modeling an agent’s risk/reward preferences is a central problem in many applications of risk management, including investment science and portfolio theory in particular. One approach is to axiomatically define a set of performance measures and to use a benchmark to identify a particular measure from that set by either inverse optimization or functional dominance. For example, such a benchmark could be the rate of return of an existing attractive financial instrument. This work introduces deviation and drawdown measures that incorporate the rates of return of indicated financial instruments (benchmarks). For discrete distributions and discrete sample paths, portfolio problems with such measures are reduced to linear programs and solved based on historical data for the cases of a single benchmark and of three benchmarks used simultaneously. The optimal portfolios and the corresponding benchmarks have similar expected/cumulative rates of return in sample and out of sample, but the former are considerably less volatile.
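As a toy illustration of a benchmark-based measure on a discrete sample path, the sketch below computes the maximum drawdown of a portfolio’s cumulative rate of return measured relative to a benchmark’s. The paper’s deviation and drawdown measures and their linear programming reformulations are more general; all data here are synthetic.

```python
import numpy as np

def relative_drawdown(port_returns, bench_returns):
    """Maximum drawdown of the cumulative excess rate of return of a
    portfolio over a benchmark, on a discrete sample path."""
    excess = np.cumsum(port_returns - bench_returns)   # cumulative excess return
    running_max = np.maximum.accumulate(excess)        # best level reached so far
    return np.max(running_max - excess)                # worst drop from that level

rng = np.random.default_rng(2)
port = rng.normal(0.0005, 0.01, 250)     # daily portfolio rates of return
bench = rng.normal(0.0004, 0.008, 250)   # daily benchmark rates of return
print(relative_drawdown(port, bench))
```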
{"title":"Benchmark-based deviation and drawdown measures in portfolio optimization","authors":"Michael Zabarankin, Bogdan Grechuk, Dawei Hao","doi":"10.1007/s11590-024-02124-x","DOIUrl":"https://doi.org/10.1007/s11590-024-02124-x","url":null,"abstract":"<p>Understanding and modeling of agent’s risk/reward preferences is a central problem in various applications of risk management including investment science and portfolio theory in particular. One of the approaches is to axiomatically define a set of performance measures and to use a benchmark to identify a particular measure from that set by either inverse optimization or functional dominance. For example, such a benchmark could be the rate of return of an existing attractive financial instrument. This work introduces deviation and drawdown measures that incorporate rates of return of indicated financial instruments (benchmarks). For discrete distributions and discrete sample paths, portfolio problems with such measures are reduced to linear programs and solved based on historical data in cases of a single benchmark and three benchmarks used simultaneously. The optimal portfolios and corresponding benchmarks have similar expected/cumulative rates of return in sample and out of sample, but the former are considerably less volatile.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"74 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Strategy investments in zero-sum games
Pub Date: 2024-06-24 | DOI: 10.1007/s11590-024-02130-z
Raul Garcia, Seyedmohammadhossein Hosseinian, Mallesh Pai, Andrew J. Schaefer
We propose an extension of two-player zero-sum games in which one player may select the available actions for themselves and the opponent, subject to a budget constraint. We present a mixed-integer linear programming (MILP) formulation for the problem, provide analytical results regarding its solution, and discuss applications in the security and advertising domains. Our computational experiments demonstrate that heuristic approaches, on average, yield suboptimal solutions with a relative gap of at least 20% from those obtained by the MILP formulation.
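For context, the classical unbudgeted zero-sum game is already solvable as a linear program; the sketch below solves the row player’s maximin problem with scipy, and the paper’s extension adds budgeted action selection on top of such a formulation. The matching-pennies example is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Row player's optimal mixed strategy for a zero-sum game with payoff
    matrix A (row player maximizes), via the classical LP formulation."""
    m, n = A.shape
    # Variables: x (m strategy weights) and v (game value). Minimize -v.
    c = np.concatenate([np.zeros(m), [-1.0]])
    # For every column j:  v <= (A^T x)_j   <=>   -(A^T x)_j + v <= 0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)   # sum(x) = 1
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]                   # v is free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[m]

# Matching pennies: value 0, uniform optimal strategies
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, v = solve_zero_sum(A)
print(x, v)   # ~[0.5, 0.5], ~0.0
```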
{"title":"Strategy investments in zero-sum games","authors":"Raul Garcia, Seyedmohammadhossein Hosseinian, Mallesh Pai, Andrew J. Schaefer","doi":"10.1007/s11590-024-02130-z","DOIUrl":"https://doi.org/10.1007/s11590-024-02130-z","url":null,"abstract":"<p>We propose an extension of two-player zero-sum games, where one player may select available actions for themselves and the opponent, subject to a budget constraint. We present a mixed-integer linear programming (MILP) formulation for the problem, provide analytical results regarding its solution, and discuss applications in the security and advertising domains. Our computational experiments demonstrate that heuristic approaches, on average, yield suboptimal solutions with at least a 20% relative gap with those obtained by the MILP formulation.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the linear convergence rate of Riemannian proximal gradient method
Pub Date: 2024-06-19 | DOI: 10.1007/s11590-024-02129-6
Woocheol Choi, Changbum Chun, Yoon Mo Jung, Sangwoon Yun
Composite optimization problems on Riemannian manifolds arise in applications such as sparse principal component analysis and dictionary learning. Recently, a Riemannian proximal gradient method (Huang and Wei in Math Program 194:371–413, 2022) and an inexact Riemannian proximal gradient method (Wen and Ke in Comput Optim Appl 85:1–32, 2023) were introduced, utilizing the retraction mapping to address these challenges. These works established the sublinear convergence rate of the Riemannian proximal gradient method under retraction convexity and a geometric condition on retractions, as well as the local linear convergence rate of the inexact Riemannian proximal gradient method under the Riemannian Kurdyka–Łojasiewicz property. In this paper, we demonstrate the linear convergence rate of the Riemannian proximal gradient method, and of the proximal gradient method proposed in Chen et al. (SIAM J Optim 30:210–239, 2020), under strong retraction convexity. Additionally, we provide a counterexample that violates the geometric condition on retractions, which is crucial for establishing the sublinear convergence rate of the Riemannian proximal gradient method.
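Heavily simplified illustration: a proximal-gradient-type iteration on the unit sphere for a sparse-PCA-like objective, where the retraction is plain normalization and the proximal step is taken in the ambient space before retracting. The cited methods instead solve a proximal subproblem in the tangent space, so this is only a rough stand-in, and all parameters are ad hoc.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_sphere(C, lam, x0, step=0.1, n_iter=500):
    """Crude sketch for  min -x^T C x + lam*||x||_1  s.t. ||x|| = 1:
    ambient proximal gradient step, then the normalization retraction."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_iter):
        egrad = -2.0 * C @ x                    # Euclidean gradient of -x^T C x
        rgrad = egrad - (x @ egrad) * x         # project onto the tangent space
        y = soft_threshold(x - step * rgrad, step * lam)
        if np.linalg.norm(y) > 0:
            x = y / np.linalg.norm(y)           # retraction: renormalize
    return x

rng = np.random.default_rng(3)
B = rng.standard_normal((30, 10))
C = B.T @ B / 30.0                              # sample covariance
x = prox_grad_sphere(C, lam=0.5, x0=rng.standard_normal(10))
print(np.round(x, 3))                           # a sparse leading direction
```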
{"title":"On the linear convergence rate of Riemannian proximal gradient method","authors":"Woocheol Choi, Changbum Chun, Yoon Mo Jung, Sangwoon Yun","doi":"10.1007/s11590-024-02129-6","DOIUrl":"https://doi.org/10.1007/s11590-024-02129-6","url":null,"abstract":"<p>Composite optimization problems on Riemannian manifolds arise in applications such as sparse principal component analysis and dictionary learning. Recently, Huang and Wei introduced a Riemannian proximal gradient method (Huang and Wei in MP 194:371–413, 2022) and an inexact Riemannian proximal gradient method (Wen and Ke in COA 85:1–32, 2023), utilizing the retraction mapping to address these challenges. They established the sublinear convergence rate of the Riemannian proximal gradient method under the retraction convexity and a geometric condition on retractions, as well as the local linear convergence rate of the inexact Riemannian proximal gradient method under the Riemannian Kurdyka-Lojasiewicz property. In this paper, we demonstrate the linear convergence rate of the Riemannian proximal gradient method and the linear convergence rate of the proximal gradient method proposed in Chen et al. (SIAM J Opt 30:210–239, 2020) under strong retraction convexity. Additionally, we provide a counterexample that violates the geometric condition on retractions, which is crucial for establishing the sublinear convergence rate of the Riemannian proximal gradient method.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"5 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A modification of the forward–backward splitting method for monotone inclusions
Pub Date: 2024-06-18 | DOI: 10.1007/s11590-024-02128-7
Van Dung Nguyen
In this work, we propose a new splitting method for monotone inclusion problems with three operators in real Hilbert spaces, in which one operator is maximal monotone, one is monotone-Lipschitz, and one is cocoercive. By specializing to inclusions with two operators, we recover the forward–backward method and a generalization of the reflected forward–backward splitting method as particular cases. The weak convergence of the algorithm is established under standard assumptions. A linear convergence rate for the proposed method is obtained under an additional condition such as strong monotonicity. We also give some theoretical comparisons to demonstrate the efficiency of the proposed method.
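The two-operator baseline that the proposed scheme generalizes is the forward–backward iteration \(x_{k+1} = J_{\gamma A}(x_k - \gamma B x_k)\). Below is a minimal sketch, assuming a normal-cone operator of a box (so the resolvent is a simple projection) plus a cocoercive affine operator; the problem data are synthetic.

```python
import numpy as np

def forward_backward(resolvent_A, B, x0, gamma, n_iter=1000):
    """Forward-backward splitting for 0 in A(x) + B(x):
    a forward step on the cocoercive operator B, then the resolvent of A."""
    x = x0.copy()
    for _ in range(n_iter):
        x = resolvent_A(x - gamma * B(x))
    return x

# Example: 0 in N_C(x) + (Mx + q) with C = [0,1]^n, i.e. the variational
# inequality for the affine operator Mx + q over the box.
rng = np.random.default_rng(4)
n = 8
R = rng.standard_normal((n, n))
M = R @ R.T                                 # symmetric PSD => Mx + q is cocoercive
q = rng.standard_normal(n)
L = np.linalg.eigvalsh(M)[-1]
proj_box = lambda z: np.clip(z, 0.0, 1.0)   # resolvent of the normal cone of C
B = lambda x: M @ x + q
x = forward_backward(proj_box, B, np.zeros(n), gamma=1.0 / L)   # gamma < 2/L
print(np.round(x, 4))
```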
{"title":"A modification of the forward–backward splitting method for monotone inclusions","authors":"Van Dung Nguyen","doi":"10.1007/s11590-024-02128-7","DOIUrl":"https://doi.org/10.1007/s11590-024-02128-7","url":null,"abstract":"<p>In this work, we propose a new splitting method for monotone inclusion problems with three operators in real Hilbert spaces, in which one is maximal monotone, one is monotone-Lipschitz and one is cocoercive. By specializing in two operator inclusion, we recover the forward–backward and the generalization of the reflected-forward–backward splitting methods as particular cases. The weak convergence of the algorithm under standard assumptions is established. The linear convergence rate of the proposed method is obtained under an additional condition like the strong monotonicity. We also give some theoretical comparisons to demonstrate the efficiency of the proposed method.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"2015 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Solution polishing via path relinking for continuous black-box optimization
Pub Date: 2024-06-12 | DOI: 10.1007/s11590-024-02127-8
Dimitri J. Papageorgiou, Jan Kronqvist, Asha Ramanujam, James Kor, Youngdae Kim, Can Li
When faced with a limited budget of function evaluations, state-of-the-art black-box optimization (BBO) solvers struggle to obtain globally, or sometimes even locally, optimal solutions. In such cases, one may pursue solution polishing, i.e., a computational method to improve (or “polish”) an incumbent solution, typically via some sort of evolutionary algorithm involving two or more solutions. While solution polishing in “white-box” optimization has existed for years, relatively little has been published regarding its application in costly-to-evaluate BBO. To fill this void, we explore two novel methods for performing solution polishing along one-dimensional curves rather than along straight lines. We introduce a convex quadratic program that can generate promising curves through multiple elite solutions, i.e., via path relinking, or around a single elite solution. In comparing four solution polishing techniques for continuous BBO, we show that solution polishing along a curve is competitive with solution polishing using a state-of-the-art BBO solver.
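A minimal sketch of polishing along a one-dimensional curve: sample a quadratic Bézier curve anchored at two elite solutions, with a third elite solution as the control point, and keep the best sample. The paper generates its curves via a convex quadratic program, so the Bézier construction here is a simple stand-in, and the objective is a toy.

```python
import numpy as np

def polish_along_curve(f, x_a, x_b, x_c, n_samples=50):
    """Evaluate f along a quadratic Bezier curve from x_a to x_c with control
    point x_b, and return the best point found on the curve."""
    ts = np.linspace(0.0, 1.0, n_samples)
    best_x, best_f = None, np.inf
    for t in ts:
        x = (1 - t)**2 * x_a + 2 * t * (1 - t) * x_b + t**2 * x_c
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Toy black-box objective and three "elite" solutions
f = lambda x: np.sum((x - 1.0)**2) + 0.3 * np.sum(np.sin(5 * x))
rng = np.random.default_rng(5)
elites = [rng.standard_normal(5) for _ in range(3)]
x, fx = polish_along_curve(f, *elites)
print(fx)
```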
{"title":"Solution polishing via path relinking for continuous black-box optimization","authors":"Dimitri J. Papageorgiou, Jan Kronqvist, Asha Ramanujam, James Kor, Youngdae Kim, Can Li","doi":"10.1007/s11590-024-02127-8","DOIUrl":"https://doi.org/10.1007/s11590-024-02127-8","url":null,"abstract":"<p>When faced with a limited budget of function evaluations, state-of-the-art black-box optimization (BBO) solvers struggle to obtain globally, or sometimes even locally, optimal solutions. In such cases, one may pursue solution polishing, i.e., a computational method to improve (or “polish”) an incumbent solution, typically via some sort of evolutionary algorithm involving two or more solutions. While solution polishing in “white-box” optimization has existed for years, relatively little has been published regarding its application in costly-to-evaluate BBO. To fill this void, we explore two novel methods for performing solution polishing along one-dimensional curves rather than along straight lines. We introduce a convex quadratic program that can generate promising curves through multiple elite solutions, i.e., via path relinking, or around a single elite solution. In comparing four solution polishing techniques for continuous BBO, we show that solution polishing along a curve is competitive with solution polishing using a state-of-the-art BBO solver.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"40 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dual and generalized dual cones in Banach spaces
Pub Date: 2024-06-11 | DOI: 10.1007/s11590-024-02126-9
Akhtar A. Khan, Dezhou Kong, Jinlu Li
This paper proposes and analyzes the notion of dual cones associated with the metric projection and the generalized projection in Banach spaces. We show that the dual cones related to the metric projection and the generalized metric projection lose many important properties in the transition from Hilbert spaces to Banach spaces. We also propose and analyze the notions of faces and visions in Banach spaces and relate them to the metric projection and the generalized projection. We provide many illustrative examples to give insight into these results.
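For orientation, here are the standard definitions underlying these constructions (the paper's projection-associated dual cones are defined relative to them); this is a sketch of the classical notions, with \(J\) the normalized duality mapping and \(K\) a nonempty closed convex cone in a Banach space \(X\) with dual \(X^{*}\):

```latex
\[
  K^{*} \;=\; \{\, x^{*} \in X^{*} \;:\; \langle x^{*}, x\rangle \ge 0
  \ \text{for all } x \in K \,\}
  \qquad \text{(dual cone)}
\]
\[
  P_{K}(x) \;\in\; \operatorname*{arg\,min}_{y \in K} \|x - y\|
  \qquad \text{(metric projection)}
\]
\[
  \Pi_{K}(x) \;\in\; \operatorname*{arg\,min}_{y \in K}
  \bigl( \|x\|^{2} - 2\langle Jx, y\rangle + \|y\|^{2} \bigr)
  \qquad \text{(generalized projection, Alber)}
\]
```

In a Hilbert space \(J\) is the identity and \(\Pi_{K} = P_{K}\); the divergence of these two projections outside Hilbert spaces is what drives the loss of properties studied in the paper.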
{"title":"Dual and generalized dual cones in Banach spaces","authors":"Akhtar A. Khan, Dezhou Kong, Jinlu Li","doi":"10.1007/s11590-024-02126-9","DOIUrl":"https://doi.org/10.1007/s11590-024-02126-9","url":null,"abstract":"<p>This paper proposes and analyzes the notion of dual cones associated with the metric projection and generalized projection in Banach spaces. We show that the dual cones, related to the metric projection and generalized metric projection, lose many important properties in transitioning from Hilbert spaces to Banach spaces. We also propose and analyze the notions of faces and visions in Banach spaces and relate them to metric projection and generalized projection. We provide many illustrative examples to give insight into the given results</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"188 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141513847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}