
Latest publications in Optimization Letters

A faster heuristic for the traveling salesman problem with drone
IF 1.6 | CAS Zone 4, Mathematics | Q2 MATHEMATICS, APPLIED | Pub Date: 2024-07-08 | DOI: 10.1007/s11590-024-02134-9
Pedro Henrique Del Bianco Hokama, Carla Negri Lintzmayer, Mário César San Felice

The Flying Sidekick Traveling Salesman Problem (FSTSP) consists of using one truck and one drone to perform deliveries to a set of customers. The drone is limited to delivering to one customer at a time, after which it returns to the truck, from where it can be launched again. The goal is to minimize the time required to service all customers and return both vehicles to the depot. In the literature, we can find heuristics for this problem that follow the order-first split-second approach: find a Hamiltonian cycle h with all customers, and then remove some customers to be handled by the drone while deciding from where the drone will be launched and where it will be retrieved. Indeed, they optimally solve the h-FSTSP, a variation that consists of solving the FSTSP while respecting a given initial cycle h. We present the Lazy Drone Property, which guarantees that only some combinations of nodes for the launch and retrieval of the drone need to be considered by algorithms for the h-FSTSP. We also present an algorithm that uses the property, and we show experimental results which corroborate its effectiveness in decreasing the running time of such algorithms. Our algorithm was shown to be more than 84 times faster than the previously best-known ones on the literature benchmark. Moreover, on average, it considered a number of launch-and-retrieval pairs that is linear in the number of customers, indicating that the algorithm's performance should be sustainable for larger instances.
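To make the order-first split-second idea concrete, here is a deliberately naive baseline sketch: given a fixed Hamiltonian cycle, it enumerates every (launch, drone-customer, retrieval) triple and picks the cheapest. The toy travel-time matrices and all names are illustrative assumptions; this is the brute-force O(n^3) enumeration that properties like the Lazy Drone Property aim to prune, not the authors' algorithm.

```python
import itertools

def naive_h_fstsp_eval(cycle, truck, drone, endurance=float("inf")):
    """Exhaustively evaluate every (launch, drone-customer, retrieval)
    triple on a fixed Hamiltonian cycle.  `truck[a][b]` / `drone[a][b]`
    are travel times; `cycle` starts and ends at the depot.
    Returns (launch_idx, customer_idx, retrieval_idx, completion_time)."""
    n = len(cycle)
    # prefix[t] = truck time from cycle[0] to cycle[t] along the cycle.
    prefix = [0.0]
    for a, b in zip(cycle, cycle[1:]):
        prefix.append(prefix[-1] + truck[a][b])
    best = None
    for i, j, k in itertools.combinations(range(n), 3):
        # Drone serves cycle[j]: launched at cycle[i], retrieved at cycle[k].
        drone_leg = drone[cycle[i]][cycle[j]] + drone[cycle[j]][cycle[k]]
        if drone_leg > endurance:
            continue
        # Truck rides i -> k along the cycle but skips customer j.
        truck_leg = (prefix[k] - prefix[i]
                     - truck[cycle[j - 1]][cycle[j]]
                     - truck[cycle[j]][cycle[j + 1]]
                     + truck[cycle[j - 1]][cycle[j + 1]])
        total = prefix[i] + max(truck_leg, drone_leg) + (prefix[-1] - prefix[k])
        if best is None or total < best[3]:
            best = (i, j, k, total)
    return best

# Toy symmetric instance: depot 0 and customers 1..3; drone twice as fast.
T = [[0, 2, 4, 2], [2, 0, 2, 2], [4, 2, 0, 2], [2, 2, 2, 0]]
D = [[t / 2 for t in row] for row in T]
best = naive_h_fstsp_eval([0, 1, 2, 3, 0], T, D)
```

On this instance the truck-only cycle takes 8 time units, while letting the drone serve customer 2 brings the makespan down to 6.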

Citations: 0
Improved randomized approaches to the location of a conservative hyperplane
IF 1.6 | CAS Zone 4, Mathematics | Q2 MATHEMATICS, APPLIED | Pub Date: 2024-07-06 | DOI: 10.1007/s11590-024-02136-7
Xiaosong Ding, Jun Ma, Xiuming Li, Xi Chen

This paper presents improved approaches to the combinatorial challenges associated with the search for conservative cuts arising in disjoint bilinear programming. We introduce a new randomized approach that leverages the active-constraint information within a hyperplane containing the given local solution. It can restrict the search process to a single dimension and mitigate the impact of growing degeneracy on the computational load. The use of recursion further refines our strategy by systematically reducing the number of adjacent vertices available for exchange. Extensive computational experiments validate that these approaches can significantly enhance computational efficiency, down to the order of 10^{-3} s, particularly for problems with high dimension and degree of degeneracy.

Citations: 0
The modified second APG method for a class of nonconvex nonsmooth problems
IF 1.6 | CAS Zone 4, Mathematics | Q2 MATHEMATICS, APPLIED | Pub Date: 2024-07-06 | DOI: 10.1007/s11590-024-02132-x
Kexin Ren, Chunguang Liu, Lumiao Wang

In this paper, we consider the modified second accelerated proximal gradient algorithm (APG_s) introduced in Lin and Liu (Optim Lett 13(4), 805–824, 2019), discuss the behaviour of this method in more general cases, and prove its convergence properties under weaker assumptions. Finally, numerical experiments are performed to support our theoretical results.
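For orientation, the following is the textbook accelerated proximal gradient (FISTA) baseline applied to a lasso problem — the classical scheme that APG-type variants modify; it is a generic sketch, not the paper's modified second APG (APG_s), and the lasso instance is an illustrative assumption.

```python
import numpy as np

def apg_lasso(A, b, lam, steps=200):
    """Classical accelerated proximal gradient (FISTA) for
    min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(steps):
        z = y - A.T @ (A @ y - b) / L          # forward (gradient) step
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox of lam*||.||_1
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# With A = I the solution is the soft-thresholding of b.
x = apg_lasso(np.eye(3), np.array([3.0, -0.5, 0.0]), lam=1.0)
```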

Citations: 0
Correction: Multistart algorithm for identifying all optima of nonconvex stochastic functions
IF 1.3 | CAS Zone 4, Mathematics | Q2 MATHEMATICS, APPLIED | Pub Date: 2024-07-02 | DOI: 10.1007/s11590-024-02135-8
Prateek Jaiswal, Jeffrey Larson
Citations: 0
A limited memory subspace minimization conjugate gradient algorithm for unconstrained optimization
IF 1.6 | CAS Zone 4, Mathematics | Q2 MATHEMATICS, APPLIED | Pub Date: 2024-07-02 | DOI: 10.1007/s11590-024-02131-y
Zexian Liu, Yu-Hong Dai, Hongwei Liu

Subspace minimization conjugate gradient (SMCG) methods are a class of quite efficient iterative methods for unconstrained optimization. Orthogonality is an important property of the linear conjugate gradient method; in practice, however, the orthogonality of the gradients is often lost, which usually causes slow convergence. Based on SMCG_BB (Liu and Liu in J Optim Theory Appl 180(3):879–906, 2019), we combine the subspace minimization conjugate gradient method with the limited memory technique and present a limited memory subspace minimization conjugate gradient algorithm for unconstrained optimization. The proposed method includes two types of iterations: SMCG iterations and quasi-Newton (QN) iterations. In an SMCG iteration, the search direction is determined by solving a quadratic approximation problem, in which the important parameter is estimated based on some properties of the objective function at the current iterate. In a QN iteration, a modified quasi-Newton method in the subspace is proposed to improve orthogonality. Additionally, a modified strategy for choosing the initial stepsize is exploited. The global convergence of the proposed method is established under weak conditions. Numerical results indicate that, for the tested functions in the CUTEr library, the proposed method improves substantially on SMCG_BB, is comparable to the latest limited memory conjugate gradient software package CG_DESCENT (6.8) (Hager and Zhang in SIAM J Optim 23(4):2150–2168, 2013), and is also superior to the well-known limited memory BFGS (L-BFGS) method.
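The loss of gradient orthogonality mentioned above can be observed directly: running textbook linear CG on an ill-conditioned matrix and measuring the cosines between residuals (which are exactly orthogonal in exact arithmetic) shows how the property degrades in floating point. This is a self-contained illustration of the motivating phenomenon, not the paper's limited memory SMCG algorithm.

```python
import numpy as np

def cg_residuals(A, b, iters):
    """Textbook linear CG, recording the residuals (negative gradients)
    so their mutual orthogonality can be checked in floating point."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    res = [r.copy()]
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        res.append(r.copy())
    return res

n = 12
# Hilbert matrix: SPD but extremely ill-conditioned.
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
g = cg_residuals(A, np.ones(n), n)

def cosine(u, v):
    return abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-300)

# Adjacent residuals stay nearly orthogonal, but the worst cosine over
# distant pairs typically drifts far from zero on this matrix.
worst = max(cosine(g[a], g[b]) for a in range(n) for b in range(a + 2, n + 1))
```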

Citations: 0
An accelerated lyapunov function for Polyak's Heavy-ball on convex quadratics
IF 1.6 | CAS Zone 4, Mathematics | Q2 MATHEMATICS, APPLIED | Pub Date: 2024-06-25 | DOI: 10.1007/s11590-024-02119-8
Antonio Orvieto

In 1964, Polyak showed that the Heavy-ball method, the simplest momentum technique, accelerates convergence of strongly-convex problems in the vicinity of the solution. While Nesterov later developed a globally accelerated version, Polyak's original algorithm remains simpler and more widely used in applications such as deep learning. Despite this popularity, the question of whether Heavy-ball is also globally accelerated has not been fully answered yet, and no convincing counterexample has been provided. This is largely due to the difficulty of finding an effective Lyapunov function: indeed, most proofs of Heavy-ball acceleration in the strongly-convex quadratic setting rely on eigenvalue arguments. Our work adopts a different approach: studying momentum through the lens of quadratic invariants of simple harmonic oscillators. By utilizing the modified Hamiltonian of Störmer–Verlet integrators, we are able to construct a Lyapunov function that demonstrates an O(1/k^2) rate for Heavy-ball in the case of convex quadratic problems. Our novel proof technique, though restricted to linear regression, is found to work well empirically also on non-quadratic convex problems, and thus provides insights on the structure of Lyapunov functions to be used in the general convex case. As such, our paper makes a promising first step towards potentially proving the acceleration of Polyak's momentum method, and we hope it inspires further research around this question.
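For reference, Polyak's Heavy-ball iteration on a strongly convex quadratic, with the classical step-size/momentum tuning that comes out of the eigenvalue analysis the abstract alludes to. This is the standard method itself, not the paper's Lyapunov-function construction; the diagonal test problem is an illustrative assumption.

```python
import numpy as np

def heavy_ball(A, b, steps=200):
    """Polyak's heavy-ball on f(x) = 0.5 x^T A x - b^T x:
    x_{k+1} = x_k - a * grad f(x_k) + m * (x_k - x_{k-1}),
    with the classical tuning a = 4/(sqrt(L)+sqrt(mu))^2 and
    m = ((sqrt(L)-sqrt(mu))/(sqrt(L)+sqrt(mu)))^2."""
    lam = np.linalg.eigvalsh(A)                # ascending eigenvalues
    mu, L = lam[0], lam[-1]
    a = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
    m = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2
    x_prev = x = np.zeros_like(b)
    for _ in range(steps):
        x, x_prev = x - a * (A @ x - b) + m * (x - x_prev), x
    return x

# Condition number 10; the minimizer is A^{-1} b = [1, 1].
x = heavy_ball(np.diag([1.0, 10.0]), np.array([1.0, 10.0]))
```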

Citations: 0
Benchmark-based deviation and drawdown measures in portfolio optimization
IF 1.6 | CAS Zone 4, Mathematics | Q2 MATHEMATICS, APPLIED | Pub Date: 2024-06-25 | DOI: 10.1007/s11590-024-02124-x
Michael Zabarankin, Bogdan Grechuk, Dawei Hao

Understanding and modeling an agent's risk/reward preferences is a central problem in various applications of risk management, including investment science and portfolio theory in particular. One approach is to axiomatically define a set of performance measures and to use a benchmark to identify a particular measure from that set by either inverse optimization or functional dominance. For example, such a benchmark could be the rate of return of an existing attractive financial instrument. This work introduces deviation and drawdown measures that incorporate the rates of return of indicated financial instruments (benchmarks). For discrete distributions and discrete sample paths, portfolio problems with such measures are reduced to linear programs and solved based on historical data for the cases of a single benchmark and of three benchmarks used simultaneously. The optimal portfolios and the corresponding benchmarks have similar expected/cumulative rates of return in sample and out of sample, but the former are considerably less volatile.
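The reduction to linear programs mentioned above can be sketched with a generic benchmark-shortfall model: minimize the average shortfall of the portfolio return below a benchmark return across discrete scenarios. The scenario data, variable layout, and shortfall objective are illustrative assumptions, not the paper's specific deviation or drawdown measures.

```python
import numpy as np
from scipy.optimize import linprog

# Scenario returns: S scenarios x n assets, plus a benchmark return per scenario.
R = np.array([[0.03, 0.00],
              [0.03, 0.00]])
rb = np.array([0.01, 0.02])
S, n = R.shape

# Variables: weights w (n) and shortfalls d (S), all >= 0 by default.
# minimize (1/S) * sum_s d_s   s.t.   d_s >= rb_s - R_s . w,   sum w = 1.
c = np.concatenate([np.zeros(n), np.full(S, 1.0 / S)])
A_ub = np.hstack([-R, -np.eye(S)])                  # -R_s.w - d_s <= -rb_s
b_ub = -rb
A_eq = np.concatenate([np.ones(n), np.zeros(S)]).reshape(1, -1)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
```

Here the first asset beats the benchmark in every scenario, so the optimal average shortfall is zero.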

Citations: 0
Strategy investments in zero-sum games
IF 1.6 | CAS Zone 4, Mathematics | Q2 MATHEMATICS, APPLIED | Pub Date: 2024-06-24 | DOI: 10.1007/s11590-024-02130-z
Raul Garcia, Seyedmohammadhossein Hosseinian, Mallesh Pai, Andrew J. Schaefer

We propose an extension of two-player zero-sum games in which one player may select the available actions, both for themselves and for the opponent, subject to a budget constraint. We present a mixed-integer linear programming (MILP) formulation for the problem, provide analytical results regarding its solution, and discuss applications in the security and advertising domains. Our computational experiments demonstrate that heuristic approaches, on average, yield suboptimal solutions with at least a 20% relative gap from those obtained by the MILP formulation.
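The classical building block underneath such models is the LP that solves a fixed matrix game; the authors' MILP adds action-selection variables on top of it. The sketch below solves only the baseline zero-sum game (row player maximizes), a standard formulation and not the paper's extension.

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(M):
    """Row player's optimal mixed strategy and the game value for
    payoff matrix M, via the classical LP:
    max v  s.t.  (M^T x)_j >= v for all columns j,  sum x = 1,  x >= 0."""
    m, n = M.shape
    c = np.concatenate([np.zeros(m), [-1.0]])       # variables [x, v]; minimize -v
    A_ub = np.hstack([-M.T, np.ones((n, 1))])       # v - (M^T x)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    bounds = [(0, None)] * m + [(None, None)]       # v is free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:m], res.x[m]

# Matching pennies: value 0, unique optimal strategy (1/2, 1/2).
x, value = solve_matrix_game(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```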

Citations: 0
On the linear convergence rate of Riemannian proximal gradient method
IF 1.6 | CAS Zone 4, Mathematics | Q2 MATHEMATICS, APPLIED | Pub Date: 2024-06-19 | DOI: 10.1007/s11590-024-02129-6
Woocheol Choi, Changbum Chun, Yoon Mo Jung, Sangwoon Yun

Composite optimization problems on Riemannian manifolds arise in applications such as sparse principal component analysis and dictionary learning. Recently, Huang and Wei introduced a Riemannian proximal gradient method (Huang and Wei in MP 194:371–413, 2022) and an inexact Riemannian proximal gradient method (Wen and Ke in COA 85:1–32, 2023), utilizing the retraction mapping to address these challenges. They established the sublinear convergence rate of the Riemannian proximal gradient method under retraction convexity and a geometric condition on retractions, as well as the local linear convergence rate of the inexact Riemannian proximal gradient method under the Riemannian Kurdyka–Łojasiewicz property. In this paper, we demonstrate the linear convergence rate of the Riemannian proximal gradient method and of the proximal gradient method proposed in Chen et al. (SIAM J Opt 30:210–239, 2020) under strong retraction convexity. Additionally, we provide a counterexample that violates the geometric condition on retractions, a condition that is crucial for establishing the sublinear convergence rate of the Riemannian proximal gradient method.
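To illustrate the retraction mapping these methods rely on, here is plain Riemannian gradient descent on the unit sphere with the projective retraction R_x(v) = (x+v)/||x+v||, maximizing a Rayleigh quotient. Only the smooth (forward) step and the retraction are shown; the proximal step and the convergence analysis of the paper are not reproduced, and the test matrix is an illustrative assumption.

```python
import numpy as np

def sphere_rgd(A, x0, step=0.1, iters=300):
    """Riemannian gradient descent for f(x) = -x^T A x on the unit
    sphere, using the projective retraction.  Converges to (plus or
    minus) the dominant eigenvector of A."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        egrad = -2.0 * A @ x                  # Euclidean gradient of f
        rgrad = egrad - (x @ egrad) * x       # project onto the tangent space
        v = x - step * rgrad                  # forward step in the tangent space
        x = v / np.linalg.norm(v)             # retraction back onto the sphere
    return x

A = np.diag([3.0, 1.0, 1.0])                  # dominant eigenvector: e1
x = sphere_rgd(A, np.ones(3))
```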

Citations: 0
A modification of the forward–backward splitting method for monotone inclusions
IF 1.6 | CAS Zone 4, Mathematics | Q2 MATHEMATICS, APPLIED | Pub Date: 2024-06-18 | DOI: 10.1007/s11590-024-02128-7
Van Dung Nguyen

In this work, we propose a new splitting method for monotone inclusion problems with three operators in real Hilbert spaces, where one operator is maximal monotone, one is monotone and Lipschitz continuous, and one is cocoercive. By specializing to inclusions with two operators, we recover the forward–backward method and a generalization of the reflected forward–backward splitting method as particular cases. The weak convergence of the algorithm is established under standard assumptions. The linear convergence rate of the proposed method is obtained under an additional condition such as strong monotonicity. We also give some theoretical comparisons to demonstrate the efficiency of the proposed method.
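The two-operator baseline that the abstract says is recovered as a special case can be sketched as follows: the classical forward–backward iteration for 0 ∈ Ax + Bx, applied to a projected-gradient instance where the resolvent of A is a box projection. This is the standard scheme the paper generalizes, not the three-operator method itself; the box instance is an illustrative assumption.

```python
import numpy as np

def forward_backward(resolvent, B, x0, gamma, iters=100):
    """Classical forward-backward splitting
    x_{k+1} = J_{gamma A}(x_k - gamma * B(x_k)) for 0 in Ax + Bx,
    with A maximal monotone (given via its resolvent J_{gamma A})
    and B cocoercive."""
    x = x0
    for _ in range(iters):
        x = resolvent(x - gamma * B(x))
    return x

# Instance: minimize 0.5*||x - c||^2 over the box [0, 1]^2, i.e.
# B = gradient of the smooth term, A = normal cone of the box,
# whose resolvent is the projection onto the box.
c = np.array([2.0, -0.5])
x = forward_backward(lambda z: np.clip(z, 0.0, 1.0),   # resolvent of A
                     lambda u: u - c,                  # cocoercive B
                     np.zeros(2), gamma=1.0)
```

The fixed point is the projection of c onto the box, here (1, 0).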

Citations: 0