
Journal of Global Optimization: Latest Publications

Simple proximal-type algorithms for equilibrium problems
IF 1.8 | Tier 3 (Mathematics) | Q1 Mathematics | Pub Date: 2024-03-14 | DOI: 10.1007/s10898-024-01377-1
Yonghong Yao, Abubakar Adamu, Yekini Shehu, Jen-Chih Yao

This paper proposes two simple and elegant proximal-type algorithms to solve equilibrium problems with pseudo-monotone bifunctions in the setting of Hilbert spaces. The proposed algorithms use one proximal point evaluation of the bifunction at each iteration. Consequently, we prove that the sequences of iterates generated by the first algorithm converge weakly to a solution of the equilibrium problem (assuming existence) and obtain a linear convergence rate under standard assumptions. We also design a viscosity version of the first algorithm and obtain its corresponding strong convergence result. Some popular existing algorithms in the literature are recovered. We finally give some numerical tests and compare our algorithms with some related ones to show the performance and efficiency of our proposed algorithms.
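
To make the "one proximal evaluation per iteration" concrete, here is a minimal sketch (not the authors' algorithm) of the generic proximal step x_{k+1} = argmin_{y in C} { lam * f(x_k, y) + 0.5 * ||y - x_k||^2 } for the standard test bifunction f(x, y) = <Ax, y - x> over a Euclidean ball, where the step reduces to a projection. The matrix A (chosen strongly monotone so the plain iteration converges), the ball radius, and the step size lam are illustrative assumptions.

```python
import numpy as np

def project_ball(y, radius=1.0):
    """Euclidean projection onto the ball of the given radius centered at the origin."""
    nrm = np.linalg.norm(y)
    return y if nrm <= radius else (radius / nrm) * y

def proximal_step(x, A, lam, radius=1.0):
    """One proximal evaluation for the bifunction f(x, y) = <Ax, y - x>:
    argmin_y { lam * <Ax, y - x> + 0.5 * ||y - x||^2 } over the ball,
    which reduces to a projected step."""
    return project_ball(x - lam * (A @ x), radius)

# Illustrative strongly monotone choice; the unique equilibrium point is the origin.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
x = np.array([0.9, -0.4])
lam = 0.3
for _ in range(100):
    x_next = proximal_step(x, A, lam)
    if np.linalg.norm(x_next - x) < 1e-12:
        break
    x = x_next
print("approximate equilibrium point:", np.round(x, 6))
```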

Citations: 0
A nonmonotone accelerated proximal gradient method with variable stepsize strategy for nonsmooth and nonconvex minimization problems
IF 1.8 | Tier 3 (Mathematics) | Q1 Mathematics | Pub Date: 2024-03-05 | DOI: 10.1007/s10898-024-01366-4
Hongwei Liu, Ting Wang, Zexian Liu

In this paper, we consider the problem of minimizing the sum of a nonsmooth function and a smooth one in the nonconvex setting, which arises in many contemporary applications such as machine learning, statistics, and signal/image processing. To solve this problem, we propose a new nonmonotone accelerated proximal gradient method with a variable stepsize strategy. Note that incorporating an inertial term into the proximal gradient method is a simple and efficient acceleration technique, although the descent property of the proximal gradient algorithm is lost. In our algorithm, the iterates generated by the inertial proximal gradient scheme are accepted when the objective function values decrease or increase appropriately; otherwise, the iteration point is generated by the proximal gradient scheme, which makes the function values decrease on a subset of the iterates. We also introduce a variable stepsize strategy, which requires neither a line search nor knowledge of the Lipschitz constant and makes the algorithm easy to implement. We show that the sequence of iterates generated by the algorithm converges to a critical point of the objective function. Further, under the assumption that the objective function satisfies the Kurdyka–Łojasiewicz inequality, we prove convergence rates for the objective function values and the iterates. Moreover, numerical results on both convex and nonconvex problems are reported to demonstrate the effectiveness and superiority of the proposed method and stepsize strategy.
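
As a concrete instance of the "smooth + nonsmooth" composite model, the sketch below (an illustration under standard assumptions, not the authors' exact scheme) runs an inertial proximal gradient step on a lasso problem min 0.5*||Ax - b||^2 + lam*||x||_1, accepts the accelerated point only when a nonmonotone (max over recent values) test holds, and otherwise falls back to a plain proximal gradient step from the current iterate; backtracking stands in for a variable stepsize so no Lipschitz constant is needed. The problem data, window length, and parameters are made up for the demo.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def F(x, A, b, lam):
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.linalg.norm(x, 1)

def prox_grad_step(y, A, b, lam, t):
    """Backtracking proximal gradient step from y; returns the new point and stepsize."""
    g = A.T @ (A @ y - b)
    fy = 0.5 * np.linalg.norm(A @ y - b) ** 2
    while True:
        z = soft_threshold(y - t * g, t * lam)
        fz = 0.5 * np.linalg.norm(A @ z - b) ** 2
        if fz <= fy + g @ (z - y) + np.sum((z - y) ** 2) / (2 * t):
            return z, t
        t *= 0.5

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1

x_prev = x = np.zeros(100)
t = 1.0
hist = [F(x, A, b, lam)]            # recent objective values for the nonmonotone test
for k in range(1, 300):
    beta = (k - 1) / (k + 2)        # inertial weight
    y = x + beta * (x - x_prev)     # extrapolated (inertial) point
    z, t = prox_grad_step(y, A, b, lam, t)
    if F(z, A, b, lam) <= max(hist[-5:]):   # nonmonotone acceptance test
        x_prev, x = x, z
    else:                                   # fall back to a descent step from x
        w, t = prox_grad_step(x, A, b, lam, t)
        x_prev, x = x, w
    hist.append(F(x, A, b, lam))
print("final objective:", hist[-1])
```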

Citations: 0
Sketch-based multiplicative updating algorithms for symmetric nonnegative tensor factorizations with applications to face image clustering
IF 1.8 | Tier 3 (Mathematics) | Q1 Mathematics | Pub Date: 2024-03-01 | DOI: 10.1007/s10898-024-01374-4

Abstract

Nonnegative tensor factorizations (NTF) have applications in statistics, computer vision, exploratory multi-way data analysis, and blind source separation. This paper studies randomized multiplicative updating algorithms for symmetric NTF via random projections and random samplings. For random projections, we consider two methods to generate the random matrix and analyze the computational complexity, while for random samplings the uniform sampling strategy and its variants are examined. The mixing of these two strategies is then considered. Some theoretical results are presented based on the bounds of the singular values of sub-Gaussian matrices and the fact that randomly sampling rows from an orthogonal matrix results in a well-conditioned matrix. These algorithms are easy to implement, and their efficiency is verified via test tensors from both synthetic and real datasets, such as for clustering facial images.
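
The tensor algorithms build on multiplicative updates; for intuition, here is the classical matrix (second-order) analogue for symmetric nonnegative factorization min_{H >= 0} ||A - H H^T||_F^2, using the damped multiplicative update H <- H * (1 - beta + beta * (AH) / (H H^T H)) with beta = 1/2. The random data, rank, and iteration count are illustrative assumptions; the paper's sketching (random projection/sampling) and tensor extensions are not reproduced here.

```python
import numpy as np

def symnmf_mu(A, r, iters=500, beta=0.5, eps=1e-12, seed=0):
    """Damped multiplicative updates for min_{H >= 0} ||A - H H^T||_F^2."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    H = np.abs(rng.standard_normal((n, r)))
    for _ in range(iters):
        AH = A @ H
        HHtH = H @ (H.T @ H)
        H *= (1.0 - beta) + beta * AH / (HHtH + eps)   # stays elementwise nonnegative
    return H

# Illustrative symmetric nonnegative input (e.g., a similarity matrix).
rng = np.random.default_rng(1)
W = np.abs(rng.standard_normal((30, 4)))
A = W @ W.T
H = symnmf_mu(A, r=4)
print("relative residual:", np.linalg.norm(A - H @ H.T) / np.linalg.norm(A))
```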

Citations: 0
Computing the recession cone of a convex upper image via convex projection
IF 1.8 | Tier 3 (Mathematics) | Q1 Mathematics | Pub Date: 2024-03-01 | DOI: 10.1007/s10898-023-01351-3
Gabriela Kováčová, Firdevs Ulus

It is possible to solve unbounded convex vector optimization problems (CVOPs) in two phases: (1) computing or approximating the recession cone of the upper image and (2) solving the equivalent bounded CVOP where the ordering cone is extended based on the first phase. In this paper, we consider unbounded CVOPs and propose an alternative solution methodology to compute or approximate the recession cone of the upper image. In particular, we relate the dual of the recession cone with the Lagrange dual of weighted sum scalarization problems whenever the dual problem can be written explicitly. Computing this set requires solving a convex (or polyhedral) projection problem. We show that this methodology can be applied to semidefinite, quadratic, and linear vector optimization problems and provide some numerical examples.
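
In the polyhedral case the object in question is easy to illustrate: for an upper image of the form P = {y : By >= c}, the recession cone is {d : Bd >= 0}, so testing whether a direction belongs to it amounts to checking homogeneous inequalities. The sketch below covers only this polyhedral toy case (the paper handles general convex upper images via convex projection), and the data are made up.

```python
import numpy as np

def in_recession_cone(B, d, tol=1e-9):
    """Check whether d lies in the recession cone of {y : B y >= c}, i.e. B d >= 0
    (the right-hand side c is irrelevant for recession directions)."""
    return bool(np.all(B @ d >= -tol))

# Upper image P = {y in R^2 : y1 >= 0, y2 >= 0, y1 + 2*y2 >= 1}
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 2.0]])
print(in_recession_cone(B, np.array([1.0, 0.0])))   # True: P is unbounded along (1, 0)
print(in_recession_cone(B, np.array([-1.0, 1.0])))  # False: this direction leaves P
```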

Citations: 0
A performance analysis of Basin hopping compared to established metaheuristics for global optimization
IF 1.8 | Tier 3 (Mathematics) | Q1 Mathematics | Pub Date: 2024-02-28 | DOI: 10.1007/s10898-024-01373-5
Marco Baioletti, Valentino Santucci, Marco Tomassini

During the last decades many metaheuristics for global numerical optimization have been proposed. Among them, Basin Hopping is very simple and straightforward to implement, although rarely used outside its original Physical Chemistry community. In this work, our aim is to compare Basin Hopping, and two population variants of it, with readily available implementations of the well known metaheuristics Differential Evolution, Particle Swarm Optimization, and Covariance Matrix Adaptation Evolution Strategy. We perform numerical experiments using the IOH profiler environment with the BBOB test function set and two difficult real-world problems. The experiments were carried out in two different but complementary ways: by measuring the performance under a fixed budget of function evaluations and by considering a fixed target value. The general conclusion is that Basin Hopping and its newly introduced population variant are almost as good as Covariance Matrix Adaptation on the synthetic benchmark functions and better than it on the two hard cluster energy minimization problems. Thus, the proposed analyses show that Basin Hopping can be considered a good candidate for global numerical optimization problems along with the more established metaheuristics, especially if one wants to obtain quick and reliable results on an unknown problem.
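
For readers who want to try the comparison themselves, Basin Hopping is available in SciPy; a minimal run on the (illustrative) Rastrigin benchmark looks as follows. The number of hops, the local minimizer, and the starting point are arbitrary demo choices rather than the settings used in the paper.

```python
import numpy as np
from scipy.optimize import basinhopping

def rastrigin(x):
    """Classical multimodal benchmark; global minimum 0 at the origin."""
    x = np.asarray(x)
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

x0 = np.random.default_rng(0).uniform(-5.12, 5.12, size=5)
result = basinhopping(rastrigin, x0,
                      niter=200,
                      minimizer_kwargs={"method": "L-BFGS-B"})
print("best value:", result.fun)
print("best point:", np.round(result.x, 4))
```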

Citations: 0
Fast deterministic algorithms for non-submodular maximization with strong performance guarantees
IF 1.8 | Tier 3 (Mathematics) | Q1 Mathematics | Pub Date: 2024-02-22 | DOI: 10.1007/s10898-024-01371-7
Cheng Lu, Wenguo Yang

We study the non-submodular maximization problem, in which the objective function is characterized by parameters, subject to a cardinality or p-system constraint. By adapting the Threshold-Greedy algorithm for submodular maximization, we present two deterministic algorithms for approximately solving the non-submodular maximization problem. Our analysis shows that the algorithms we propose require far fewer function evaluations than existing algorithms, while providing comparable approximation guarantees. Moreover, numerical experiment results are presented to validate the theoretical analysis. Our results not only fill a gap in (non-)submodular maximization, but also generalize and improve several existing results on closely related optimization problems.
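
To fix ideas, here is a minimal sketch of the threshold-greedy template under a cardinality constraint, written for a generic monotone set function; the paper's parametrized non-submodular analysis and its guarantees are not reproduced. The threshold sweeps a geometric grid from the largest singleton value downward, and elements whose marginal gain clears the current threshold are added until k elements are chosen. The coverage-style objective and the data are illustrative assumptions.

```python
def threshold_greedy(f, ground, k, eps=0.1):
    """Threshold greedy for max f(S) s.t. |S| <= k, f monotone with f(empty set) = 0.
    Performs one pass over the ground set per threshold level on a geometric grid."""
    S = set()
    d = max(f({e}) for e in ground)              # largest singleton value
    tau = d
    while tau > (eps / len(ground)) * d and len(S) < k:
        for e in ground:
            if e in S or len(S) >= k:
                continue
            gain = f(S | {e}) - f(S)             # marginal gain of adding e
            if gain >= tau:
                S.add(e)
        tau *= (1.0 - eps)                       # lower the threshold geometrically
    return S

# Illustrative objective: weighted coverage of items by candidate sets.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}, 4: {7}}
weights = {i: 1.0 for i in range(1, 8)}

def coverage(S):
    covered = set().union(*(sets[e] for e in S)) if S else set()
    return sum(weights[i] for i in covered)

S = threshold_greedy(coverage, ground=list(sets), k=2)
print("selected sets:", S, "coverage value:", coverage(S))
```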

Citations: 0
A new dual-based cutting plane algorithm for nonlinear adjustable robust optimization
IF 1.8 | Tier 3 (Mathematics) | Q1 Mathematics | Pub Date: 2024-02-22 | DOI: 10.1007/s10898-023-01360-2

Abstract

This paper explores a class of nonlinear Adjustable Robust Optimization (ARO) problems, containing here-and-now and wait-and-see variables, with uncertainty in the objective function and constraints. By applying Fenchel's duality on the wait-and-see variables, we obtain an equivalent dual reformulation, which is a nonlinear static robust optimization problem. Using the dual formulation, we provide conditions under which the ARO problem is convex in the here-and-now decision. Furthermore, since the dual formulation contains a non-concave maximization over the uncertain parameter, we use a perspective relaxation and an alternating method to handle the non-concavity. By employing the perspective relaxation, we obtain an upper bound, which we show is the same as the static relaxation of the considered problem. Moreover, invoking the alternating method, we design a new dual-based cutting plane algorithm that is able to find a reasonable lower bound for the optimal objective value of the considered nonlinear ARO model. In addition to establishing the theoretical features of the algorithm, including a convergence analysis, we use numerical experiments to demonstrate the ability of our cutting plane algorithm to produce locally robust solutions with an acceptable optimality gap.
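
The flavour of a cutting-plane loop for robust constraints can be shown on a tiny static robust LP (this is a generic scenario-generation sketch, not the paper's dual-based algorithm for nonlinear ARO): a master LP is solved over the cuts generated so far, the worst-case uncertainty for the current solution is computed in closed form for box uncertainty, and the violated scenario is appended as a new cut until no violation remains. All problem data are invented for the demo.

```python
import numpy as np
from scipy.optimize import linprog

# Robust LP:  min c^T x  s.t.  (a0 + P u)^T x <= b  for all u in [-1, 1]^2,  0 <= x <= 10
c = np.array([-1.0, -2.0])
a0 = np.array([1.0, 1.0])
P = np.array([[0.3, 0.0],
              [0.0, 0.6]])
b = 4.0

cuts = [a0.copy()]                          # start from the nominal constraint
for it in range(20):
    res = linprog(c, A_ub=np.array(cuts), b_ub=[b] * len(cuts),
                  bounds=[(0, 10), (0, 10)], method="highs")
    x = res.x
    u_worst = np.sign(P.T @ x)              # box uncertainty: worst case sits at a vertex
    a_worst = a0 + P @ u_worst
    violation = a_worst @ x - b
    if violation <= 1e-8:                   # current x is robust feasible
        break
    cuts.append(a_worst)                    # add the violated scenario as a new cut

print("robust solution:", np.round(x, 4), "objective:", round(c @ x, 4))
```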

Citations: 0
A criterion-space branch-reduction-bound algorithm for solving generalized multiplicative problems
IF 1.8 | Tier 3 (Mathematics) | Q1 Mathematics | Pub Date: 2024-02-15 | DOI: 10.1007/s10898-023-01358-w
Hongwei Jiao, Binbin Li, Wenqiang Yang

In this paper, we investigate a generalized multiplicative problem (GMP) that is known to be NP-hard even with one linear product term. We first introduce some criterion-space variables to obtain an equivalent problem of the GMP. A criterion-space branch-reduction-bound algorithm is then designed, which integrates some basic operations such as a two-level linear relaxation technique, a rectangle branching rule, and criterion-space region-reduction technologies. The global convergence of the presented algorithm is proved by means of the solutions of a sequence of linear relaxation problems, and its maximum number of iterations is estimated on the basis of the exhaustiveness of the branching rule. Finally, numerical results demonstrate that the presented algorithm can efficiently and robustly find globally optimal solutions for the test instances.
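
To convey what branching over rectangles with a cheap relaxation bound means in the multiplicative setting, the following toy sketch (not the paper's criterion-space algorithm) minimizes the product of two affine functions over a box: interval arithmetic on the two affine ranges yields a lower bound on each sub-rectangle, the box midpoint updates the incumbent, and the longest edge is bisected in a best-first search. The instance, tolerances, and node limit are illustrative.

```python
import heapq
import numpy as np

def affine_range(c, d, lo, hi):
    """Range of c^T x + d over the box [lo, hi] (componentwise interval bound)."""
    mn = d + np.sum(np.where(c >= 0, c * lo, c * hi))
    mx = d + np.sum(np.where(c >= 0, c * hi, c * lo))
    return mn, mx

def product_lower_bound(r1, r2):
    """Lower bound of u * v for u in interval r1 and v in interval r2."""
    return min(a * b for a in r1 for b in r2)

def bb_min_product(c1, d1, c2, d2, lo, hi, tol=1e-6, max_nodes=20000):
    """Minimize (c1^T x + d1) * (c2^T x + d2) over the box [lo, hi]
    by best-first rectangular branch-and-bound with interval bounds."""
    def value(x):
        return (c1 @ x + d1) * (c2 @ x + d2)

    best_x = (lo + hi) / 2
    best_val = value(best_x)
    root_lb = product_lower_bound(affine_range(c1, d1, lo, hi),
                                  affine_range(c2, d2, lo, hi))
    heap = [(root_lb, 0, lo.copy(), hi.copy())]   # counter breaks ties between equal bounds
    counter = 1
    while heap and counter < max_nodes:
        lb, _, l, h = heapq.heappop(heap)
        if lb >= best_val - tol:                  # best-first: nothing better is left
            break
        mid = (l + h) / 2
        if value(mid) < best_val:                 # midpoint updates the incumbent
            best_val, best_x = value(mid), mid
        j = int(np.argmax(h - l))                 # bisect the longest edge
        left_h, right_l = h.copy(), l.copy()
        left_h[j] = mid[j]
        right_l[j] = mid[j]
        for nl, nh in ((l, left_h), (right_l, h)):
            child_lb = product_lower_bound(affine_range(c1, d1, nl, nh),
                                           affine_range(c2, d2, nl, nh))
            if child_lb < best_val - tol:         # prune boxes that cannot improve
                heapq.heappush(heap, (child_lb, counter, nl, nh))
                counter += 1
    return best_x, best_val

# Illustrative instance: minimize (x1 - x2 + 2)(0.5*x1 + x2 - 1) over [0, 2]^2.
c1, d1 = np.array([1.0, -1.0]), 2.0
c2, d2 = np.array([0.5, 1.0]), -1.0
x_star, val = bb_min_product(c1, d1, c2, d2, np.zeros(2), 2 * np.ones(2))
print("approximate minimizer:", np.round(x_star, 4), "value:", round(val, 4))
```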

Citations: 0
A K-means Supported Reinforcement Learning Framework to Multi-dimensional Knapsack
IF 1.8 | Tier 3 (Mathematics) | Q1 Mathematics | Pub Date: 2024-02-15 | DOI: 10.1007/s10898-024-01364-6
Sabah Bushaj, İ. Esra Büyüktahtakın

In this paper, we address the difficulty of solving large-scale multi-dimensional knapsack instances (MKP), presenting a novel deep reinforcement learning (DRL) framework. In this DRL framework, we train different agents compatible with a discrete action space for sequential decision-making while still satisfying any resource constraint of the MKP. This novel framework incorporates the decision variable values in the 2D DRL where the agent is responsible for assigning a value of 1 or 0 to each of the variables. To the best of our knowledge, this is the first DRL model of its kind in which a 2D environment is formulated, and an element of the DRL solution matrix represents an item of the MKP. Our framework is configured to solve MKP instances of different dimensions and distributions. We propose a K-means approach to obtain an initial feasible solution that is used to train the DRL agent. We train four different agents in our framework and present the results comparing each of them with the CPLEX commercial solver. The results show that our agents can learn and generalize over instances with different sizes and distributions. Our DRL framework shows that it can solve medium-sized instances at least 45 times faster in CPU solution time and at least 10 times faster for large instances, with a maximum solution gap of 0.28% compared to the performance of CPLEX. Furthermore, at least 95% of the items are predicted in line with the CPLEX solution. Computations with DRL also provide a better optimality gap with respect to state-of-the-art approaches.
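
For context, the multi-dimensional knapsack problem asks to maximize the total value of selected items subject to several resource constraints. The sketch below is only a simple greedy baseline that produces a feasible 0/1 vector, the kind of warm start an agent could be trained against; it is not the paper's K-means initialization or DRL agent, and all data are synthetic.

```python
import numpy as np

def greedy_mkp(values, weights, capacities):
    """Greedy feasible solution for the MKP:
    maximize values @ x  s.t.  weights @ x <= capacities,  x in {0, 1}^n.
    Items are ranked by value per unit of capacity-scaled resource use."""
    n = len(values)
    scaled = weights / capacities[:, None]          # normalize each resource row
    density = values / (scaled.sum(axis=0) + 1e-12)
    x = np.zeros(n, dtype=int)
    used = np.zeros_like(capacities, dtype=float)
    for j in np.argsort(-density):                  # best density first
        if np.all(used + weights[:, j] <= capacities):
            x[j] = 1
            used += weights[:, j]
    return x

rng = np.random.default_rng(0)
n, m = 50, 5                                        # items, resource constraints
values = rng.integers(10, 100, size=n).astype(float)
weights = rng.integers(1, 20, size=(m, n)).astype(float)
capacities = 0.3 * weights.sum(axis=1)              # roughly 30% of total demand
x = greedy_mkp(values, weights, capacities)
print("selected items:", int(x.sum()), "total value:", float(values @ x))
```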

Citations: 0
Regret analysis of an online majorized semi-proximal ADMM for online composite optimization
IF 1.8 | Tier 3 (Mathematics) | Q1 Mathematics | Pub Date: 2024-02-15 | DOI: 10.1007/s10898-024-01365-5
Zehao Xiao, Liwei Zhang

An online majorized semi-proximal alternating direction method of multipliers (Online-mspADMM) is proposed for a broad class of online linearly constrained composite optimization problems. A majorization technique is adopted to produce subproblems that can be easily solved. Under mild assumptions, we establish $\mathcal{O}(\sqrt{N})$ objective regret and $\mathcal{O}(\sqrt{N})$ constraint-violation regret at round $N$. We apply the Online-mspADMM to solve different types of online regularized logistic regression problems. The numerical results on synthetic data sets verify the theoretical results on the regrets.
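
To illustrate the online composite setting (using a plain online proximal gradient step rather than the Online-mspADMM of the paper), the sketch below processes a stream of examples for L1-regularized logistic regression, applies one gradient-plus-soft-threshold update per round, and accumulates the per-round loss whose gap to the best fixed decision defines the regret. Data, step-size schedule, and regularization weight are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def logistic_loss(x, a, y):
    return np.log1p(np.exp(-y * (a @ x)))

rng = np.random.default_rng(0)
d, N = 20, 2000
x_true = np.concatenate([np.ones(5), np.zeros(d - 5)])       # sparse ground truth
A = rng.standard_normal((N, d))
y = np.sign(A @ x_true + 0.1 * rng.standard_normal(N))

x = np.zeros(d)
lam = 0.01
cumulative_loss = 0.0
for t in range(N):
    a_t, y_t = A[t], y[t]
    cumulative_loss += logistic_loss(x, a_t, y_t) + lam * np.linalg.norm(x, 1)
    eta = 1.0 / np.sqrt(t + 1)                                # O(1/sqrt(t)) steps
    grad = -y_t * a_t / (1.0 + np.exp(y_t * (a_t @ x)))       # logistic loss gradient
    x = soft_threshold(x - eta * grad, eta * lam)             # proximal (soft-threshold) step

print("average per-round loss:", cumulative_loss / N)
print("nonzeros in final iterate:", int(np.count_nonzero(x)))
```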

Citations: 0