
Mathematical Programming: Latest Publications

A characterization of maximal homogeneous-quadratic-free sets
IF 2.7 | CAS Tier 2 (Mathematics) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-23 | DOI: 10.1007/s10107-024-02092-1
Gonzalo Muñoz, Joseph Paat, Felipe Serrano

The intersection cut framework was introduced by Balas in 1971 as a method for generating cutting planes in integer optimization. In this framework, one uses a full-dimensional convex S-free set, where S is the feasible region of the integer program, to derive a cut separating S from a non-integral vertex of a linear relaxation of S. Among all S-free sets, it is the inclusion-wise maximal ones that yield the strongest cuts. Recently, this framework has been extended beyond the integer case in order to obtain cutting planes in non-linear settings. In this work, we consider the specific setting when S is defined by a homogeneous quadratic inequality. In this ‘quadratic-free’ setting, every function $\Gamma : D^m \rightarrow D^n$, where $D^k$ is the unit sphere in $\mathbb{R}^k$, generates a representation of a quadratic-free set. While not every $\Gamma$ generates a maximal quadratic-free set, it is the case that every full-dimensional maximal quadratic-free set is generated by some $\Gamma$. Our main result shows that the corresponding quadratic-free set is full-dimensional and maximal if and only if $\Gamma$ is non-expansive and satisfies a technical condition. This result yields a broader class of maximal S-free sets than previously known. Our result stems from a new characterization of maximal S-free sets (for general S beyond the quadratic setting) based on sequences that ‘expose’ inequalities defining the S-free set.
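
To make the intersection-cut recipe concrete, here is a minimal Python sketch of Balas' classical construction in the simplest case, where the S-free set is a split (lattice-free strip); the maximal quadratic-free sets studied in this paper would play the role of that set. The apex, rays, and split direction are toy data chosen for illustration and have nothing to do with the paper itself.

```python
import math
import numpy as np

def split_intersection_cut(x_bar, rays, pi):
    """Intersection cut from the S-free split set C = {x : pi0 <= pi.x <= pi0 + 1}.
    x_bar is the fractional apex of the corner relaxation x = x_bar + sum_j s_j r^j, s >= 0;
    rays are the extreme rays r^j. Returns coefficients a_j of the cut sum_j a_j s_j >= 1."""
    pix = float(pi @ x_bar)
    pi0 = math.floor(pix)
    assert pi0 < pix < pi0 + 1, "apex must lie in the interior of the split"
    coeffs = []
    for r in rays:
        d = float(pi @ r)
        if d > 0:                       # ray exits C through the face pi.x = pi0 + 1
            alpha = (pi0 + 1 - pix) / d
        elif d < 0:                     # ray exits C through the face pi.x = pi0
            alpha = (pi0 - pix) / d
        else:                           # ray parallel to the strip: it never leaves C
            alpha = math.inf
        coeffs.append(0.0 if math.isinf(alpha) else 1.0 / alpha)
    return coeffs

# Toy corner relaxation in R^2: fractional apex (0.5, 0.5) and two extreme rays.
print(split_intersection_cut(np.array([0.5, 0.5]),
                             [np.array([1.0, 0.0]), np.array([-1.0, 2.0])],
                             pi=np.array([1.0, 0.0])))   # -> [2.0, 2.0], i.e. 2*s1 + 2*s2 >= 1
```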

Citations: 0
Optimal general factor problem and jump system intersection
IF 2.7 | CAS Tier 2 (Mathematics) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-23 | DOI: 10.1007/s10107-024-02098-9
Yusuke Kobayashi

In the optimal general factor problem, given a graph $G=(V, E)$ and a set $B(v) \subseteq \mathbb{Z}$ of integers for each $v \in V$, we seek an edge subset F of maximum cardinality subject to $d_F(v) \in B(v)$ for $v \in V$, where $d_F(v)$ denotes the number of edges in F incident to v. A recent crucial work by Dudycz and Paluch shows that this problem can be solved in polynomial time if each B(v) has no gap of length more than one. While their algorithm is very simple, its correctness proof is quite complicated. In this paper, we formulate the optimal general factor problem as the jump system intersection, and reveal when the algorithm by Dudycz and Paluch can be applied to this abstract form of the problem. By using this abstraction, we give another correctness proof of the algorithm, which is simpler than the original one. We also extend our result to the valuated case.
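
To fix the problem definition, here is a brute-force Python sketch for tiny instances (exponential and purely illustrative; the graph and degree lists below are made up and are unrelated to the polynomial-time algorithm discussed in the paper).

```python
from itertools import combinations

def optimal_general_factor(V, E, B):
    """Largest F subset of E with deg_F(v) in B[v] for every vertex v (brute force).
    Returns None if no subset, not even the empty one, is feasible."""
    for k in range(len(E), -1, -1):                 # try larger cardinalities first
        for F in combinations(E, k):
            deg = {v: 0 for v in V}
            for u, w in F:
                deg[u] += 1
                deg[w] += 1
            if all(deg[v] in B[v] for v in V):
                return list(F)
    return None

# 4-cycle where every vertex must end up with degree 1 or 2 (a gap-free B(v)).
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
B = {v: {1, 2} for v in V}
print(optimal_general_factor(V, E, B))              # the whole cycle is optimal here
```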

Citations: 0
A linear time algorithm for linearizing quadratic and higher-order shortest path problems
IF 2.7 | CAS Tier 2 (Mathematics) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-09 | DOI: 10.1007/s10107-024-02086-z
Eranda Çela, Bettina Klinz, Stefan Lendl, Gerhard J. Woeginger, Lasse Wulf

An instance of the NP-hard Quadratic Shortest Path Problem (QSPP) is called linearizable iff it is equivalent to an instance of the classic Shortest Path Problem (SPP) on the same input digraph. The linearization problem for the QSPP (LinQSPP) decides whether a given QSPP instance is linearizable and determines the corresponding SPP instance in the positive case. We provide a novel linear time algorithm for the LinQSPP on acyclic digraphs which runs considerably faster than the best previously known algorithm. The algorithm is based on a new insight revealing that the linearizability of the QSPP for acyclic digraphs can be seen as a local property. Our approach extends to the more general higher-order shortest path problem.
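
As a concrete reading of "linearizable", the sketch below brute-forces the definition on a tiny DAG: it enumerates all s-t paths, writes the linear system "sum of c_e over the path equals the quadratic path cost", and checks whether that system is consistent. It is only meant to make the definition tangible; it is exponential, unrelated to the paper's linear-time algorithm, and the instance data are invented.

```python
import itertools
import numpy as np

def all_st_paths(succ, s, t):
    """All s-t paths in a DAG given by successor lists."""
    if s == t:
        return [[t]]
    return [[s] + p for v in succ.get(s, []) for p in all_st_paths(succ, v, t)]

def is_linearizable(succ, s, t, lin, quad):
    """The QSPP instance is linearizable iff some edge costs c satisfy, for every s-t path P,
    sum_{e in P} c_e = (linear cost of P) + (sum of pairwise interaction costs on P)."""
    edges = sorted({(u, v) for u in succ for v in succ[u]})
    idx = {e: i for i, e in enumerate(edges)}
    rows, rhs = [], []
    for p in all_st_paths(succ, s, t):
        path_edges = list(zip(p, p[1:]))
        row = np.zeros(len(edges))
        for e in path_edges:
            row[idx[e]] = 1.0
        q = sum(lin[e] for e in path_edges)
        q += sum(quad.get((e, f), 0.0) for e, f in itertools.combinations(path_edges, 2))
        rows.append(row)
        rhs.append(q)
    A, b = np.array(rows), np.array(rhs)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)        # consistent system <=> linearizable
    return bool(np.allclose(A @ c, b)), c

succ = {0: [1, 2], 1: [3], 2: [3]}                   # two 0-3 paths
lin = {(0, 1): 1.0, (0, 2): 2.0, (1, 3): 1.0, (2, 3): 0.0}
quad = {((0, 1), (1, 3)): 3.0}                       # one interaction cost, keyed in path order
print(is_linearizable(succ, 0, 3, lin, quad))
```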

Citations: 0
Gaining or losing perspective for convex multivariate functions on box domains
IF 2.7 | CAS Tier 2 (Mathematics) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-08 | DOI: 10.1007/s10107-024-02087-y
Luze Xu, Jon Lee

Mixed-integer nonlinear optimization formulations of the disjunction between the origin and a polytope via a binary indicator variable are broadly used in nonlinear combinatorial optimization for modeling a fixed cost associated with carrying out a group of activities and a convex cost function associated with the levels of the activities. The perspective relaxation of such models is often used to solve to global optimality in a branch-and-bound context, but it typically requires suitable conic solvers and is not compatible with general-purpose NLP software in the presence of other classes of constraints. This motivates the investigation of when simpler but weaker relaxations may be adequate. Comparing the volume (i.e., Lebesgue measure) of the relaxations as a measure of tightness, we lift some of the results related to the simplex case to the box case. In order to compare the volumes of different relaxations in the box case, it is necessary to find an appropriate concave upper bound that preserves the convexity and is minimal, which is more difficult than in the simplex case. To address the challenge beyond the simplex case, the triangulation approach is used.
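
A quick way to see why volume is a reasonable proxy for tightness is a one-variable on/off toy: f(x) = x^2, indicator z, and an epigraph variable t. The Monte Carlo sketch below (my own toy, not the paper's multivariate box analysis) estimates the volume of the naive relaxation, which ignores z in the nonlinear constraint, against the perspective relaxation x^2 <= t*z; the perspective set is visibly smaller.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
x, z, t = rng.random(N), rng.random(N), rng.random(N)   # uniform samples in the unit box

coupled = x <= z                                 # big-M style on/off coupling 0 <= x <= z <= 1
naive = coupled & (x**2 <= t)                    # epigraph of f(x) = x^2, indicator ignored
persp = coupled & (x**2 <= t * z)                # perspective constraint z * f(x/z) <= t

print("naive relaxation volume       ~", round(naive.mean(), 4))   # about 5/12
print("perspective relaxation volume ~", round(persp.mean(), 4))   # about 7/18, tighter
```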

Citations: 0
Sample complexity analysis for adaptive optimization algorithms with stochastic oracles
IF 2.7 | CAS Tier 2 (Mathematics) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-04-29 | DOI: 10.1007/s10107-024-02078-z
Billy Jin, Katya Scheinberg, Miaolan Xie

Several classical adaptive optimization algorithms, such as line search and trust-region methods, have been recently extended to stochastic settings where function values, gradients, and, in some cases, Hessians are estimated via stochastic oracles. Unlike the majority of stochastic methods, these methods do not use a pre-specified sequence of step size parameters, but adapt the step size parameter according to the estimated progress of the algorithm and use it to dictate the accuracy required from the stochastic oracles. The requirements on the stochastic oracles are, thus, also adaptive and the oracle costs can vary from iteration to iteration. The step size parameters in these methods can increase and decrease based on the perceived progress, but unlike the deterministic case they are not bounded away from zero due to possible oracle failures, and bounds on the step size parameter have not been previously derived. This creates obstacles in the total complexity analysis of such methods, because the oracle costs are typically decreasing in the step size parameter, and could be arbitrarily large as the step size parameter goes to 0. Thus, until now only the total iteration complexity of these methods has been analyzed. In this paper, we derive a lower bound on the step size parameter that holds with high probability for a large class of adaptive stochastic methods. We then use this lower bound to derive a framework for analyzing the expected and high probability total oracle complexity of any method in this class. Finally, we apply this framework to analyze the total sample complexity of two particular algorithms, STORM (Blanchet et al. in INFORMS J Optim 1(2):92–119, 2019) and SASS (Jin et al. in High probability complexity bounds for adaptive step search based on stochastic oracles, 2021. https://doi.org/10.48550/ARXIV.2106.06454), in the expected risk minimization problem.
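
The sketch below shows the adaptive mechanism being analyzed, in a generic form: a step-search method that accepts or rejects steps based on noisy function and gradient estimates, growing the step size parameter after a success and shrinking it after a failure. It is not STORM or SASS; in particular it keeps the oracle noise fixed instead of tying the required oracle accuracy to the step size, and all parameter values are placeholders.

```python
import numpy as np

def adaptive_stochastic_step_search(f, grad, x0, sigma=0.01, alpha=1.0,
                                    gamma=2.0, theta=0.1, iters=200, seed=0):
    """Accept a trial step when the *estimated* decrease is sufficient, then enlarge
    the step size parameter alpha; otherwise reject the step and shrink alpha."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x) + sigma * rng.standard_normal(x.shape)      # stochastic gradient oracle
        cand = x - alpha * g
        f_x = f(x) + sigma * rng.standard_normal()              # stochastic function oracle
        f_c = f(cand) + sigma * rng.standard_normal()
        if f_x - f_c >= theta * alpha * (g @ g):                # estimated sufficient decrease
            x, alpha = cand, gamma * alpha                      # success: accept and grow
        else:
            alpha /= gamma                                      # failure: reject and shrink
    return x, alpha

f = lambda x: 0.5 * (x @ x)
grad = lambda x: x
print(adaptive_stochastic_step_search(f, grad, x0=np.ones(5)))
```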

Citations: 0
From approximate to exact integer programming
IF 2.7 | CAS Tier 2 (Mathematics) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-04-24 | DOI: 10.1007/s10107-024-02084-1
Daniel Dadush, Friedrich Eisenbrand, Thomas Rothvoss

Approximate integer programming is the following: For a given convex body $K \subseteq \mathbb{R}^n$, either determine whether $K \cap \mathbb{Z}^n$ is empty, or find an integer point in the convex body $2\cdot (K - c) + c$, which is K scaled by 2 from its center of gravity c. Approximate integer programming can be solved in time $2^{O(n)}$ while the fastest known methods for exact integer programming run in time $2^{O(n)} \cdot n^n$. So far, there are no efficient methods for integer programming known that are based on approximate integer programming. Our main contributions are two such methods, each yielding novel complexity results. First, we show that an integer point $x^* \in K \cap \mathbb{Z}^n$ can be found in time $2^{O(n)}$, provided that the remainders $x_i^* \bmod \ell$ of each component of $x^*$, for some arbitrarily fixed $\ell \ge 5(n+1)$, are given. The algorithm is based on a cutting-plane technique, iteratively halving the volume of the feasible set. The cutting planes are determined via approximate integer programming. Enumeration of the possible remainders gives a $2^{O(n)} n^n$ algorithm for general integer programming. This matches the current best bound of an algorithm by Dadush (Integer programming, lattice algorithms, and deterministic volume estimation. PhD thesis, Georgia Institute of Technology, Atlanta, 2012) that is considerably more involved. Our algorithm also relies on a new asymmetric approximate Carathéodory theorem that might be of interest on its own. Our second method concerns integer programming problems in equation-standard form $Ax = b,\ 0 \le x \le u,\ x \in \mathbb{Z}^n$. Such a problem can be reduced to the solution of $\prod_i O(\log u_i + 1)$ approximate integer programming problems. This implies, for example, that knapsack or subset-sum problems with polynomial variable range $0 \le x_i \le p(n)$ can be solved in time $(\log n)^{O(n)}$. For these problems, the best running time so far was $n^n \cdot 2^{O(n)}$.
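
To make the "approximate" guarantee concrete, here is a tiny enumeration for K a Euclidean ball in the plane (whose center of gravity is its center): the ball around (0.5, 0.5) of radius 0.6 contains no integer point, yet the body scaled by 2 around the center does, which is exactly the kind of answer an approximate integer programming oracle is allowed to return. This toy has nothing to do with the $2^{O(n)}$-time algorithms of the paper.

```python
import itertools
import numpy as np

def classify_integer_points(center, radius):
    """For K = ball(center, radius), list the integer points of K and of the scaled body
    2*(K - c) + c = ball(center, 2*radius)."""
    c = np.asarray(center, dtype=float)
    lo = np.floor(c - 2 * radius).astype(int)
    hi = np.ceil(c + 2 * radius).astype(int)
    in_K, in_scaled = [], []
    for p in itertools.product(*(range(lo[i], hi[i] + 1) for i in range(len(c)))):
        d = np.linalg.norm(np.array(p) - c)
        if d <= radius:
            in_K.append(p)
        if d <= 2 * radius:
            in_scaled.append(p)
    return in_K, in_scaled

in_K, in_scaled = classify_integer_points(center=[0.5, 0.5], radius=0.6)
print("integer points in K:          ", in_K)         # [] -> exact answer is 'empty'
print("integer points in scaled body:", in_scaled)    # nonempty -> valid approximate answer
```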

Citations: 0
An update-and-stabilize framework for the minimum-norm-point problem
IF 2.7 | CAS Tier 2 (Mathematics) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-04-18 | DOI: 10.1007/s10107-024-02077-0
Satoru Fujishige, Tomonari Kitahara, László A. Végh

We consider the minimum-norm-point (MNP) problem over polyhedra, a well-studied problem that encompasses linear programming. We present a general algorithmic framework that combines two fundamental approaches for this problem: active set methods and first order methods. Our algorithm performs first order update steps, followed by iterations that aim to ‘stabilize’ the current iterate with additional projections, i.e., find a locally optimal solution whilst keeping the current tight inequalities. Such steps have been previously used in active set methods for the nonnegative least squares (NNLS) problem. We bound the number of iterations polynomially in the dimension and in the associated circuit imbalance measure. In particular, the algorithm is strongly polynomial for network flow instances. Classical NNLS algorithms such as the Lawson–Hanson algorithm are special instantiations of our framework; as a consequence, we obtain convergence bounds for these algorithms. Our preliminary computational experiments show promising practical performance.
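
Since the abstract singles out nonnegative least squares and the Lawson–Hanson method as the classical special case, here is that special case on random data via scipy.optimize.nnls (an active-set NNLS routine in that tradition). The data are random placeholders, and this is of course not the paper's update-and-stabilize algorithm.

```python
import numpy as np
from scipy.optimize import nnls

# NNLS: minimize ||A x - b||_2 subject to x >= 0, the classical active-set setting
# mentioned above as a special instantiation of the framework.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

x, rnorm = nnls(A, b)
print("nonnegative solution:", np.round(x, 4))
print("residual norm:       ", round(float(rnorm), 4))
```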

Citations: 0
Extended convergence analysis of the Scholtes-type regularization for cardinality-constrained optimization problems
IF 2.7 | CAS Tier 2 (Mathematics) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-04-09 | DOI: 10.1007/s10107-024-02082-3
Sebastian Lämmel, Vladimir Shikhman

We extend the convergence analysis of the Scholtes-type regularization method for cardinality-constrained optimization problems. Its behavior is clarified in the vicinity of saddle points, and not only in the vicinity of minimizers, as was done in the literature before. This becomes possible by using as an intermediate step the recently introduced regularized continuous reformulation of a cardinality-constrained optimization problem. We show that the Scholtes-type regularization method is well-defined locally around a nondegenerate T-stationary point of this regularized continuous reformulation. Moreover, the nondegenerate Karush–Kuhn–Tucker points of the corresponding Scholtes-type regularization converge to a T-stationary point having the same index, i.e. its topological type persists. As a consequence, we conclude that the global structure of the Scholtes-type regularization essentially coincides with that of the cardinality-constrained optimization problem (CCOP).
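
For orientation, the usual relaxed reformulation of a cardinality constraint and its Scholtes-type regularization look as follows (my recollection of the standard construction going back to Burdakov, Kanzow, and Schwartz; the paper's regularized continuous reformulation may differ in details):

$$\min_{x\in\mathbb{R}^n} f(x)\ \text{s.t.}\ \|x\|_0 \le \kappa \quad\leadsto\quad \min_{x,\,y} f(x)\ \text{s.t.}\ e^\top y \ge n-\kappa,\ 0 \le y \le e,\ x_i y_i = 0\ (i=1,\dots,n),$$

and the Scholtes-type regularization with parameter $t>0$ replaces each complementarity-type constraint $x_i y_i = 0$ by $-t \le x_i y_i \le t$, recovering the reformulation as $t \downarrow 0$.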

Citations: 0
Compressing branch-and-bound trees
IF 2.7 | CAS Tier 2 (Mathematics) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-04-06 | DOI: 10.1007/s10107-024-02080-5
Gonzalo Muñoz, Joseph Paat, Álinson S. Xavier

A branch-and-bound (BB) tree certifies a dual bound on the value of an integer program. In this work, we introduce the tree compression problem (TCP): Given a BB tree T that certifies a dual bound, can we obtain a smaller tree with the same (or stronger) bound by either (1) applying a different disjunction at some node in T or (2) removing leaves from T? We believe such post-hoc analysis of BB trees may assist in identifying helpful general disjunctions in BB algorithms. We initiate our study by considering computational complexity and limitations of TCP. We then conduct experiments to evaluate the compressibility of realistic branch-and-bound trees generated by commonly-used branching strategies, using both an exact and a heuristic compression algorithm.
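
The toy sketch below only illustrates the "remove leaves while keeping the bound" half of the question: for a minimization problem, the dual bound certified by a BB tree is the minimum LP bound over its leaves, and any subtree whose root already meets that bound can be collapsed to a leaf. The tree and bounds are invented, and this greedy collapse is neither the exact nor the heuristic compression algorithm evaluated in the paper.

```python
def certified_bound(tree):
    """Dual bound certified by a BB tree of a minimization problem: the minimum of the
    LP relaxation bounds over its leaves. A tree is {"bound": float, "children": [...]}."""
    if not tree["children"]:
        return tree["bound"]
    return min(certified_bound(ch) for ch in tree["children"])

def collapse(tree, target):
    """Drop every subtree whose root LP bound already certifies `target`; the resulting
    smaller tree still certifies a dual bound >= target."""
    if tree["bound"] >= target or not tree["children"]:
        return {"bound": tree["bound"], "children": []}
    return {"bound": tree["bound"],
            "children": [collapse(ch, target) for ch in tree["children"]]}

def size(tree):
    return 1 + sum(size(ch) for ch in tree["children"])

T = {"bound": 1.0, "children": [
        {"bound": 3.0, "children": [{"bound": 4.0, "children": []},
                                    {"bound": 5.0, "children": []}]},
        {"bound": 3.5, "children": [{"bound": 3.5, "children": []},
                                    {"bound": 6.0, "children": []}]}]}
b = certified_bound(T)
C = collapse(T, b)
print(b, size(T), "->", certified_bound(C), size(C))   # same bound, smaller tree
```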

Citations: 0
Finding global minima via kernel approximations
IF 2.7 | CAS Tier 2 (Mathematics) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-04-04 | DOI: 10.1007/s10107-024-02081-4
Alessandro Rudi, Ulysse Marteau-Ferey, Francis Bach

We consider the global minimization of smooth functions based solely on function evaluations. Algorithms that achieve the optimal number of function evaluations for a given precision level typically rely on explicitly constructing an approximation of the function which is then minimized with algorithms that have exponential running-time complexity. In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum. This is done by using infinite sums of square smooth functions and has strong links with polynomial sum-of-squares hierarchies. Leveraging recent representation properties of reproducing kernel Hilbert spaces, the infinite-dimensional optimization problem can be solved by subsampling in time polynomial in the number of function evaluations, and with theoretical guarantees on the obtained minimum. Given n samples, the computational cost is $O(n^{3.5})$ in time, $O(n^2)$ in space, and we achieve a convergence rate to the global optimum that is $O(n^{-m/d + 1/2 + 3/d})$ where m is the degree of differentiability of the function and d the number of dimensions. The rate is nearly optimal in the case of Sobolev functions and more generally makes the proposed method particularly suitable for functions with many derivatives. Indeed, when m is in the order of d, the convergence rate to the global optimum does not suffer from the curse of dimensionality, which affects only the worst-case constants (that we track explicitly through the paper).
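
A rough feel for the sample-based formulation: model f(x_i) minus a candidate value c as a positive semidefinite quadratic form in kernel features at the samples, and push c up. The cvxpy sketch below conveys only the flavour of such a finite-dimensional problem; the Gaussian kernel, the trace penalty, and all constants are my own placeholder choices, and it reproduces none of the paper's exact estimator, guarantees, or rates.

```python
import numpy as np
import cvxpy as cp

f = lambda x: np.sin(3 * x) + x**2                     # smooth test function on [-1, 1]
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 30)
fX = f(X)

K = np.exp(-(X[:, None] - X[None, :])**2 / (2 * 0.3**2))    # Gaussian kernel matrix
Phi = np.linalg.cholesky(K + 1e-8 * np.eye(len(X)))         # rows: features with Phi[i] @ Phi[j] ~ K[i, j]

B = cp.Variable((len(X), len(X)), PSD=True)                 # models the "sum of squares" part
c = cp.Variable()
constraints = [fX[i] - c == Phi[i] @ B @ Phi[i] for i in range(len(X))]
problem = cp.Problem(cp.Maximize(c - 1e-3 * cp.trace(B)), constraints)
problem.solve()

print("model-based candidate for the minimum:", round(float(c.value), 4))
print("minimum over the samples:             ", round(float(fX.min()), 4))
```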

Citations: 0