
Latest publications from EURO Journal on Computational Optimization

Newton-MR: Inexact Newton Method with minimum residual sub-problem solver
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100035
Fred Roosta , Yang Liu , Peng Xu , Michael W. Mahoney

We consider a variant of the inexact Newton method [20], [40], called Newton-MR, in which the least-squares sub-problems are solved approximately using the Minimum Residual (MINRES) method [79]. By construction, Newton-MR can be readily applied for unconstrained optimization of a class of non-convex problems known as invex, which subsumes convexity as a sub-class. For invex optimization, instead of the classical Lipschitz continuity assumptions on gradient and Hessian, Newton-MR's global convergence can be guaranteed under a weaker notion of joint regularity of Hessian and gradient. We also obtain Newton-MR's problem-independent local convergence to the set of minima. We show that fast local/global convergence can be guaranteed under a novel inexactness condition, which, to our knowledge, is much weaker than those of prior related works. Numerical results demonstrate the performance of Newton-MR as compared with several other Newton-type alternatives on a few machine learning problems.
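As a rough illustration of the idea (not the authors' implementation), the inner least-squares sub-problem can be handed to an off-the-shelf MINRES solver; the function names and the stopping rule below are our own:

```python
import numpy as np
from scipy.sparse.linalg import minres

def newton_mr_sketch(grad, hess, x0, tol=1e-8, max_iter=50):
    """Inexact Newton iteration in the spirit of Newton-MR: each step p is
    obtained by (approximately) minimizing the residual ||H p + g|| with
    MINRES, which requires H to be symmetric but not positive definite."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        p, _ = minres(H, -g)  # least-squares sub-problem, solved inexactly
        x = x + p
    return x
```

MINRES's own tolerance plays the role of the inexactness condition: loosening it trades sub-problem work against the number of outer Newton iterations.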

Citations: 8
Progress in mathematical programming solvers from 2001 to 2020
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100031
Thorsten Koch , Timo Berthold , Jaap Pedersen , Charlie Vanaret

This study investigates the progress made in LP and MILP solver performance during the last two decades by comparing solver software from the beginning of the millennium with the codes available today. On average, we found that for solving LP/MILP, computer hardware got about 20 times faster, and the algorithms improved by a factor of about nine for LP and around 50 for MILP, which gives a total speed-up of about 180 and 1,000 times, respectively. However, these numbers have a very high variance and they considerably underestimate the progress made on the algorithmic side: many problem instances can nowadays be solved within seconds, which the old codes are not able to solve within any reasonable time.
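The headline factors combine multiplicatively; a few lines reproduce the totals quoted above (all figures taken from the abstract):

```python
# Hardware and algorithmic speed-up factors reported in the study.
hardware_factor = 20                      # computers got ~20x faster
algorithm_factor = {"LP": 9, "MILP": 50}  # algorithmic gains per problem class

# Total speed-up = (hardware gain) x (algorithmic gain).
total = {k: hardware_factor * v for k, v in algorithm_factor.items()}
print(total)  # {'LP': 180, 'MILP': 1000}
```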

Citations: 18
Upper and lower bounds based on linear programming for the b-coloring problem
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100049
Roberto Montemanni , Xiaochen Chou , Derek H. Smith

B-coloring is a problem in graph theory. It can model some real applications, as well as being used to enhance solution methods for the classical graph coloring problem. In turn, improved solutions for the classical coloring problem would impact a larger pool of practical applications in several different fields such as scheduling, timetabling and telecommunications. Given a graph G=(V,E), the b-coloring problem aims to maximize the number of colors used while assigning a color to every vertex in V, preventing adjacent vertices from receiving the same color, with every color represented by a special vertex, called a b-vertex. A vertex can be a b-vertex only if the set of colors assigned to its adjacent vertices includes all the colors, apart from the one assigned to the vertex itself.

This work employs methods based on Linear Programming to derive new upper and lower bounds for the problem. In particular, starting from a Mixed Integer Linear Programming model recently presented, upper bounds are obtained through partial linear relaxations of this model, while lower bounds are derived by considering different variations of the original model, modified to target a specific number of colors provided as input. The experimental campaign documented in the paper led to several improvements to the state-of-the-art results.
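To make the b-vertex condition above concrete, here is a small checker for a candidate b-coloring (an illustrative helper, not part of the paper's Linear Programming machinery; the adjacency/coloring encoding is our own):

```python
def is_b_coloring(adj, color):
    """Check that `color` (vertex -> color) is a proper coloring of the
    graph `adj` (vertex -> set of neighbours) and that every color class
    contains a b-vertex: a vertex whose neighbours collectively carry all
    the colors other than its own."""
    colors = set(color.values())
    # Proper coloring: adjacent vertices must receive different colors.
    if any(color[u] == color[v] for u in adj for v in adj[u]):
        return False
    # Every color must be represented by at least one b-vertex.
    return all(
        any(color[u] == c and {color[v] for v in adj[u]} >= colors - {c}
            for u in adj)
        for c in colors
    )
```

On the triangle K3 with three distinct colors, every vertex is a b-vertex, so the check passes with three colors.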

Citations: 0
A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100044
Rémi Chan–Renous-Legoubin , Clément W. Royer

Nonlinear conjugate gradients are among the most popular techniques for solving continuous optimization problems. Although these schemes have long been studied from a global convergence standpoint, their worst-case complexity properties have yet to be fully understood, especially in the nonconvex setting. In particular, it is unclear whether nonlinear conjugate gradient methods possess better guarantees than first-order methods such as gradient descent. Meanwhile, recent experiments have shown impressive performance of standard nonlinear conjugate gradient techniques on certain nonconvex problems, even when compared with methods endowed with the best known complexity guarantees.

In this paper, we propose a nonlinear conjugate gradient scheme based on a simple line-search paradigm and a modified restart condition. These two ingredients allow for monitoring the properties of the search directions, which is instrumental in obtaining complexity guarantees. Our complexity results illustrate the possible discrepancy between nonlinear conjugate gradient methods and classical gradient descent. A numerical investigation on nonconvex robust regression problems, as well as a standard benchmark, illustrates that the restarting condition can track the behavior of a standard implementation.
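A generic sketch of the two ingredients named above (a backtracking line search plus a restart test) is given below. This is illustrative only, not the exact scheme analyzed in the paper; the sufficient-descent threshold and the PR+ update are standard textbook choices:

```python
import numpy as np

def ncg_with_restart(f, grad, x0, tol=1e-6, max_iter=1000):
    """Polak-Ribiere+ nonlinear conjugate gradient with (i) a backtracking
    Armijo line search and (ii) a restart to steepest descent whenever the
    CG direction fails a sufficient-descent test."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        gn = np.linalg.norm(g)
        if gn < tol:
            break
        # Restart condition: fall back to -g if d is a poor descent direction.
        if g @ d > -1e-4 * gn * np.linalg.norm(d):
            d = -g
        # Backtracking Armijo line search.
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ formula
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

Monitoring `g @ d` is what makes the complexity analysis possible: every accepted direction is guaranteed to be a descent direction with a quantified margin.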

Citations: 7
Accelerated variance-reduced methods for saddle-point problems
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100048
Ekaterina Borodich , Vladislav Tominin , Yaroslav Tominin , Dmitry Kovalev , Alexander Gasnikov , Pavel Dvurechensky

We consider composite minimax optimization problems where the goal is to find a saddle-point of a large sum of non-bilinear objective functions augmented by simple composite regularizers for the primal and dual variables. For such problems, under the average-smoothness assumption, we propose accelerated stochastic variance-reduced algorithms with complexity bounds that are optimal up to logarithmic factors. In particular, we consider strongly-convex-strongly-concave, convex-strongly-concave, and convex-concave objectives. To the best of our knowledge, these are the first nearly-optimal algorithms for this setting.
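The "variance-reduced" ingredient can be illustrated with the classical SVRG-style gradient estimator, a generic building block (not the specific estimator of this paper): it stays unbiased while its variance shrinks as the iterate approaches a stored snapshot point.

```python
def svrg_gradient(grad_i, x, snapshot, full_grad_at_snapshot, i):
    """SVRG-style estimator for component index i: unbiased when i is
    drawn uniformly, with variance vanishing as x approaches `snapshot`."""
    return grad_i(i, x) - grad_i(i, snapshot) + full_grad_at_snapshot
```

Averaging the estimator over all indices recovers the full gradient at x exactly, which is the unbiasedness property the convergence proofs rely on.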

Citations: 1
EUROpt, the Continuous Optimization Working Group of EURO: From idea to maturity
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100033
Tibor Illés , Tamás Terlaky

This brief note presents a personal recollection of the early history of EUROpt, the Continuous Optimization Working Group of EURO. This historical note details the events that preceded the formation of the EUROpt Working Group and the first five years of its existence. During those early years the EUROpt Working Group established a conference series, organized thematic EURO Mini conferences, launched the EUROpt Fellow program, developed an effective rotating management structure, and grew into a large, mature, very active, and high-impact EURO Working Group.

Citations: 1
A mixed integer formulation and an efficient metaheuristic for the unrelated parallel machine scheduling problem: Total tardiness minimization
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100034
Héctor G.-de-Alba , Samuel Nucamendi-Guillén , Oliver Avalos-Rosales

In this paper, the unrelated parallel machine scheduling problem with the objective of minimizing the total tardiness is addressed. For such a problem, a mixed-integer linear programming (MILP) formulation that considers assignment and positional variables is presented. In addition, an iterated local search (ILS) algorithm that produces high-quality solutions in reasonable time is proposed for large instances. The robustness of the ILS was assessed by comparing its performance with the results provided by the MILP. The instances used in this paper were constructed under a new approach that yields tighter due dates than the previous generation method for this problem. The proposed MILP formulation was able to solve instances of up to 150 jobs and 20 machines. The ILS yielded high-quality solutions in reasonable time, solving instances of up to 400 jobs and 20 machines. Experimental results confirm that both approaches are efficient and promising.
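For concreteness, the objective being minimized can be evaluated in a few lines (an illustrative helper; the data encoding and the numbers in the example are our own):

```python
def total_tardiness(schedule, proc, due):
    """Total tardiness of a schedule on unrelated parallel machines.
    `schedule[m]` is the ordered job sequence on machine m, `proc[m][j]`
    the machine-dependent processing time of job j, `due[j]` its due date."""
    tardiness = 0
    for m, jobs in enumerate(schedule):
        t = 0
        for j in jobs:
            t += proc[m][j]                 # completion time of job j on m
            tardiness += max(0, t - due[j]) # tardiness = max(0, C_j - d_j)
    return tardiness
```

Both the MILP and the ILS search over assignments and job positions to drive this quantity down.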

Citations: 1
Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100045
Pavel Dvurechensky , Dmitry Kamzolov , Aleksandr Lukashevich , Soomin Lee , Erik Ordentlich , César A. Uribe , Alexander Gasnikov

Statistical preconditioning enables fast methods for distributed large-scale empirical risk minimization problems. In this approach, multiple worker nodes compute gradients in parallel, which are then used by the central node to update the parameter by solving an auxiliary (preconditioned) smaller-scale optimization problem. The recently proposed Statistically Preconditioned Accelerated Gradient (SPAG) method [1] has complexity bounds superior to other such algorithms but requires an exact solution for computationally intensive auxiliary optimization problems at every iteration. In this paper, we propose an Inexact SPAG (InSPAG) and explicitly characterize the accuracy to which the corresponding auxiliary subproblem needs to be solved to guarantee the same convergence rate as the exact method. We build our results by first developing an inexact adaptive accelerated Bregman proximal gradient method for general optimization problems under relative smoothness and strong convexity assumptions, which may be of independent interest. Moreover, we explore the properties of the auxiliary problem in the InSPAG algorithm assuming Lipschitz third-order derivatives and strong convexity. For this problem class, we develop a linearly convergent Hyperfast second-order method and estimate the total complexity of the InSPAG method with a hyperfast auxiliary problem solver. Finally, we illustrate the proposed method's practical efficiency by performing large-scale numerical experiments on logistic regression models. To the best of our knowledge, these are the first empirical results on implementing high-order methods on large-scale problems, as we work with data whose dimension is of the order of 3 million and whose number of samples is 700 million.
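The communication pattern described above, workers sending gradients and the center applying a preconditioner, can be sketched abstractly (the `precond_solve` callable below stands in for the auxiliary optimization sub-problem; all names are hypothetical):

```python
import numpy as np

def preconditioned_step(worker_grads, precond_solve, x, eta=1.0):
    """Center-node update: average the workers' gradients, then map the
    aggregate through a preconditioner, which is the role played by the
    auxiliary sub-problem in SPAG-type methods (solved inexactly in InSPAG)."""
    g = np.mean(worker_grads, axis=0)   # aggregate the parallel gradients
    return x - eta * precond_solve(g)   # preconditioned parameter update
```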

Citations: 14
Trust-region algorithms: Probabilistic complexity and intrinsic noise with applications to subsampling techniques
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100043
S. Bellavia , G. Gurioli , B. Morini , Ph.L. Toint

A trust-region algorithm is presented for finding approximate minimizers of smooth unconstrained functions whose values and derivatives are subject to random noise. It is shown that, under suitable probabilistic assumptions, the new method finds (in expectation) an ϵ-approximate minimizer of arbitrary order q ≥ 1 in at most O(ϵ^(-(q+1))) inexact evaluations of the function and its derivatives, providing the first such result for general optimality orders. The impact of intrinsic noise limiting the validity of the assumptions is also discussed and it is shown that difficulties are unlikely to occur in the first-order version of the algorithm for sufficiently large gradients. Conversely, should these assumptions fail for specific realizations, then “degraded” optimality guarantees are shown to hold when failure occurs. These conclusions are then discussed and illustrated in the context of subsampling methods for finite-sum optimization.
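A bare-bones deterministic trust-region loop (first-order model, Cauchy-type step) shows the mechanism whose noisy-evaluation analogue the paper analyzes; this sketch has no noise model and all constants are generic textbook choices:

```python
import numpy as np

def trust_region(f, grad, x0, delta=1.0, max_iter=200, tol=1e-6):
    """Plain first-order trust-region sketch: the step is the steepest-descent
    direction clipped to the radius; the radius is updated from the ratio of
    actual to predicted decrease."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gn = np.linalg.norm(g)
        if gn < tol:
            break
        s = -min(delta / gn, 1.0) * g  # Cauchy-type step with ||s|| <= delta
        pred = -(g @ s)                # predicted decrease (linear model)
        rho = (f(x) - f(x + s)) / pred
        if rho > 0.1:                  # successful step: accept, maybe expand
            x = x + s
            if rho > 0.75:
                delta *= 2.0
        else:                          # unsuccessful step: shrink the radius
            delta *= 0.5
    return x
```

When f and g are only noisy estimates, the ratio rho can lie, which is exactly the failure mode the probabilistic analysis in the paper quantifies.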

Cited by: 4
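The abstract above describes a first-order trust-region method driven by noisy function values. As a minimal sketch, and not the authors' algorithm, the step below uses the Cauchy (steepest-descent) point, accepts when the noisy decrease ratio clears a threshold, and halves the radius otherwise; all names, parameter values (`eta`, the noise level), and the test problem are illustrative assumptions.

```python
import numpy as np

def noisy_trust_region(f, grad, x0, delta0=1.0, eta=0.1, max_iter=200):
    """First-order trust-region sketch using the Cauchy step.

    f may return noisy values; grad may be inexact as well.
    """
    x, delta = np.asarray(x0, dtype=float).copy(), delta0
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < 1e-8:
            break
        s = -(delta / gnorm) * g      # step to the trust-region boundary
        predicted = delta * gnorm     # decrease predicted by the linear model
        actual = f(x) - f(x + s)      # noisy actual decrease
        if actual / predicted >= eta: # sufficient (noisy) decrease: accept
            x = x + s
            delta = min(2.0 * delta, 10.0)
        else:                         # reject the step and shrink the region
            delta *= 0.5
    return x

# Demo: minimize ||x||^2 observed through small additive noise.
rng = np.random.default_rng(0)
f = lambda x: float(x @ x) + rng.uniform(-1e-8, 1e-8)
grad = lambda x: 2.0 * x
x_final = noisy_trust_region(f, grad, np.array([2.0, 1.0]))
print(np.linalg.norm(x_final))
```

Once the predicted decrease drops to the order of the noise, the acceptance ratio becomes unreliable, which is exactly the "intrinsic noise" regime the paper analyzes.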
Performance comparison of two recently proposed copositivity tests
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100037
Bo Peng

Recently and simultaneously, two MILP-based approaches to copositivity testing were proposed. This note presents a performance comparison using a group of test sets containing a large number of designed instances. The numerical results show that one copositivity detection approach performs better when the value of the associated function h of a matrix is large, while the other performs better as the problem dimension grows moderately. A problem set that is hard for both approaches is also presented and may serve as a test bed for future competing approaches. An improved variant of one of the approaches is also proposed to handle these hard instances more efficiently.

Peng, B.: Performance comparison of two recently proposed copositivity tests. EURO Journal on Computational Optimization 10 (2022), Article 100037. DOI: 10.1016/j.ejco.2022.100037
Cited by: 0
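For context on what the two compared MILP-based tests decide: a symmetric matrix A is copositive when x^T A x ≥ 0 for every x ≥ 0. The general decision problem is co-NP-hard, but for 2×2 matrices a classical closed-form criterion exists. The sketch below implements only that small special case as an illustration; it is not either of the tested methods, and the function name is ours.

```python
import math

def is_copositive_2x2(a11, a12, a22):
    """Classical criterion: the symmetric matrix [[a11, a12], [a12, a22]]
    is copositive iff a11 >= 0, a22 >= 0, and a12 + sqrt(a11 * a22) >= 0."""
    if a11 < 0 or a22 < 0:
        return False
    return a12 + math.sqrt(a11 * a22) >= 0

# [[1, -2], [-2, 1]] is not copositive: x = (1, 1) gives x^T A x = -2.
print(is_copositive_2x2(1.0, -2.0, 1.0))
# [[1, -1], [-1, 1]] is copositive: x^T A x = (x1 - x2)^2 >= 0.
print(is_copositive_2x2(1.0, -1.0, 1.0))
```

Beyond 2×2 no such closed form is available, which is why the approaches compared in the note resort to mixed-integer linear programming.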