
Latest publications from EURO Journal on Computational Optimization

Accelerated variance-reduced methods for saddle-point problems
IF 2.4 | Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE | Pub Date: 2022-01-01 | DOI: 10.1016/j.ejco.2022.100048
Ekaterina Borodich , Vladislav Tominin , Yaroslav Tominin , Dmitry Kovalev , Alexander Gasnikov , Pavel Dvurechensky

We consider composite minimax optimization problems where the goal is to find a saddle-point of a large sum of non-bilinear objective functions augmented by simple composite regularizers for the primal and dual variables. For such problems, under the average-smoothness assumption, we propose accelerated stochastic variance-reduced algorithms with complexity bounds that are optimal up to logarithmic factors. In particular, we consider strongly-convex-strongly-concave, convex-strongly-concave, and convex-concave objectives. To the best of our knowledge, these are the first nearly-optimal algorithms for this setting.
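
As a rough illustration of the variance-reduction idea behind such methods, the sketch below runs an SVRG-style stochastic extragradient loop on a toy finite-sum quadratic saddle-point problem. The problem data, step size, and loop structure are illustrative assumptions only; this is not the authors' algorithm.

```python
import numpy as np

# Toy finite-sum saddle-point problem:
#   min_x max_y (1/n) * sum_i [ 0.5 x'P_i x + y'A_i x - 0.5 y'Q_i y + b_i'x - c_i'y ]
# The i-th monotone operator is F_i(x, y) = (P_i x + A_i' y + b_i,  Q_i y - A_i x + c_i).
rng = np.random.default_rng(0)
n, dx, dy = 20, 5, 4

def sym_psd(d):
    M = 0.1 * rng.standard_normal((d, d))
    return 0.5 * (M + M.T) + d * np.eye(d)      # symmetric, strongly positive definite

P = [sym_psd(dx) for _ in range(n)]
Q = [sym_psd(dy) for _ in range(n)]
A = [rng.standard_normal((dy, dx)) for _ in range(n)]
b = [rng.standard_normal(dx) for _ in range(n)]
c = [rng.standard_normal(dy) for _ in range(n)]

def F_i(i, x, y):
    return P[i] @ x + A[i].T @ y + b[i], Q[i] @ y - A[i] @ x + c[i]

def F_full(x, y):
    gx, gy = np.zeros(dx), np.zeros(dy)
    for i in range(n):
        fx, fy = F_i(i, x, y)
        gx += fx
        gy += fy
    return gx / n, gy / n

x, y, eta = np.zeros(dx), np.zeros(dy), 0.02
for epoch in range(100):
    snap_x, snap_y = x.copy(), y.copy()
    mu_x, mu_y = F_full(snap_x, snap_y)          # full operator at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        sx, sy = F_i(i, snap_x, snap_y)          # component operator at the snapshot
        fx, fy = F_i(i, x, y)
        gx, gy = fx - sx + mu_x, fy - sy + mu_y  # variance-reduced estimate at (x, y)
        xh, yh = x - eta * gx, y - eta * gy      # extragradient half step
        fxh, fyh = F_i(i, xh, yh)
        gxh, gyh = fxh - sx + mu_x, fyh - sy + mu_y
        x, y = x - eta * gxh, y - eta * gyh      # full step using the half-point estimate

gx, gy = F_full(x, y)
print("residual ||F(x, y)|| at final iterate:", np.linalg.norm(np.concatenate([gx, gy])))
```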

Citations: 1
EUROpt, the Continuous Optimization Working Group of EURO: From idea to maturity
IF 2.4 | Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE | Pub Date: 2022-01-01 | DOI: 10.1016/j.ejco.2022.100033
Tibor Illés , Tamás Terlaky

This brief note presents a personal recollection of the early history of EUROpt, the Continuous Optimization Working Group of EURO. This historical note details the events that preceded the formation of the EUROpt Working Group and its first five years of existence. During those early years, the EUROpt Working Group established a conference series, organized thematic EURO Mini conferences, launched the EUROpt Fellow program, developed an effective rotating management structure, and grew into a large, mature, very active, and high-impact EURO Working Group.

Citations: 1
A mixed integer formulation and an efficient metaheuristic for the unrelated parallel machine scheduling problem: Total tardiness minimization
IF 2.4 | Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE | Pub Date: 2022-01-01 | DOI: 10.1016/j.ejco.2022.100034
Héctor G.-de-Alba , Samuel Nucamendi-Guillén , Oliver Avalos-Rosales

In this paper, the unrelated parallel machine scheduling problem with the objective of minimizing the total tardiness is addressed. For such a problem, a mixed-integer linear programming (MILP) formulation that considers assignment and positional variables is presented. In addition, an iterated local search (ILS) algorithm that produces high-quality solutions in reasonable time is proposed for large instances. The robustness of the ILS was assessed by comparing its performance with the results provided by the MILP. The instances used in this paper were constructed with a new approach that yields tighter due dates than the previous generation method for this problem. The proposed MILP formulation was able to solve instances of up to 150 jobs and 20 machines. The ILS yielded high-quality solutions in a reasonable time, solving instances of up to 400 jobs and 20 machines. Experimental results confirm that both approaches are efficient and promising.
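
For readers unfamiliar with the metaheuristic side, here is a minimal iterated local search for total tardiness on unrelated parallel machines. The random instance, the neighborhood (single-job reinsertion), and the perturbation size are hypothetical choices for illustration, not the configuration used in the paper.

```python
import random

random.seed(1)

# Hypothetical instance: unrelated parallel machines, total tardiness objective.
n_jobs, n_machines = 12, 3
p = [[random.randint(1, 10) for _ in range(n_machines)] for _ in range(n_jobs)]  # p[j][m]
d = [random.randint(5, 30) for _ in range(n_jobs)]                               # due dates

def total_tardiness(schedule):
    """schedule[m] is the ordered list of jobs processed on machine m."""
    tard = 0
    for m, seq in enumerate(schedule):
        t = 0
        for j in seq:
            t += p[j][m]
            tard += max(0, t - d[j])
    return tard

def local_search(schedule):
    """Best-improvement reinsertion of single jobs until no move improves."""
    best = total_tardiness(schedule)
    improved = True
    while improved:
        improved = False
        for j in range(n_jobs):
            m_from = next(m for m in range(n_machines) if j in schedule[m])
            pos_from = schedule[m_from].index(j)
            schedule[m_from].pop(pos_from)
            best_move, best_val = (m_from, pos_from), best    # default: put j back where it was
            for m_to in range(n_machines):
                for pos in range(len(schedule[m_to]) + 1):
                    schedule[m_to].insert(pos, j)
                    val = total_tardiness(schedule)
                    if val < best_val:
                        best_move, best_val = (m_to, pos), val
                    schedule[m_to].pop(pos)
            m_to, pos = best_move
            schedule[m_to].insert(pos, j)
            if best_val < best:
                best, improved = best_val, True
    return schedule, best

def perturb(schedule, k=3):
    """Randomly reinsert k jobs to escape the current local optimum."""
    for _ in range(k):
        m_from = random.randrange(n_machines)
        if not schedule[m_from]:
            continue
        j = schedule[m_from].pop(random.randrange(len(schedule[m_from])))
        m_to = random.randrange(n_machines)
        schedule[m_to].insert(random.randrange(len(schedule[m_to]) + 1), j)
    return schedule

# Initial solution: earliest-due-date order, each job assigned to its fastest machine.
schedule = [[] for _ in range(n_machines)]
for j in sorted(range(n_jobs), key=lambda j: d[j]):
    schedule[min(range(n_machines), key=lambda m: p[j][m])].append(j)

schedule, best = local_search(schedule)
for _ in range(30):                                   # ILS main loop
    cand = perturb([seq[:] for seq in schedule])
    cand, val = local_search(cand)
    if val < best:
        schedule, best = cand, val
print("total tardiness:", best)
```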

Citations: 1
Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization
IF 2.4 | Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE | Pub Date: 2022-01-01 | DOI: 10.1016/j.ejco.2022.100045
Pavel Dvurechensky , Dmitry Kamzolov , Aleksandr Lukashevich , Soomin Lee , Erik Ordentlich , César A. Uribe , Alexander Gasnikov

Statistical preconditioning enables fast methods for distributed large-scale empirical risk minimization problems. In this approach, multiple worker nodes compute gradients in parallel, which are then used by the central node to update the parameter by solving an auxiliary (preconditioned) smaller-scale optimization problem. The recently proposed Statistically Preconditioned Accelerated Gradient (SPAG) method [1] has complexity bounds superior to other such algorithms but requires an exact solution of a computationally intensive auxiliary optimization problem at every iteration. In this paper, we propose an Inexact SPAG (InSPAG) and explicitly characterize the accuracy to which the corresponding auxiliary subproblem needs to be solved to guarantee the same convergence rate as the exact method. We build our results by first developing an inexact adaptive accelerated Bregman proximal gradient method for general optimization problems under relative smoothness and strong convexity assumptions, which may be of independent interest. Moreover, we explore the properties of the auxiliary problem in the InSPAG algorithm assuming Lipschitz third-order derivatives and strong convexity. For this problem class, we develop a linearly convergent Hyperfast second-order method and estimate the total complexity of the InSPAG method with the hyperfast auxiliary problem solver. Finally, we illustrate the proposed method's practical efficiency by performing large-scale numerical experiments on logistic regression models. To the best of our knowledge, these are the first empirical results on implementing high-order methods on large-scale problems, as we work with data whose dimension is of the order of 3 million and whose number of samples is 700 million.
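
The flavor of a statistically preconditioned step can be conveyed by a DANE-style sketch in which the server's own data shard defines the Bregman "preconditioner" and the auxiliary problem is solved inexactly by a few plain gradient steps. The data, the parameters sigma, eta, and the inner solver are assumptions for illustration; this is not the InSPAG method or its hyperfast subproblem solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy distributed logistic regression: W worker shards plus the server's own shard (index 0).
W, n_shard, dim, lam = 4, 200, 10, 1e-2
X = [rng.standard_normal((n_shard, dim)) for _ in range(W + 1)]
w_true = rng.standard_normal(dim)
y = [np.where(Xi @ w_true + 0.3 * rng.standard_normal(n_shard) > 0, 1.0, -1.0) for Xi in X]

def shard_grad(i, w):
    z = y[i] * (X[i] @ w)
    return -(X[i].T @ (y[i] / (1.0 + np.exp(z)))) / n_shard + lam * w

def full_grad(w):
    return sum(shard_grad(i, w) for i in range(W + 1)) / (W + 1)

# The preconditioner phi is the server's local loss plus an extra quadratic term sigma/2 * ||w||^2.
sigma, eta, inner_steps, inner_lr = 0.1, 1.0, 20, 0.5

def phi_grad(w):
    return shard_grad(0, w) + sigma * w

w = np.zeros(dim)
for k in range(30):
    g = full_grad(w)                              # gradients aggregated from all shards
    # Inexact auxiliary problem:
    #   min_v  g.v + (1/eta) * [phi(v) - phi(w) - phi_grad(w).(v - w)]
    v = w.copy()
    for _ in range(inner_steps):                  # a few gradient steps = inexact inner solve
        aux_grad = g + (phi_grad(v) - phi_grad(w)) / eta
        v = v - inner_lr * aux_grad
    w = v
print("final gradient norm:", np.linalg.norm(full_grad(w)))
```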

Citations: 14
Trust-region algorithms: Probabilistic complexity and intrinsic noise with applications to subsampling techniques
IF 2.4 | Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE | Pub Date: 2022-01-01 | DOI: 10.1016/j.ejco.2022.100043
S. Bellavia , G. Gurioli , B. Morini , Ph.L. Toint

A trust-region algorithm is presented for finding approximate minimizers of smooth unconstrained functions whose values and derivatives are subject to random noise. It is shown that, under suitable probabilistic assumptions, the new method finds (in expectation) an ϵ-approximate minimizer of arbitrary order q ≥ 1 in at most O(ϵ^(−(q+1))) inexact evaluations of the function and its derivatives, providing the first such result for general optimality orders. The impact of intrinsic noise limiting the validity of the assumptions is also discussed and it is shown that difficulties are unlikely to occur in the first-order version of the algorithm for sufficiently large gradients. Conversely, should these assumptions fail for specific realizations, then “degraded” optimality guarantees are shown to hold when failure occurs. These conclusions are then discussed and illustrated in the context of subsampling methods for finite-sum optimization.
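
To fix ideas, here is a generic first-order trust-region loop with noisy function and gradient evaluations, included only to illustrate the setting (ratio test, radius update) rather than the algorithm analyzed in the paper; the test function, noise level, and radius constants are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Smooth test function whose value and gradient are observed with additive noise.
A = np.diag([1.0, 10.0])
def f(x):        return 0.5 * x @ A @ x
def noisy_f(x):  return f(x) + 1e-4 * rng.standard_normal()
def noisy_g(x):  return A @ x + 1e-4 * rng.standard_normal(2)

x, delta = np.array([3.0, -2.0]), 1.0
for k in range(300):
    g = noisy_g(x)
    gnorm = np.linalg.norm(g)
    if gnorm < 5e-2:                          # stop well above the noise level, which limits accuracy
        break
    step = -delta * g / gnorm                 # Cauchy step of the linear model, length = radius
    pred = delta * gnorm                      # decrease predicted by the linear model
    ared = noisy_f(x) - noisy_f(x + step)     # actual (noisy) decrease
    rho = ared / pred                         # ratio test drives acceptance and the radius update
    if rho >= 0.1:
        x = x + step                          # successful iteration: accept the step
        if rho >= 0.75:
            delta = min(2.0 * delta, 10.0)    # very successful: enlarge the radius
    else:
        delta *= 0.5                          # unsuccessful: reject and shrink the radius

print("iterations:", k, "| final true gradient norm:", np.linalg.norm(A @ x))
```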

Citations: 4
Performance comparison of two recently proposed copositivity tests
IF 2.4 | Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE | Pub Date: 2022-01-01 | DOI: 10.1016/j.ejco.2022.100037
Bo Peng

Recently and simultaneously, two MILP-based approaches to copositivity testing were proposed. This note presents a performance comparison, using a group of test sets containing a large number of designed instances. According to the numerical results, we find that one copositivity detection approach performs better when the value of the defined function h of a matrix is large, while the other performs better when the problem dimension increases moderately. A problem set that is hard for both approaches is also presented, which may be used as a test bed for future competing approaches. An improved variant of one of the approaches is also proposed to handle those hard instances more efficiently.
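
As a reminder of what is being tested: a symmetric matrix A is copositive if x ≥ 0 implies x'Ax ≥ 0. The sketch below is only a naive sampling-based check over the standard simplex — it can certify non-copositivity when it finds a violation but can never prove copositivity, and it is unrelated to the two MILP-based tests compared in the note. The Horn matrix is used as a known copositive example.

```python
import numpy as np

rng = np.random.default_rng(0)

def maybe_copositive(A, n_samples=20000):
    """Return (False, x) with a violating x >= 0 if one is found; otherwise (True, None),
    which is only evidence, not a proof, of copositivity."""
    n = A.shape[0]
    # Simplex vertices first: a negative diagonal entry is an immediate certificate.
    for i in range(n):
        if A[i, i] < 0:
            x = np.zeros(n)
            x[i] = 1.0
            return False, x
    # Random points on the standard simplex (uniform Dirichlet samples).
    X = rng.dirichlet(np.ones(n), size=n_samples)
    vals = np.einsum("ij,jk,ik->i", X, A, X)      # x' A x for every sampled x
    worst = np.argmin(vals)
    if vals[worst] < -1e-9:
        return False, X[worst]
    return True, None

# Horn matrix: a classical copositive matrix that is not "PSD plus nonnegative".
H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]], dtype=float)
print(maybe_copositive(H))             # expected: (True, None)

B = H.copy()
B[2, 2] = -0.5                         # a negative diagonal entry destroys copositivity
print(maybe_copositive(B))             # expected: (False, certificate vector)
```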

Citations: 0
A Lagrangian heuristics for balancing the average weighted completion times of two classes of jobs in a single-machine scheduling problem
IF 2.4 | Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE | Pub Date: 2022-01-01 | DOI: 10.1016/j.ejco.2022.100032
Matteo Avolio, Antonio Fuduli

We tackle a new single-machine scheduling problem whose objective is to balance the average weighted completion times of two classes of jobs. Because both job sets contribute to the same objective function, this problem can be interpreted as a cooperative two-agent scheduling problem, in contrast to standard multiagent problems, which are competitive since each class of jobs is involved only in optimizing its own agent's criterion. Balancing the completion times of different sets of tasks finds application in many fields, such as in logistics for balancing delivery times, in manufacturing for balancing assembly lines, and in services for balancing the waiting times of groups of people.

To solve the problem, for which we show NP-hardness, a Lagrangian heuristic algorithm is proposed. In particular, starting from a nonsmooth variant of the quadratic assignment problem, our approach is based on the Lagrangian relaxation of a linearized model and reduces to solving a finite sequence of successive linear assignment problems.

Numerical results are presented on a set of randomly generated test problems, showing the efficiency of the proposed technique.
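
To make the balancing objective concrete, the sketch below evaluates one plausible reading of the criterion — the gap between the two classes' weighted completion times normalized by total class weight — and brute-forces a tiny instance. The instance and this exact definition of "average weighted completion time" are assumptions; the paper's Lagrangian heuristic targets instances far beyond brute force.

```python
import itertools

# Hypothetical single-machine instance with two job classes A and B.
jobs = [  # (class, processing time, weight)
    ("A", 3, 2), ("A", 5, 1), ("A", 2, 4),
    ("B", 4, 3), ("B", 6, 1), ("B", 1, 2),
]

def imbalance(sequence):
    """Absolute gap between the average weighted completion times of the two classes."""
    t, totals, weights = 0, {"A": 0.0, "B": 0.0}, {"A": 0.0, "B": 0.0}
    for cls, proc, w in sequence:
        t += proc                      # completion time of this job
        totals[cls] += w * t
        weights[cls] += w
    return abs(totals["A"] / weights["A"] - totals["B"] / weights["B"])

# Six jobs, so enumerating all sequences is feasible here.
best = min(itertools.permutations(jobs), key=imbalance)
print("best imbalance:", round(imbalance(best), 3))
print("best sequence :", [(cls, proc) for cls, proc, _ in best])
```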

Citations: 3
A simplified convergence theory for Byzantine resilient stochastic gradient descent
IF 2.4 | Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE | Pub Date: 2022-01-01 | DOI: 10.1016/j.ejco.2022.100038
Lindon Roberts , Edward Smyth

In distributed learning, a central server trains a model according to updates provided by nodes holding local data samples. In the presence of one or more malicious nodes sending incorrect information (a Byzantine adversary), standard algorithms for model training such as stochastic gradient descent (SGD) fail to converge. In this paper, we present a simplified convergence theory for the generic Byzantine Resilient SGD method originally proposed by Blanchard et al. (2017) [3]. Compared to the existing analysis, we show convergence to a stationary point in expectation under standard assumptions on the (possibly nonconvex) objective function and flexible assumptions on the stochastic gradients.
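
A minimal illustration of Byzantine-resilient aggregation: coordinate-wise median over worker gradients on a least-squares toy problem. The median rule, worker counts, and learning rate are assumptions chosen for the sketch and need not coincide with the aggregation rules covered by the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Least-squares model trained by SGD with robust aggregation of worker gradients.
n_workers, n_byzantine, dim = 10, 3, 5
A = rng.standard_normal((400, dim))
w_true = rng.standard_normal(dim)
b = A @ w_true + 0.01 * rng.standard_normal(400)
shards = np.array_split(np.arange(400), n_workers)      # each worker holds one data shard

def worker_gradient(i, w):
    if i < n_byzantine:                                  # Byzantine workers send arbitrary junk
        return 100.0 * rng.standard_normal(dim)
    idx = rng.choice(shards[i], size=16)                 # honest workers: minibatch gradient
    return A[idx].T @ (A[idx] @ w - b[idx]) / len(idx)

w, lr = np.zeros(dim), 0.05
for step in range(500):
    grads = np.stack([worker_gradient(i, w) for i in range(n_workers)])
    agg = np.median(grads, axis=0)                       # coordinate-wise median aggregation
    w -= lr * agg
print("distance to w_true:", np.linalg.norm(w - w_true))
```

Replacing the median with a plain mean lets the three Byzantine workers dominate the update and the iterates never approach w_true, which is the failure mode the robust aggregator is meant to prevent.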

Citations: 1
Exponential extrapolation memory for tabu search
IF 2.4 | Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE | Pub Date: 2022-01-01 | DOI: 10.1016/j.ejco.2022.100028
Håkon Bentsen, Arild Hoff, Lars Magnus Hvattum

Tabu search is a well-established metaheuristic framework for solving hard combinatorial optimization problems. At its core, the method uses different forms of memory to guide a local search through the solution space so as to identify high-quality local optima while avoiding getting stuck in the vicinity of any particular local optimum. This paper examines characteristics of moves that can be exploited to make good decisions about steps that lead away from recently visited local optima and towards a new local optimum. Our approach uses a new type of adaptive memory based on a construction called exponential extrapolation. The memory operates by means of threshold inequalities that ensure selected moves will not lead back to a specified number of the most recently encountered local optima. Computational experiments on a set of one hundred different benchmark instances for the binary integer programming problem suggest that exponential extrapolation is a useful type of memory to incorporate into a tabu search.
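
For context, here is a plain recency-based tabu search with an aspiration criterion on a random binary quadratic instance. This is the classical short-term memory, not the exponential extrapolation memory proposed in the paper, and the instance, tenure, and iteration budget are arbitrary assumptions.

```python
import random

random.seed(0)

# Hypothetical binary quadratic instance: maximize x' Q x over x in {0,1}^n.
n = 20
Q = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]

def value(x):
    return sum(Q[i][j] for i in range(n) for j in range(n) if x[i] and x[j])

x = [random.randint(0, 1) for _ in range(n)]
best_x, best_val = x[:], value(x)
tabu_until = [0] * n                 # iteration until which flipping bit i stays tabu
tenure = 7

for it in range(1, 401):
    best_move, best_move_val = None, None
    for i in range(n):               # evaluate every single-bit flip
        x[i] ^= 1
        v = value(x)
        x[i] ^= 1
        tabu = tabu_until[i] > it
        if tabu and v <= best_val:   # aspiration: tabu moves allowed only if they beat the incumbent
            continue
        if best_move is None or v > best_move_val:
            best_move, best_move_val = i, v
    if best_move is None:
        continue
    x[best_move] ^= 1                # apply the best admissible move, even if non-improving
    tabu_until[best_move] = it + tenure
    if best_move_val > best_val:
        best_x, best_val = x[:], best_move_val

print("best objective found:", best_val)   # best_x holds the corresponding solution
```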

Citations: 0
A reinforcement learning approach to the stochastic cutting stock problem
IF 2.4 | Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE | Pub Date: 2022-01-01 | DOI: 10.1016/j.ejco.2022.100027
Anselmo R. Pitombeira-Neto , Arthur H.F. Murta

We propose a formulation of the stochastic cutting stock problem as a discounted infinite-horizon Markov decision process. At each decision epoch, given current inventory of items, an agent chooses in which patterns to cut objects in stock in anticipation of the unknown demand. An optimal solution corresponds to a policy that associates each state with a decision and minimizes the expected total cost. Since exact algorithms scale exponentially with the state-space dimension, we develop a heuristic solution approach based on reinforcement learning. We propose an approximate policy iteration algorithm in which we apply a linear model to approximate the action-value function of a policy. Policy evaluation is performed by solving the projected Bellman equation from a sample of state transitions, decisions and costs obtained by simulation. Due to the large decision space, policy improvement is performed via the cross-entropy method. Computational experiments are carried out with the use of realistic data to illustrate the application of the algorithm. Heuristic policies obtained with polynomial and Fourier basis functions are compared with myopic and random policies. Results indicate the possibility of obtaining policies capable of adequately controlling inventories with an average cost up to 80% lower than the cost obtained by a myopic policy.
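
The decision process itself is easy to simulate. The sketch below enumerates maximal cutting patterns, draws Poisson demand, and runs a myopic policy of the kind the paper uses as a baseline; the item lengths, cost coefficients, and myopic rule are illustrative assumptions, and the approximate policy iteration method itself is not implemented here.

```python
import itertools
import random

random.seed(0)

# Simplified stochastic cutting stock MDP (a toy version of the paper's setting).
object_len = 10
item_lens = [3, 5, 7]                       # item types
mean_demand = [2.0, 1.5, 1.0]               # Poisson demand rate per epoch
hold_cost, backlog_cost, object_cost = 0.1, 1.0, 1.0

# Enumerate maximal cutting patterns: counts of each item type cut from one object.
patterns = []
for combo in itertools.product(*(range(object_len // l + 1) for l in item_lens)):
    used = sum(c * l for c, l in zip(combo, item_lens))
    if used <= object_len and any(combo) and all(used + l > object_len for l in item_lens):
        patterns.append(combo)

def poisson(lam):
    # Knuth's method, to avoid extra dependencies.
    L, k, p = pow(2.718281828, -lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def myopic_policy(inv):
    """Cut objects one at a time, each covering the largest remaining expected shortfall."""
    cuts = []
    shortfall = [max(0.0, m - s) for s, m in zip(inv, mean_demand)]
    while any(sf > 0 for sf in shortfall):
        best = max(patterns, key=lambda pt: sum(min(c, sf) for c, sf in zip(pt, shortfall)))
        cuts.append(best)
        shortfall = [max(0.0, sf - c) for sf, c in zip(shortfall, best)]
    return cuts

inv, total_cost = [0, 0, 0], 0.0
for epoch in range(200):
    cuts = myopic_policy(inv)                                    # decide cuts before demand is seen
    produced = [sum(c[i] for c in cuts) for i in range(len(item_lens))]
    demand = [poisson(m) for m in mean_demand]
    inv = [s + p - d for s, p, d in zip(inv, produced, demand)]  # positive = stock, negative = backlog
    total_cost += object_cost * len(cuts)
    total_cost += sum(hold_cost * max(0, s) + backlog_cost * max(0, -s) for s in inv)

print("average cost per epoch (myopic policy):", round(total_cost / 200, 3))
```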

Citations: 9