
Latest publications in EURO Journal on Computational Optimization

A Lagrangian heuristics for balancing the average weighted completion times of two classes of jobs in a single-machine scheduling problem
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100032
Matteo Avolio, Antonio Fuduli

We tackle a new single-machine scheduling problem, whose objective is to balance the average weighted completion times of two classes of jobs. Because both job sets contribute to the same objective function, this problem can be interpreted as a cooperative two-agent scheduling problem, in contrast to the standard multiagent problems, which are of the competitive type since each class of jobs is involved only in optimizing its own agent's criterion. Balancing the completion times of different sets of tasks finds application in many fields, such as in logistics for balancing delivery times, in manufacturing for balancing assembly lines, and in services for balancing the waiting times of groups of people.

To solve the problem, which we show to be NP-hard, a Lagrangian heuristic algorithm is proposed. In particular, starting from a nonsmooth variant of the quadratic assignment problem, our approach is based on the Lagrangian relaxation of a linearized model and reduces to solving a finite sequence of successive linear assignment problems.

Numerical results are presented on a set of randomly generated test problems, showing the efficiency of the proposed technique.
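The reduction to linear assignment subproblems can be illustrated in miniature. The sketch below solves one toy assignment instance by brute force; it is an illustrative stand-in for a single subproblem, not the authors' Lagrangian algorithm, and the cost matrix is invented:

```python
from itertools import permutations

def solve_linear_assignment(cost):
    """Brute-force linear assignment: assign each job (row) to a distinct
    position (column) minimizing total cost. Fine for toy sizes only."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# Hypothetical cost matrix: cost[i][j] = weighted completion-time
# contribution of placing job i in position j (numbers are illustrative).
cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
perm, value = solve_linear_assignment(cost)
```

In practice such subproblems are solved with a polynomial assignment algorithm (e.g. the Hungarian method) rather than enumeration.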

Citations: 3
A simplified convergence theory for Byzantine resilient stochastic gradient descent
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100038
Lindon Roberts , Edward Smyth

In distributed learning, a central server trains a model according to updates provided by nodes holding local data samples. In the presence of one or more malicious servers sending incorrect information (a Byzantine adversary), standard algorithms for model training such as stochastic gradient descent (SGD) fail to converge. In this paper, we present a simplified convergence theory for the generic Byzantine Resilient SGD method originally proposed by Blanchard et al. (2017) [3]. Compared to the existing analysis, we show convergence to a stationary point in expectation under standard assumptions on the (possibly nonconvex) objective function and flexible assumptions on the stochastic gradients.
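Byzantine resilient SGD replaces the plain gradient average with a robust aggregation rule. As a hedged illustration of the general idea, the sketch below uses a coordinate-wise median, one common rule, rather than the specific rules analyzed by Blanchard et al.:

```python
import statistics

def robust_aggregate(gradients):
    """Coordinate-wise median of worker gradients: one generic
    Byzantine-resilient aggregation rule (illustrative choice)."""
    dim = len(gradients[0])
    return [statistics.median(g[d] for g in gradients) for d in range(dim)]

def sgd_step(x, gradients, lr=0.1):
    """One SGD step using the robust aggregate instead of the mean."""
    g = robust_aggregate(gradients)
    return [xi - lr * gi for xi, gi in zip(x, g)]

# Three honest workers report gradients near [1, 1]; one Byzantine worker
# sends an arbitrarily large vector. The median ignores the outlier,
# whereas a plain average would be destroyed by it.
grads = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [1e6, -1e6]]
x = sgd_step([0.0, 0.0], grads)
```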

Citations: 1
Exponential extrapolation memory for tabu search
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100028
Håkon Bentsen, Arild Hoff, Lars Magnus Hvattum

Tabu search is a well-established metaheuristic framework for solving hard combinatorial optimization problems. At its core, the method uses different forms of memory to guide a local search through the solution space so as to identify high-quality local optima while avoiding getting stuck in the vicinity of any particular local optimum. This paper examines characteristics of moves that can be exploited to make good decisions about steps that lead away from recently visited local optima and towards a new local optimum. Our approach uses a new type of adaptive memory based on a construction called exponential extrapolation. The memory operates by means of threshold inequalities that ensure selected moves will not lead to a specified number of most recently encountered local optima. Computational experiments on a set of one hundred different benchmark instances for the binary integer programming problem suggest that exponential extrapolation is a useful type of memory to incorporate into a tabu search.
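For readers unfamiliar with the framework, a generic tabu search with a plain recency-based memory looks as follows. This is a toy sketch on an invented binary problem; the paper's exponential-extrapolation memory is a more sophisticated replacement for this kind of short-term memory, not shown here:

```python
import random

def tabu_search(f, n, iters=200, tenure=5, seed=0):
    """Generic tabu search over binary vectors: take the best non-tabu
    single-bit flip each iteration; recently visited solutions are tabu
    (simple recency memory), with an aspiration override for new records."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    best, best_val = x[:], f(x)
    tabu = []
    for _ in range(iters):
        candidates = []
        for i in range(n):
            y = x[:]
            y[i] ^= 1
            if tuple(y) not in tabu or f(y) < best_val:  # aspiration criterion
                candidates.append((f(y), y))
        if not candidates:
            continue
        val, x = min(candidates, key=lambda c: c[0])
        tabu.append(tuple(x))
        if len(tabu) > tenure:
            tabu.pop(0)          # forget the oldest visited solution
        if val < best_val:
            best, best_val = x[:], val
    return best, best_val

# Toy objective: minimize Hamming distance to a hidden target pattern.
target = [1, 0, 1, 1, 0, 1, 0, 0]
f = lambda x: sum(a != b for a, b in zip(x, target))
best, val = tabu_search(f, len(target))
```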

Citations: 0
A reinforcement learning approach to the stochastic cutting stock problem
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100027
Anselmo R. Pitombeira-Neto , Arthur H.F. Murta

We propose a formulation of the stochastic cutting stock problem as a discounted infinite-horizon Markov decision process. At each decision epoch, given current inventory of items, an agent chooses in which patterns to cut objects in stock in anticipation of the unknown demand. An optimal solution corresponds to a policy that associates each state with a decision and minimizes the expected total cost. Since exact algorithms scale exponentially with the state-space dimension, we develop a heuristic solution approach based on reinforcement learning. We propose an approximate policy iteration algorithm in which we apply a linear model to approximate the action-value function of a policy. Policy evaluation is performed by solving the projected Bellman equation from a sample of state transitions, decisions and costs obtained by simulation. Due to the large decision space, policy improvement is performed via the cross-entropy method. Computational experiments are carried out with the use of realistic data to illustrate the application of the algorithm. Heuristic policies obtained with polynomial and Fourier basis functions are compared with myopic and random policies. Results indicate the possibility of obtaining policies capable of adequately controlling inventories with an average cost up to 80% lower than the cost obtained by a myopic policy.
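Solving the projected Bellman equation from sampled transitions can be sketched with least-squares temporal-difference (LSTD) machinery. The toy two-state example below is purely illustrative and unrelated to the cutting-stock model itself; with identity features the approximation is exact, so the solver recovers the true discounted costs:

```python
import numpy as np

def lstd(transitions, feats, gamma=0.9):
    """Least-squares solution of the projected Bellman equation from a
    sample of (state, cost, next_state) transitions under a fixed policy."""
    k = feats.shape[1]
    A = np.zeros((k, k))
    b = np.zeros(k)
    for s, cost, s_next in transitions:
        phi, phi_next = feats[s], feats[s_next]
        A += np.outer(phi, phi - gamma * phi_next)
        b += cost * phi
    return np.linalg.solve(A, b)

# Two-state chain: state 0 costs 1 and moves to state 1, state 1 costs 0
# and moves back to state 0. Identity features => exact value function.
feats = np.eye(2)
transitions = [(0, 1.0, 1), (1, 0.0, 0)]
w = lstd(transitions, feats)
```

Here the true values solve v0 = 1 + 0.9 v1 and v1 = 0.9 v0, i.e. v0 = 1/0.19.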

Citations: 9
Decentralized personalized federated learning: Lower bounds and optimal algorithm for all personalization modes
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100041
Abdurakhmon Sadiev , Ekaterina Borodich , Aleksandr Beznosikov , Darina Dvinskikh , Saveliy Chezhegov , Rachael Tappenden , Martin Takáč , Alexander Gasnikov

This paper considers the problem of decentralized, personalized federated learning. For centralized personalized federated learning, a penalty that measures the deviation between each local model and their average is often added to the objective function. However, in a decentralized setting this penalty is expensive in terms of communication costs, so here a different penalty — one built to respect the structure of the underlying computational network — is used instead. We present lower bounds on the communication and local computation costs for this problem formulation, and we also present provably optimal methods for decentralized personalized federated learning. Numerical experiments are presented to demonstrate the practical performance of our methods.
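The kind of network-respecting penalty described here can be sketched minimally: add a quadratic penalty over the edges of the communication graph, so a gradient step only exchanges information between neighbors. This is an illustrative sketch under that assumption, not the paper's optimal algorithm:

```python
def decentralized_step(x, local_grads, edges, lam=1.0, lr=0.1):
    """One gradient step on sum_i f_i(x_i) + (lam/2) * sum_{(i,j) in E} (x_i - x_j)^2.
    Each node i uses only its own gradient and its neighbors' iterates."""
    n = len(x)
    penalty_grad = [0.0] * n
    for i, j in edges:
        penalty_grad[i] += lam * (x[i] - x[j])
        penalty_grad[j] += lam * (x[j] - x[i])
    return [x[i] - lr * (local_grads[i] + penalty_grad[i]) for i in range(n)]

# Path graph 0-1-2; scalar personal models start far apart. With zero
# local gradients, the edge penalty alone pulls neighbors toward the
# consensus value (the average, 5.0 here).
x = [0.0, 5.0, 10.0]
edges = [(0, 1), (1, 2)]
for _ in range(50):
    x = decentralized_step(x, [0.0, 0.0, 0.0], edges)
```

Smaller `lam` keeps the models more personalized; larger `lam` drives them toward full consensus.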

Citations: 4
Twenty years of EUROPT, the EURO working group on Continuous Optimization
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100039
Sonia Cafieri , Tatiana Tchemisova , Gerhard-Wilhelm Weber

EUROPT, the Continuous Optimization working group of EURO, celebrated its 20 years of activity in 2020. We trace the history of this working group by presenting the major milestones that have led to its current structure and organization and its major trademarks, such as the annual EUROPT workshop and the EUROPT Fellow recognition.

Citations: 1
New neighborhoods and an iterated local search algorithm for the generalized traveling salesman problem
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100029
Jeanette Schmidt, Stefan Irnich

For a given graph with a vertex set that is partitioned into clusters, the generalized traveling salesman problem (GTSP) is the problem of finding a cost-minimal cycle that contains exactly one vertex of every cluster. We introduce three new GTSP neighborhoods that allow the simultaneous permutation of the sequence of the clusters and the selection of vertices from each cluster. The three neighborhoods and some known neighborhoods from the literature are combined into an effective iterated local search (ILS) for the GTSP. The ILS performs a straightforward random neighborhood selection within the local search and applies an ordinary record-to-record ILS acceptance criterion. The computational experiments on four symmetric standard GTSP libraries show that, with some purposeful refinements, the ILS can compete with state-of-the-art GTSP algorithms.
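A stripped-down version of such an ILS — random selection between a cluster-order move and a vertex-selection move, with a record-to-record acceptance test — might look like this. The instance, coordinates, and parameters are invented for illustration; this is not the paper's refined implementation:

```python
import random

def tour_cost(tour, pick, pts):
    """Cost of visiting the selected vertex of each cluster in tour order
    (closed tour, Manhattan distances)."""
    chosen = [pts[c][pick[c]] for c in tour]
    return sum(abs(chosen[i][0] - chosen[i - 1][0]) +
               abs(chosen[i][1] - chosen[i - 1][1])
               for i in range(len(chosen)))

def iterated_local_search(pts, iters=300, deviation=1.05, seed=0):
    """Toy ILS for a GTSP instance: two simple moves chosen at random,
    accepted by a record-to-record criterion (candidate must stay within
    a small factor of the best cost found so far)."""
    rng = random.Random(seed)
    m = len(pts)
    tour, pick = list(range(m)), [0] * m
    record = tour_cost(tour, pick, pts)
    best = (tour[:], pick[:], record)
    for _ in range(iters):
        cand_tour, cand_pick = tour[:], pick[:]
        if rng.random() < 0.5:                    # move 1: cluster order
            i, j = rng.sample(range(m), 2)
            cand_tour[i], cand_tour[j] = cand_tour[j], cand_tour[i]
        else:                                     # move 2: vertex selection
            c = rng.randrange(m)
            cand_pick[c] = rng.randrange(len(pts[c]))
        cost = tour_cost(cand_tour, cand_pick, pts)
        if cost < record * deviation:             # record-to-record acceptance
            tour, pick = cand_tour, cand_pick
            if cost < record:
                record = cost
                best = (cand_tour[:], cand_pick[:], cost)
    return best

# Three clusters of two candidate vertices each.
pts = [[(0, 0), (0, 1)], [(5, 0), (4, 0)], [(0, 5), (0, 4)]]
tour, pick, cost = iterated_local_search(pts)
```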

Citations: 1
Chance-constrained optimization under limited distributional information: A review of reformulations based on sampling and distributional robustness
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100030
Simge Küçükyavuz , Ruiwei Jiang

Chance-constrained programming (CCP) is one of the most difficult classes of optimization problems that has attracted the attention of researchers since the 1950s. In this survey, we focus on cases when only limited information on the distribution is available, such as a sample from the distribution, or the moments of the distribution. We first review recent developments in mixed-integer linear formulations of chance-constrained programs that arise from finite discrete distributions (or sample average approximation). We highlight successful reformulations and decomposition techniques that enable the solution of large-scale instances. We then review active research in distributionally robust CCP, which is a framework to address the ambiguity in the distribution of the random data. The focal point of our review is on scalable formulations that can be readily implemented with state-of-the-art optimization software. Furthermore, we highlight the prevalence of CCPs with a review of applications across multiple domains.
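The sample-based view of a chance constraint can be illustrated with a scenario-counting feasibility check: the requirement P(ξ·x ≤ b) ≥ 1−α is replaced by allowing violation in at most ⌊αN⌋ of N drawn scenarios. The data, candidate grid, and objective below are invented; real SAA reformulations use mixed-integer models, as the survey describes:

```python
import random

def empirically_feasible(x, scenarios, budget, alpha):
    """SAA check of P(xi . x <= budget) >= 1 - alpha: the constraint may be
    violated in at most floor(alpha * N) of the N sampled scenarios."""
    violations = sum(1 for xi in scenarios
                     if sum(a * b for a, b in zip(xi, x)) > budget)
    return violations <= int(alpha * len(scenarios))

# Random demand scenarios for two items; among a small candidate grid,
# pick the highest-value decision (value = x1 + 2*x2, illustrative)
# that is empirically feasible at the 90% level.
rng = random.Random(42)
scenarios = [(rng.uniform(0.5, 1.5), rng.uniform(0.5, 1.5)) for _ in range(200)]
candidates = [(x1, x2) for x1 in range(4) for x2 in range(4)]
feasible = [x for x in candidates
            if empirically_feasible(x, scenarios, budget=4.0, alpha=0.1)]
best = max(feasible, key=lambda x: x[0] + 2 * x[1])
```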

Citations: 15
Direct nonlinear acceleration
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100047
Aritra Dutta , El Houcine Bergou , Yunming Xiao , Marco Canini , Peter Richtárik

Optimization acceleration techniques such as momentum play a key role in state-of-the-art machine learning algorithms. Recently, generic vector sequence extrapolation techniques, such as regularized nonlinear acceleration (RNA) of Scieur et al. [22], were proposed and shown to accelerate fixed point iterations. In contrast to RNA which computes extrapolation coefficients by (approximately) setting the gradient of the objective function to zero at the extrapolated point, we propose a more direct approach, which we call direct nonlinear acceleration (DNA). In DNA, we aim to minimize (an approximation of) the function value at the extrapolated point instead. We adopt a regularized approach with regularizers designed to prevent the model from entering a region in which the functional approximation is less precise. While the computational cost of DNA is comparable to that of RNA, our direct approach significantly outperforms RNA on both synthetic and real-world datasets. While the focus of this paper is on convex problems, we obtain very encouraging results in accelerating the training of neural networks.
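The RNA baseline that DNA is contrasted against admits a compact sketch: combine past iterates with affine weights chosen to minimize a regularized residual norm. The sketch follows the general recipe of regularized extrapolation; the regularization value and the linear test iteration are invented for illustration:

```python
import numpy as np

def rna_extrapolate(xs, lam=1e-10):
    """Regularized nonlinear acceleration in the style of Scieur et al.:
    combine iterates x_0..x_{k-1} with weights c (sum c = 1) minimizing
    ||R c||^2 + lam ||c||^2, where row t of R is the residual x_{t+1} - x_t."""
    X = np.array(xs)
    R = X[1:] - X[:-1]                       # residuals, shape (k, d)
    K = R @ R.T + lam * np.eye(len(R))
    ones = np.ones(len(R))
    c = np.linalg.solve(K, ones)
    c /= c.sum()                             # enforce sum(c) = 1
    return c @ X[:-1]

# Linear fixed-point iteration x <- A x + b with a contraction A: the
# extrapolated point is far closer to the fixed point than the iterates.
A = np.array([[0.9, 0.0], [0.0, 0.5]])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(np.eye(2) - A, b)   # true fixed point
xs = [np.zeros(2)]
for _ in range(4):
    xs.append(A @ xs[-1] + b)
x_acc = rna_extrapolate(xs)
```

DNA, as the abstract explains, instead chooses the weights to minimize an approximation of the function value at the extrapolated point.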

Citations: 1
First-Order Methods for Convex Optimization
IF 2.4 Q2 OPERATIONS RESEARCH & MANAGEMENT SCIENCE Pub Date : 2021-01-01 DOI: 10.1016/j.ejco.2021.100015
Pavel Dvurechensky , Shimrit Shtern , Mathias Staudigl

First-order methods for solving convex optimization problems have been at the forefront of mathematical optimization in the last 20 years. The rapid development of this important class of algorithms is motivated by the success stories reported in various applications, including most importantly machine learning, signal processing, imaging and control theory. First-order methods have the potential to provide low accuracy solutions at low computational complexity which makes them an attractive set of tools in large-scale optimization problems. In this survey, we cover a number of key developments in gradient-based optimization methods. This includes non-Euclidean extensions of the classical proximal gradient method, and its accelerated versions. Additionally we survey recent developments within the class of projection-free methods, and proximal versions of primal-dual schemes. We give complete proofs for various key results, and highlight the unifying aspects of several optimization algorithms.
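As a concrete instance of the class surveyed here, the basic proximal gradient method (ISTA) applied to the lasso can be sketched as follows — a gradient step on the smooth least-squares term followed by the soft-thresholding prox of the ℓ1 term. The synthetic data and parameters are illustrative:

```python
import numpy as np

def ista(A, y, lam, step, iters=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)             # gradient of the smooth part
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of lam*||.||_1
    return x

# Small synthetic problem: observations generated from a sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[1, 6]] = [3.0, -2.0]
y = A @ x_true
x_hat = ista(A, y, lam=0.1, step=1.0 / np.linalg.norm(A, 2) ** 2)
```

The step size 1/||A||² is the standard 1/L choice for the smooth part; accelerated variants (FISTA) improve the rate, as discussed in the survey.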

Citations: 20