A compact model for the home healthcare routing and scheduling problem
Pub Date: 2025-01-01 | DOI: 10.1016/j.ejco.2024.100101 | EURO Journal on Computational Optimization 13, Article 100101
Roberto Montemanni, Sara Ceschia, Andrea Schaerf
Home healthcare has become increasingly central in recent decades, owing to the advantages it can bring to both healthcare institutions and patients. Planning activities in this context, however, presents significant challenges related to route planning and the mutual synchronization of caregivers.
In this paper we propose a new compact model for the combined optimization of scheduling (of the activities) and routing (of the caregivers), characterized by fewer variables and constraints than the models previously available in the literature. The new model is solved by a constraint programming solver and compared experimentally with the exact and metaheuristic approaches available in the literature on the common datasets adopted by the community. The results show that the new model provides improved lower bounds for the vast majority of the instances, while at the same time producing high-quality heuristic solutions, comparable to those of tailored metaheuristics, for small- and medium-size instances.
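As a rough illustration of the kind of constraint-programming formulation the abstract refers to, the sketch below models two caregivers, a few activities and one simultaneous-start synchronization constraint with Google OR-Tools CP-SAT. All data, names and the simplified makespan objective are invented for this example; travel times and the full routing component of the paper's model are omitted.

```python
from ortools.sat.python import cp_model

model = cp_model.CpModel()
horizon = 100
durations = {"a": 20, "b": 15, "sync": 30}          # invented activity durations
assignments = {"c1": ["a", "sync"], "c2": ["b", "sync"]}  # caregiver -> activities

starts, ends = {}, {}
for caregiver, acts in assignments.items():
    intervals = []
    for act in acts:
        s = model.NewIntVar(0, horizon, f"start_{caregiver}_{act}")
        e = model.NewIntVar(0, horizon, f"end_{caregiver}_{act}")
        intervals.append(model.NewIntervalVar(s, durations[act], e,
                                              f"iv_{caregiver}_{act}"))
        starts[(caregiver, act)] = s
        ends[(caregiver, act)] = e
    model.AddNoOverlap(intervals)  # each caregiver performs one activity at a time

# Synchronization: both caregivers must start the shared activity together.
model.Add(starts[("c1", "sync")] == starts[("c2", "sync")])

# Minimize the latest completion time as a stand-in objective.
makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, list(ends.values()))
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for key, s in starts.items():
        print(key, solver.Value(s), "->", solver.Value(ends[key]))
```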
{"title":"A compact model for the home healthcare routing and scheduling problem","authors":"Roberto Montemanni , Sara Ceschia , Andrea Schaerf","doi":"10.1016/j.ejco.2024.100101","DOIUrl":"10.1016/j.ejco.2024.100101","url":null,"abstract":"<div><div>Home healthcare has become more and more central in the last decades, due to the advantages it can bring to both healthcare institutions and patients. Planning activities in this context, however, presents significant challenges related to route planning and mutual synchronization of caregivers.</div><div>In this paper we propose a new compact model for the combined optimization of scheduling (of the activities) and routing (of the caregivers) characterized by fewer variables and constraints when compared with the models previously available in the literature. The new model is solved by a constraint programming solver and compared experimentally with the exact and metaheuristic approaches available in the literature on the common datasets adopted by the community. The results show that the new model provides improved lower bounds for the vast majority of the instances, while producing at the same time high quality heuristic solutions, comparable to those of tailored metaheuristics, for small/medium size instances.</div></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"13 ","pages":"Article 100101"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143169821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interior point methods in the year 2025
Pub Date: 2025-01-01 | DOI: 10.1016/j.ejco.2025.100105 | EURO Journal on Computational Optimization 13, Article 100105
Jacek Gondzio
Interior point methods (IPMs) have hugely influenced the field of optimization. Their rapid development was triggered by the seminal 1984 paper of Narendra Karmarkar, which delivered a polynomial-time algorithm for linear programming and suggested that it could be implemented as a very efficient method in practice. Indeed, this was demonstrated within a few years of 1984 and earned IPMs the status of an exceptionally powerful optimization tool. Linear Programming (LP) is at the centre of many operational research techniques, including mixed-integer programming, network optimization and various decomposition techniques. Therefore, any progress in LP has far-reaching consequences. IPMs certainly did not disappoint in this context: they have become a heavily used methodology in modern optimization and operational research. Their accuracy, efficiency and reliability have been particularly appreciated when IPMs are applied to truly large-scale problems which challenge any alternative approaches.
In this survey we will discuss several issues related to interior point methods. We will recall the techniques which provide the building blocks of IPMs, and observe that at least some of them were in fact developed before 1984. We will briefly comment on the worst-case complexity results for different variants of IPMs and then focus on key aspects of their implementation. We will also address some of the most spectacular features of IPMs and discuss their potential advantages when applied in decomposition algorithms, cutting plane schemes and column generation techniques.
{"title":"Interior point methods in the year 2025","authors":"Jacek Gondzio","doi":"10.1016/j.ejco.2025.100105","DOIUrl":"10.1016/j.ejco.2025.100105","url":null,"abstract":"<div><div>Interior point methods (IPMs) have hugely influenced the field of optimization. Their fast development has been triggered by the seminal paper of Narendra Karmarkar published in 1984 which delivered a polynomial algorithm for linear programming and suggested that it might be implemented into a very efficient method in practice. Indeed, this has been demonstrated within a few years after 1984 and has gained IPMs a status of exceptionally powerful optimization tool. Linear Programming (LP) is at the centre of many operational research techniques including mixed-integer programming, network optimization and various decomposition techniques. Therefore, any progress in LP has far-reaching consequences. IPMs certainly did not disappoint in this context: they have become a heavily used methodology in modern optimization and operational research. Their accuracy, efficiency and reliability have been particularly appreciated when IPMs are applied to truly large scale problems which challenge any alternative approaches.</div><div>In this survey we will discuss several issues related to interior point methods. We will recall techniques which provide the building blocks of IPMs, and observe that actually at least some of them have been developed before 1984. We will briefly comment on the worst-case complexity results for different variants of IPMs and then focus on key aspects of their implementation. We will also address some of the most spectacular features of IPMs and discuss their potential advantages when applied in decomposition algorithms, cutting planes scheme and column generation technique.</div></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"13 ","pages":"Article 100105"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143428084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speed planning by minimizing travel time and energy consumption
Pub Date: 2025-01-01 | DOI: 10.1016/j.ejco.2025.100112 | EURO Journal on Computational Optimization 13, Article 100112
Stefano Ardizzoni, Luca Consolini, Mattia Laurini, Marco Locatelli
In this paper we address the speed planning problem for a vehicle over an assigned path, with the aim of minimizing a weighted sum of travel time and energy consumption under suitable constraints (maximum allowed speed, maximum traction or braking force, maximum power consumption). The resulting mathematical model is a non-convex optimization problem. We prove that, under some mild assumptions, a convex reformulation of the non-convex problem is exact. In particular, the convex reformulation is a Second Order Cone Programming (SOCP) problem, for which efficient solvers exist. Through numerical experiments we confirm that the convex reformulation can be solved very efficiently and, moreover, we provide the Pareto front of the trade-off between the two objectives (travel time and energy consumption).
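The convex structure such formulations exploit can be sketched with the standard change of variables b(s) = v(s)^2 along a discretized path, under which travel time becomes a convex function of b. The cvxpy model below is only a toy illustration with assumed data, a simplified dynamics constraint and a crude energy proxy; it is not the paper's exact SOCP formulation.

```python
import cvxpy as cp

# Discretized path: n segments of length ds [m]; all numbers are invented.
n, ds, m = 50, 2.0, 1000.0        # segments, segment length, vehicle mass [kg]
vmax, umax = 20.0, 3000.0         # speed limit [m/s], max traction/braking force [N]
alpha = 1e-4                      # weight of energy vs. travel time

b = cp.Variable(n + 1)            # squared speed at the grid points
u = cp.Variable(n)                # traction (>0) / braking (<0) force per segment

travel_time = ds * cp.sum(cp.power(b[:-1], -0.5))   # sum of ds / v, convex in b
energy = ds * cp.sum(cp.pos(u))                     # only positive traction costs energy

constraints = [
    b >= 0, b <= vmax ** 2,
    b[0] == 1.0, b[-1] == 1.0,                      # enter and leave the path slowly
    cp.abs(u) <= umax,
    b[1:] - b[:-1] == (2 * ds / m) * u,             # longitudinal dynamics, drag neglected
]
prob = cp.Problem(cp.Minimize(travel_time + alpha * energy), constraints)
prob.solve()
print("objective:", prob.value)
```

Sweeping alpha and re-solving traces out (an approximation of) the time/energy Pareto front mentioned in the abstract.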
{"title":"Speed planning by minimizing travel time and energy consumption","authors":"Stefano Ardizzoni, Luca Consolini, Mattia Laurini, Marco Locatelli","doi":"10.1016/j.ejco.2025.100112","DOIUrl":"10.1016/j.ejco.2025.100112","url":null,"abstract":"<div><div>In this paper we address the speed planning problem for a vehicle over an assigned path with the aim of minimizing a weighted sum of travel time and energy consumption under suitable constraints (maximum allowed speed, maximum traction or braking force, maximum power consumption). The resulting mathematical model is a non-convex optimization problem. We prove that, under some mild assumptions, a convex reformulation of the non-convex problem is exact. In particular, the convex reformulation is a Second Order Cone Programming (SOCP) problem, for which efficient solvers exist. Through the numerical experiments we confirm that the convex relaxation can be solved very efficiently and, moreover, we also provide the Pareto front of the trade-off between the two objectives (travel time and energy consumption).</div></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"13 ","pages":"Article 100112"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144686427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient use of optimality conditions in Interval Branch and Bound methods
Pub Date: 2025-01-01 | DOI: 10.1016/j.ejco.2025.100108 | EURO Journal on Computational Optimization 13, Article 100108
Mihály Gencsi, Boglárka G.-Tóth
The Interval Branch and Bound (IBB) method is a widely used approach for solving nonlinear programming problems where a rigorous solution is required. The method uses Interval Arithmetic (IA) to handle rounding errors in calculations. A wide range of IBB variations exists in the literature. However, few IBB implementations use the Karush-Kuhn-Tucker (KKT) or the Fritz-John (FJ) optimality conditions to eliminate non-optimal boxes. Applying the FJ conditions requires solving a system of interval linear equations, which is often challenging due to the overestimation of the boxes. This study focuses on the geometric perspective of the FJ optimality conditions. A preliminary test is introduced, namely the Geometrical Test, which attempts to decide when the optimality conditions cannot hold, or whether it is worthwhile to compute the Fritz-John Test. Furthermore, a test case generator is presented that transforms unconstrained problems into constrained test cases by setting a given number of active and inactive constraints at a global optimizer. The efficiency of the Geometrical Test was assessed through computational experiments on the generated benchmark. Six variations of the IBB were compared, with or without the FJ condition system and the Geometrical Test. The best methods for solving the 272 generated test cases use, in most cases, the designed Geometrical Test with the Lagrange estimator and the Newton step on the normalized interval FJ conditions.
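For readers unfamiliar with box-elimination tests in IBB methods, the snippet below sketches a classical monotonicity test based on an interval enclosure of the gradient; it is a textbook building block given for context, not the Geometrical Test or the Fritz-John machinery proposed in the paper. The example function and intervals are invented, and outward rounding is omitted.

```python
class Interval:
    """Very small interval arithmetic helper (no outward rounding)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def contains_zero(self):
        return self.lo <= 0.0 <= self.hi

def monotonicity_test(box):
    """Classical IBB monotonicity test for f(x, y) = x*y + x^2 on an interior box:
    if an interval enclosure of some partial derivative excludes zero, the box
    cannot contain an unconstrained stationary point and can be discarded."""
    x, y = box
    dfdx = y + Interval(2, 2) * x    # enclosure of df/dx = y + 2x
    dfdy = x                         # enclosure of df/dy = x
    return not (dfdx.contains_zero() and dfdy.contains_zero())

print(monotonicity_test((Interval(1, 2), Interval(0, 1))))    # True: discard the box
print(monotonicity_test((Interval(-1, 1), Interval(-3, 1))))  # False: keep and branch
```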
{"title":"Efficient use of optimality conditions in Interval Branch and Bound methods","authors":"Mihály Gencsi, Boglárka G.-Tóth","doi":"10.1016/j.ejco.2025.100108","DOIUrl":"10.1016/j.ejco.2025.100108","url":null,"abstract":"<div><div>The Interval Branch and Bound (IBB) method is a widely used approach for solving nonlinear programming problems where a rigorous solution is required. The method uses Interval Arithmetic (IA) to handle rounding errors in calculations. In the literature, a wide range of variations of IBB exists. However, few IBB implementations use the Karush-Kuhn-Tucker (KKT) or the Fritz-John (FJ) optimality conditions to eliminate non-optimal boxes. The application of the FJ conditions implies to solve a system of interval linear equations, which is often challenging due to overestimation of the boxes. This study focuses on the geometric perspective of the FJ optimality conditions. A preliminary test is introduced, namely the Geometrical Test, which tries to decide when the optimality conditions cannot hold or whether it is convenient to compute the Fritz-John Test. Furthermore, a test case generator is presented that transforms unconstrained problems into constrained test cases by setting a given number of active and inactive constraints at a global optimizer. The efficiency of the Geometrical Test was considered through computational experiments on the generated benchmark. Six variations of the IBB were compared, with or without the FJ condition system and Geometrical Test. The best methods for solving the 272 generated test cases use the designed Geometrical Test with the Lagrange estimator and the Newton step on the normalized interval FJ conditions in most cases.</div></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"13 ","pages":"Article 100108"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144068274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A combined linear and nonlinear presolve for nonlinear optimization
Pub Date: 2025-01-01 | DOI: 10.1016/j.ejco.2025.100119 | EURO Journal on Computational Optimization 13, Article 100119
Yi Zhang, Nikolaos V. Sahinidis
Presolve techniques have been experimentally shown to significantly accelerate the performance of optimization solvers, achieving speedups of several orders of magnitude in widely used benchmarks. Building on the success of these techniques in linear and mixed-integer linear optimization problems, we introduce novel presolve methods specifically designed for nonlinear optimization. These methods aim to reduce model size and nonlinearity while preserving convexity and ensuring global optimality. We propose a combined linear and nonlinear presolve approach that integrates classical linear presolve strategies with novel methods for reformulating nonlinear expressions and simplifying models. For instance, monotonicity arguments are used to fix nonlinear variables, and linear constraints are exploited to tighten bilinear products. Computational experiments on diverse nonlinear benchmarks and continuous relaxations of discrete nonlinear problems demonstrate the efficacy of our approach. The results show that the proposed methods significantly enhance the performance of one global and four local nonlinear optimization solvers.
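A tiny example of the flavor of such reductions: bounds on a bilinear product w = x*y can be inferred from the variable bounds, and a linear constraint can first be used to tighten the bounds on x. The helper functions below are illustrative assumptions, not the specific presolve rules of the paper.

```python
def bilinear_bounds(lx, ux, ly, uy):
    """Interval bounds on w = x*y implied by the bounds of x and y
    (a standard presolve-style inference)."""
    corners = [lx * ly, lx * uy, ux * ly, ux * uy]
    return min(corners), max(corners)

def tighten_upper_bound(lx, ux, a, b):
    """Tighten the bounds of x using a linear constraint a*x <= b with a > 0."""
    return lx, min(ux, b / a)

# Example: x in [1, 4], y in [-2, 3], and the linear constraint 2*x <= 6.
lx, ux = tighten_upper_bound(1.0, 4.0, 2.0, 6.0)   # x tightened to [1, 3]
lw, uw = bilinear_bounds(lx, ux, -2.0, 3.0)        # w = x*y in [-6, 9]
print((lx, ux), (lw, uw))
```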
{"title":"A combined linear and nonlinear presolve for nonlinear optimization","authors":"Yi Zhang , Nikolaos V. Sahinidis","doi":"10.1016/j.ejco.2025.100119","DOIUrl":"10.1016/j.ejco.2025.100119","url":null,"abstract":"<div><div>Presolve techniques have been experimentally shown to significantly accelerate the performance of optimization solvers, achieving speedups of several orders of magnitude in widely used benchmarks. Building on the success of these techniques in linear and mixed-integer linear optimization problems, we introduce novel presolve methods specifically designed for nonlinear optimization. These methods aim to reduce model size and nonlinearity while preserving convexity and ensuring global optimality. We propose a combined linear and nonlinear presolve approach that integrates classical linear presolve strategies with novel methods for reformulating nonlinear expressions and simplifying models. For instance, monotonicity arguments are used to fix nonlinear variables, and linear constraints are exploited to tighten bilinear products. Computational experiments on diverse nonlinear benchmarks and continuous relaxations of discrete nonlinear problems demonstrate the efficacy of our approach. The results show that the proposed methods significantly enhance the performance of one global and four local nonlinear optimization solvers.</div></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"13 ","pages":"Article 100119"},"PeriodicalIF":1.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145361992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On two vectorization schemes for set-valued optimization
Pub Date: 2025-01-01 | DOI: 10.1016/j.ejco.2025.100120 | EURO Journal on Computational Optimization 13, Article 100120
Gabriele Eichfelder, Tobias Gerlach, Ernest Quintana, Stefan Rocktäschel
In this paper, we investigate two known solution approaches for set-valued optimization problems, both of which are based on so-called vectorization strategies. These strategies consist of deriving a parametric family of multi-objective optimization problems whose optimal solution sets approximate, in a certain sense, those of the original set-valued problem with arbitrary accuracy. These approaches can therefore serve as a basis for the numerical solution of set-valued optimization problems using established solution algorithms from multi-objective optimization. We show that many properties already obtained for one of the two vectorization schemes also hold, in a similar form, for the other. In particular, it turns out that under certain assumptions there exist problem classes for which, under both vectorization schemes, the original set-valued problems are even equivalent to the corresponding multi-objective replacement problems. This property is fulfilled, for example, for set-valued optimization problems with a finite feasible set, a polytope-valued objective map, or a convex graph. This was already known for one of the two vectorization schemes and could now also be shown for the other.
{"title":"On two vectorization schemes for set-valued optimization","authors":"Gabriele Eichfelder, Tobias Gerlach, Ernest Quintana, Stefan Rocktäschel","doi":"10.1016/j.ejco.2025.100120","DOIUrl":"10.1016/j.ejco.2025.100120","url":null,"abstract":"<div><div>In this paper, we investigate two known solution approaches for set-valued optimization problems, both of which are based on so-called vectorization strategies. These strategies consist of deriving a parametric family of multi-objective optimization problems whose optimal solution sets approximate those of the original set-valued problem with arbitrary accuracy in a certain sense. Thus, these approaches can serve as a basis for the numerical solution of set-valued optimization problems using established solution algorithms from multi-objective optimization. We show that many properties that have already been obtained for one of the two vectorization schemes also hold for the other similarly. Thereby, it turns out that under certain assumptions there exist problem classes for both vectorization schemes in which the set-valued initial problems are even equivalent to the corresponding multi-objective replacement problems. This property is fulfilled, for example, for set-valued optimization problems with a finite feasible set, a polytope-valued objective map, or a convex graph. This was already known for one of the two vectorization schemes, and could now also be shown for the other scheme.</div></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"13 ","pages":"Article 100120"},"PeriodicalIF":1.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145415234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust personalized pricing under uncertainty of purchase probabilities
Pub Date: 2025-01-01 | DOI: 10.1016/j.ejco.2025.100114 | EURO Journal on Computational Optimization 13, Article 100114
Shunnosuke Ikeda, Naoki Nishimura, Noriyoshi Sukegawa, Yuichi Takano
This paper is concerned with personalized pricing models aimed at maximizing the expected revenues or profits for a single item. While it is essential for personalized pricing to predict the purchase probabilities for each consumer, these predicted values are inherently subject to unavoidable prediction errors that can negatively impact the realized revenues and profits. To resolve this challenge, we focus on robust optimization techniques that yield reliable solutions to optimization problems under uncertainty. Specifically, we propose a robust optimization model for personalized pricing that accounts for the uncertainty of predicted purchase probabilities. This model can be formulated as a mixed-integer linear optimization problem, which can be solved exactly using mathematical optimization solvers. We also develop a Lagrangian decomposition algorithm combined with the golden section search to efficiently find high-quality solutions to large-scale problems. Experimental results demonstrate the effectiveness of our robust optimization model and highlight the utility of our Lagrangian decomposition algorithm in terms of both computational efficiency and solution quality.
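Golden section search, the one-dimensional component mentioned in the abstract, can be sketched in a few lines. The toy revenue curve in the example is an assumption of this sketch and is unrelated to the paper's data or its Lagrangian decomposition.

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden section search."""
    invphi = (math.sqrt(5) - 1) / 2            # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                        # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                        # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

# Example: a toy single-item expected revenue, price * purchase probability,
# negated so that maximizing revenue becomes a minimization problem.
revenue = lambda p: -(p * math.exp(-0.5 * p))
print(golden_section_search(revenue, 0.0, 10.0))   # optimal price near 2
```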
{"title":"Robust personalized pricing under uncertainty of purchase probabilities","authors":"Shunnosuke Ikeda , Naoki Nishimura , Noriyoshi Sukegawa , Yuichi Takano","doi":"10.1016/j.ejco.2025.100114","DOIUrl":"10.1016/j.ejco.2025.100114","url":null,"abstract":"<div><div>This paper is concerned with personalized pricing models aimed at maximizing the expected revenues or profits for a single item. While it is essential for personalized pricing to predict the purchase probabilities for each consumer, these predicted values are inherently subject to unavoidable prediction errors that can negatively impact the realized revenues and profits. To resolve this challenge, we focus on robust optimization techniques that yield reliable solutions to optimization problems under uncertainty. Specifically, we propose a robust optimization model for personalized pricing that accounts for the uncertainty of predicted purchase probabilities. This model can be formulated as a mixed-integer linear optimization problem, which can be solved exactly using mathematical optimization solvers. We also develop a Lagrangian decomposition algorithm combined with the golden section search to efficiently find high-quality solutions to large-scale problems. Experimental results demonstrate the effectiveness of our robust optimization model and highlight the utility of our Lagrangian decomposition algorithm in terms of both computational efficiency and solution quality.</div></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"13 ","pages":"Article 100114"},"PeriodicalIF":1.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144864734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating the Monte-Carlo Tree Search approach for the job shop scheduling problem
Pub Date: 2025-01-01 | DOI: 10.1016/j.ejco.2025.100118 | EURO Journal on Computational Optimization 13, Article 100118
Laurie Boveroux, Damien Ernst, Quentin Louveaux
The Job Shop Scheduling Problem (JSSP) is a well-known optimization problem in manufacturing, where the goal is to determine the optimal sequence of jobs across different machines to minimize a given objective. In this work, we focus on minimizing the weighted sum of job completion times. We explore the potential of Monte Carlo Tree Search (MCTS), a heuristic-based reinforcement learning technique, to solve large-scale JSSPs, especially those with recirculation. We propose several Markov Decision Process (MDP) formulations to model the JSSP for the MCTS algorithm. In addition, we introduce a new synthetic benchmark derived from real manufacturing data, which captures the computational burden of large, non-rectangular instances often encountered in practice. Our experimental results show that MCTS effectively produces good-quality solutions for large-scale JSSP instances, outperforming our constraint programming approach.
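To make the MCTS ingredients (UCB1 selection, expansion, random rollout, backpropagation) concrete, the following self-contained sketch applies them to a deliberately simplified single-machine version of the weighted-completion-time objective. The instance, the state encoding and all parameters are assumptions of this sketch and differ from the paper's JSSP formulations.

```python
import math, random

# Toy data: (processing_time, weight) per job on a single machine.
JOBS = [(3, 2), (1, 5), (4, 1), (2, 4)]

def cost(sequence):
    """Weighted sum of completion times of a full job sequence."""
    t = total = 0
    for j in sequence:
        p, w = JOBS[j]
        t += p
        total += w * t
    return total

class Node:
    def __init__(self, scheduled, parent=None):
        self.scheduled = scheduled      # tuple of jobs already sequenced
        self.parent = parent
        self.children = {}              # job index -> child Node
        self.visits = 0
        self.value = 0.0                # running mean of rewards (= -cost)

    def untried(self):
        return [j for j in range(len(JOBS))
                if j not in self.scheduled and j not in self.children]

def ucb1(parent, child, c=50.0):
    # The exploration constant c is scaled roughly to this instance's rewards.
    return child.value + c * math.sqrt(math.log(parent.visits) / child.visits)

def mcts(iterations=2000):
    root = Node(())
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes by UCB1.
        while not node.untried() and node.children:
            node = max(node.children.values(), key=lambda ch: ucb1(node, ch))
        # 2. Expansion: add one new child if possible.
        if node.untried():
            j = random.choice(node.untried())
            node.children[j] = Node(node.scheduled + (j,), parent=node)
            node = node.children[j]
        # 3. Simulation: complete the schedule with a random rollout.
        remaining = [j for j in range(len(JOBS)) if j not in node.scheduled]
        random.shuffle(remaining)
        reward = -cost(list(node.scheduled) + remaining)
        # 4. Backpropagation: update the running means along the path.
        while node is not None:
            node.visits += 1
            node.value += (reward - node.value) / node.visits
            node = node.parent
    # Extract a schedule by following the most visited children, then append
    # any jobs the tree did not reach.
    seq, node = [], root
    while node.children:
        j, node = max(node.children.items(), key=lambda kv: kv[1].visits)
        seq.append(j)
    seq += [j for j in range(len(JOBS)) if j not in seq]
    return seq, cost(seq)

print(mcts())   # typically ([1, 3, 0, 2], 39) for this toy instance
```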
{"title":"Investigating the Monte-Carlo Tree Search approach for the job shop scheduling problem","authors":"Laurie Boveroux, Damien Ernst, Quentin Louveaux","doi":"10.1016/j.ejco.2025.100118","DOIUrl":"10.1016/j.ejco.2025.100118","url":null,"abstract":"<div><div>The Job Shop Scheduling Problem (JSSP) is a well-known optimization problem in manufacturing, where the goal is to determine the optimal sequence of jobs across different machines to minimize a given objective. In this work, we focus on minimizing the weighted sum of job completion times. We explore the potential of Monte Carlo Tree Search (MCTS), a heuristic-based reinforcement learning technique, to solve large-scale JSSPs, especially those with recirculation. We propose several Markov Decision Process (MDP) formulations to model the JSSP for the MCTS algorithm. In addition, we introduce a new synthetic benchmark derived from real manufacturing data, which captures the computational burden of large, non-rectangular instances often encountered in practice. Our experimental results show that MCTS effectively produces good-quality solutions for large-scale JSSP instances, outperforming our constraint programming approach.</div></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"13 ","pages":"Article 100118"},"PeriodicalIF":1.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145319640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A diving heuristic for mixed-integer problems with unbounded semi-continuous variables
Pub Date: 2025-01-01 | DOI: 10.1016/j.ejco.2025.100107 | EURO Journal on Computational Optimization 13, Article 100107
Katrin Halbig, Alexander Hoen, Ambros Gleixner, Jakob Witzig, Dieter Weninger
Semi-continuous decision variables arise naturally in many real-world applications. They are defined to take either the value zero or any value within a specified range, and they occur mainly to prevent small nonzero values in the solution. One particular challenge that can come with semi-continuous variables in practical models is that their upper bound may be large or even infinite. In this article, we briefly discuss these challenges and present a new diving heuristic tailored to mixed-integer optimization problems with general semi-continuous variables. The heuristic is designed to work independently of whether the semi-continuous variables are bounded from above, and thus circumvents the specific difficulties that come with unbounded semi-continuous variables. We conduct extensive computational experiments on three different test sets, integrating the heuristic into an open-source MIP solver. The results indicate that this heuristic is a successful tool for finding high-quality solutions in negligible time. At the root node, the primal gap is reduced on average by 5% to 21%, and, considering the overall performance improvement, the primal integral is reduced by 2% to 17% on average.
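A diving heuristic repeatedly rounds LP-relaxation values that violate integrality or, here, the semi-continuous domain, and then resolves the LP. The function below sketches one plausible rounding rule for a semi-continuous variable with domain {0} ∪ [l, u]; it is a generic illustration, not the specific rule developed in the paper, and it deliberately never references the (possibly infinite) upper bound.

```python
def dive_semicontinuous(value, lower, tol=1e-6):
    """Choose a diving decision for a semi-continuous variable with domain
    {0} U [lower, upper], given its current LP-relaxation value."""
    if value <= tol:
        return ("fix_to_zero", 0.0)
    if value >= lower:
        return ("keep_in_range", value)        # already consistent with the domain
    # Value falls in the forbidden gap (0, lower): round to the closer side.
    if value < lower / 2:
        return ("fix_to_zero", 0.0)
    return ("push_to_lower_bound", lower)

print(dive_semicontinuous(0.3, lower=1.0))     # -> ('fix_to_zero', 0.0)
print(dive_semicontinuous(0.8, lower=1.0))     # -> ('push_to_lower_bound', 1.0)
```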
{"title":"A diving heuristic for mixed-integer problems with unbounded semi-continuous variables","authors":"Katrin Halbig , Alexander Hoen , Ambros Gleixner , Jakob Witzig , Dieter Weninger","doi":"10.1016/j.ejco.2025.100107","DOIUrl":"10.1016/j.ejco.2025.100107","url":null,"abstract":"<div><div>Semi-continuous decision variables arise naturally in many real-world applications. They are defined to take either value zero or any value within a specified range, and occur mainly to prevent small nonzero values in the solution. One particular challenge that can come with semi-continuous variables in practical models is that their upper bound may be large or even infinite. In this article, we briefly discuss these challenges, and present a new diving heuristic tailored for mixed-integer optimization problems with general semi-continuous variables. The heuristic is designed to work independently of whether the semi-continuous variables are bounded from above, and thus circumvents the specific difficulties that come with unbounded semi-continuous variables. We conduct extensive computational experiments on three different test sets, integrating the heuristic in an open-source MIP solver. The results indicate that this heuristic is a successful tool for finding high-quality solutions in negligible time. At the root node the primal gap is reduced by an average of 5% up to 21%, and considering the overall performance improvement, the primal integral is reduced by 2% to 17% on average.</div></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"13 ","pages":"Article 100107"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143879297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unsupervised learning with GNNs for QUBO-based combinatorial optimization
Pub Date: 2025-01-01 | DOI: 10.1016/j.ejco.2025.100116 | EURO Journal on Computational Optimization 13, Article 100116
Olga Krylova, Frank Phillipson
Recent advances in deep learning techniques raise the question of whether they can facilitate the task of finding good-quality solutions to combinatorial optimization (CO) problems in a practically relevant solution time. Specifically, it is of practical relevance to determine to what extent graph neural networks (GNNs) can be applied to CO problems that can be formulated as QUBOs and thus naturally interpreted as graph problems. In this research, a GNN solver is applied to two classical CO problems, the maximum cut problem and the maximum independent set problem, in an unsupervised learning setting. We show that while the GNN solver consistently finds good-quality solutions for the Max Cut problem irrespective of the size and density of the graph, solving MIS problems is challenging for all but very sparse graphs. We further show how this problem can be addressed by embedding transfer between these two problems, and we compare two GNN architectures, GCN and GraphSAGE, with respect to their robustness to graph density and symmetry. Finally, we demonstrate that replacing the widely used Adam optimizer with the Rprop optimizer can lead to a considerable reduction in solution times.
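The unsupervised training signal in this line of work is typically the relaxed QUBO objective itself. The PyTorch sketch below builds the standard Max Cut QUBO matrix for a toy 4-node cycle and minimizes p^T Q p over node probabilities, optimized with Rprop as the abstract advocates; the GNN is replaced by free logits, and all details are assumptions of this sketch rather than the authors' architecture.

```python
import torch

def maxcut_qubo(adjacency):
    """QUBO matrix Q for Max Cut on a weighted adjacency matrix W, so that
    minimizing x^T Q x over x in {0,1}^n yields a maximum cut
    (Q_ii = -sum_j W_ij on the diagonal, Q_ij = W_ij off the diagonal)."""
    degrees = adjacency.sum(dim=1)
    return adjacency - torch.diag(degrees)

def unsupervised_qubo_loss(probs, Q):
    """Relaxed QUBO objective p^T Q p, used as a differentiable training loss."""
    return probs @ Q @ probs

# Toy 4-node cycle graph; optimize node probabilities directly as a stand-in
# for the outputs of a GNN.
W = torch.tensor([[0., 1., 0., 1.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [1., 0., 1., 0.]])
Q = maxcut_qubo(W)
logits = torch.nn.Parameter(0.1 * torch.randn(4))   # small random init breaks symmetry
opt = torch.optim.Rprop([logits], lr=0.01)
for _ in range(200):
    opt.zero_grad()
    loss = unsupervised_qubo_loss(torch.sigmoid(logits), Q)
    loss.backward()
    opt.step()
print((torch.sigmoid(logits) > 0.5).int())   # typically an alternating (maximum) cut
```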
{"title":"Unsupervised learning with GNNs for QUBO-based combinatorial optimization","authors":"Olga Krylova , Frank Phillipson","doi":"10.1016/j.ejco.2025.100116","DOIUrl":"10.1016/j.ejco.2025.100116","url":null,"abstract":"<div><div>Recent advances in deep learning techniques pose a question of whether they can facilitate the task of finding good quality solutions to combinatorial optimization (CO) problems in a practically relevant solution time. Specifically, it is of practical relevance to determine to what extent graph neural networks (GNNs) can be applied to CO problems that can be formulated as QUBOs and thus be naturally interpreted as graph problems. In this research a GNN solver is applied to two classical CO problems–the maximum cut problem and maximum independent set problem–in an unsupervised learning setting. We show that while GNN solver consistently finds good quality solutions for the Max Cut problem irrespective of the size and density of the graph, solving MIS problems is challenging for all but very sparse graphs. We further show how this problem can be addressed by embedding transfer between these two problems and compare two different GNN architectures–GCN and GraphSAGE on their robustness with respect to graph density and symmetry. Finally we demonstrate that changing the widely used Adam optimizer to Rprop optimizer can lead to considerable reduction in solution times.</div></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"13 ","pages":"Article 100116"},"PeriodicalIF":1.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145265103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}