LP-based approximation for uniform capacitated facility location problem
Pub Date: 2022-08-01; DOI: 10.1016/j.disopt.2022.100723
Sapna Grover , Neelima Gupta , Samir Khuller
In this paper, we study the uniform hard capacitated facility location problem. The standard LP for the problem is known to have an unbounded integrality gap. We present a constant factor approximation obtained by rounding a solution to the standard LP, with a slight (1+ε) violation of the capacities.
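For context, the standard LP relaxation referred to here is, in the usual notation, of roughly the following form (a sketch only; the assumed data are facilities i with opening cost f_i and a common capacity U, clients j with demand d_j, and connection costs c_{ij}):

\[
\begin{aligned}
\min\quad & \sum_{i} f_i\, y_i \;+\; \sum_{i}\sum_{j} d_j\, c_{ij}\, x_{ij} \\
\text{s.t.}\quad & \sum_{i} x_{ij} = 1 \quad \forall j, \\
& \sum_{j} d_j\, x_{ij} \le U\, y_i \quad \forall i, \\
& x_{ij} \le y_i, \qquad 0 \le x_{ij},\, y_i \le 1 \quad \forall i, j.
\end{aligned}
\]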
Our result shows that the standard LP is not as weak as its unbounded integrality gap suggests.
Our algorithm is simple and more efficient than the true approximation based on a strengthened LP, which relies on the inefficient ellipsoid method with a separation oracle. True approximations are also known for the problem via local search techniques, but these suffer from convergence issues. Moreover, solutions based on the standard LP are easier to integrate with other LP-based algorithms.
The result is also extended to give the first approximation for the uniform hard capacitated k-facility location problem that violates the capacities by a factor of (1+ε), breaking the barrier of 2 in capacity violation. The result violates the cardinality by a factor of 2/(1+ε).
{"title":"LP-based approximation for uniform capacitated facility location problem","authors":"Sapna Grover , Neelima Gupta , Samir Khuller","doi":"10.1016/j.disopt.2022.100723","DOIUrl":"10.1016/j.disopt.2022.100723","url":null,"abstract":"<div><p><span><span>In this paper, we study uniform hard capacitated facility location problem. The standard LP for the problem is known to have an unbounded integrality gap. We present constant factor approximation by </span>rounding a solution to the standard LP with a slight </span><span><math><mrow><mo>(</mo><mn>1</mn><mo>+</mo><mi>ϵ</mi><mo>)</mo></mrow></math></span> violation in the capacities.</p><p>Our result shows that the standard LP is not too bad.</p><p>Our algorithm is simple and more efficient as compared to the strengthened LP-based true approximation that uses the inefficient ellipsoid method with a separation oracle. True approximations are also known for the problem using local search techniques that suffer from the problem of convergence. Moreover, solutions based on standard LP are easier to integrate with other LP-based algorithms.</p><p>The result is also extended to give the first approximation for uniform hard capacitated <span><math><mi>k</mi></math></span>-facility location problem violating the capacities by a factor of <span><math><mrow><mo>(</mo><mn>1</mn><mo>+</mo><mi>ϵ</mi><mo>)</mo></mrow></math></span> and breaking the barrier of 2 in capacity violation. The result violates the cardinality by a factor of <span><math><mfrac><mrow><mn>2</mn></mrow><mrow><mn>1</mn><mo>+</mo><mi>ϵ</mi></mrow></mfrac></math></span>.</p></div>","PeriodicalId":50571,"journal":{"name":"Discrete Optimization","volume":"45 ","pages":"Article 100723"},"PeriodicalIF":1.1,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131270426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the length of L-Grundy sequences
Pub Date: 2022-08-01; DOI: 10.1016/j.disopt.2022.100725
Rebekah Herrman , Stephen G.Z. Smith
An L-sequence of a graph G is a sequence of distinct vertices S = (v_1, …, v_k) such that N[v_i] ∖ (N(v_1) ∪ ⋯ ∪ N(v_{i−1})) ≠ ∅ for each i. The length of a longest L-sequence is called the L-Grundy domination number, denoted γ_gr^L(G). In this paper, we prove γ_gr^L(G) ≤ n(G) − δ(G) + 1, which was conjectured by Brešar, Gologranc, Henning, and Kos. We also prove some initial results about characteristics of n-vertex graphs satisfying γ_gr^L(G) = n.
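To make the definition concrete, the following brute-force Python sketch (hypothetical helper names, only practical for tiny graphs) checks the L-sequence condition and computes γ_gr^L by exhaustive search:

from itertools import permutations

def is_L_sequence(adj, seq):
    # N[v_i] \ (N(v_1) ∪ ... ∪ N(v_{i-1})) must be non-empty for every i
    covered_open = set()                      # union of open neighborhoods seen so far
    for v in seq:
        closed = {v} | adj[v]                 # closed neighborhood N[v]
        if not (closed - covered_open):
            return False
        covered_open |= adj[v]                # open neighborhood N(v)
    return True

def l_grundy_number(adj):
    # length of a longest L-sequence, by exhaustive search
    vertices = list(adj)
    for k in range(len(vertices), 0, -1):
        if any(is_L_sequence(adj, p) for p in permutations(vertices, k)):
            return k
    return 0

# Path P4 (0-1-2-3): n = 4 and δ = 1, so the proved bound allows at most 4.
P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(l_grundy_number(P4))   # 4, attained e.g. by the sequence (0, 3, 1, 2)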
{"title":"On the length of L-Grundy sequences","authors":"Rebekah Herrman , Stephen G.Z. Smith","doi":"10.1016/j.disopt.2022.100725","DOIUrl":"10.1016/j.disopt.2022.100725","url":null,"abstract":"<div><p>An L-sequence of a graph <span><math><mi>G</mi></math></span> is a sequence of distinct vertices <span><math><mrow><mi>S</mi><mo>=</mo><mrow><mo>(</mo><msub><mrow><mi>v</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>,</mo><mo>…</mo><mo>,</mo><msub><mrow><mi>v</mi></mrow><mrow><mi>k</mi></mrow></msub><mo>)</mo></mrow></mrow></math></span> such that <span><math><mrow><mi>N</mi><mrow><mo>[</mo><msub><mrow><mi>v</mi></mrow><mrow><mi>i</mi></mrow></msub><mo>]</mo></mrow><mo>∖</mo><msubsup><mrow><mo>∪</mo></mrow><mrow><mi>j</mi><mo>=</mo><mn>1</mn></mrow><mrow><mi>i</mi><mo>−</mo><mn>1</mn></mrow></msubsup><mi>N</mi><mrow><mo>(</mo><msub><mrow><mi>v</mi></mrow><mrow><mi>j</mi></mrow></msub><mo>)</mo></mrow><mo>≠</mo><mo>0̸</mo></mrow></math></span>. The length of a longest L-sequence is called the L-Grundy domination number, denoted <span><math><mrow><msubsup><mrow><mi>γ</mi></mrow><mrow><mi>g</mi><mi>r</mi></mrow><mrow><mi>L</mi></mrow></msubsup><mrow><mo>(</mo><mi>G</mi><mo>)</mo></mrow></mrow></math></span>. In this paper, we prove <span><math><mrow><msubsup><mrow><mi>γ</mi></mrow><mrow><mi>g</mi><mi>r</mi></mrow><mrow><mi>L</mi></mrow></msubsup><mrow><mo>(</mo><mi>G</mi><mo>)</mo></mrow><mo>≤</mo><mi>n</mi><mrow><mo>(</mo><mi>G</mi><mo>)</mo></mrow><mo>−</mo><mi>δ</mi><mrow><mo>(</mo><mi>G</mi><mo>)</mo></mrow><mo>+</mo><mn>1</mn></mrow></math></span>, which was conjectured by Brešar, Gologranc, Henning, and Kos. We also prove some initial results about characteristics of <span><math><mi>n</mi></math></span>-vertex graphs satisfying <span><math><mrow><msubsup><mrow><mi>γ</mi></mrow><mrow><mi>g</mi><mi>r</mi></mrow><mrow><mi>L</mi></mrow></msubsup><mrow><mo>(</mo><mi>G</mi><mo>)</mo></mrow><mo>=</mo><mi>n</mi></mrow></math></span>.</p></div>","PeriodicalId":50571,"journal":{"name":"Discrete Optimization","volume":"45 ","pages":"Article 100725"},"PeriodicalIF":1.1,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121208941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A theoretical justification of the set covering greedy heuristic of Caprara et al.
Pub Date: 2022-08-01; DOI: 10.1016/j.disopt.2022.100700
Torbjörn Larsson, Nils-Hassan Quttineh
Large-scale set covering problems have often been approached by constructive greedy heuristics, and much research has been devoted to the design and evaluation of various greedy criteria for such heuristics. A criterion proposed by Caprara et al. (1999) is based on reduced costs with respect to the yet unfulfilled constraints, and the resulting greedy heuristic is reported to be superior to those based on original costs or ordinary reduced costs.
We give a theoretical justification of the greedy criterion proposed by Caprara et al. by deriving it from a global optimality condition for general non-convex optimisation problems. It is shown that this criterion is in fact greedy with respect to incremental contributions to a quantity which at termination coincides with the deviation between a Lagrangian dual bound and the objective value of the feasible solution found.
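As a rough illustration only (the exact Caprara et al. scoring rule is not spelled out in this abstract, so the Python sketch below uses a simplified surrogate: Lagrangian reduced cost restricted to the yet-uncovered rows, divided by the number of rows a column would newly cover):

def reduced_cost_greedy(costs, cols, u):
    """Greedy set cover driven by reduced costs w.r.t. still-uncovered rows.

    costs[j] : cost of column j
    cols[j]  : set of rows covered by column j
    u[i]     : Lagrangian multiplier of row i (e.g. from a subgradient phase)
    """
    uncovered = set().union(*cols.values())
    chosen = []
    while uncovered:
        def score(j):
            newly = cols[j] & uncovered
            # reduced cost restricted to the yet-unfulfilled constraints,
            # normalized by how many of them the column would newly cover
            return (costs[j] - sum(u[i] for i in newly)) / len(newly)
        candidates = [j for j in cols if cols[j] & uncovered]
        best = min(candidates, key=score)
        chosen.append(best)
        uncovered -= cols[best]
    return chosen

# toy instance: 4 rows, 3 columns
cols = {0: {1, 2}, 1: {2, 3, 4}, 2: {1, 4}}
print(reduced_cost_greedy({0: 3.0, 1: 4.0, 2: 2.5}, cols,
                          {1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0}))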
{"title":"A theoretical justification of the set covering greedy heuristic of Caprara et al.","authors":"Torbjörn Larsson, Nils-Hassan Quttineh","doi":"10.1016/j.disopt.2022.100700","DOIUrl":"10.1016/j.disopt.2022.100700","url":null,"abstract":"<div><p>Large scale set covering problems have often been approached by constructive greedy heuristics, and much research has been devoted to the design and evaluation of various greedy criteria for such heuristics. A criterion proposed by Caprara et al. (1999) is based on reduced costs with respect to the yet unfulfilled constraints, and the resulting greedy heuristic is reported to be superior to those based on original costs or ordinary reduced costs.</p><p>We give a theoretical justification of the greedy criterion proposed by Caprara et al. by deriving it from a global optimality condition for general non-convex optimisation problems. It is shown that this criterion is in fact greedy with respect to incremental contributions to a quantity which at termination coincides with the deviation between a Lagrangian dual bound and the objective value of the feasible solution found.</p></div>","PeriodicalId":50571,"journal":{"name":"Discrete Optimization","volume":"45 ","pages":"Article 100700"},"PeriodicalIF":1.1,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1572528622000135/pdfft?md5=c47eb15145df224a815aa72a5c23497a&pid=1-s2.0-S1572528622000135-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123680892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Capacitated Vehicle Routing Problem (CVRP) consists of finding the cheapest way to serve a set of customers with a fleet of vehicles of a given capacity. While serving a particular customer, each vehicle picks up its demand and carries its weight throughout the rest of its route. While costs in the classical CVRP are measured in terms of a given arc distance, the Cumulative Vehicle Routing Problem (CmVRP) is a variant of the problem that aims to minimize total energy consumption. Each arc’s energy consumption is defined as the product of the arc distance by the weight accumulated since the beginning of the route.
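For intuition, a minimal Python sketch of that energy objective for a single pickup route (assuming the vehicle leaves the depot empty and ignoring any fixed vehicle weight, which the abstract does not specify):

def cumulative_route_cost(route, dist, demand):
    """Energy of one route: each arc contributes (arc distance) * (load on board).

    route  : vertices in visiting order, e.g. [0, 3, 1, 0] with 0 the depot
    dist   : dict mapping (u, v) to the arc distance
    demand : dict mapping each vertex to the demand picked up there (0 at the depot)
    """
    load, energy = 0.0, 0.0
    for u, v in zip(route, route[1:]):
        load += demand[u]              # pick up at u, then carry it over arc (u, v)
        energy += dist[(u, v)] * load
    return energy

# tiny example: depot 0, customers 1 and 2
dist = {(0, 1): 4.0, (1, 2): 3.0, (2, 0): 5.0}
demand = {0: 0.0, 1: 2.0, 2: 1.0}
print(cumulative_route_cost([0, 1, 2, 0], dist, demand))  # 4*0 + 3*2 + 5*3 = 21.0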
The purpose of this work is to propose several different formulations for the CmVRP and to study their Linear Programming (LP) relaxations. In particular, the goal is to study formulations based on combining an arc-item concept (that keeps track of whether a given customer has already been visited when traversing a specific arc) with another formulation from the recent literature, the Arc-Load formulation (that determines how much load goes through an arc).
Both formulations have been studied independently before – the Arc-Item is very similar to a multi-commodity-flow formulation in Letchford and Salazar-González (2015) and the Arc-Load formulation has been studied in Fukasawa et al. (2016) – and their LP relaxations are incomparable. Nonetheless, we show that a formulation combining the two (called Arc-Item-Load) may lead to a significantly stronger LP relaxation, thereby indicating that the two formulations capture complementary aspects of the problem. In addition, we study how set partitioning based formulations can be combined with these formulations. We present computational experiments on several well-known benchmark instances that highlight the advantages and drawbacks of the LP relaxation of each formulation and point to potential avenues of future research.
{"title":"The Arc-Item-Load and Related Formulations for the Cumulative Vehicle Routing Problem","authors":"Mauro Henrique Mulati , Ricardo Fukasawa , Flávio Keidi Miyazawa","doi":"10.1016/j.disopt.2022.100710","DOIUrl":"10.1016/j.disopt.2022.100710","url":null,"abstract":"<div><p><span>The Capacitated Vehicle Routing Problem (</span><span>CVRP</span>) consists of finding the cheapest way to serve a set of customers with a fleet of vehicles of a given capacity. While serving a particular customer, each vehicle picks up its demand and carries its weight throughout the rest of its route. While costs in the classical <span>CVRP</span> are measured in terms of a given arc distance, the Cumulative Vehicle Routing Problem (<span>CmVRP</span>) is a variant of the problem that aims to minimize total energy consumption. Each arc’s energy consumption is defined as the product of the arc distance by the weight accumulated since the beginning of the route.</p><p>The purpose of this work is to propose several different formulations for the <span>CmVRP</span> and to study their Linear Programming (LP) relaxations. In particular, the goal is to study formulations based on combining an arc-item concept (that keeps track of whether a given customer has already been visited when traversing a specific arc) with another formulation from the recent literature, the Arc-Load formulation (that determines how much load goes through an arc).</p><p>Both formulations have been studied independently before – the Arc-Item is very similar to a multi-commodity-flow formulation in Letchford and Salazar-González (2015) and the Arc-Load formulation has been studied in Fukasawa et al. (2016) – and their LP relaxations are incomparable. Nonetheless, we show that a formulation combining the two (called Arc-Item-Load) may lead to a significantly stronger LP relaxation, thereby indicating that the two formulations capture complementary aspects of the problem. In addition, we study how set partitioning based formulations can be combined with these formulations. We present computational experiments on several well-known benchmark instances that highlight the advantages and drawbacks of the LP relaxation of each formulation and point to potential avenues of future research.</p></div>","PeriodicalId":50571,"journal":{"name":"Discrete Optimization","volume":"45 ","pages":"Article 100710"},"PeriodicalIF":1.1,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124959123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A cut-and-branch algorithm for the Quadratic Knapsack Problem
Pub Date: 2022-05-01; DOI: 10.1016/j.disopt.2020.100579
Franklin Djeumou Fomeni , Konstantinos Kaparis , Adam N. Letchford
The Quadratic Knapsack Problem (QKP) is a well-known NP-hard combinatorial optimisation problem, with many practical applications. We present a ‘cut-and-branch’ algorithm for the QKP, in which a cutting-plane phase is followed by a branch-and-bound phase. The cutting-plane phase is more sophisticated than the existing ones in the literature, incorporating several classes of cutting planes, two primal heuristics, and several rules for eliminating variables and constraints. Computational results show that the algorithm is competitive.
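For reference, the QKP is usually stated as follows (a sketch in standard notation: profits p_i and p_{ij}, item weights w_i, knapsack capacity c):

\[
\max \; \sum_{i=1}^{n} p_i\, x_i \;+\; \sum_{i=1}^{n} \sum_{j=i+1}^{n} p_{ij}\, x_i x_j
\quad \text{s.t.} \quad \sum_{i=1}^{n} w_i\, x_i \le c, \qquad x \in \{0,1\}^{n}.
\]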
{"title":"A cut-and-branch algorithm for the Quadratic Knapsack Problem","authors":"Franklin Djeumou Fomeni , Konstantinos Kaparis , Adam N. Letchford","doi":"10.1016/j.disopt.2020.100579","DOIUrl":"10.1016/j.disopt.2020.100579","url":null,"abstract":"<div><p>The <em>Quadratic Knapsack Problem</em> (QKP) is a well-known <span><math><mi>NP</mi></math></span><span>-hard combinatorial optimisation problem, with many practical applications. We present a ‘cut-and-branch’ algorithm for the QKP, in which a cutting-plane phase is followed by a branch-and-bound phase. The cutting-plane phase is more sophisticated than the existing ones in the literature, incorporating several classes of cutting planes, two primal heuristics, and several rules for eliminating variables and constraints. Computational results show that the algorithm is competitive.</span></p></div>","PeriodicalId":50571,"journal":{"name":"Discrete Optimization","volume":"44 ","pages":"Article 100579"},"PeriodicalIF":1.1,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.disopt.2020.100579","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115826892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Penalty and partitioning techniques to improve performance of QUBO solvers
Pub Date: 2022-05-01; DOI: 10.1016/j.disopt.2020.100594
Amit Verma, Mark Lewis
Quadratic Unconstrained Binary Optimization (QUBO) modeling has become a unifying framework for solving a wide variety of both unconstrained and constrained optimization problems. More recently, QUBO (or equivalent −1/+1 Ising Spin) models have become a requirement for quantum annealing computers. Noisy Intermediate-Scale Quantum (NISQ) computing refers to classical computing that prepares or compiles problem instances for compatibility with quantum hardware architectures. Converting a constrained problem to a QUBO-compatible quantum annealing problem is an important part of the quantum compiler architecture. In particular, when converting constrained models to unconstrained ones, the choice of penalty magnitude is not trivial: a penalty that is too large can overwhelm the solution landscape, while one that is too small allows infeasible optimal solutions. In this paper we present NISQ approaches to bound the magnitude of the penalty scalar M, demonstrate their efficacy on a benchmark set of problems having a single equality constraint, and present a QUBO partitioning approach validated by experimentation.
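As an illustration of the single-equality-constraint setting, the following Python sketch shows the generic textbook penalty folding that makes the role of the scalar M visible (this is not the paper's bounding procedure; the instance data are made up):

import numpy as np

def penalized_qubo(Q, a, b, M):
    """Fold the equality constraint a @ x == b (x binary) into the objective:
        minimize  x^T Q x + M * (a @ x - b)**2
    Using x_i**2 == x_i for binary x, the quadratic penalty expands into extra
    QUBO coefficients; the constant M*b**2 is dropped since it does not change
    the minimizer.  M must be large enough to forbid infeasible optima but
    small enough not to swamp the original objective."""
    Q_pen = np.array(Q, dtype=float)
    n = len(a)
    for i in range(n):
        for j in range(n):
            if i == j:
                Q_pen[i, i] += M * (a[i] * a[i] - 2 * b * a[i])
            else:
                Q_pen[i, j] += M * a[i] * a[j]
    return Q_pen

# toy instance: pick exactly two of three items (x1 + x2 + x3 == 2)
Q = [[-1.0, 0.5, 0.0], [0.5, -2.0, 0.0], [0.0, 0.0, -1.5]]
print(penalized_qubo(Q, a=[1, 1, 1], b=2, M=10.0))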
{"title":"Penalty and partitioning techniques to improve performance of QUBO solvers","authors":"Amit Verma, Mark Lewis","doi":"10.1016/j.disopt.2020.100594","DOIUrl":"10.1016/j.disopt.2020.100594","url":null,"abstract":"<div><p>Quadratic Unconstrained Binary Optimization (QUBO) modeling has become a unifying framework for solving a wide variety of both unconstrained as well as constrained optimization problems. More recently, QUBO (or equivalent <span><math><mrow><mo>−</mo><mn>1</mn><mo>/</mo><mo>+</mo><mn>1</mn></mrow></math></span> Ising Spin) models are a requirement for quantum annealing computers. Noisy Intermediate-Scale Quantum (NISQ) computing refers to classical computing preparing or compiling problem instances for compatibility with quantum hardware architectures. The process of converting a constrained problem to a QUBO compatible quantum annealing problem is an important part of the quantum compiler architecture and specifically when converting constrained models to unconstrained the choice of penalty magnitude is not trivial because using a large penalty to enforce constraints can overwhelm the solution landscape, while having too small a penalty allows infeasible optimal solutions. In this paper we present NISQ approaches to bound the magnitude of the penalty scalar <span><math><mi>M</mi></math></span> and demonstrate efficacy on a benchmark set of problems having a single equality constraint and present a QUBO partitioning approach validated by experimentation.</p></div>","PeriodicalId":50571,"journal":{"name":"Discrete Optimization","volume":"44 ","pages":"Article 100594"},"PeriodicalIF":1.1,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.disopt.2020.100594","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126767619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-05-01; DOI: 10.1016/j.disopt.2021.100622
Nicolò Gusmeroli , Angelika Wiegele
We address the problem of minimizing a quadratic function subject to linear constraints over binary variables. We introduce the exact solution method called EXPEDIS