Lingxiao Wu, Y. Adulyasak, J. Cordeau, Shuaian Wang
An Integrated Approach to Managing Vessel Service in Seaports. Efficient vessel service is of utmost importance in the maritime supply chain. When serving a group of incoming vessels, berth allocation and pilotage planning are the two most important decisions made by a seaport. Although they are closely correlated, the berth allocation problem and pilotage planning problem are often solved sequentially, leading to suboptimal or even infeasible solutions for vessel services. In “Vessel Service Planning in Seaports,” Wu, Adulyasak, Cordeau, and Wang focus on a vessel service planning problem that optimizes berth allocation and pilotage planning in combination. To solve the joint problem, the authors develop an exact solution method that combines Benders decomposition and column generation within an efficient branch-and-bound framework. They also propose acceleration strategies that significantly improve the performance of the algorithm. Test instances from one of the world's largest seaports are used to validate the effectiveness of the approach and demonstrate the value of integrated planning.
{"title":"Vessel Service Planning in Seaports","authors":"Lingxiao Wu, Y. Adulyasak, J. Cordeau, Shuaian Wang","doi":"10.1287/opre.2021.2228","DOIUrl":"https://doi.org/10.1287/opre.2021.2228","url":null,"abstract":"An Integrated Approach to Managing Vessel Service in Seaports Efficient vessel service is of utmost importance in the maritime supply chain. When serving a group of incoming vessels, berth allocation and pilotage planning are the two most important decisions made by a seaport. Although they are closely correlated, the berth allocation problem and pilotage planning problem are often solved sequentially, leading to suboptimal or even infeasible solutions for vessel services. In “Vessel Service Planning in Seaports,” Wu, Adulyasak, Cordeau, and Wang focus on a vessel service planning problem that optimizes berth allocation and pilotage planning in combination. To solve the joint problem, the authors develop an exact solution method that combines Benders decomposition and column generation within an efficient branch-and-bound framework. They also propose acceleration strategies that significantly improve the performance of the algorithm. Test instances from one of the world's largest seaports are used to validate the effectiveness of the approach and demonstrate the value of integrated planning.","PeriodicalId":19546,"journal":{"name":"Oper. Res.","volume":"5 1","pages":"2032-2053"},"PeriodicalIF":0.0,"publicationDate":"2022-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79966836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Zhalechian, Esmaeil Keyvanshokooh, Cong Shi, M. P. Van Oyen
Joint online learning and resource allocation is a fundamental problem inherent in many applications. In a general setting, heterogeneous customers arrive sequentially, each of whom can be allocated to a resource in an online fashion. Customers stochastically consume the resources, allocations yield stochastic rewards, and the system receives feedback outcomes with delay. In “Online Resource Allocation with Personalized Learning,” Zhalechian, Keyvanshokooh, Shi, and Van Oyen introduce a generic framework to solve this problem. It judiciously synergizes online learning with a broad class of online resource allocation mechanisms, where the sequence of customer contexts is adversarial, and the customer rewards and resource consumption are stochastic and unknown. They propose online algorithms that strike a three-way balance between exploration, exploitation, and hedging against the adversarial arrival sequence. A performance guarantee is provided for each online algorithm, and the efficacy of their algorithms is demonstrated using clinical data from a health system.
{"title":"Online Resource Allocation with Personalized Learning","authors":"M. Zhalechian, Esmaeil Keyvanshokooh, Cong Shi, M. P. Oyen","doi":"10.2139/ssrn.3538509","DOIUrl":"https://doi.org/10.2139/ssrn.3538509","url":null,"abstract":"Joint online learning and resource allocation is a fundamental problem inherent in many applications. In a general setting, heterogeneous customers arrive sequentially, each of which can be allocated to a resource in an online fashion. Customers stochastically consume the resources, allocations yield stochastic rewards, and the system receives feedback outcomes with delay. In “Online Resource Allocation with Personalized Learning,” Zhalechian, Keyvanshokooh, Shi, and Van Oyen introduce a generic framework to solve this problem. It judiciously synergizes online learning with a broad class of online resource allocation mechanisms, where the sequence of customer contexts is adversarial, and the customer reward and resource consumption are stochastic and unknown. They propose online algorithms that strike a three-way balance between exploration, exploitation, and hedging against adversarial arrival sequence. A performance guarantee is provided for each online algorithm, and the efficacy of their algorithms is demonstrated using clinical data from a health system.","PeriodicalId":19546,"journal":{"name":"Oper. Res.","volume":"10 1","pages":"2138-2161"},"PeriodicalIF":0.0,"publicationDate":"2022-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78925050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diagnostic processes are difficult to manage because they require the decision maker (DM) to dynamically balance the benefit of acquiring more diagnostic information against the cost of doing so. When additional and unattended diagnostic tasks build up over time, making this tradeoff becomes especially challenging. In their study “Mismanaging Diagnostic Accuracy Under Congestion,” Kremer and de Véricourt uncover different biases to which DMs are subject when making diagnostic decisions while unattended diagnostic tasks accumulate over time. The authors find that, in their experiments, DMs are overall insufficiently sensitive to congestion. As a result, DMs acquire too little information at low congestion levels, but too much at high levels, compared with an optimal normative benchmark. This in fact increases both diagnostic errors and congestion levels in the system. The authors disentangle the underlying mechanisms behind these effects and suggest different approaches to debias the DMs.
{"title":"Mismanaging Diagnostic Accuracy Under Congestion","authors":"Mirko Kremer, F. Véricourt","doi":"10.1287/opre.2022.2292","DOIUrl":"https://doi.org/10.1287/opre.2022.2292","url":null,"abstract":"Diagnostic processes are difficult to manage because they require the decision maker (DM) to dynamically balance the benefit of acquiring more diagnostic information against the cost of doing so. When additional and unattended diagnostic tasks build up over time, making this tradeoff becomes especially challenging. In their study “Mismanaging Diagnostic Accuracy Under Congestion,” Kremer and de Véricourt uncover different biases to which DMs are subject when making diagnostic decisions while unattended diagnostic tasks accumulate over time. The authors find that, in their experiments, DMs are overall insufficiently sensitive to congestion. As a result, DMs acquire too little information at low congestion levels, but too much at high levels, compared with an optimal normative benchmark. This in fact increases both the diagnostic errors and congestion levels in the system. The authors disentangle the underlying mechanisms for these effects and suggests different approaches to debias the DMs.","PeriodicalId":19546,"journal":{"name":"Oper. Res.","volume":"92 1","pages":"895-916"},"PeriodicalIF":0.0,"publicationDate":"2022-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85845396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Carlos Lamas-Fernandez, J. Bennell, A. Martinez-Sykora
Packing Three-Dimensional Irregular Objects. Because of its many applications in practice, the cutting and packing literature is extensive and well established. It is mostly concerned with problems in one and two dimensions or with problems where some regularity of the pieces is assumed (e.g., packing boxes). However, the rise of applications in three-dimensional printing and additive manufacturing has created a demand for efficient packing of three-dimensional irregular objects. In “Voxel-Based Solution Approaches to the Three-Dimensional Irregular Packing Problem,” Lamas-Fernandez, Martinez-Sykora, and Bennell propose a series of tools to tackle this problem using voxels. These include geometric tools, a mathematical model, local search neighborhoods, and details on the implementation of metaheuristic algorithms. These tools are tested extensively, and the computational results show their effectiveness compared with the state of the art.
{"title":"Voxel-Based Solution Approaches to the Three-Dimensional Irregular Packing Problem","authors":"Carlos Lamas-Fernandez, J. Bennell, A. Martinez-Sykora","doi":"10.1287/opre.2022.2260","DOIUrl":"https://doi.org/10.1287/opre.2022.2260","url":null,"abstract":"Packing Three-Dimensional Irregular Objects Because of its many applications in practice, the cutting and packing literature is extensive and well established. It is mostly concerned with problems in one and two dimensions or with problems where some regularity of the pieces is assumed (e.g., packing boxes). However, the rise of applications in the realm of three-dimensional printing and additive manufacturing has created a demand for efficient packing of three-dimensional irregular objects. In “Voxel-Based Solution Approaches to the Three-Dimensional Irregular Packing Problem,” Lamas-Fernandez, Martinez-Sykora, and Bennell propose a series of tools to tackle this problem using voxels. These include geometric tools, a mathematical model, local search neighborhoods, and details on implementation of metaheuristic algorithms. These tools are tested extensively, and computational results provided show their effectiveness compared with state-of-the-art literature.","PeriodicalId":19546,"journal":{"name":"Oper. Res.","volume":"463 1","pages":"1298-1317"},"PeriodicalIF":0.0,"publicationDate":"2022-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79844933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When our successors write about the first century of Operations Research, the name of Kenneth Arrow will be in lights. It is fitting, at this time not long after his death, that we take a moment to consider Ken Arrow’s academic legacy. This introduction to the Special Issue honoring Ken Arrow highlights his contributions in the field of Operations Research and summarizes the papers published in the special issue that speak to his legacy.
{"title":"Introduction: Special Issue Honoring Kenneth Arrow","authors":"A. Abbas, D. E. Bell","doi":"10.1287/opre.2022.2296","DOIUrl":"https://doi.org/10.1287/opre.2022.2296","url":null,"abstract":"When our successors write about the first century of Operations Research, the name of Kenneth Arrow will be in lights. It is fitting that at this time, not long after his death, that we take a moment to consider Ken Arrow’s academic legacy. This introduction to the Special Issue honoring Ken Arrow highlights his contributions in the field of Operations Research and summarizes the papers published in the special issue that speak to his legacy.","PeriodicalId":19546,"journal":{"name":"Oper. Res.","volume":"45 1","pages":"1293-1295"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81332904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Although modern solvers have been able to tackle mixed-integer quadratic programming problems (MIQPs) for several years, the theoretical and computational implications of the resolution techniques they employ are not yet fully understood. An interesting question concerns the choice of whether to linearize the quadratic part of a convex MIQP: although in theory neither approach dominates the other, the decision is typically made during the preprocessing phase and can thus substantially condition the downstream performance of the solver. In “A Classifier to Decide on the Linearization of Mixed-Integer Quadratic Problems in CPLEX,” Bonami, Lodi, and Zarpellon use machine learning (ML) to predict this algorithmic choice. The experimental framework aims at integrating optimization knowledge into the learning pipeline and contributes a general methodology for using ML in MIP technology. The workflow is fine-tuned to enable online predictions in the IBM CPLEX solver ecosystem, and, as a practical result, a classifier deciding on MIQP linearization is successfully deployed in CPLEX 12.10.0.
{"title":"A Classifier to Decide on the Linearization of Mixed-Integer Quadratic Problems in CPLEX","authors":"Pierre Bonami, Andrea Lodi, Giulia Zarpellon","doi":"10.1287/opre.2022.2267","DOIUrl":"https://doi.org/10.1287/opre.2022.2267","url":null,"abstract":"Despite modern solvers being able to tackle mixed-integer quadratic programming problems (MIQPs) for several years, the theoretical and computational implications of the employed resolution techniques are not fully grasped yet. An interesting question concerns the choice of whether to linearize the quadratic part of a convex MIQP: although in theory no approach dominates the other, the decision is typically performed during the preprocessing phase and can thus substantially condition the downstream performance of the solver. In “A Classifier to Decide on the Linearization of Mixed-Integer Quadratic Problems in CPLEX,” Bonami, Lodi, and Zarpellon use machine learning (ML) to cast a prediction on this algorithmic choice. The whole experimental framework aims at integrating optimization knowledge in the learning pipeline and contributes a general methodology for using ML in MIP technology. The workflow is fine-tuned to enable online predictions in the IBM-CPLEX solver ecosystem, and, as a practical result, a classifier deciding on MIQP linearization is successfully deployed in CPLEX 12.10.0.","PeriodicalId":19546,"journal":{"name":"Oper. Res.","volume":"34 1","pages":"3303-3320"},"PeriodicalIF":0.0,"publicationDate":"2022-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87688557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the field of data-driven optimization under uncertainty, scenario reduction is a commonly used technique for computing a smaller set of scenarios to improve computational tractability and interpretability. However, traditional approaches do not consider decision quality when computing these scenarios. In “Optimization-Based Scenario Reduction for Data-Driven Two-Stage Stochastic Optimization,” Bertsimas and Mundru present a novel optimization-based method that explicitly considers the objective and problem structure when reducing the number of scenarios needed to solve two-stage stochastic optimization problems. The proposed method is generally applicable and performs significantly better than other state-of-the-art optimization and randomization methods when the number of reduced scenarios is 1%–2% of the full sample size, which suggests that it improves both tractability and interpretability.
{"title":"Optimization-Based Scenario Reduction for Data-Driven Two-Stage Stochastic Optimization","authors":"D. Bertsimas, Nishanth Mundru","doi":"10.1287/opre.2022.2265","DOIUrl":"https://doi.org/10.1287/opre.2022.2265","url":null,"abstract":"In the field of data-driven optimization under uncertainty, scenario reduction is a commonly used technique for computing a smaller number of scenarios to improve computational tractability and interpretability. However traditional approaches do not consider the decision quality when computing these scenarios. In “Optimization-Based Scenario Reduction for Data-Driven Two-Stage Stochastic Optimization,” Bertsimas and Mundru present a novel optimization-based method that explicitly considers the objective and problem structure for reducing the number of scenarios needed for solving two-stage stochastic optimization problems. This new proposed method is generally applicable and has significantly better performance when the number of reduced scenarios is 1%–2% of the full sample size compared with other state-of-the-art optimization and randomization methods, which suggests this improves both tractability and interpretability.","PeriodicalId":19546,"journal":{"name":"Oper. Res.","volume":"104 1","pages":"1343-1361"},"PeriodicalIF":0.0,"publicationDate":"2022-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75669817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Andrew Perrykkad, Andreas T. Ernst, M. Krishnamoorthy
When shipping ports are colocated with major population centers, the exclusive use of road transport for moving shipping containers across the metropolitan area is undesirable from both social and economic perspectives. Port shuttles, an integrated road and short-haul rail transport modality, are therefore gaining significant interest from governments and industry alike, especially in the Australian context. In “A Simultaneous Magnanti-Wong Method to Accelerate Benders Decomposition for the Metropolitan Container Transportation Problem,” Perrykkad, Ernst, and Krishnamoorthy explore the mathematics behind the optimal integration of road and port shuttle modalities for container transportation in metropolitan areas, including proofs of NP-hardness, a Benders decomposition, and an extensive computational study. Critically, to accelerate their Benders decomposition, the authors develop the simultaneous Magnanti-Wong method: an extension of the classical Magnanti-Wong acceleration that preserves this problem's important network substructure. Beyond the problem at hand, this technique shows promise more generally for Benders decompositions with special subproblem structure.
{"title":"A Simultaneous Magnanti-Wong Method to Accelerate Benders Decomposition for the Metropolitan Container Transportation Problem","authors":"Andrew Perrykkad, Andreas T. Ernst, M. Krishnamoorthy","doi":"10.1287/opre.2020.2032","DOIUrl":"https://doi.org/10.1287/opre.2020.2032","url":null,"abstract":"When shipping ports are colocated with major population centers, the exclusive use of road transport for moving shipping containers across the metropolitan area is undesirable from both social and economic perspectives. Port shuttles, an integrated road and short-haul rail transport modality, are thereby gaining significant interest from governments and industry alike, especially in the Australian context. In “A Simultaneous Magnanti-Wong Method to Accelerate Benders Decomposition for the Metropolitan Container Transportation Problem,” Perrykkad, Ernst, and Krishnamoorthy explore the mathematics behind the optimal integration of road and port shuttle modalities for container transportation in metropolitan areas, including proofs of NP harness, a Benders decomposition, and an extensive computational study. Critically, to accelerate their Benders decomposition the authors develop the simultaneous Magnanti-Wong method: an extension of the classical Magnanti-Wong acceleration that preserves this problem's important network substructure. In addition to the problem at hand, this technique shows promise more generally for Benders decompositions with special subproblem structure.","PeriodicalId":19546,"journal":{"name":"Oper. Res.","volume":"51 1","pages":"1531-1559"},"PeriodicalIF":0.0,"publicationDate":"2022-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84644730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents methods to obtain analytical solutions to a class of continuous traffic equilibrium problems, where continuously distributed customers in a bounded two-dimensional service region seek service from one of several discretely located facilities via the least congested travel path. We show that, under certain conditions, the traffic flux at equilibrium, which is governed by a set of partial differential equations, can be decomposed with respect to each facility and solved analytically. This finding lays the foundation for an efficient solution scheme. A closed-form solution to the equilibrium problem can be obtained readily when the service region has a certain regular shape, or through an additional conformal mapping if the service region has an arbitrary simply connected shape. These results shed light on some interesting properties of traffic equilibrium in a continuous space. The paper also discusses how service facility locations can be easily optimized by incorporating analytical formulas for the total generalized cost of spatially distributed customers under congestion. Examples of application contexts include gates or booths for pedestrian traffic, as well as launching sites for air vehicles. Numerical examples are used to show the superiority of the proposed optimization framework, in terms of both solution quality and computation time, compared with traditional approaches based on discrete mathematical programming and partial differential equation solution methods. An example with the metro station entrances at the Beijing Railway Station is also presented to illustrate the usefulness of the proposed traffic equilibrium and location design models.
{"title":"On Solving a Class of Continuous Traffic Equilibrium Problems and Planning Facility Location Under Congestion","authors":"Zhaodong Wang, Y. Ouyang, Ruifeng She","doi":"10.1287/opre.2021.2213","DOIUrl":"https://doi.org/10.1287/opre.2021.2213","url":null,"abstract":"This paper presents methods to obtain analytical solutions to a class of continuous traffic equilibrium problems, where continuously distributed customers from a bounded two-dimensional service region seek service from one of several discretely located facilities via the least congested travel path. We show that under certain conditions, the traffic flux at equilibrium, which is governed by a set of partial differential equations, can be decomposed with respect to each facility and solved analytically. This finding paves the foundation for an efficient solution scheme. Closed-form solution to the equilibrium problem can be obtained readily when the service region has a certain regular shape, or through an additional conformal mapping if the service region has an arbitrary simply connected shape. These results shed light on some interesting properties of traffic equilibrium in a continuous space. This paper also discusses how service facility locations can be easily optimized by incorporating analytical formulas for the total generalized cost of spatially distributed customers under congestion. Examples of application contexts include gates or booths for pedestrian traffic, as well as launching sites for air vehicles. Numerical examples are used to show the superiority of the proposed optimization framework, in terms of both solution quality and computation time, as compared with traditional approaches based on discrete mathematical programming and partial differential equation solution methods. An example with the metro station entrances at the Beijing Railway Station is also presented to illustrate the usefulness of the proposed traffic equilibrium and location design models.","PeriodicalId":19546,"journal":{"name":"Oper. Res.","volume":"70 1","pages":"1465-1484"},"PeriodicalIF":0.0,"publicationDate":"2022-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87730177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Soheil Behnezhad, Sina Dehghani, M. Derakhshan, M. Hajiaghayi, Saeed Seddighin
The Colonel Blotto game (initially introduced by Borel in 1921) is commonly used for analyzing a wide range of applications, from the U.S. presidential election to innovative technology competitions to advertising, sports, and politics. After around a century, Ahmadinejad et al. provided the first polynomial-time algorithm for computing the Nash equilibria of Colonel Blotto games. However, their algorithm consists of an exponential-size LP solved by the ellipsoid method, which is highly impractical. In “Fast and Simple Solutions of Blotto Games,” Behnezhad, Dehghani, Derakhshan, Hajiaghayi, and Seddighin provide the first polynomial-size LP formulation of the optimal strategies for the Colonel Blotto game using linear extension techniques. They use this polynomial-size LP to provide a simpler and significantly faster algorithm for finding optimal strategies of the Colonel Blotto game. They further show that this representation is asymptotically tight, meaning that no other linear representation of the strategy space has fewer constraints.
{"title":"Fast and Simple Solutions of Blotto Games","authors":"Soheil Behnezhad, Sina Dehghani, M. Derakhshan, M. Hajiaghayi, Saeed Seddighin","doi":"10.1287/opre.2022.2261","DOIUrl":"https://doi.org/10.1287/opre.2022.2261","url":null,"abstract":"The Colonel Blotto game (initially introduced by Borel in 1921) is commonly used for analyzing a wide range of applications from the U.S.Ppresidential election to innovative technology competitions to advertising, sports, and politics. After around a century Ahmadinejad et al. provided the first polynomial-time algorithm for computing the Nash equilibria in Colonel Blotto games. However, their algorithm consists of an exponential-size LP solved by the ellipsoid method, which is highly impractical. In “Fast and Simple Solutions of Blotto Games,” Behnezhad, Dehghani, Derakhshan, Hajighayi, and Seddighin provide the first polynomial-size LP formulation of the optimal strategies for the Colonel Blotto game using linear extension techniques. They use this polynomial-size LP to provide a simpler and significantly faster algorithm for finding optimal strategies of the Colonel Blotto game. They further show this representation is asymptotically tight, which means there exists no other linear representation of the strategy space with fewer constraints.","PeriodicalId":19546,"journal":{"name":"Oper. Res.","volume":"193 1","pages":"506-516"},"PeriodicalIF":0.0,"publicationDate":"2022-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88476375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}