Wanyi Chen, N. T. Argon, T. Bohrmann, B. Linthicum, Kenneth K. Lopiano, A. Mehrotra, D. Travers, S. Ziya
In emergency departments (EDs), a major driver of long waiting times and overall crowding is the time it takes to move admitted patients from the ED to an appropriate bed in the main hospital. In “Using Hospital Admission Predictions at Triage for Improving Patient Length of Stay in Emergency Departments,” Chen et al. develop a methodology for shortening these times: the likelihood of admission is predicted for each patient at the time of triage, and if the predicted probability is deemed high enough, the process of identifying a suitable hospital bed and preparing for the patient’s eventual transfer begins right away. A simulation study suggests that the proposed methodology, particularly when it takes ED census levels into account, has the potential to shorten average waiting times in the ED without generating too many false early bed requests.
“Using Hospital Admission Predictions at Triage for Improving Patient Length of Stay in Emergency Departments.” DOI: https://doi.org/10.1287/opre.2022.2405. Published 2022-11-29.
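The decision rule described in the blurb can be sketched as a census-aware threshold policy. This is a hypothetical illustration (the function, numbers, and crowding adjustment are assumptions, not the authors' fitted model): an early bed request is triggered when the predicted admission probability clears a threshold that rises as the ED fills up, since a false early request is costlier in a crowded ED.

```python
# Hypothetical sketch of a census-dependent early bed-request rule; the
# threshold values and the crowding adjustment are illustrative only.

def early_bed_request(p_admit: float, census: int, capacity: int,
                      base_threshold: float = 0.7) -> bool:
    """Request a bed at triage if admission looks likely enough.

    When the ED is crowded (high census relative to capacity), the
    threshold is raised toward 1 to avoid false early requests.
    """
    crowding = min(census / capacity, 1.0)  # 0 = empty, 1 = full
    threshold = base_threshold + (1 - base_threshold) * 0.5 * crowding
    return p_admit >= threshold

# A patient with an 80% predicted admission probability triggers an early
# request in a quiet ED but not in a saturated one.
print(early_bed_request(0.80, census=5, capacity=40))   # True (quiet)
print(early_bed_request(0.80, census=40, capacity=40))  # False (saturated)
```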
Sean R. Sinclair, Gauri Jain, Siddhartha Banerjee, C. Yu
Optimizing Mobile Food Pantry Operations Under Demand Uncertainty
Managing complex systems often involves trade-offs between competing objectives. A common example is seeking fairness guarantees in sequential resource-allocation problems: mobile food pantries must allocate resources under demand uncertainty while simultaneously minimizing inefficiency (leftover resources) and envy (deviations in allocations). In this work, we tackle a problem arising from a partnership with the Food Bank of the Southern Tier to optimize their mobile food-pantry operations. We provide an exact characterization of the achievable (envy, efficiency) pairs, showing that any algorithm achieving low envy must suffer from high inefficiency and vice versa. We complement this exact characterization with a simple algorithm capable of achieving any desired point along the trade-off curve.
“Sequential Fair Allocation: Achieving the Optimal Envy-Efficiency Trade-off Curve.” DOI: https://doi.org/10.1287/opre.2022.2397. Published 2022-11-23.
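The envy-efficiency tension can be seen in a minimal toy allocator (an illustration of the trade-off only, not the paper's algorithm): a single parameter `alpha` interpolates between an even-split rule, which keeps envy low but may leave resources unused, and a greedy rule, which uses the budget fully but favors earlier arrivals.

```python
# Toy sequential allocator illustrating the envy-efficiency trade-off.
# All names and the interpolation scheme are illustrative assumptions.

def sequential_allocate(budget: float, demands: list[float], alpha: float):
    """alpha = 0: cap each allocation at an even share (low envy, possible
    leftover); alpha = 1: serve each demand in full while budget lasts
    (efficient, but later sites may envy earlier ones)."""
    fair_share = budget / len(demands)
    allocations, remaining = [], budget
    for d in demands:
        greedy = min(d, remaining)
        alloc = min(remaining,
                    (1 - alpha) * min(d, fair_share) + alpha * greedy)
        allocations.append(alloc)
        remaining -= alloc
    leftover = remaining                       # inefficiency proxy
    envy = max(allocations) - min(allocations)  # envy proxy
    return allocations, leftover, envy

# Even split leaves food on the truck; greedy empties it but is unequal.
print(sequential_allocate(20.0, [10.0, 5.0, 10.0], 0.0))
print(sequential_allocate(20.0, [10.0, 5.0, 10.0], 1.0))
```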
This paper presents the first comprehensive study of a data-driven formulation of the distributionally robust second-order stochastic dominance constrained problem (DRSSDCP), which hinges on a type-1 Wasserstein ambiguity set. Furthermore, the formulation is shown, for the first time, to be axiomatically motivated in an environment with distribution ambiguity. We formulate the DRSSDCP as a multistage robust optimization problem and further propose a tractable conservative approximation that exploits finite adaptability, together with a scenario-based lower-bounding problem. We then propose the first exact optimization algorithm for the DRSSDCP. We illustrate how the data-driven DRSSDCP can be applied in practice to resource-allocation problems with both synthetic and real data. Our empirical results show that, with a proper adjustment of the size of the Wasserstein ball, the DRSSDCP can reach acceptable out-of-sample feasibility while still generating strictly better performance than the reference strategy.
“Data-Driven Optimization with Distributionally Robust Second Order Stochastic Dominance Constraints,” by Chun Peng and E. Delage. DOI: https://doi.org/10.1287/opre.2022.2387. Published 2022-11-22.
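The building block behind such constraints is second-order stochastic dominance itself. As a minimal sketch (the nominal empirical check, without the distributional robustness or the Wasserstein ambiguity set): X dominates Y in SSD iff the expected shortfall E[(t - X)+] is at most E[(t - Y)+] for every threshold t, and on empirical samples it suffices to check t at the pooled sample points, since both shortfall functions are piecewise linear with kinks only there.

```python
# Empirical second-order stochastic dominance check (nominal version, an
# illustration only; the paper's robust constraint is over a Wasserstein
# ball of distributions, not a single empirical one).

def expected_shortfall_below(samples, t):
    """E[(t - X)+] under the empirical distribution of `samples`."""
    return sum(max(t - x, 0.0) for x in samples) / len(samples)

def ssd_dominates(x_samples, y_samples) -> bool:
    """True iff the empirical distribution of x SSD-dominates that of y."""
    # Both shortfall curves are piecewise linear in t with kinks at sample
    # points, so checking the pooled sample points suffices.
    thresholds = sorted(set(x_samples) | set(y_samples))
    return all(expected_shortfall_below(x_samples, t)
               <= expected_shortfall_below(y_samples, t) + 1e-12
               for t in thresholds)

# A sure payoff of 5 dominates a fair 0-or-10 gamble with the same mean.
print(ssd_dominates([5.0, 5.0], [0.0, 10.0]))  # True
print(ssd_dominates([0.0, 10.0], [5.0, 5.0]))  # False
```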
J. Grand-Clément, Carri W. Chan, Vineet Goyal, G. Escobar
Patients whose transfer to the intensive care unit (ICU) is unplanned are prone to higher mortality rates. In “Robustness of Proactive Intensive Care Unit Transfer Policies,” the authors study the problem of finding robust policies for transferring patients to the ICU, policies that account for uncertainty in statistical estimates arising from data limitations when optimizing to improve overall patient care. Under general assumptions, it is shown that an optimal transfer policy has a threshold structure. A robust policy also has a threshold structure, and it is more aggressive in transferring patients than the optimal nominal policy, which ignores parameter uncertainty. The sensitivity of various hospital metrics to small changes in the parameters is demonstrated using a data set of close to 300,000 hospitalizations at 21 Kaiser Permanente Northern California hospitals. This work provides useful insights into the impact of parameter uncertainty on deriving simple policies for proactive ICU transfer that have strong empirical performance and theoretical guarantees.
“Robustness of Proactive Intensive Care Unit Transfer Policies.” DOI: https://doi.org/10.1287/opre.2022.2403. Published 2022-11-22.
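The threshold structure highlighted above can be sketched in a few lines. The cutoff values here are invented for illustration (nothing is fitted to the Kaiser Permanente data); the sketch only encodes the structural finding that the robust policy transfers more aggressively, i.e., at a lower risk cutoff, than the nominal one.

```python
# Structural sketch only: hypothetical risk cutoffs, not estimated values.
NOMINAL_THRESHOLD = 0.30  # assumed nominal deterioration-risk cutoff
ROBUST_THRESHOLD = 0.22   # robust cutoff is lower, i.e., more aggressive

def proactive_transfer(risk_score: float, robust: bool = False) -> bool:
    """Transfer a ward patient to the ICU once risk crosses the cutoff."""
    threshold = ROBUST_THRESHOLD if robust else NOMINAL_THRESHOLD
    return risk_score >= threshold

# A borderline patient (risk 0.25) is transferred only under the robust
# policy, which hedges against estimation error in the risk model.
print(proactive_transfer(0.25))               # False
print(proactive_transfer(0.25, robust=True))  # True
```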
Markov Decision Process Tayloring for Approximation Design
Optimal control problems on large state spaces are difficult to solve, calling for the development of approximate solution methods. In “A Low-Rank Approximation for MDPs via Moment Coupling,” Zhang and Gurvich introduce a novel framework for approximating Markov decision processes (MDPs) that stands on two pillars: (i) state aggregation, as the algorithmic infrastructure, and (ii) central-limit-theorem-type approximations, as the mathematical underpinning. The theoretical guarantees are grounded in approximating the Bellman equation by a partial differential equation (PDE) in which, in the spirit of the central limit theorem, the transition matrix of the controlled Markov chain is reduced to its local first and second moments. Instead of solving the PDE, the algorithm introduced in the paper constructs a “sister” (controlled) Markov chain whose two local transition moments are approximately identical to those of the focal chain. Because of this moment matching, the original chain and its sister are coupled through the PDE, facilitating optimality guarantees. Embedded into standard soft aggregation, moment matching provides a disciplined mechanism for tuning the aggregation and disaggregation probabilities.
“A Low-Rank Approximation for MDPs via Moment Coupling,” by Amy Zhang and Itai Gurvich. DOI: https://doi.org/10.1287/opre.2022.2392. Published 2022-11-17.
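The local moments that drive the coupling are easy to compute from a transition matrix. A toy illustration (not the paper's construction): for a chain on states 0..n-1, the local first and second moments at state i are mu(i) = E[X' - i | X = i] and m2(i) = E[(X' - i)^2 | X = i], and two chains with quite different transition matrices can agree on both.

```python
# Local first and second transition moments of a Markov chain; toy
# example only, illustrating the moment-matching idea.

def local_moments(P, i):
    """Return (mu, m2): the local drift and local second moment at state i."""
    mu = sum(p * (j - i) for j, p in enumerate(P[i]))
    m2 = sum(p * (j - i) ** 2 for j, p in enumerate(P[i]))
    return mu, m2

# A lazy +/-1 random walk on {0, 1, 2} with absorbing ends...
P_walk = [[1.0, 0.0, 0.0],
          [0.25, 0.5, 0.25],
          [0.0, 0.0, 1.0]]

# ...and a "sister" chain on {0, ..., 4} that jumps +/-2, tuned so its
# interior state has the same local moments (drift 0, second moment 0.5).
P_wide = [[1.0, 0.0, 0.0, 0.0, 0.0],
          [0.0, 1.0, 0.0, 0.0, 0.0],
          [0.0625, 0.0, 0.875, 0.0, 0.0625],
          [0.0, 0.0, 0.0, 1.0, 0.0],
          [0.0, 0.0, 0.0, 0.0, 1.0]]

print(local_moments(P_walk, 1))  # (0.0, 0.5)
print(local_moments(P_wide, 2))  # (0.0, 0.5)
```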
A Novel Practical Stochastic Pricing Model for Multi-Interval Real-Time Markets
Practical implementations of economic dispatch with associated pricing systems are crucial for operating electricity markets. Because of the high volatility caused by the increasing integration of renewable energy, consideration of the underlying stochastic problem is becoming more important than ever. It is challenging to incorporate the uncertain nature of real-time operations into an already complex multi-interval dynamic problem with intertemporal constraints. Because solving a standard multistage stochastic programming problem is too burdensome in terms of calculation time for real-time markets, it has been standard practice in electricity markets to use a deterministic approximation with varying degrees of look-ahead. In their article “Pricing Under Uncertainty in Multi-Interval Real-Time Markets,” Cho and Papavasiliou introduce a practical alternative method for pricing under uncertainty in multi-interval real-time markets. Using slightly different stochastic formulations, the authors propose an approach that preserves the attractive features of both the deterministic formulation (simpler calculation) and the standard stochastic formulation (better performance).
“Pricing Under Uncertainty in Multi-Interval Real-Time Markets,” by J. Cho and A. Papavasiliou. DOI: https://doi.org/10.1287/opre.2022.2314. Published 2022-11-08.
Joint replenishment problems constitute an important class of models in inventory management, capturing the possible coordination among multiple products to save costs. Their computational complexity had remained open even when there are just two products that need to be synchronized. In “Integer Factorization: Why Two-Item Joint Replenishment Is Hard,” Schulz and Telha present a simple framework based on integer factorization for establishing the computational hardness of two variants of the joint replenishment problem with two items. Although integer factorization is difficult to solve in practice and not believed to be solvable in polynomial time, it is not known to be as difficult as NP-complete problems. The authors show that a similar technique can be used to establish even NP-completeness for one variant of the joint replenishment problem (again with just two items).
“Integer Factorization: Why Two-Item Joint Replenishment Is Hard,” by Andreas S. Schulz and C. Telha. DOI: https://doi.org/10.1287/opre.2022.2390. Published 2022-11-02.
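To see where number theory enters, consider a simplified stationary two-item policy (an illustrative model, not the exact variant analyzed in the paper): item i is reordered every k_i periods, a joint fixed cost K is paid in any period with at least one order, and holding cost grows linearly in each cycle length. The fraction of periods incurring the joint cost involves lcm(k1, k2), which is where the factorization-flavored structure appears.

```python
# Toy average-cost evaluation of a periodic two-item joint replenishment
# policy; the cost model is a simplified illustration, not the paper's.
from math import lcm

def avg_cost(k1: int, k2: int, K: float, h1: float, h2: float) -> float:
    # Fraction of periods with at least one order (inclusion-exclusion):
    # item 1 orders every k1 periods, item 2 every k2, and they coincide
    # every lcm(k1, k2) periods.
    order_freq = 1 / k1 + 1 / k2 - 1 / lcm(k1, k2)
    return K * order_freq + h1 * k1 + h2 * k2

# Synchronizing the items (k1 = k2 = 2) shares the joint setup cost and
# beats the unsynchronized (2, 3) policy in this instance.
print(avg_cost(2, 2, K=10, h1=1, h2=1))  # 9.0
print(avg_cost(2, 3, K=10, h1=1, h2=1))
```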
Palma London, Shai Vardi, Reza Eghbali, A. Wierman
Study background: This work was done in 2018, 2019, and 2020, when Palma London was a PhD student at Caltech and Shai Vardi was a postdoc at Caltech. It was also done in part while Palma London was visiting Purdue University and while Reza Eghbali was a postdoctoral fellow at the Simons Institute for the Theory of Computing. Adam Wierman is a professor at Caltech.
Summary: This paper presents a framework for accelerating (speeding up) existing convex program solvers. Across engineering disciplines, a fundamental bottleneck is the availability of fast, efficient, accurate solvers. We present an acceleration method that speeds up linear programming solvers such as Gurobi and convex program solvers such as the Splitting Conic Solver by two orders of magnitude.
Need to know:
- Optimization problems arise in many engineering and science disciplines, and developing efficient optimization solvers is key to future innovation.
- We speed up the linear programming solver Gurobi by two orders of magnitude.
- This work applies to optimization problems with monotone objective functions and packing constraints, a common problem formulation across many disciplines.
Pull quotes: “We propose a framework for accelerating exact and approximate convex programming solvers for packing linear programming problems and a family of convex programming problems with linear constraints. Analytically, we provide worst-case guarantees on the run time and the quality of the solution produced. Numerically, we demonstrate that our framework speeds up Gurobi and the Splitting Conic Solver by two orders of magnitude, while maintaining a near-optimal solution.” “Our focus in this paper is on a class of packing problems for which data is either very costly or hard to obtain. In these situations, the number of data points available is much smaller than the number of variables. In a machine-learning setting, this regime is increasingly prevalent because it is often advantageous to consider larger and larger feature spaces, while not necessarily obtaining proportionally more data.”
Implications: This framework applies to optimization problems with monotone objective functions and packing constraints, a common formulation across many disciplines, including machine learning, inference, and resource allocation. Providing fast solvers for these problems is crucial. We exploit characteristics of the problem structure and leverage statistical properties of the problem constraints to speed up optimization solvers. We present worst-case guarantees on run time and solution quality.
“Black-Box Acceleration of Monotone Convex Program Solvers.” DOI: https://doi.org/10.1287/opre.2022.2352. Published 2022-10-19.
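One ingredient that makes packing structure so amenable to acceleration can be sketched generically (an assumption-level illustration, not the paper's algorithm): given a candidate point produced cheaply, e.g., from a sampled subproblem, uniformly scaling it down restores feasibility of every packing constraint A x <= b while x >= 0 is preserved, at only a proportional loss in a monotone objective.

```python
# Generic feasibility-restoration step for packing constraints A x <= b,
# x >= 0. Illustrative sketch; the actual acceleration framework in the
# paper is more involved.

def scale_to_feasible(x, A, b):
    """Uniformly shrink x so that every packing constraint holds."""
    # Largest constraint violation ratio (<= 1 means already feasible).
    worst = max(
        (sum(a_ij * x_j for a_ij, x_j in zip(row, x)) / b_i
         for row, b_i in zip(A, b)),
        default=0.0,
    )
    if worst <= 1.0:
        return list(x)
    return [x_j / worst for x_j in x]

# An infeasible candidate is shrunk by its worst violation factor (2x),
# after which both constraints hold with equality or slack.
print(scale_to_feasible([2.0, 2.0], [[1.0, 1.0], [2.0, 0.0]], [2.0, 2.0]))
```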
Murray Lei, Sheng Liu, Stefanus Jasin, A. Vakhutinsky
In “Joint Inventory and Pricing for a One-Warehouse Multistore Problem: Spiraling Phenomena, Near Optimal Policies, and the Value of Dynamic Pricing,” Lei, Liu, Jasin, and Vakhutinsky consider a joint inventory and pricing problem with one warehouse and multiple stores with lost sales. The retailer makes a one-time decision on the amount of inventory to place at the warehouse at the beginning of the selling season, followed by periodic joint replenishment and pricing decisions for each store throughout the season. The authors first analyze the performance of two popular and simple heuristic policies that directly implement the solution of a deterministic approximation of the original stochastic problem. They show that simple reoptimization of the deterministic approximation may worsen performance by causing a “spiraling up” movement in the expected lost sales quantity. The authors further propose two improved heuristic policies with provably near-optimal performance. In particular, the first policy achieves the best possible performance among all policies that rely on static pricing, and the second policy outperforms the first because of its use of a carefully designed dynamic pricing scheme.
“Joint Inventory and Pricing for a One-Warehouse Multistore Problem: Spiraling Phenomena, Near Optimal Policies, and the Value of Dynamic Pricing.” DOI: https://doi.org/10.1287/opre.2022.2389. Published 2022-10-17.
In “Identifying Merger Opportunities: The Case of Air Traffic Control,” N. Adler, O. Olesen, and N. Volta propose a model to identify an optimal horizontal merger configuration at the level of an industry or firm with multiple branches. Assuming that each firm operates within a catchment area or owns part of a network, the authors extend the model to consider only feasible mergers that cover a contiguous area, should network effects be a consideration. An application to the European air traffic control system suggests that four contiguous air navigation service providers should replace the current 29 providers and the nine functional airspace blocks proposed in the Single European Sky initiative. The technological developments in air traffic management in which regulators on both sides of the Atlantic have invested heavily, namely SESAR and NextGen, are unlikely to be adopted without a concomitant reduction in operating costs through economies of scale. The authors find that the politically oriented solution may save around one third of current costs, whereas an optimal solution would save closer to 46%.
“Identifying Merger Opportunities: The Case of Air Traffic Control.” DOI: https://doi.org/10.1287/opre.2022.2348. Published 2022-09-08.