An exact approach for the multi-constraint graph partitioning problem
Pub Date: 2020-10-01 | DOI: 10.1007/s13675-020-00126-9
Diego Recalde, Ramiro Torres, Polo Vaca
In this work, a multi-constraint graph partitioning problem is introduced. The input is an undirected graph with costs on the edges and multiple weights on the nodes. The problem calls for a partition of the node set into a fixed number of clusters, such that each cluster satisfies a collection of node weight constraints, and the total cost of the edges whose end nodes are in the same cluster is minimized. It arises as a sub-problem of an integrated vehicle and pollster problem from a real-world application. Two integer programming formulations are provided, and several families of valid inequalities associated with the respective polyhedra are proved. An exact algorithm based on Branch & Bound and cutting planes is proposed, and it is tested on real-world instances.
{"title":"An exact approach for the multi-constraint graph partitioning problem","authors":"Diego Recalde , Ramiro Torres , Polo Vaca","doi":"10.1007/s13675-020-00126-9","DOIUrl":"10.1007/s13675-020-00126-9","url":null,"abstract":"<div><p>In this work, a multi-constraint graph partitioning problem is introduced. The input is an undirected graph with costs on the edges and multiple weights on the nodes. The problem calls for a partition of the node set into a fixed number of clusters, such that each cluster satisfies a collection of node weight constraints, and the total cost of the edges whose end nodes are in the same cluster is minimized. It arises as a sub-problem of an integrated vehicle and pollster problem from a real-world application. Two integer programming formulations are provided, and several families of valid inequalities associated with the respective polyhedra are proved. An exact algorithm based on Branch & Bound and cutting planes is proposed, and it is tested on real-world instances.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"8 3","pages":"Pages 289-308"},"PeriodicalIF":2.4,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13675-020-00126-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49298068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computing Euclidean Steiner trees over segments
Pub Date: 2020-10-01 | DOI: 10.1007/s13675-020-00125-w
Ernst Althaus, Felix Rauterberg, Sarah Ziegler
In the classical Euclidean Steiner minimum tree (SMT) problem, we are given a set of points in the Euclidean plane and we are supposed to find the minimum-length tree that connects all these points, allowing the addition of arbitrary additional points. We investigate the variant of the problem where the input is a set of line segments. We allow these segments to have length 0, i.e., they may be points, and hence we generalize the classical problem. Furthermore, they are allowed to intersect such that we can model polygonal input. As in the GeoSteiner approach of Juhl et al. (Math Program Comput 10(2):487–532, 2018) for the classical case, we use a two-phase approach where we construct a superset of so-called full components of an SMT in the first phase. We prove a structural theorem for these full components, which allows us to use almost the same GeoSteiner algorithm as in the classical SMT problem. The second phase, the selection of a minimal-cost subset of constructed full components, is exactly the same as in the GeoSteiner approach. Finally, we report some experimental results that show that our approach is more efficient than the approximate solution that is obtained by sampling the segments.
{"title":"Computing Euclidean Steiner trees over segments","authors":"Ernst Althaus , Felix Rauterberg , Sarah Ziegler","doi":"10.1007/s13675-020-00125-w","DOIUrl":"10.1007/s13675-020-00125-w","url":null,"abstract":"<div><p>In the classical Euclidean Steiner minimum tree (SMT) problem, we are given a set of points in the Euclidean plane and we are supposed to find the minimum length tree that connects all these points, allowing the addition of arbitrary additional points. We investigate the variant of the problem where the input is a set of line segments. We allow these segments to have length 0, i.e., they are points and hence we generalize the classical problem. Furthermore, they are allowed to intersect such that we can model polygonal input. As in the GeoSteiner approach of Juhl et al. (Math Program Comput 10(2):487–532, 2018) for the classical case, we use a two-phase approach where we construct a superset of so-called full components of an SMT in the first phase. We prove a structural theorem for these full components, which allows us to use almost the same GeoSteiner algorithm as in the classical SMT problem. The second phase, the selection of a minimal cost subset of constructed full components, is exactly the same as in GeoSteiner approach. Finally, we report some experimental results that show that our approach is more efficient than the approximate solution that is obtained by sampling the segments.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"8 3","pages":"Pages 309-325"},"PeriodicalIF":2.4,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13675-020-00125-w","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48047296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A hierarchical approach for solving an integrated packing and sequence-optimization problem in production of glued laminated timber
Pub Date: 2020-10-01 | DOI: 10.1007/s13675-020-00127-8
Heiner Ackermann, Erik Diessel
Integrated packing and sequence-optimization problems appear in many industrial applications. As an example of this type of problem, we consider the production of glued laminated timber (glulam) in sawmills: Wood beams must be packed into a sequence of pressing steps subject to packing constraints of the press and subject to sequencing constraints. In this paper, we present a three-stage approach for solving this hard optimization problem: Firstly, we identify alternative packings for small parts of an instance. Secondly, we choose an optimal subset of these packings by solving a set cover problem. Finally, we apply a sequencing algorithm in order to find an optimal order of the selected subsequences. For every level of the hierarchy, we present tailored algorithms, analyze their performance and illustrate the efficiency of the overall approach by a comprehensive numerical study.
{"title":"A hierarchical approach for solving an integrated packing and sequence-optimization problem in production of glued laminated timber","authors":"Heiner Ackermann , Erik Diessel","doi":"10.1007/s13675-020-00127-8","DOIUrl":"10.1007/s13675-020-00127-8","url":null,"abstract":"<div><p>Integrated packing and sequence-optimization problems appear in many industrial applications. As an example of this type of problem, we consider the production of glued laminated timber (glulam) in sawmills: Wood beams must be packed into a sequence of pressing steps subject to packing constraints of the press and subject to sequencing constraints. In this paper, we present a three-stage approach for solving this hard optimization problem: Firstly, we identify alternative packings for small parts of an instance. Secondly, we choose an optimal subset of these packings by solving a set cover problem. Finally, we apply a sequencing algorithm in order to find an optimal order of the selected subsequences. For every level of the hierarchy, we present tailored algorithms, analyze their performance and illustrate the efficiency of the overall approach by a comprehensive numerical study.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"8 3","pages":"Pages 263-288"},"PeriodicalIF":2.4,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13675-020-00127-8","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45235738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Two-row and two-column mixed-integer presolve using hashing-based pairing methods
Pub Date: 2020-10-01 | DOI: 10.1007/s13675-020-00129-6
Patrick Gemander, Wei-Kun Chen, Dieter Weninger, Leona Gottwald, Ambros Gleixner, Alexander Martin
In state-of-the-art mixed-integer programming solvers, a large array of reduction techniques are applied to simplify the problem and strengthen the model formulation before starting the actual branch-and-cut phase. Despite their mathematical simplicity, these methods can have significant impact on the solvability of a given problem. However, a crucial property for employing presolve techniques successfully is their speed. Hence, most methods inspect constraints or variables individually in order to guarantee linear complexity. In this paper, we present new hashing-based pairing mechanisms that help to overcome known performance limitations of more powerful presolve techniques that consider pairs of rows or columns. Additionally, we develop an enhancement to one of these presolve techniques by exploiting the presence of set-packing structures on binary variables in order to strengthen the resulting reductions without increasing runtime. We analyze the impact of these methods on the MIPLIB 2017 benchmark set based on an implementation in the MIP solver SCIP.
{"title":"Two-row and two-column mixed-integer presolve using hashing-based pairing methods","authors":"Patrick Gemander , Wei-Kun Chen , Dieter Weninger , Leona Gottwald , Ambros Gleixner , Alexander Martin","doi":"10.1007/s13675-020-00129-6","DOIUrl":"10.1007/s13675-020-00129-6","url":null,"abstract":"<div><p>In state-of-the-art mixed-integer programming solvers, a large array of reduction techniques are applied to simplify the problem and strengthen the model formulation before starting the actual branch-and-cut phase. Despite their mathematical simplicity, these methods can have significant impact on the solvability of a given problem. However, a crucial property for employing presolve techniques successfully is their speed. Hence, most methods inspect constraints or variables individually in order to guarantee linear complexity. In this paper, we present new hashing-based pairing mechanisms that help to overcome known performance limitations of more powerful presolve techniques that consider pairs of rows or columns. Additionally, we develop an enhancement to one of these presolve techniques by exploiting the presence of set-packing structures on binary variables in order to strengthen the resulting reductions without increasing runtime. We analyze the impact of these methods on the MIPLIB 2017 benchmark set based on an implementation in the MIP solver SCIP.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"8 3","pages":"Pages 205-240"},"PeriodicalIF":2.4,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13675-020-00129-6","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129533614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An exploratory computational analysis of dual degeneracy in mixed-integer programming
Pub Date: 2020-10-01 | DOI: 10.1007/s13675-020-00130-z
Gerald Gamrath, Timo Berthold, Domenico Salvagnin
Dual degeneracy, i.e., the presence of multiple optimal bases to a linear programming (LP) problem, heavily affects the solution process of mixed integer programming (MIP) solvers. Different optimal bases lead to different cuts being generated, different branching decisions being taken and different solutions being found by primal heuristics. Nevertheless, only a few methods have been published that either avoid or exploit dual degeneracy. The aim of the present paper is to conduct a thorough computational study on the presence of dual degeneracy for the instances of well-known public MIP instance collections. How many instances are affected by dual degeneracy? How degenerate are the affected models? How does branching affect degeneracy: Does it increase or decrease by fixing variables? Can we identify different types of degenerate MIPs? As a tool to answer these questions, we introduce a new measure for dual degeneracy: the variable–constraint ratio of the optimal face. It provides an estimate for the likelihood that a basic variable can be pivoted out of the basis. Furthermore, we study how the so-called cloud intervals—the projections of the optimal face of the LP relaxations onto the individual variables—evolve during tree search and the implications for reducing the set of branching candidates.
{"title":"An exploratory computational analysis of dual degeneracy in mixed-integer programming","authors":"Gerald Gamrath , Timo Berthold , Domenico Salvagnin","doi":"10.1007/s13675-020-00130-z","DOIUrl":"10.1007/s13675-020-00130-z","url":null,"abstract":"<div><p>Dual degeneracy, i.e., the presence of multiple optimal bases to a linear programming (LP) problem, heavily affects the solution process of mixed integer programming (MIP) solvers. Different optimal bases lead to different cuts being generated, different branching decisions being taken and different solutions being found by primal heuristics. Nevertheless, only a few methods have been published that either avoid or exploit dual degeneracy. The aim of the present paper is to conduct a thorough computational study on the presence of dual degeneracy for the instances of well-known public MIP instance collections. How many instances are affected by dual degeneracy? How degenerate are the affected models? How does branching affect degeneracy: Does it increase or decrease by fixing variables? Can we identify different types of degenerate MIPs? As a tool to answer these questions, we introduce a new measure for dual degeneracy: the variable–constraint ratio of the optimal face. It provides an estimate for the likelihood that a basic variable can be pivoted out of the basis. Furthermore, we study how the so-called cloud intervals—the projections of the optimal face of the LP relaxations onto the individual variables—evolve during tree search and the implications for reducing the set of branching candidates.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"8 3","pages":"Pages 241-261"},"PeriodicalIF":2.4,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13675-020-00130-z","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129878471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The complete vertex p-center problem
Pub Date: 2020-10-01 | DOI: 10.1007/s13675-020-00131-y
F. Antonio Medrano
The vertex p-center problem consists of locating p facilities among a set of M potential sites such that the maximum distance from any demand to its closest located facility is minimized. The complete vertex p-center problem solves the p-center problem for all p from 1 to the total number of sites, resulting in a multi-objective trade-off curve between the number of facilities and the service distance required to achieve full coverage. This trade-off provides a reference to planners and decision makers, enabling them to easily visualize the consequences of choosing different coverage design criteria for the given spatial configuration of the problem. We present two fast algorithms for solving the complete p-center problem: one using the classical formulation but trimming variables while still maintaining optimality and the other converting the problem to a location set covering problem and solving for all distances in the distance matrix. We also discuss scenarios where it makes sense to solve the problem via brute-force enumeration. All methods result in significant speedups, with the set covering method reducing computation times by many orders of magnitude.
{"title":"The complete vertex p-center problem","authors":"F.Antonio Medrano","doi":"10.1007/s13675-020-00131-y","DOIUrl":"10.1007/s13675-020-00131-y","url":null,"abstract":"<div><p>The vertex <em>p</em>-center problem consists of locating <em>p</em> facilities among a set of <em>M</em> potential sites such that the maximum distance from any demand to its closest located facility is minimized. The complete vertex <em>p</em>-center problem solves the <em>p</em>-center problem for all <em>p</em> from 1 to the total number of sites, resulting in a multi-objective trade-off curve between the number of facilities and the service distance required to achieve full coverage. This trade-off provides a reference to planners and decision makers, enabling them to easily visualize the consequences of choosing different coverage design criteria for the given spatial configuration of the problem. We present two fast algorithms for solving the complete <em>p</em>-center problem: one using the classical formulation but trimming variables while still maintaining optimality and the other converting the problem to a location set covering problem and solving for all distances in the distance matrix. We also discuss scenarios where it makes sense to solve the problem via brute-force enumeration. All methods result in significant speedups, with the set covering method reducing computation times by many orders of magnitude.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"8 3","pages":"Pages 327-343"},"PeriodicalIF":2.4,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13675-020-00131-y","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132685968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A distributionally robust optimization approach for two-stage facility location problems
Pub Date: 2020-06-01 | DOI: 10.1007/s13675-020-00121-0
Arash Gourtani, Tri-Dung Nguyen, Huifu Xu
In this paper, we consider a facility location problem in which customer demand is subject to considerable uncertainty and complete information on the distribution of the uncertainty is unavailable. We formulate the optimal decision problem as a two-stage stochastic mixed integer programming problem: an optimal selection of facility locations in the first stage and an optimal decision on the operation of each facility in the second stage. A distributionally robust optimization framework is proposed to hedge risks arising from incomplete information on the distribution of the uncertainty. Specifically, by exploiting the moment information, we construct a set of distributions which contains the true distribution and where the optimal decision is based on the worst distribution from the set. We then develop two numerical schemes for solving the distributionally robust facility location problem: a semi-infinite programming approach which exploits moments of certain reference random variables and a semi-definite programming approach which utilizes the mean and correlation of the underlying random variables describing the demand uncertainty. In the semi-infinite programming approach, we apply the well-known linear decision rule approach to the robust dual problem and then approximate the semi-infinite constraints through the conditional value-at-risk measure. We provide numerical tests to demonstrate the computation and properties of the robust solutions.
{"title":"A distributionally robust optimization approach for two-stage facility location problems","authors":"Arash Gourtani , Tri-Dung Nguyen , Huifu Xu","doi":"10.1007/s13675-020-00121-0","DOIUrl":"10.1007/s13675-020-00121-0","url":null,"abstract":"<div><p>In this paper, we consider a facility location problem where customer demand constitutes considerable uncertainty, and where complete information on the distribution of the uncertainty is unavailable. We formulate the optimal decision problem as a two-stage stochastic mixed integer programming problem: an optimal selection of facility locations in the first stage and an optimal decision on the operation of each facility in the second stage. A distributionally robust optimization framework is proposed to hedge risks arising from incomplete information on the distribution of the uncertainty. Specifically, by exploiting the moment information, we construct a set of distributions which contains the true distribution and where the optimal decision is based on the worst distribution from the set. We then develop two numerical schemes for solving the distributionally robust facility location problem: a semi-infinite programming approach which exploits moments of certain reference random variables and a semi-definite programming approach which utilizes the mean and correlation of the underlying random variables describing the demand uncertainty. In the semi-infinite programming approach, we apply the well-known linear decision rule approach to the robust dual problem and then approximate the semi-infinite constraints through the conditional value at risk measure. We provide numerical tests to demonstrate the computation and properties of the robust solutions.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"8 2","pages":"Pages 141-172"},"PeriodicalIF":2.4,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13675-020-00121-0","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41454378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A robust p-Center problem under pressure to locate shelters in wildfire context
Pub Date: 2020-06-01 | DOI: 10.1007/s13675-020-00124-x
Marc Demange, Virginie Gabrel, Marcel A. Haddad, Cécile Murat
The location of shelters in different areas threatened by wildfires is one of the possible ways to reduce fatalities in a context of an increasing number of catastrophic and severe wildfires. These shelters will enable the population in the area to be protected in case of fire outbreaks. The subject of our study is to determine the best place for shelters in a given territory. The territory, divided into zones, is represented by a graph in which each zone corresponds to a node and two nodes are linked by an edge if it is feasible to go directly from one zone to the other. The problem is to locate p shelters on nodes so that the maximum distance of any node to its nearest shelter is minimized. When the uncertainty of fire outbreaks is not considered, this problem corresponds to the well-known p-Center problem on a graph. In this article, the uncertainty of fire outbreaks is introduced taking into account a finite set of fire scenarios. A scenario defines a fire outbreak on a single zone with the main consequence of modifying evacuation paths. Several evacuation paths may become impracticable and the ensuing evacuation decisions made under pressure may no longer be rational. In this context, the new issue under consideration is to place p shelters on a graph so that the maximum evacuation distance of any node to its nearest shelter in any scenario is minimized. We refer to this problem as the Robust p-Center problem under Pressure. After proving the NP-hardness of this problem on subgraphs of grids, we propose a first formulation based on 0-1 Linear Programming. For real size instances, the sizes of the 0-1 Linear Programs are huge and we propose a decomposition scheme to solve them exactly. Experimental results outline the efficiency of our approach.
Work supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant agreement No 691161.
{"title":"A robust p-Center problem under pressure to locate shelters in wildfire context","authors":"Marc Demange , Virginie Gabrel , MarcelA. Haddad , Cécile Murat","doi":"10.1007/s13675-020-00124-x","DOIUrl":"10.1007/s13675-020-00124-x","url":null,"abstract":"<div><p>The location of shelters in different areas threatened by wildfires is one of the possible ways to reduce fatalities in a context of an increasing number of catastrophic and severe wildfires. These shelters will enable the population in the area to be protected in case of fire outbreaks. The subject of our study is to determine the best place for shelters in a given territory. The territory, divided into zones, is represented by a graph in which each zone corresponds to a node and two nodes are linked by an edge if it is feasible to go directly from one zone to the other. The problem is to locate <em>p</em> shelters on nodes so that the maximum distance of any node to its nearest shelter is minimized. When the uncertainty of fire outbreaks is not considered, this problem corresponds to the well-known <em>p</em>-Center problem on a graph. In this article, the uncertainty of fire outbreaks is introduced taking into account a finite set of fire scenarios. A scenario defines a fire outbreak on a single zone with the main consequence of modifying evacuation paths. Several evacuation paths may become impracticable and the ensuing evacuation decisions made under pressure may no longer be rational. In this context, the new issue under consideration is to place <em>p</em> shelters on a graph so that the maximum evacuation distance of any node to its nearest shelter in any scenario is minimized. We refer to this problem as the Robust <em>p</em>-Center problem under Pressure. After proving the NP-hardness of this problem on subgraphs of grids, we propose a first formulation based on 0-1 Linear Programming. For real size instances, the sizes of the 0-1 Linear Programs are huge and we propose a decomposition scheme to solve them exactly. Experimental results outline the efficiency of our approach.</p><p>Work supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant agreement No 691161.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"8 2","pages":"Pages 103-139"},"PeriodicalIF":2.4,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13675-020-00124-x","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120815763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On conditional cuts for stochastic dual dynamic programming
Pub Date: 2020-06-01 | DOI: 10.1007/s13675-020-00123-y
W. van Ackooij, X. Warin
Multistage stochastic programs arise in many applications from engineering whenever a set of inventories or stocks has to be valued. Such is the case in seasonal storage valuation of a set of cascaded reservoir chains in hydro management. A popular method is stochastic dual dynamic programming (SDDP), especially when the dimensionality of the problem is large and dynamic programming is no longer an option. The usual assumption of SDDP is that uncertainty is stage-wise independent, which is highly restrictive from a practical viewpoint. When possible, the usual remedy is to increase the state-space to account for some degree of dependency. In applications, this may not be possible or it may increase the state-space by too much. In this paper, we present an alternative based on keeping a functional dependency in the SDDP cuts, related to the conditional expectations in the dynamic programming equations. Our method is based on popular methodology in mathematical finance, where it has progressively replaced scenario trees due to superior numerical performance. We demonstrate the interest of combining this way of handling dependency in the uncertainty with SDDP on a set of numerical examples. Our method is readily available in the open-source software package StOpt.
{"title":"On conditional cuts for stochastic dual dynamic programming","authors":"W. van Ackooij , X. Warin","doi":"10.1007/s13675-020-00123-y","DOIUrl":"10.1007/s13675-020-00123-y","url":null,"abstract":"<div><p>Multistage stochastic programs arise in many applications from engineering whenever a set of inventories or stocks has to be valued. Such is the case in seasonal storage valuation of a set of cascaded reservoir chains in hydro management. A popular method is stochastic dual dynamic programming (SDDP), especially when the dimensionality of the problem is large and dynamic programming is no longer an option. The usual assumption of SDDP is that uncertainty is stage-wise independent, which is highly restrictive from a practical viewpoint. When possible, the usual remedy is to increase the state-space to account for some degree of dependency. In applications, this may not be possible or it may increase the state-space by too much. In this paper, we present an alternative based on keeping a functional dependency in the SDDP—cuts related to the conditional expectations in the dynamic programming equations. Our method is based on popular methodology in mathematical finance, where it has progressively replaced scenario trees due to superior numerical performance. We demonstrate the interest of combining this way of handling dependency in uncertainty and SDDP on a set of numerical examples. Our method is readily available in the open-source software package StOpt.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"8 2","pages":"Pages 173-199"},"PeriodicalIF":2.4,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13675-020-00123-y","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47060596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}