Pub Date: 2023-01-01 | Epub Date: 2023-07-22 | DOI: 10.1016/j.ejco.2023.100070
Annabella Astorino, Matteo Avolio, Antonio Fuduli
Multiple Instance Learning (MIL) is a kind of weakly supervised learning in which each sample is represented by a bag of instances. The main characteristic of such problems lies in the training phase: class labels are provided only for the bags, whereas the instance labels are unknown.
We focus on binary MIL problems characterized by two types of instances (positive and negative): under the standard MIL assumption, a bag is considered positive if at least one of its instances is positive, and negative otherwise. Our idea is to generate a maximum-margin polyhedral separation surface such that, for each positive bag, at least one of its instances is inside the polyhedron, while all the instances of the negative bags are outside. The resulting optimization problem is a nonlinear, nonconvex, and nonsmooth mixed-integer program, which we solve heuristically by a Block Coordinate Descent type method based on repeatedly applying the DC (Difference of Convex) Algorithm.
Numerical results are presented on a set of benchmark datasets.
Maximum-margin polyhedral separation for binary Multiple Instance Learning. EURO Journal on Computational Optimization, vol. 11, Article 100070.
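The standard MIL assumption with a polyhedral classifier can be sketched as follows. This is an illustrative toy only, not the authors' formulation: the polyhedron {x : Ax ≤ b}, the function names, and the example bags are all hypothetical.

```python
import numpy as np

def inside_polyhedron(x, A, b):
    """True if instance x lies inside the polyhedron {x : A x <= b}."""
    return bool(np.all(A @ x <= b))

def classify_bag(bag, A, b):
    """Standard MIL assumption: a bag is positive iff at least one
    of its instances lies inside the polyhedron."""
    return any(inside_polyhedron(x, A, b) for x in bag)

# Toy polyhedron: the unit box [0, 1]^2, written as A x <= b.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])

positive_bag = [np.array([2.0, 2.0]), np.array([0.5, 0.5])]   # one instance inside
negative_bag = [np.array([2.0, 2.0]), np.array([-1.0, 3.0])]  # all instances outside
```

The paper's contribution is to *learn* such a polyhedron with maximum margin; the sketch above only shows how a fixed polyhedron labels bags under the standard assumption.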
Pub Date: 2023-01-01 | Epub Date: 2023-03-23 | DOI: 10.1016/j.ejco.2023.100065
Immanuel Bomze (Editor-in-Chief)
The Marguerite Frank Award for the best EJCO paper 2022. EURO Journal on Computational Optimization, vol. 11, Article 100065.
Pub Date: 2023-01-01 | Epub Date: 2023-01-20 | DOI: 10.1016/j.ejco.2023.100057
Kabiru Ahmed, Mohammed Yusuf Waziri, Abubakar Sani Halilu, Salisu Murtala
The Dai-Kou method (Dai and Kou (2013), [12]) is efficient for solving unconstrained optimization problems; however, modified variants of it for constrained nonlinear monotone equations are quite rare. To address this, two adaptive versions of the scheme with new and efficient parameter choices are presented in this paper. The schemes are obtained by analyzing eigenvalues of a modified Dai-Kou iteration matrix and constructing two new directions, which are used in the schemes' algorithms. The new methods are derivative-free, an attribute required for handling problems with very large dimensions. Both methods also satisfy the condition required for analyzing global convergence in the literature. Under mild conditions, the schemes are shown to be globally convergent, and their effectiveness is demonstrated through experiments comparing them with four effective schemes for solving constrained nonlinear monotone equations. Furthermore, the methods are applied to recover images contaminated by impulse noise in compressed sensing.
On two symmetric Dai-Kou type schemes for constrained monotone equations with image recovery application. EURO Journal on Computational Optimization, vol. 11, Article 100057.
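The general shape of a derivative-free projection method for constrained monotone equations, the problem class the abstract above targets, can be sketched as follows. This is a generic Solodov-Svaiter-style hyperplane-projection sketch with the simplest direction d = -F(x), not the paper's Dai-Kou type directions; all names and parameter values are illustrative.

```python
import numpy as np

def projection_method(F, x0, project, sigma=0.5, beta=0.5, tol=1e-8, max_iter=200):
    """Derivative-free hyperplane-projection method for monotone F(x) = 0
    over a convex set C (given via its projection operator `project`).
    Only evaluations of F are used -- no derivatives."""
    x = x0.astype(float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        d = -Fx                       # simplest derivative-free direction
        alpha = 1.0                   # backtracking line search using F only
        while -F(x + alpha * d) @ d < sigma * alpha * (d @ d):
            alpha *= beta
        z = x + alpha * d
        Fz = F(z)
        if np.linalg.norm(Fz) <= tol:
            return project(z)
        # Project x onto the hyperplane through z separating x from the
        # solution set, then back onto the feasible set C.
        x = project(x - (Fz @ (x - z)) / (Fz @ Fz) * Fz)
    return x

# Toy monotone system: F(x) = x + sin(x), unique root x = 0, feasible set x >= 0.
F = lambda x: x + np.sin(x)
x = projection_method(F, np.array([1.0, 2.0]), project=lambda v: np.maximum(v, 0.0))
```

The paper's contribution lies in better search directions (via eigenvalue analysis of a modified Dai-Kou iteration matrix); the projection-and-line-search skeleton above is the common scaffolding such methods share.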
Pub Date: 2023-01-01 | Epub Date: 2023-05-19 | DOI: 10.1016/j.ejco.2023.100067
Majid H.M. Chauhdry
Stochastic optimization algorithms such as the genetic algorithm (GA), particle swarm optimization (PSO), estimation of distribution algorithms (EDAs), and the nested partitions algorithm (NPA) are used in many problems, including nonlinear model predictive control and task assignment. Some of these algorithms, however, lack a global convergence guarantee (such as PSO) or require strict convergence assumptions (such as NPA). To enhance the convergence of these methods, a common underlying framework representing the seemingly unrelated methods is established: the updating of the distribution of the population through iterative sampling. Methods that fit into this framework are called population distribution-based methods. Global convergence conditions for this framework are developed by building a shadow NPA structure for the population evolution process. The result is generic and can analyze the convergence of many methods, including GA, PSO, EDAs, and NPA. It can be further exploited to improve convergence by modifying these methods. The existing and modified variants of these methods are then applied to case studies to show the improvement.
A framework using nested partitions algorithm for convergence analysis of population distribution-based methods. EURO Journal on Computational Optimization, vol. 11, Article 100067.
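The "population distribution" update that the framework above abstracts over is easiest to see in an EDA, where the distribution is explicit. A minimal hedged sketch (a univariate-Gaussian EDA on a toy objective; all names and parameter values are illustrative, not from the paper):

```python
import numpy as np

def gaussian_eda(f, dim, pop_size=100, elite=20, iters=60, seed=0):
    """Minimal estimation-of-distribution algorithm: repeatedly sample a
    population from the current distribution, select the best samples, and
    refit the distribution to them -- iterative sampling that updates the
    population's distribution, as in the framework described above."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim) * 5.0
    for _ in range(iters):
        pop = rng.normal(mu, sigma, size=(pop_size, dim))    # sampling step
        best = pop[np.argsort([f(x) for x in pop])[:elite]]  # selection step
        mu = best.mean(axis=0)                               # distribution update
        sigma = best.std(axis=0) + 1e-12
    return mu

# Toy objective: shifted sphere function, minimum at (3, -2).
f = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 2.0) ** 2
sol = gaussian_eda(f, dim=2)
```

GA, PSO, and NPA fit the same loop with implicit rather than explicit distributions, which is what makes a single convergence analysis over the distribution update possible.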
Pub Date: 2023-01-01 | Epub Date: 2023-08-04 | DOI: 10.1016/j.ejco.2023.100072
Tibor Illés, Petra Renáta Rigó, Roland Török
We introduce a new predictor-corrector interior-point algorithm for solving P_*(κ)-linear complementarity problems which works in a wide neighbourhood of the central path. We use the technique of algebraic equivalent transformation of the centering equations of the central path system. In this technique, we apply the function φ(t) = √t in order to obtain the new search directions. We define the new wide neighbourhood D_φ. In this way, we obtain the first interior-point method in which not only the central path system is transformed, but the definition of the neighbourhood is also modified to take the algebraic equivalent transformation technique into consideration. This gives a new direction in the research of interior-point algorithms. We prove that the interior-point method has O((1 + κ) n log(((x^0)^T s^0)/ε)) iteration complexity. Furthermore, we show the efficiency of the proposed predictor-corrector algorithm by providing numerical results. To the best of our knowledge, this is the first predictor-corrector interior-point algorithm which works in the D_φ neighbourhood using φ(t) = √t.
Large-step predictor-corrector interior point method for sufficient linear complementarity problems based on the algebraic equivalent transformation. EURO Journal on Computational Optimization, vol. 11, Article 100072.
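The algebraic equivalent transformation (AET) technique with φ(t) = √t admits a compact illustration. The derivation below is a standard one from the AET literature (Darvay's technique), shown here in illustrative notation rather than taken verbatim from the paper:

```latex
% Componentwise centering condition of the central path, barrier parameter \mu:
\[
  x s = \mu e
  \;\;\Longleftrightarrow\;\;
  \varphi\!\Big(\frac{xs}{\mu}\Big) = \varphi(e),
  \qquad \varphi(t) = \sqrt{t}.
\]
% Linearizing the transformed equation with Newton's method, using
% \varphi'(t) = 1/(2\sqrt{t}), gives the modified centering equation
\[
  s\,\Delta x + x\,\Delta s \;=\; 2\big(\sqrt{\mu\, x s} - x s\big),
\]
% in place of the classical  s\,\Delta x + x\,\Delta s = \mu e - x s
% obtained by linearizing  x s = \mu e  directly.
```

The novelty claimed in the abstract is that the wide neighbourhood D_φ is itself defined in terms of this transformation, not only the search directions.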
Pub Date: 2023-01-01 | Epub Date: 2023-03-22 | DOI: 10.1016/j.ejco.2023.100062
Mauro Passacantando, Fabio Raciti, Anna Nagurney
In this paper, we consider policy interventions for international migrant flows and quantify their ramifications. In particular, we further develop a recent equilibrium model of international human migration in which some of the destination countries form coalitions to establish a common upper bound on the migratory flows that they agree to accept jointly. We also consider a scenario where some countries can leave or join an initial coalition and investigate the problem of finding the coalitions that maximize the overall social welfare. Moreover, we compare the social welfare at equilibrium with the one that a supranational organization might suggest in an ideal scenario. This research adds to the literature on mathematical models for pressing problems of human migration, with insights for policy- and decision-makers.
International migrant flows: Coalition formation among countries and social welfare. EURO Journal on Computational Optimization, vol. 11, Article 100062.
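The search problem in the abstract, finding the coalition of destination countries that maximizes overall social welfare, can be sketched by brute force over coalitions. Everything below is hypothetical: the welfare function is a stub standing in for an equilibrium-model evaluation, and the country names are placeholders.

```python
from itertools import combinations

def best_coalition(countries, welfare):
    """Enumerate every subset of destination countries (a candidate coalition
    jointly capping migratory inflows) and return the one maximizing the
    user-supplied social-welfare evaluation."""
    best, best_val = frozenset(), float("-inf")
    for r in range(len(countries) + 1):
        for coalition in combinations(countries, r):
            val = welfare(frozenset(coalition))
            if val > best_val:
                best, best_val = frozenset(coalition), val
    return best, best_val

# Hypothetical toy welfare: coalitions gain from size but pay a quadratic
# coordination cost, so a small coalition wins here.
toy_welfare = lambda c: 3 * len(c) - len(c) ** 2 + 2
countries = ["A", "B", "C", "D"]
best, val = best_coalition(countries, toy_welfare)
```

In the paper each welfare evaluation requires solving an equilibrium problem, so enumeration is only viable for a handful of countries; the sketch shows the outer search, not the equilibrium model.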
Pub Date: 2023-01-01 | Epub Date: 2023-03-30 | DOI: 10.1016/j.ejco.2023.100063
Maximilian Löffler, Enrico Bartolini, Michael Schneider
Location-routing problems (LRPs) jointly optimize the location of depots and the routing of vehicles. The most studied LRP variant, the capacitated LRP (CLRP), has been addressed by a large number of metaheuristic approaches. These methods often decompose the problem into a location stage, which determines a promising depot configuration, and a routing stage, in which a vehicle-routing problem is solved to assess the quality of the previously determined depot configuration. Unfortunately, the CLRP literature does not shed much light on the important question of which algorithmic features have the biggest influence on the solution quality and runtime of such heuristics. The purpose of this paper is to propose a conceptually simple (yet reasonably effective) heuristic for the CLRP and to provide some insights into the design of successful metaheuristics for this problem. Our algorithm is a hybrid combining (i) a GRASP phase that uses a variable neighborhood descent for local improvement in the location stage, and (ii) a variable neighborhood search in the routing stage. We analyze the impact of the algorithmic components on solution quality and runtime. In addition, we find that the suboptimal routing solutions used to assess the quality of the investigated depot configurations tend to lead to depot configurations with too many open depots.
A conceptually simple algorithm for the capacitated location-routing problem. EURO Journal on Computational Optimization, vol. 11, Article 100063.
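The GRASP construction used in the location stage above follows a standard pattern: build a restricted candidate list (RCL) of the cheapest remaining options and pick one at random. A hedged sketch of that pattern (the cost model, parameter values, and depot data are hypothetical, not the paper's):

```python
import random

def grasp_construct(candidates, cost, alpha=0.3, k=3, seed=1):
    """GRASP-style greedy randomized construction: until k depots are open,
    form the RCL of depots whose cost is within an alpha-fraction of the
    cheapest remaining one, then open a random RCL member."""
    rng = random.Random(seed)
    remaining, opened = list(candidates), []
    while len(opened) < k and remaining:
        by_cost = sorted(remaining, key=cost)
        cmin, cmax = cost(by_cost[0]), cost(by_cost[-1])
        threshold = cmin + alpha * (cmax - cmin)
        rcl = [d for d in by_cost if cost(d) <= threshold]
        choice = rng.choice(rcl)   # alpha = 0: pure greedy; alpha = 1: pure random
        opened.append(choice)
        remaining.remove(choice)
    return opened

# Hypothetical per-depot cost estimates (opening plus rough assignment cost).
depot_cost = {1: 10.0, 2: 12.0, 3: 30.0, 4: 11.0, 5: 50.0}
opened = grasp_construct(depot_cost, cost=depot_cost.get, k=3)
```

In the paper's hybrid, each configuration built this way is then improved by variable neighborhood descent before the routing stage evaluates it.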
Pub Date: 2023-01-01 | Epub Date: 2023-03-15 | DOI: 10.1016/j.ejco.2023.100061
Jake Weiner, Andreas T. Ernst, Xiaodong Li, Yuan Sun
Solving large-scale Mixed Integer Linear Programs (MIPs) can be difficult without advanced algorithms such as decomposition-based techniques. Even if a decomposition technique is appropriate, there are still many possible decompositions for any large MIP, and it may not be obvious which will be the most effective. The quality of a decomposition depends on both the tightness of the dual bound, in our case generated via Lagrangian Relaxation, and the computational time required to produce that bound. Both of these factors are difficult to predict, motivating the use of a Machine Learning (ML) function to predict decomposition quality based on a score that combines both bound quality and computational time. This paper presents a comprehensive analysis of the predictive capabilities of an ML function for predicting the quality of MIP decompositions created via constraint relaxation. In this analysis, the role of instance similarity and ML prediction quality is explored, as well as the benchmarking of an ML ranking function against existing heuristic functions. For this analysis, a new dataset consisting of over 40,000 unique decompositions, sampled from across 24 instances of the MIPLIB 2017 library, has been established. These decompositions have been created by both a greedy relaxation algorithm and a population-based multi-objective algorithm, which has previously been shown to produce high-quality decompositions. In this paper, we demonstrate that an ML ranking function is able to provide state-of-the-art predictions when benchmarked against existing heuristic ranking functions. Additionally, we demonstrate that by considering only a small set of features related to the relaxed constraints in each decomposition, an ML ranking function remains competitive with heuristic techniques. This finding is promising for future constraint relaxation approaches, as these features can be used to guide decomposition creation. Finally, we highlight where an ML ranking function would be beneficial in a decomposition creation framework.
Ranking constraint relaxations for mixed integer programs using a machine learning approach. EURO Journal on Computational Optimization, vol. 11, Article 100061.
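A score that combines bound quality and computational time, as described in the abstract above, can be sketched as a weighted sum of the two normalized quantities. This is an illustrative target construction only; the paper's actual score, features, and learned model are not reproduced here, and all names and numbers are hypothetical.

```python
def decomposition_score(bound_gap, runtime, max_gap, max_time, w=0.5):
    """Combined quality score for one decomposition: weighted sum of the
    normalized dual-bound gap and the normalized bound-computation time
    (both scaled to [0, 1]; smaller is better)."""
    return w * (bound_gap / max_gap) + (1 - w) * (runtime / max_time)

def rank_decompositions(decomps):
    """Rank candidate decompositions (name, gap, time) best-first by score."""
    max_gap = max(d[1] for d in decomps)
    max_time = max(d[2] for d in decomps)
    return sorted(
        decomps,
        key=lambda d: decomposition_score(d[1], d[2], max_gap, max_time),
    )

# Hypothetical candidates: (name, dual-bound gap in %, bound time in seconds).
cands = [("A", 2.0, 30.0), ("B", 8.0, 5.0), ("C", 1.0, 300.0)]
ranking = rank_decompositions(cands)
```

The ML ranking function in the paper predicts such a score from decomposition features before any bound is computed; the sketch only shows how the training target trades off tightness against time.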
Pub Date: 2023-01-01 | Epub Date: 2023-02-02 | DOI: 10.1016/j.ejco.2023.100059
Goran Vojvodic, Luis J. Novoa, Ahmad I. Jarrah
The main purpose of solving a classical generation capacity expansion problem is to ensure that, in the medium- to long-term time frame, the electric utility has enough capacity available to reliably satisfy the demand for electricity from its customers. However, the ability to operate the newly built power plants also has to be considered. Operation of these plants could be curtailed by fuel availability, environmental constraints, or intermittency of renewable generation. This suggests that when generation capacity expansion problems are solved, along with the yearly timescale necessary to capture the long-term effect of the decisions, it is necessary to include a timescale granular enough to represent operations of generators with a credible fidelity. Additionally, given that the time horizon for a capacity expansion model is long, stochastic modeling of key parameters may generate more insightful, realistic, and judicious results. In the current model, we allow the demand for electricity and natural gas to behave stochastically. Together with the dual timescales, the randomness results in a large problem that is challenging to solve. In this paper, we experiment with synergistically combining elements of several methods that are, for the most part, based on Benders decomposition and construct an algorithm which allows us to find near-optimal solutions to the problem with reasonable run times.
Experimentation with Benders decomposition for solving the two-timescale stochastic generation capacity expansion problem. EURO Journal on Computational Optimization, vol. 11, Article 100059.
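The Benders loop underlying the methods above can be shown on a deliberately tiny example. This is a toy, not the paper's two-timescale model: the subproblem's dual extreme points are enumerated by hand instead of coming from an LP solver, and the master is solved by brute force.

```python
def benders_toy():
    """Benders decomposition on:  min_y  y + q(y),  y in {0,...,10},
    where q(y) = max_{lam in {0, 3}} lam * (8 - y) is the subproblem value
    expressed via its dual extreme points. Each dual point lam yields an
    optimality cut  theta >= lam * (8 - y)  for the master problem."""
    duals = [0.0, 3.0]
    cuts, ub, best_y = [], float("inf"), 0
    for _ in range(20):
        # Master: brute-force over y; theta is the max of the cuts so far.
        best_y, lb = min(
            ((y, y + max((lam * (8 - y) for lam in cuts), default=0.0))
             for y in range(11)),
            key=lambda t: t[1],
        )
        # Subproblem at best_y: the maximizing dual point gives a new cut.
        lam = max(duals, key=lambda l: l * (8 - best_y))
        q = lam * (8 - best_y)
        ub = min(ub, best_y + q)
        if ub - lb < 1e-9:     # lower and upper bounds meet: optimal
            break
        cuts.append(lam)
    return best_y, ub

y_opt, val = benders_toy()
```

The real algorithmic work in the paper lies in making this loop scale to a two-timescale stochastic model, where the subproblems are large LPs and the cuts come from their dual solutions.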
Pub Date: 2023-01-01 | Epub Date: 2023-07-22 | DOI: 10.1016/j.ejco.2023.100071
Sven Mallach
We present and compare novel binary programs for linear ordering problems that involve the notion of asymmetric betweenness, and we expose relations to the quadratic linear ordering problem and its linearization. While two of the binary programs prove particularly superior from a computational point of view when many or all betweenness relations are to be modeled, the others arise as natural formulations that reflect important theoretical correspondences and provide a compact alternative for sparse problem instances. An explanation of the strengths and weaknesses of the different formulations is derived by means of polyhedral considerations with respect to their continuous relaxations.
Binary programs for asymmetric betweenness problems and relations to the quadratic linear ordering problem. EURO Journal on Computational Optimization, vol. 11, Article 100071.
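The asymmetric-betweenness notion and its link to the quadratic linear ordering problem can be illustrated on permutations directly. This is a hedged sketch with hypothetical names: it checks the correspondence on small examples rather than stating the paper's formulations.

```python
from itertools import permutations

def linear_order_vars(order):
    """Ordering variables of the linear ordering problem:
    x[(i, j)] = 1 iff i precedes j in the given order."""
    pos = {v: p for p, v in enumerate(order)}
    return {(i, j): int(pos[i] < pos[j]) for i in order for j in order if i != j}

def asym_between(order, i, j, k):
    """Asymmetric betweenness: j lies between i and k *in this direction*,
    i.e. i precedes j and j precedes k (so (i, j, k) != (k, j, i))."""
    pos = {v: p for p, v in enumerate(order)}
    return pos[i] < pos[j] < pos[k]

# Correspondence to the quadratic linear ordering problem: the asymmetric
# betweenness indicator equals the product x_{ij} * x_{jk} of ordering
# variables -- verified here over all orders and triples of four elements.
for order in permutations("abcd"):
    x = linear_order_vars(order)
    assert all(
        asym_between(order, i, j, k) == bool(x[(i, j)] * x[(j, k)])
        for i, j, k in permutations("abcd", 3)
    )

order0 = ("a", "b", "c", "d")
x0 = linear_order_vars(order0)
```

Linearizing these products x_{ij} x_{jk} is precisely where the binary programs of the paper connect to the linearization of the quadratic linear ordering problem.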