{"title":"Tree Bounds for Sums of Bernoulli Random Variables: A Linear Optimization Approach","authors":"Divya Padmanabhan, K. Natarajan","doi":"10.1287/ijoo.2019.0038","DOIUrl":"https://doi.org/10.1287/ijoo.2019.0038","url":null,"abstract":"We study the problem of computing the tightest upper and lower bounds on the probability that the sum of n dependent Bernoulli random variables exceeds an integer k. Under knowledge of all pairs of bivariate distributions denoted by a complete graph, the bounds are NP-hard to compute. When the bivariate distributions are specified on a tree graph, we show that tight bounds are computable in polynomial time using a compact linear program. These bounds provide robust probability estimates when the assumption of conditional independence in a tree-structured graphical model is violated. We demonstrate, through numericals, the computational advantage of our compact linear program over alternate approaches. A comparison of bounds under various knowledge assumptions, such as univariate information and conditional independence, is provided. An application is illustrated in the context of Chow–Liu trees, wherein our bounds distinguish between various trees that encode the maximum possible mutual information.","PeriodicalId":73382,"journal":{"name":"INFORMS journal on optimization","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46060216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-Driven Percentile Optimization for Multiclass Queueing Systems with Model Ambiguity: Theory and Application","authors":"Austin Bren, S. Saghafian","doi":"10.1287/ijoo.2018.0007","DOIUrl":"https://doi.org/10.1287/ijoo.2018.0007","url":null,"abstract":"","PeriodicalId":73382,"journal":{"name":"INFORMS journal on optimization","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/ijoo.2018.0007","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41532445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Practical Price Optimization Approach for Omnichannel Retailing","authors":"P. Harsha, Shivaram Subramanian, M. Ettl","doi":"10.1287/IJOO.2019.0018","DOIUrl":"https://doi.org/10.1287/IJOO.2019.0018","url":null,"abstract":"Consumers are increasingly navigating across sales channels to maximize the value of their purchase. The existing retail practices of pricing channels either independently or matching competitor pr...","PeriodicalId":73382,"journal":{"name":"INFORMS journal on optimization","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/IJOO.2019.0018","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49252070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Branch-and-Cut Approach for the Weighted Target Set Selection Problem on Social Networks","authors":"S. Raghavan, Rui Zhang","doi":"10.1287/IJOO.2019.0012","DOIUrl":"https://doi.org/10.1287/IJOO.2019.0012","url":null,"abstract":"The study of viral marketing strategies on social networks has become an area of significant research interest. In this setting, we consider a combinatorial optimization problem, referred to as the...","PeriodicalId":73382,"journal":{"name":"INFORMS journal on optimization","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/IJOO.2019.0012","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47285721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Three-Dimensional Bin Packing and Mixed-Case Palletization","authors":"S. Elhedhli, Fatma Gzara, Burak Yildiz","doi":"10.1287/IJOO.2019.0013","DOIUrl":"https://doi.org/10.1287/IJOO.2019.0013","url":null,"abstract":"Despite its wide range of applications, the three-dimensional bin-packing problem is still one of the most difficult optimization problems to solve. Currently, medium- to large-size instances are o...","PeriodicalId":73382,"journal":{"name":"INFORMS journal on optimization","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/IJOO.2019.0013","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46920137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effective Budget of Uncertainty for Classes of Robust Optimization","authors":"Milad Dehghani Filabadi, H. Mahmoudzadeh","doi":"10.1287/ijoo.2021.0069","DOIUrl":"https://doi.org/10.1287/ijoo.2021.0069","url":null,"abstract":"Robust optimization (RO) tackles data uncertainty by optimizing for the worst-case scenario of an uncertain parameter and, in its basic form, is sometimes criticized for producing overly conservative solutions. To reduce the level of conservatism in RO, one can use the well-known budget-of-uncertainty approach, which limits the amount of uncertainty to be considered in the model. In this paper, we study a class of problems with resource uncertainty and propose a robust optimization methodology that produces solutions that are even less conservative than the conventional budget-of-uncertainty approach. We propose a new tractable two-stage robust optimization approach that identifies the “ineffective” parts of the uncertainty set and optimizes for the “effective” worst-case scenario only. In the first stage, we identify the effective range of the uncertain parameter, and in the second stage, we provide a formulation that eliminates the unnecessary protection for the ineffective parts and, hence, produces less conservative solutions and provides intuitive insights on the trade-off between robustness and solution conservatism. We demonstrate the applicability of the proposed approach using a power dispatch optimization problem with wind uncertainty. 
We also provide examples of other application areas that would benefit from the proposed approach.","PeriodicalId":73382,"journal":{"name":"INFORMS journal on optimization","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47110670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
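For context, the conventional budget-of-uncertainty protection term that this paper refines can be evaluated directly: with nominal coefficients a_i, deviations d_i, and budget Γ, the worst case adds the Γ largest deviation impacts, fractionally for non-integer Γ. A minimal sketch with illustrative numbers only (not the paper's two-stage refinement):

```python
# Classical budgeted worst case (Bertsimas-Sim style): at most `gamma`
# coefficients deviate from nominal, the last one fractionally.
def budgeted_worst_case(nominal, deviations, x, gamma):
    """Worst-case value of sum_i a_i x_i when at most gamma coefficients
    deviate by d_i from their nominal values."""
    base = sum(a * xi for a, xi in zip(nominal, x))
    impact = sorted((d * abs(xi) for d, xi in zip(deviations, x)),
                    reverse=True)
    whole = int(gamma)
    protection = sum(impact[:whole])
    if whole < len(impact):
        # fractional part of the budget protects a fraction of the next term
        protection += (gamma - whole) * impact[whole]
    return base + protection

# Budget gamma = 1: only the single largest deviation is protected against.
print(budgeted_worst_case([1.0, 2.0], [0.5, 1.0], [1, 1], 1))  # 4.0
```

The paper's "effective budget" idea goes further by excluding parts of the uncertainty set that can never attain the worst case, which this classical computation protects against unnecessarily.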
{"title":"Benders Cut Classification via Support Vector Machines for Solving Two-Stage Stochastic Programs","authors":"Huiwen Jia, Siqian Shen","doi":"10.1287/IJOO.2019.0050","DOIUrl":"https://doi.org/10.1287/IJOO.2019.0050","url":null,"abstract":"We consider Benders decomposition for solving two-stage stochastic programs with complete recourse based on finite samples of the uncertain parameters. We define the Benders cuts binding at the final optimal solution or the ones significantly improving bounds over iterations as valuable cuts. We propose a learning-enhanced Benders decomposition (LearnBD) algorithm, which adds a cut classification step in each iteration to selectively generate cuts that are more likely to be valuable cuts. The LearnBD algorithm includes two phases: (i) sampling cuts and collecting information from training problems and (ii) solving testing problems with a support vector machine (SVM) cut classifier. We run the LearnBD algorithm on instances of capacitated facility location and multicommodity network design under uncertain demand. Our results show that SVM cut classifier works effectively for identifying valuable cuts, and the LearnBD algorithm reduces the total solving time of all instances for different problems with various sizes and complexities.","PeriodicalId":73382,"journal":{"name":"INFORMS journal on optimization","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43771918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"REPR: Rule-Enhanced Penalized Regression","authors":"Jonathan Eckstein, Ai Kagawa, Noam Goldberg","doi":"10.1287/IJOO.2019.0015","DOIUrl":"https://doi.org/10.1287/IJOO.2019.0015","url":null,"abstract":"This article describes a new rule-enhanced penalized regression procedure for the generalized regression problem of predicting scalar responses from observation vectors in the absence of a preferre...","PeriodicalId":73382,"journal":{"name":"INFORMS journal on optimization","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/IJOO.2019.0015","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49262486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning a Mixture of Gaussians via Mixed-Integer Optimization","authors":"H. Bandi, D. Bertsimas, R. Mazumder","doi":"10.1287/IJOO.2018.0009","DOIUrl":"https://doi.org/10.1287/IJOO.2018.0009","url":null,"abstract":"We consider the problem of estimating the parameters of a multivariate Gaussian mixture model (GMM) given access to n samples that are believed to have come from a mixture of multiple subpopulations. State-of-the-art algorithms used to recover these parameters use heuristics to either maximize the log-likelihood of the sample or try to fit first few moments of the GMM to the sample moments. In contrast, we present here a novel mixed-integer optimization (MIO) formulation that optimally recovers the parameters of the GMM by minimizing a discrepancy measure (either the Kolmogorov–Smirnov or the total variation distance) between the empirical distribution function and the distribution function of the GMM whenever the mixture component weights are known. We also present an algorithm for multidimensional data that optimally recovers corresponding means and covariance matrices. We show that the MIO approaches are practically solvable for data sets with n in the tens of thousands in minutes and achieve an average improvement of 60%–70% and 50%–60% on mean absolute percentage error in estimating the means and the covariance matrices, respectively, over the expectation–maximization (EM) algorithm independent of the sample size n. As the separation of the Gaussians decreases and, correspondingly, the problem becomes more difficult, the edge in performance in favor of the MIO methods widens. 
Finally, we also show that the MIO methods outperform the EM algorithm with an average improvement of 4%–5% on the out-of-sample accuracy for real-world data sets.","PeriodicalId":73382,"journal":{"name":"INFORMS journal on optimization","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/IJOO.2018.0009","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"66363407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
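The discrepancy objective the MIO minimizes can be seen in one dimension. The sketch below assumes a two-component mixture with known equal weights and unit variances (simplifying assumptions), and replaces the exact MIO with a crude grid search over candidate means; it only illustrates the Kolmogorov–Smirnov objective, not the paper's formulation.

```python
# KS distance between the empirical CDF and a two-component Gaussian
# mixture CDF, minimized by grid search over candidate means.
import math
import random

def mixture_cdf(x, means, weight=0.5, sigma=1.0):
    return sum(w * 0.5 * (1 + math.erf((x - m) / (sigma * math.sqrt(2))))
               for w, m in zip((weight, 1 - weight), means))

def ks_distance(sample, means):
    """Max deviation between the empirical step CDF and the mixture CDF,
    checking both sides of each step."""
    xs = sorted(sample)
    n = len(xs)
    return max(max((i + 1) / n - mixture_cdf(x, means),
                   mixture_cdf(x, means) - i / n)
               for i, x in enumerate(xs))

random.seed(0)
sample = [random.gauss(0, 1) if random.random() < 0.5 else random.gauss(5, 1)
          for _ in range(200)]

grid = [(m1, m2) for m1 in range(-2, 8) for m2 in range(-2, 8) if m1 <= m2]
best = min(grid, key=lambda means: ks_distance(sample, means))
print(best)  # expected to land near the true means (0, 5)
```

The MIO formulation solves this minimization exactly (and in higher dimensions); the grid search merely makes the objective concrete.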
{"title":"Identifying Exceptional Responders in Randomized Trials: An Optimization Approach","authors":"D. Bertsimas, N. Korolko, Alexander M. Weinstein","doi":"10.1287/IJOO.2018.0006","DOIUrl":"https://doi.org/10.1287/IJOO.2018.0006","url":null,"abstract":"In randomized clinical trials, there may be a benefit to identifying subgroups of the study population for which a treatment was exceptionally effective or ineffective. We present an efficient mixe...","PeriodicalId":73382,"journal":{"name":"INFORMS journal on optimization","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/IJOO.2018.0006","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41901324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}