
Latest Publications in INFORMS journal on optimization

Optimal Prescriptive Trees
Pub Date: 2019-04-01 | DOI: 10.1287/IJOO.2018.0005
D. Bertsimas, Jack Dunn, Nishanth Mundru
Motivated by personalized decision making, given observational data [Formula: see text] involving features [Formula: see text], assigned treatments or prescriptions [Formula: see text], and outcomes [Formula: see text], we propose a tree-based algorithm called optimal prescriptive tree (OPT) that uses either constant or linear models in the leaves of the tree to predict the counterfactuals and assign optimal treatments to new samples. We propose an objective function that balances optimality and accuracy. OPTs are interpretable and highly scalable, accommodate multiple treatments, and provide high-quality prescriptions. We report results involving synthetic and real data that show that OPTs either outperform or are comparable with several state-of-the-art methods. Given their combination of interpretability, scalability, generalizability, and performance, OPTs are an attractive alternative for personalized decision making in a variety of areas, such as online advertising and personalized medicine.
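As a hedged illustration of the prescriptive idea only (not the authors' implementation), the sketch below uses a fixed single-split tree with constant leaf models: each (leaf, treatment) cell estimates a counterfactual mean outcome, and new samples are prescribed the treatment with the best estimate. OPT itself jointly optimizes the tree structure under the optimality/accuracy objective, which this toy omits; all data and constants are made up.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observational data: feature x, binary treatment z, outcome y
# (lower is better). Treatment 1 helps when x > 0.5 and hurts otherwise.
n = 2000
x = rng.uniform(0, 1, n)
z = rng.integers(0, 2, n)
y = 1.0 - 0.8 * z * (x > 0.5) + 0.3 * z * (x <= 0.5) + rng.normal(0, 0.1, n)

# One fixed split at x = 0.5 defines two leaves; a constant model per
# (leaf, treatment) cell estimates the counterfactual mean outcome.
leaf = (x > 0.5).astype(int)
mean_outcome = np.array([[y[(leaf == l) & (z == t)].mean() for t in (0, 1)]
                         for l in (0, 1)])   # rows: leaves, cols: treatments

def prescribe(x_new):
    # Each new sample gets the treatment with the lowest predicted
    # counterfactual outcome in its leaf.
    new_leaf = (np.asarray(x_new) > 0.5).astype(int)
    return mean_outcome[new_leaf].argmin(axis=1)

print(prescribe([0.2, 0.9]))   # expect treatment 0 at x=0.2, treatment 1 at x=0.9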
Citations: 70
Machine Learning and Optimization: Introduction to the Special Issue
Pub Date: 2019-04-01 | DOI: 10.1287/IJOO.2019.0024
D. Bertsimas
{"title":"Machine Learning and Optimization: Introduction to the Special Issue","authors":"D. Bertsimas","doi":"10.1287/IJOO.2019.0024","DOIUrl":"https://doi.org/10.1287/IJOO.2019.0024","url":null,"abstract":"","PeriodicalId":73382,"journal":{"name":"INFORMS journal on optimization","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/IJOO.2019.0024","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45410396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Distributionally Robust Optimization with Confidence Bands for Probability Density Functions
Pub Date: 2019-01-08 | DOI: 10.1287/ijoo.2021.0059
Xi Chen, Qihang Lin, Guanglin Xu
Distributionally robust optimization (DRO) has been introduced for solving stochastic programs in which the distribution of the random variables is unknown and must be estimated by samples from that distribution. A key element of DRO is the construction of the ambiguity set, which is a set of distributions that contains the true distribution with a high probability. Assuming that the true distribution has a probability density function, we propose a class of ambiguity sets based on confidence bands of the true density function. As examples, we consider the shape-restricted confidence bands and the confidence bands constructed with a kernel density estimation technique. The former allows us to incorporate the prior knowledge of the shape of the underlying density function (e.g., unimodality and monotonicity), and the latter enables us to handle multidimensional cases. Furthermore, we establish the convergence of the optimal value of DRO to that of the underlying stochastic program as the sample size increases. The DRO with our ambiguity set involves functional decision variables and infinitely many constraints. To address this challenge, we apply duality theory to reformulate the DRO to a finite-dimensional stochastic program, which is amenable to a stochastic subgradient scheme as a solution method.
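As a rough, hedged illustration of the confidence-band idea (not the paper's method, which works in function space and solves a dual reformulation), the sketch below discretizes a kernel density estimate on a grid, wraps it in a hypothetical +/-20% band standing in for a statistically calibrated confidence band, and computes the worst-case expected loss over the band, which on a grid reduces to a small linear program solvable greedily.

import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(0, 1, 500)

# Gaussian kernel density estimate on a grid.
grid = np.linspace(-4, 4, 161)
dx = grid[1] - grid[0]
h = 1.06 * samples.std() * len(samples) ** (-1 / 5)   # Silverman bandwidth
kde = np.exp(-0.5 * ((grid[:, None] - samples) / h) ** 2).mean(axis=1)
kde /= h * np.sqrt(2 * np.pi)
lower, upper = 0.8 * kde, 1.2 * kde                    # hypothetical band

def worst_case_expectation(loss):
    # max_p sum(loss * p) * dx  s.t.  lower <= p <= upper, sum(p) * dx = 1.
    # Greedy (fractional-knapsack) solve: start at the lower envelope, then
    # spend the remaining probability mass on the largest-loss grid points.
    p = lower.copy()
    budget = 1.0 - p.sum() * dx
    for i in np.argsort(loss)[::-1]:
        add = min((upper[i] - p[i]) * dx, budget)
        p[i] += add / dx
        budget -= add
        if budget <= 1e-12:
            break
    return (loss * p).sum() * dx

print(worst_case_expectation(grid ** 2))   # worst-case E[X^2] over the band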
Citations: 6
From the Editor
Pub Date: 2019-01-01 | DOI: 10.1287/ijoo.2019.0011
D. Bertsimas
{"title":"From the Editor","authors":"D. Bertsimas","doi":"10.1287/ijoo.2019.0011","DOIUrl":"https://doi.org/10.1287/ijoo.2019.0011","url":null,"abstract":"","PeriodicalId":73382,"journal":{"name":"INFORMS journal on optimization","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/ijoo.2019.0011","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46729307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Constraint Generation for Two-Stage Robust Network Flow Problems
Pub Date: 2019-01-01 | DOI: 10.1287/IJOO.2018.0003
D. Simchi-Levi, He Wang, Y. Wei
In this paper, we propose new constraint generation (CG) algorithms for solving the two-stage robust minimum cost flow problem, a problem that arises from various applications such as transportation...
Citations: 31
Robust Classification
Pub Date: 2019-01-01 | DOI: 10.1287/ijoo.2018.0001
D. Bertsimas, Jack Dunn, C. Pawlowski, Ying Daisy Zhuo
{"title":"Robust Classification","authors":"D. Bertsimas, Jack Dunn, C. Pawlowski, Ying Daisy Zhuo","doi":"10.1287/ijoo.2018.0001","DOIUrl":"https://doi.org/10.1287/ijoo.2018.0001","url":null,"abstract":"","PeriodicalId":73382,"journal":{"name":"INFORMS journal on optimization","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/ijoo.2018.0001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"66363371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 80
Semi-proximal Augmented Lagrangian-Based Decomposition Methods for Primal Block-Angular Convex Composite Quadratic Conic Programming Problems
Pub Date: 2018-12-12 | DOI: 10.1287/IJOO.2019.0048
Xin-Yee Lam, Defeng Sun, K. Toh
We first propose a semi-proximal augmented Lagrangian-based decomposition method to directly solve the primal form of a convex composite quadratic conic-programming problem with a primal block-angular structure. Using our algorithmic framework, we are able to naturally derive several well-known augmented Lagrangian-based decomposition methods for stochastic programming, such as the diagonal quadratic approximation method of Mulvey and Ruszczyński. Although it is natural to develop an augmented Lagrangian decomposition algorithm based on the primal problem, here, we demonstrate that it is, in fact, numerically more economical to solve the dual problem by an appropriately designed decomposition algorithm. In particular, we propose a semi-proximal symmetric Gauss–Seidel-based alternating direction method of multipliers (sGS-ADMM) for solving the corresponding dual problem. Numerical results show that our dual-based sGS-ADMM algorithm can very efficiently solve some very large instances of primal block-angular convex quadratic-programming problems. For example, one instance with more than 300,000 linear constraints and 12.5 million nonnegative variables is solved to the accuracy of 10⁻⁵ in the relative KKT residual in less than a minute on a modest desktop computer.
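For flavor only: the toy below runs a Jacobi-style augmented Lagrangian decomposition, in the spirit of the diagonal quadratic approximation method the abstract cites, on the simplest primal block-angular problem, separable quadratic blocks coupled by a single linking constraint. The paper's semi-proximal sGS-ADMM on the dual is substantially more elaborate; all problem data here are invented.

import numpy as np

c = np.array([[1.0, 0.0], [3.0, 1.0], [0.0, 2.0]])   # one target per block
b = np.array([2.0, 2.0])                              # linking constraint rhs
rho, y = 0.5, np.zeros(2)                             # penalty, multiplier
x = np.zeros_like(c)

# Problem: min sum_i 0.5*||x_i - c_i||^2  s.t.  sum_i x_i = b.
for k in range(200):
    s = x.sum(axis=0)
    # Each block i minimizes 0.5*||x_i - c_i||^2 + y.x_i
    #   + (rho/2)*||x_i + (s - x_i_old - b)||^2 with the other blocks
    # frozen at the previous iterate (the Jacobi decoupling), which has
    # the closed form below:
    x = (c - y - rho * (s - x - b)) / (1.0 + rho)
    y = y + rho * (x.sum(axis=0) - b)                 # multiplier ascent

print(np.round(x, 3), "residual:", np.round(x.sum(axis=0) - b, 6))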
Citations: 2
Adaptive Stochastic Variance Reduction for Subsampled Newton Method with Cubic Regularization
Pub Date: 2018-11-28 | DOI: 10.1287/ijoo.2021.0058
Junyu Zhang, Lin Xiao, Shuzhong Zhang
The cubic regularized Newton method of Nesterov and Polyak has become increasingly popular for nonconvex optimization because of its capability of finding an approximate local solution with a second order guarantee and its low iteration complexity. Several recent works extend this method to the setting of minimizing the average of N smooth functions by replacing the exact gradients and Hessians with subsampled approximations. It is shown that the total Hessian sample complexity can be reduced to be sublinear in N per iteration by leveraging stochastic variance reduction techniques. We present an adaptive variance reduction scheme for a subsampled Newton method with cubic regularization and show that the expected Hessian sample complexity is [Formula: see text] for finding an [Formula: see text]-approximate local solution (in terms of first and second order guarantees, respectively). Moreover, we show that the same Hessian sample complexity is retained with fixed sample sizes if exact gradients are used. The techniques of our analysis are different from previous works in that we do not rely on high probability bounds based on matrix concentration inequalities. Instead, we derive and utilize new bounds on the third and fourth order moments of the average of random matrices, which are of independent interest on their own.
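A minimal, hedged sketch of the underlying template (fixed sample sizes and a fixed regularization sigma, not the paper's adaptive variance-reduction scheme): each iteration samples a gradient and Hessian on a minibatch and minimizes the cubic model min_s g.s + 0.5 s'Hs + (sigma/3)||s||^3 exactly, via an eigendecomposition and a one-dimensional bisection. The logistic-regression data are synthetic.

import numpy as np

rng = np.random.default_rng(2)
n, d = 1000, 5
A = rng.normal(size=(n, d))
ylab = np.sign(A @ rng.normal(size=d) + 0.1 * rng.normal(size=n))

def grad_hess(w, idx):
    # Subsampled logistic-loss gradient and Hessian on rows idx.
    Ai = A[idx]
    p = 1.0 / (1.0 + np.exp(Ai @ w * ylab[idx]))      # sigmoid(-y a.w)
    g = -(Ai * (ylab[idx] * p)[:, None]).mean(axis=0)
    H = (Ai * (p * (1 - p))[:, None]).T @ Ai / len(idx)
    return g, H

def cubic_step(g, H, sigma):
    # The global minimizer satisfies s = -(H + sigma*r*I)^{-1} g with
    # r = ||s||; find r by bisection on the decreasing scalar equation.
    lam, Q = np.linalg.eigh(H)
    gt = Q.T @ g
    lo = max(0.0, -lam.min()) / sigma + 1e-12
    hi = lo + np.linalg.norm(g) / sigma + 1.0
    for _ in range(100):
        r = 0.5 * (lo + hi)
        norm_s = np.sqrt(np.sum(gt**2 / (lam + sigma * r) ** 2))
        lo, hi = (r, hi) if norm_s > r else (lo, r)
    return Q @ (-gt / (lam + sigma * r))

w, sigma, batch = np.zeros(d), 1.0, 200
for k in range(30):
    idx = rng.choice(n, batch, replace=False)         # fresh subsample
    g, H = grad_hess(w, idx)
    w = w + cubic_step(g, H, sigma)

print("final full-gradient norm:", np.linalg.norm(grad_hess(w, np.arange(n))[0]))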
Citations: 17
A Subsampling Line-Search Method with Second-Order Results
Pub Date: 2018-10-16 | DOI: 10.1287/ijoo.2022.0072
E. Bergou, Y. Diouane, V. Kunc, V. Kungurtsev, C. Royer
In many contemporary optimization problems such as those arising in machine learning, it can be computationally challenging or even infeasible to evaluate an entire function or its derivatives. This motivates the use of stochastic algorithms that sample problem data, which can jeopardize the guarantees obtained through classical globalization techniques in optimization, such as a line search. Using subsampled function values is particularly challenging for the latter strategy, which relies upon multiple evaluations. For nonconvex data-related problems, such as training deep learning models, one aims at developing methods that converge to second-order stationary points quickly, that is, escape saddle points efficiently. This is particularly difficult to ensure when one only accesses subsampled approximations of the objective and its derivatives. In this paper, we describe a stochastic algorithm based on negative curvature and Newton-type directions that are computed for a subsampling model of the objective. A line-search technique is used to enforce suitable decrease for this model; for a sufficiently large sample, a similar amount of reduction holds for the true objective. We then present worst-case complexity guarantees for a notion of stationarity tailored to the subsampling context. Our analysis encompasses the deterministic regime and allows us to identify sampling requirements for second-order line-search paradigms. As we illustrate through real data experiments, these worst-case estimates need not be satisfied for our method to be competitive with first-order strategies in practice.
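The sketch below is a bare-bones rendition of the paper's theme, under invented data and constants and without the paper's complexity-certified acceptance tests: per iteration it forms a subsampled gradient and Hessian, follows a negative-curvature direction when the sampled Hessian exposes one, otherwise a regularized Newton-type direction, and backtracks with a simple sufficient-decrease test on the subsampled objective.

import numpy as np

rng = np.random.default_rng(3)
n, d = 2000, 4
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def f_g_H(w, idx):
    # Subsampled value/gradient/Hessian of the nonconvex robust loss
    # phi(t) = t^2 / (1 + t^2) with t = a_i.w - b_i.
    t = A[idx] @ w - b[idx]
    f = (t**2 / (1 + t**2)).mean()
    gphi = 2 * t / (1 + t**2) ** 2
    hphi = (2 - 6 * t**2) / (1 + t**2) ** 3
    g = (A[idx] * gphi[:, None]).mean(axis=0)
    H = (A[idx] * hphi[:, None]).T @ A[idx] / len(idx)
    return f, g, H

w, eps = rng.normal(size=d), 1e-3
for k in range(50):
    idx = rng.choice(n, 400, replace=False)
    f, g, H = f_g_H(w, idx)
    lam, Q = np.linalg.eigh(H)                 # ascending eigenvalues
    if lam[0] < -eps:                          # negative curvature: escape
        v = Q[:, 0]
        dirn = v if v @ g < 0 else -v          # curvature dir, against g
    else:                                      # regularized Newton direction
        dirn = -Q @ ((Q.T @ g) / np.maximum(lam, eps))
    alpha = 1.0                                # backtrack on the subsample
    while (f_g_H(w + alpha * dirn, idx)[0]
           > f - 1e-4 * alpha * np.linalg.norm(dirn) ** 2 and alpha > 1e-8):
        alpha *= 0.5
    w = w + alpha * dirn

print("full-sample grad norm:", np.linalg.norm(f_g_H(w, np.arange(n))[1]))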
Citations: 17
Robust Facility Location Under Disruptions
Pub Date: 2018-10-01 | DOI: 10.1287/IJOO.2021.0054
Chun-Lai Cheng, Y. Adulyasak, Louis-Martin Rousseau
Facility networks can be disrupted by, for example, power outages, poor weather conditions, or natural disasters, and the probabilities of these events may be difficult to estimate. This could lead to costly recourse decisions because customers cannot be served by the planned facilities. In this paper, we study a fixed-charge location problem (FLP) that considers disruption risks. We adopt a two-stage robust optimization method, by which facility location decisions are made here and now and recourse decisions to reassign customers are made after the uncertainty information on the facility availability has been revealed. We implement a column-and-constraint generation (C&CG) algorithm to solve the robust models exactly. Instead of relying on dualization or reformulation techniques to deal with the subproblem, as is common in the literature, we use a linear programming–based enumeration method that allows us to take into account a discrete uncertainty set of facility failures. This also gives the flexibility to tackle cases when the dualization technique cannot be applied to the subproblem. We further develop an approximation scheme for instances of a realistic size. Numerical experiments show that the proposed C&CG algorithm outperforms existing methods for both the robust FLP and the robust p-median problem.
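As a toy, hedged companion (brute-force enumeration over a handful of facilities, not the paper's column-and-constraint generation algorithm), the sketch below makes the two-stage structure concrete: open facilities here and now, then price each first-stage decision by its worst recourse cost over a discrete set of single-facility failure scenarios, reassigning each customer to its cheapest surviving open facility. All costs and scenarios are made up.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
F, C = 4, 6                                    # facilities, customers
open_cost = rng.uniform(5, 10, F)
assign = rng.uniform(1, 8, (F, C))             # cost of facility f serving c
penalty = 50.0                                 # cost of an unserved customer
scenarios = [-1] + list(range(F))              # -1 = no failure; else f fails

def worst_case_cost(opened):
    # First-stage opening cost plus the worst second-stage recourse cost
    # over all disruption scenarios.
    worst = 0.0
    for fail in scenarios:
        alive = [f for f in opened if f != fail]
        recourse = sum(min((assign[f, c] for f in alive), default=penalty)
                       for c in range(C))
        worst = max(worst, recourse)
    return open_cost[list(opened)].sum() + worst

best = min((s for r in range(1, F + 1) for s in combinations(range(F), r)),
           key=worst_case_cost)
print("open:", list(best), "worst-case cost:", round(worst_case_cost(best), 2))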
Citations: 9