Optimizing for strategy diversity in the design of video games
Pub Date: 2024-08-08 | DOI: 10.1007/s10107-024-02126-8
Oussama Hanguir, Will Ma, Jiangze Han, Christopher Thomas Ryan
We consider the problem of designing a linear program that has diverse solutions as the right-hand side varies. This problem arises in video game settings where designers aim to have players use different “weapons” or “tactics” as they progress. We model this design question as a choice over the constraint matrix A and cost vector c to maximize the number of possible supports of unique optimal solutions (what we call “loadouts”) of linear programs \(\max\{c^\top x \mid Ax \le b,\ x \ge 0\}\) with nonnegative data, considered over all resource vectors b. We provide an upper bound on the optimal number of loadouts and a family of constructions that achieves an asymptotically optimal number of loadouts. The upper bound is based on a connection between our problem and the study of triangulations of point sets arising from polyhedral combinatorics, and specifically the combinatorics of the cyclic polytope. Our asymptotically optimal construction also draws inspiration from the properties of the cyclic polytope.
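The following is a small illustrative sketch, not the paper's construction: it fixes a toy (A, c) with nonnegative data, solves the LP \(\max\{c^\top x \mid Ax \le b,\ x \ge 0\}\) for sampled resource vectors b, and records the supports of the returned optima. The instance, the sampling scheme, and the helper name `loadouts` are made up for illustration, and sampling only observes supports of whichever optimum the solver returns; it does not certify uniqueness.

```python
# Illustrative sketch (not the paper's construction): enumerate supports of
# optimal LP solutions as the right-hand side b varies, for a fixed (A, c).
import numpy as np
from scipy.optimize import linprog

def loadouts(A, c, b_samples, tol=1e-9):
    """Collect supports of optimal solutions of max{c^T x : Ax <= b, x >= 0}."""
    supports = set()
    for b in b_samples:
        # linprog minimizes, so negate c to maximize.
        res = linprog(-np.asarray(c), A_ub=A, b_ub=b,
                      bounds=[(0, None)] * len(c), method="highs")
        if res.status == 0:
            supports.add(frozenset(np.flatnonzero(res.x > tol)))
    return supports

# Toy instance with m = 2 resources and n = 3 "weapons" (values are arbitrary).
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 3.0]])
c = np.array([3.0, 2.0, 4.0])
rng = np.random.default_rng(0)
b_samples = rng.uniform(0.5, 5.0, size=(200, 2))
print(len(loadouts(A, c, b_samples)), "distinct supports observed")
```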
A radial basis function method for noisy global optimisation
Pub Date: 2024-08-08 | DOI: 10.1007/s10107-024-02125-9
Dirk Banholzer, Jörg Fliege, Ralf Werner
We present a novel response surface method for global optimisation of an expensive and noisy (black-box) objective function, where error bounds on the deviation of the observed noisy function values from their true counterparts are available. The method is based on Gutmann’s well-established RBF method for minimising an expensive deterministic objective function, which has become popular from both a theoretical and a practical perspective. To construct suitable radial basis function approximants to the objective function and to determine new sample points for successive evaluation of the expensive noisy objective, the method uses a regularised least-squares criterion. In particular, new points are defined by means of a target value, analogous to the original RBF method. We establish essential convergence results and provide a numerical illustration of the method on a simple test problem.
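As a rough illustration of the surrogate-modelling ingredient, the sketch below fits a radial basis function approximant to noisy observations by ridge-regularised least squares. It is a generic construction under assumed choices (cubic RBF, no polynomial tail, an arbitrary regularisation parameter) and does not reproduce the paper's criterion or its target-value step for selecting new sample points.

```python
# Minimal sketch of a regularised least-squares RBF surrogate (generic; the
# paper's exact criterion and target-value machinery are not reproduced here).
import numpy as np

def rbf_fit(X, y, lam=1e-2):
    """Fit s(x) = sum_i w_i * phi(||x - x_i||) with a cubic RBF and a ridge term."""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = r ** 3                           # cubic radial basis function
    n = len(y)
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(n), Phi.T @ y)

def rbf_eval(X, w, x_new):
    r = np.linalg.norm(x_new[None, :] - X, axis=-1)
    return (r ** 3) @ w

# Noisy observations of f(x) = sum(x^2) on [-2, 2]^2.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(30, 2))
y = np.sum(X ** 2, axis=1) + rng.normal(scale=0.1, size=30)
w = rbf_fit(X, y)
print(rbf_eval(X, w, np.zeros(2)))  # surrogate value near the true minimiser
```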
{"title":"A radial basis function method for noisy global optimisation","authors":"Dirk Banholzer, Jörg Fliege, Ralf Werner","doi":"10.1007/s10107-024-02125-9","DOIUrl":"https://doi.org/10.1007/s10107-024-02125-9","url":null,"abstract":"<p>We present a novel response surface method for global optimisation of an expensive and noisy (black-box) objective function, where error bounds on the deviation of the observed noisy function values from their true counterparts are available. The method is based on Gutmann’s well-established RBF method for minimising an expensive and deterministic objective function, which has become popular both from a theoretical and practical perspective. To construct suitable radial basis function approximants to the objective function and to determine new sample points for successive evaluation of the expensive noisy objective, the method uses a regularised least-squares criterion. In particular, new points are defined by means of a target value, analogous to the original RBF method. We provide essential convergence results, and provide a numerical illustration of the method by means of a simple test problem.\u0000</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"7 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141933213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing distortion riskmetrics with distributional uncertainty
Pub Date: 2024-07-29 | DOI: 10.1007/s10107-024-02128-6
Silvana M. Pesenti, Qiuqi Wang, Ruodu Wang
Optimization of distortion riskmetrics with distributional uncertainty has wide applications in finance and operations research. Distortion riskmetrics include many commonly applied risk measures and deviation measures, which are not necessarily monotone or convex. One of our central findings is a unifying result that allows one to convert the optimization of a non-convex distortion riskmetric with distributional uncertainty into a convex one induced by the concave envelope of the distortion function, leading to practical tractability. A sufficient condition for this unifying equivalence result is the novel notion of closedness under concentration, a variation of which is also shown to be necessary for the equivalence. Our results include many special cases that are well studied in the optimization literature, including but not limited to optimizing probabilities, Value-at-Risk, Expected Shortfall, Yaari’s dual utility, and differences between distortion risk measures, under various forms of distributional uncertainty. We illustrate our theoretical results via applications to portfolio optimization, optimization under moment constraints, and preference robust optimization.
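For readers unfamiliar with the terminology, the block below records one standard definition of a distortion riskmetric as a signed Choquet integral and the role of the concave envelope; it is assumed background notation, not a statement taken from the paper.

```latex
% Standard background (assumed, not quoted from the abstract): for a distortion
% function h of bounded variation with h(0) = 0, the distortion riskmetric of a
% random variable X with survival function S_X(x) = P(X > x) is
\[
  \rho_h(X) \;=\; \int_{-\infty}^{0}\bigl(h(S_X(x)) - h(1)\bigr)\,\mathrm{d}x
             \;+\; \int_{0}^{\infty} h(S_X(x))\,\mathrm{d}x .
\]
% The equivalence result concerns replacing h by its concave envelope h^*, the
% smallest concave function dominating h, which yields a convex riskmetric rho_{h^*}.
```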
{"title":"Optimizing distortion riskmetrics with distributional uncertainty","authors":"Silvana M. Pesenti, Qiuqi Wang, Ruodu Wang","doi":"10.1007/s10107-024-02128-6","DOIUrl":"https://doi.org/10.1007/s10107-024-02128-6","url":null,"abstract":"<p>Optimization of distortion riskmetrics with distributional uncertainty has wide applications in finance and operations research. Distortion riskmetrics include many commonly applied risk measures and deviation measures, which are not necessarily monotone or convex. One of our central findings is a unifying result that allows to convert an optimization of a non-convex distortion riskmetric with distributional uncertainty to a convex one induced from the concave envelope of the distortion function, leading to practical tractability. A sufficient condition to the unifying equivalence result is the novel notion of closedness under concentration, a variation of which is also shown to be necessary for the equivalence. Our results include many special cases that are well studied in the optimization literature, including but not limited to optimizing probabilities, Value-at-Risk, Expected Shortfall, Yaari’s dual utility, and differences between distortion risk measures, under various forms of distributional uncertainty. We illustrate our theoretical results via applications to portfolio optimization, optimization under moment constraints, and preference robust optimization.</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"48 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141866962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Convergence in distribution of randomized algorithms: the case of partially separable optimization
Pub Date: 2024-07-27 | DOI: 10.1007/s10107-024-02124-w
D. Russell Luke
We present a Markov-chain analysis of blockwise-stochastic algorithms for solving partially block-separable optimization problems. Our main contributions to the extensive literature on these methods are statements about the Markov operators and distributions behind the iterates of stochastic algorithms, and in particular the regularity of Markov operators and rates of convergence of the distributions of the corresponding Markov chains. This provides a detailed characterization of the moments of the sequences beyond just the expected behavior. It also serves as a case study of how randomization restores favorable properties that are destroyed when iterations use only partial information. We demonstrate this on stochastic blockwise implementations of the forward–backward and Douglas–Rachford algorithms for nonconvex (and, as a special case, convex) nonsmooth optimization.
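To make the objects of the analysis concrete, here is a generic blockwise-stochastic forward–backward iteration on a toy LASSO-type problem: one block is sampled per step, a partial gradient step (forward) is followed by a blockwise proximal step (backward). The problem instance, block partition, and step size are illustrative assumptions; the paper studies the distributional behaviour of such iterates rather than any particular instance.

```python
# Generic sketch of a blockwise-stochastic forward-backward iteration
# (illustrative only; not the paper's specific setting).
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def random_block_forward_backward(A, b, lam, blocks, step, iters, rng):
    """min_x 0.5*||Ax - b||^2 + lam*||x||_1, updating one random block per step."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        idx = blocks[rng.integers(len(blocks))]        # sample a block uniformly
        grad = A[:, idx].T @ (A @ x - b)                # partial gradient (forward)
        x[idx] = soft_threshold(x[idx] - step * grad, step * lam)   # prox (backward)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20)); b = rng.normal(size=40)
blocks = [np.arange(j, j + 5) for j in range(0, 20, 5)]
x = random_block_forward_backward(A, b, lam=0.1, blocks=blocks,
                                  step=1.0 / np.linalg.norm(A, 2) ** 2,
                                  iters=2000, rng=rng)
print(np.count_nonzero(np.abs(x) > 1e-8), "nonzero entries")
```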
{"title":"Convergence in distribution of randomized algorithms: the case of partially separable optimization","authors":"D. Russell Luke","doi":"10.1007/s10107-024-02124-w","DOIUrl":"https://doi.org/10.1007/s10107-024-02124-w","url":null,"abstract":"<p>We present a Markov-chain analysis of blockwise-stochastic algorithms for solving partially block-separable optimization problems. Our main contributions to the extensive literature on these methods are statements about the Markov operators and distributions behind the iterates of stochastic algorithms, and in particular the regularity of Markov operators and rates of convergence of the distributions of the corresponding Markov chains. This provides a detailed characterization of the moments of the sequences beyond just the expected behavior. This also serves as a case study of how randomization restores favorable properties to algorithms that iterations of only partial information destroys. We demonstrate this on stochastic blockwise implementations of the forward–backward and Douglas–Rachford algorithms for nonconvex (and, as a special case, convex), nonsmooth optimization.</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"28 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141780803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On supervalid inequalities for binary interdiction games
Pub Date: 2024-07-27 | DOI: 10.1007/s10107-024-02111-1
Ningji Wei, Jose L. Walteros
Supervalid inequalities are a specific type of constraint often used within the branch-and-cut framework to strengthen the linear relaxation of mixed-integer programs. These inequalities share the particular characteristic of potentially removing feasible integer solutions, as long as those solutions are already dominated by an incumbent solution. This paper focuses on supervalid inequalities for solving binary interdiction games. Specifically, we provide a general characterization of inequalities that are derived from bipartitions of the leader’s strategy set and develop an algorithmic approach to use them. This includes the design of two verification subroutines that we apply for separation purposes. We provide three general examples in which we apply our results to solve binary interdiction games targeting shortest paths, spanning trees, and vertex covers. Finally, we prove that the separation procedure is efficient for the class of interdiction games defined on greedoids—a type of set system that generalizes many others such as matroids and antimatroids.
{"title":"On supervalid inequalities for binary interdiction games","authors":"Ningji Wei, Jose L. Walteros","doi":"10.1007/s10107-024-02111-1","DOIUrl":"https://doi.org/10.1007/s10107-024-02111-1","url":null,"abstract":"<p>Supervalid inequalities are a specific type of constraints often used within the branch-and-cut framework to strengthen the linear relaxation of mixed-integer programs. These inequalities share the particular characteristic of potentially removing feasible integer solutions as long as they are already dominated by an incumbent solution. This paper focuses on supervalid inequalities for solving binary interdiction games. Specifically, we provide a general characterization of inequalities that are derived from bipartitions of the leader’s strategy set and develop an algorithmic approach to use them. This includes the design of two verification subroutines that we apply for separation purposes. We provide three general examples in which we apply our results to solve binary interdiction games targeting shortest paths, spanning trees, and vertex covers. Finally, we prove that the separation procedure is efficient for the class of interdiction games defined on greedoids—a type of set system that generalizes many others such as matroids and antimatroids.</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"21 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141780802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The pseudo-Boolean polytope and polynomial-size extended formulations for binary polynomial optimization
Pub Date: 2024-07-22 | DOI: 10.1007/s10107-024-02122-y
Alberto Del Pia, Aida Khajavirad
With the goal of obtaining strong relaxations for binary polynomial optimization problems, we introduce the pseudo-Boolean polytope, defined as the set of binary points \(z \in \{0,1\}^{V \cup S}\) satisfying a collection of equalities of the form \(z_s = \prod_{v \in s} \sigma_s(z_v)\) for all \(s \in S\), where \(\sigma_s(z_v) \in \{z_v, 1-z_v\}\) and S is a multiset of subsets of V. By representing the pseudo-Boolean polytope via a signed hypergraph, we obtain sufficient conditions under which this polytope has a polynomial-size extended formulation. Our new framework unifies and extends all prior results on the existence of polynomial-size extended formulations for the convex hull of the feasible region of binary polynomial optimization problems of degree at least three.
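A minimal worked case, using a standard textbook linearization rather than the paper's signed-hypergraph machinery, may help fix ideas.

```latex
% Smallest illustrative instance (standard linearization, not the paper's
% general framework): V = \{1,2\}, S = \{s\} with s = \{1,2\} and \sigma_s the
% identity, so the defining equality is z_s = z_1 z_2. Over binary points this
% is captured exactly by the linear system
\[
  z_s \le z_1, \qquad z_s \le z_2, \qquad z_s \ge z_1 + z_2 - 1, \qquad z_s \ge 0,
\]
% whose 0/1 solutions are precisely \{(z_1, z_2, z_s) \in \{0,1\}^3 : z_s = z_1 z_2\}.
% The paper's contribution concerns the much harder case of degree three and higher.
```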
A trust region-type normal map-based semismooth Newton method for nonsmooth nonconvex composite optimization
Pub Date: 2024-07-22 | DOI: 10.1007/s10107-024-02110-2
Wenqing Ouyang, Andre Milzarek
We propose a novel trust region method for solving a class of nonsmooth, nonconvex composite-type optimization problems. The approach embeds inexact semismooth Newton steps for finding zeros of a normal map-based stationarity measure for the problem in a trust region framework. Based on a new merit function and acceptance mechanism, global convergence and transition to fast local q-superlinear convergence are established under standard conditions. In addition, we verify that the proposed trust region globalization is compatible with the Kurdyka–Łojasiewicz inequality yielding finer convergence results. Experiments on sparse logistic regression, image compression, and a constrained log-determinant problem illustrate the efficiency of the proposed algorithm.
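As background, the block below records one common form of the normal map for a composite problem; it is assumed here for orientation, and the paper's exact definition and scaling may differ.

```latex
% One common normal map for \min_x f(x) + \varphi(x), with f smooth and
% \varphi nonsmooth (assumed background; the paper's scaling may differ): for
% \lambda > 0,
\[
  F^{\mathrm{nor}}_{\lambda}(z) \;=\;
  \nabla f\bigl(\operatorname{prox}_{\lambda \varphi}(z)\bigr)
  \;+\; \tfrac{1}{\lambda}\bigl(z - \operatorname{prox}_{\lambda \varphi}(z)\bigr).
\]
% If F^{nor}_{\lambda}(z) = 0, then x = prox_{\lambda\varphi}(z) is a stationary
% point; the semismooth Newton steps embedded in the trust region target zeros
% of this residual.
```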
{"title":"A trust region-type normal map-based semismooth Newton method for nonsmooth nonconvex composite optimization","authors":"Wenqing Ouyang, Andre Milzarek","doi":"10.1007/s10107-024-02110-2","DOIUrl":"https://doi.org/10.1007/s10107-024-02110-2","url":null,"abstract":"<p>We propose a novel trust region method for solving a class of nonsmooth, nonconvex composite-type optimization problems. The approach embeds inexact semismooth Newton steps for finding zeros of a normal map-based stationarity measure for the problem in a trust region framework. Based on a new merit function and acceptance mechanism, global convergence and transition to fast local q-superlinear convergence are established under standard conditions. In addition, we verify that the proposed trust region globalization is compatible with the Kurdyka–Łojasiewicz inequality yielding finer convergence results. Experiments on sparse logistic regression, image compression, and a constrained log-determinant problem illustrate the efficiency of the proposed algorithm.</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"4 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141740860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Competitive kill-and-restart and preemptive strategies for non-clairvoyant scheduling
Pub Date: 2024-07-22 | DOI: 10.1007/s10107-024-02118-8
Sven Jäger, Guillaume Sagnol, Daniel Schmidt genannt Waldschmidt, Philipp Warode
We study kill-and-restart and preemptive strategies for the fundamental scheduling problem of minimizing the sum of weighted completion times on a single machine in the non-clairvoyant setting. First, we show a lower bound of 3 for any deterministic non-clairvoyant kill-and-restart strategy. Then, for any \(b > 1\), we give a tight analysis of the natural b-scaling kill-and-restart strategy as well as of a randomized variant of it. In particular, we show a competitive ratio of \((1+3\sqrt{3}) \approx 6.197\) for the deterministic strategy and of \(\approx 3.032\) for the randomized one, by making use of the largest eigenvalue of a Toeplitz matrix. In addition, we show that the preemptive Weighted Shortest Elapsed Time First (WSETF) rule is 2-competitive when jobs are released online, matching the lower bound for the unit-weight case with trivial release dates for any non-clairvoyant algorithm. Using this result as well as the competitiveness of round-robin for multiple machines, we prove performance guarantees smaller than 10 for adaptations of the b-scaling strategy to online release dates and unweighted jobs on identical parallel machines.
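The toy simulation below is one natural reading of a b-scaling kill-and-restart rule, intended only for intuition: jobs are probed with geometrically growing budgets \(b^k\), and a probe that does not finish a job is wasted because the job is killed and must later restart from scratch. The within-round job ordering and the weighted variant analysed in the paper are not reproduced.

```python
# Toy simulation of a b-scaling kill-and-restart rule (for intuition only).
def b_scaling_schedule(processing_times, b=2.0):
    """Return completion times under geometric probing budgets b^0, b^1, ..."""
    n = len(processing_times)
    completion = [None] * n
    t = 0.0                      # current time on the single machine
    k = 0
    while any(c is None for c in completion):
        budget = b ** k
        for j in range(n):
            if completion[j] is not None:
                continue
            if processing_times[j] <= budget:   # job finishes within this probe
                t += processing_times[j]
                completion[j] = t
            else:                               # probe wasted: job is killed
                t += budget
        k += 1
    return completion

p = [1.0, 3.5, 0.7, 8.0]
C = b_scaling_schedule(p)
print(sum(C))   # total (unit-weight) completion time
```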
{"title":"Competitive kill-and-restart and preemptive strategies for non-clairvoyant scheduling","authors":"Sven Jäger, Guillaume Sagnol, Daniel Schmidt genannt Waldschmidt, Philipp Warode","doi":"10.1007/s10107-024-02118-8","DOIUrl":"https://doi.org/10.1007/s10107-024-02118-8","url":null,"abstract":"<p>We study kill-and-restart and preemptive strategies for the fundamental scheduling problem of minimizing the sum of weighted completion times on a single machine in the non-clairvoyant setting. First, we show a lower bound of 3 for any deterministic non-clairvoyant kill-and-restart strategy. Then, we give for any <span>(b > 1)</span> a tight analysis for the natural <i>b</i>-scaling kill-and-restart strategy as well as for a randomized variant of it. In particular, we show a competitive ratio of <span>((1+3sqrt{3})approx 6.197)</span> for the deterministic and of <span>(approx 3.032)</span> for the randomized strategy, by making use of the largest eigenvalue of a Toeplitz matrix. In addition, we show that the preemptive Weighted Shortest Elapsed Time First (WSETF) rule is 2-competitive when jobs are released online, matching the lower bound for the unit weight case with trivial release dates for any non-clairvoyant algorithm. Using this result as well as the competitiveness of round-robin for multiple machines, we prove performance guarantees smaller than 10 for adaptions of the <i>b</i>-scaling strategy to online release dates and unweighted jobs on identical parallel machines.</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"26 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141746019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the geometry and refined rate of primal–dual hybrid gradient for linear programming
Pub Date: 2024-07-17 | DOI: 10.1007/s10107-024-02109-9
Haihao Lu, Jinwen Yang
We study the convergence behaviors of primal–dual hybrid gradient (PDHG) for solving linear programming (LP). PDHG is the base algorithm of a new general-purpose first-order method LP solver, PDLP, which aims to scale up LP by taking advantage of modern computing architectures. Despite its numerical success, the theoretical understanding of PDHG for LP is still very limited; the previous complexity result relies on the global Hoffman constant of the KKT system, which is known to be very loose and uninformative. In this work, we aim to develop a fundamental understanding of the convergence behaviors of PDHG for LP and to develop a refined complexity rate that does not rely on the global Hoffman constant. We show that there are two major stages of PDHG for LP: in Stage I, PDHG identifies active variables and the length of the first stage is driven by a certain quantity which measures how close the non-degeneracy part of the LP instance is to degeneracy; in Stage II, PDHG effectively solves a homogeneous linear inequality system, and the complexity of the second stage is driven by a well-behaved local sharpness constant of the system. This finding is closely related to the concept of partial smoothness in non-smooth optimization, and it is the first complexity result of finite time identification without the non-degeneracy assumption. An interesting implication of our results is that degeneracy itself does not slow down the convergence of PDHG for LP, but near-degeneracy does.
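For reference, a bare-bones PDHG iteration for an equality-form LP \(\min\{c^\top x \mid Ax = b,\ x \ge 0\}\) is sketched below: a projected primal gradient step followed by an extrapolated dual step. This is only the base update; PDLP adds restarts, preconditioning, adaptive step sizes, and other enhancements, and the tiny instance here is made up for illustration.

```python
# Plain PDHG for min{c^T x : Ax = b, x >= 0} (bare-bones sketch; PDLP adds
# restarts, scaling, and adaptive steps on top of this update).
import numpy as np

def pdhg_lp(A, b, c, iters=5000):
    m, n = A.shape
    tau = sigma = 0.9 / np.linalg.norm(A, 2)    # tau * sigma * ||A||^2 < 1
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)   # primal step + projection
        y = y + sigma * (b - A @ (2 * x_new - x))          # dual step (extrapolated)
        x = x_new
    return x, y

# Tiny LP: min x1 + 2*x2 s.t. x1 + x2 = 1, x >= 0 (optimum x = (1, 0)).
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
x, y = pdhg_lp(A, b, c)
print(np.round(x, 3))
```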
{"title":"On the geometry and refined rate of primal–dual hybrid gradient for linear programming","authors":"Haihao Lu, Jinwen Yang","doi":"10.1007/s10107-024-02109-9","DOIUrl":"https://doi.org/10.1007/s10107-024-02109-9","url":null,"abstract":"<p>We study the convergence behaviors of primal–dual hybrid gradient (PDHG) for solving linear programming (LP). PDHG is the base algorithm of a new general-purpose first-order method LP solver, PDLP, which aims to scale up LP by taking advantage of modern computing architectures. Despite its numerical success, the theoretical understanding of PDHG for LP is still very limited; the previous complexity result relies on the global Hoffman constant of the KKT system, which is known to be very loose and uninformative. In this work, we aim to develop a fundamental understanding of the convergence behaviors of PDHG for LP and to develop a refined complexity rate that does not rely on the global Hoffman constant. We show that there are two major stages of PDHG for LP: in Stage I, PDHG identifies active variables and the length of the first stage is driven by a certain quantity which measures how close the non-degeneracy part of the LP instance is to degeneracy; in Stage II, PDHG effectively solves a homogeneous linear inequality system, and the complexity of the second stage is driven by a well-behaved local sharpness constant of the system. This finding is closely related to the concept of partial smoothness in non-smooth optimization, and it is the first complexity result of finite time identification without the non-degeneracy assumption. An interesting implication of our results is that degeneracy itself does not slow down the convergence of PDHG for LP, but near-degeneracy does.</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"30 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141717803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributional utility preference robust optimization models in multi-attribute decision making
Pub Date: 2024-07-17 | DOI: 10.1007/s10107-024-02114-y
Jian Hu, Dali Zhang, Huifu Xu, Sainan Zhang
Utility preference robust optimization (PRO) has recently been proposed to deal with optimal decision-making problems where the decision maker’s (DM’s) preference over gains and losses is ambiguous. In this paper, we take a step further and investigate the case in which the DM’s preference is random. We propose to use a random utility function to describe the DM’s preference and develop distributional utility preference robust optimization (DUPRO) models when the distribution of the random utility function is ambiguous. We concentrate on data-driven problems where samples of the random parameters are obtainable but the sample size may be relatively small. In the case when the random utility functions have a piecewise linear structure, we propose a bootstrap method to construct the ambiguity set and demonstrate how the resulting DUPRO model can be solved as a mixed-integer linear program. The piecewise linear structure is versatile in its ability to incorporate classical non-parametric utility assessment methods into the sample generation of a random utility function. Next, we expand the proposed DUPRO models and computational schemes to address general cases where the random utility functions are not necessarily piecewise linear. We show how the DUPRO models with piecewise linear random utility functions can serve as approximations for the DUPRO models with general random utility functions and allow us to quantify the approximation errors. Finally, we carry out performance studies of the proposed bootstrap-based DUPRO model and report preliminary numerical test results. This paper is the first attempt to use distributionally robust optimization methods for PRO problems.
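The sketch below illustrates only the bootstrap ingredient under simplifying assumptions: each sampled piecewise-linear utility is encoded by its values at fixed breakpoints, and resampling with replacement yields a percentile band that could feed an ambiguity set. The paper's actual ambiguity-set construction and the mixed-integer reformulation of the resulting DUPRO model are not reproduced here.

```python
# Schematic of a bootstrap step only (the paper's ambiguity set and MILP
# reformulation are not reproduced). A piecewise-linear utility sample is
# represented by its values at K fixed breakpoints, i.e. a length-K vector.
import numpy as np

def bootstrap_band(utility_samples, n_boot=1000, alpha=0.1, rng=None):
    """Percentile band for the mean utility value at each breakpoint."""
    rng = rng or np.random.default_rng(0)
    U = np.asarray(utility_samples)                 # shape (N, K)
    N = U.shape[0]
    means = np.empty((n_boot, U.shape[1]))
    for i in range(n_boot):
        idx = rng.integers(0, N, size=N)            # resample with replacement
        means[i] = U[idx].mean(axis=0)
    lo = np.quantile(means, alpha / 2, axis=0)
    hi = np.quantile(means, 1 - alpha / 2, axis=0)
    return lo, hi

rng = np.random.default_rng(1)
samples = np.sort(rng.uniform(0, 1, size=(50, 6)), axis=1)  # 50 increasing utility vectors
lo, hi = bootstrap_band(samples, rng=rng)
print(np.round(lo, 2), np.round(hi, 2))
```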
{"title":"Distributional utility preference robust optimization models in multi-attribute decision making","authors":"Jian Hu, Dali Zhang, Huifu Xu, Sainan Zhang","doi":"10.1007/s10107-024-02114-y","DOIUrl":"https://doi.org/10.1007/s10107-024-02114-y","url":null,"abstract":"<p>Utility preference robust optimization (PRO) has recently been proposed to deal with optimal decision-making problems where the decision maker’s (DM’s) preference over gains and losses is ambiguous. In this paper, we take a step further to investigate the case that the DM’s preference is random. We propose to use a random utility function to describe the DM’s preference and develop distributional utility preference robust optimization (DUPRO) models when the distribution of the random utility function is ambiguous. We concentrate on data-driven problems where samples of the random parameters are obtainable but the sample size may be relatively small. In the case when the random utility functions are of piecewise linear structure, we propose a bootstrap method to construct the ambiguity set and demonstrate how the resulting DUPRO can be solved by a mixed-integer linear program. The piecewise linear structure is versatile in its ability to incorporate classical non-parametric utility assessment methods into the sample generation of a random utility function. Next, we expand the proposed DUPRO models and computational schemes to address general cases where the random utility functions are not necessarily piecewise linear. We show how the DUPRO models with piecewise linear random utility functions can serve as approximations for the DUPRO models with general random utility functions and allow us to quantify the approximation errors. Finally, we carry out some performance studies of the proposed bootstrap-based DUPRO model and report the preliminary numerical test results. This paper is the first attempt to use distributionally robust optimization methods for PRO problems.\u0000</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"31 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141717805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}