Pub Date: 2024-02-15 | DOI: 10.1007/s10898-024-01364-6
Sabah Bushaj, İ. Esra Büyüktahtakın
In this paper, we address the difficulty of solving large-scale instances of the multi-dimensional knapsack problem (MKP) by presenting a novel deep reinforcement learning (DRL) framework. In this framework, we train different agents compatible with a discrete action space for sequential decision-making while still satisfying every resource constraint of the MKP. The framework incorporates the decision-variable values in a 2D DRL environment in which the agent is responsible for assigning a value of 1 or 0 to each variable. To the best of our knowledge, this is the first DRL model of its kind in which a 2D environment is formulated and an element of the DRL solution matrix represents an item of the MKP. Our framework is configured to solve MKP instances of different dimensions and distributions. We propose a K-means approach to obtain an initial feasible solution that is used to train the DRL agent. We train four different agents in our framework and compare each of them with the CPLEX commercial solver. The results show that our agents can learn and generalize over instances with different sizes and distributions. Our DRL framework solves medium-sized instances at least 45 times faster in CPU solution time, and large instances at least 10 times faster, with a maximum solution gap of 0.28% relative to CPLEX. Furthermore, at least 95% of the items are predicted in line with the CPLEX solution. Computations with DRL also provide a better optimality gap with respect to state-of-the-art approaches.
Title: "A K-means Supported Reinforcement Learning Framework to Multi-dimensional Knapsack" (Journal of Global Optimization)
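The abstract does not spell out how the K-means step produces an initial feasible solution. One plausible reading, sketched below under stated assumptions (the efficiency score, the cluster-ordering rule, and all function names are illustrative, not the authors' procedure), clusters items by a capacity-normalized efficiency score and then packs the best clusters greedily while keeping every resource constraint satisfied:

```python
import random

def kmeans_1d(xs, k, iters=25, seed=0):
    """Plain 1-D k-means (Lloyd's algorithm) on scalar scores."""
    rng = random.Random(seed)
    centers = rng.sample(xs, k)
    labels = [0] * len(xs)
    for _ in range(iters):
        for i, x in enumerate(xs):
            labels[i] = min(range(k), key=lambda j: abs(x - centers[j]))
        for j in range(k):
            members = [xs[i] for i in range(len(xs)) if labels[i] == j]
            if members:  # keep the old center if a cluster empties out
                centers[j] = sum(members) / len(members)
    return centers, labels

def kmeans_seed_solution(values, weights, capacities, k=3):
    """Greedy feasible 0/1 vector for the MKP: cluster items by an
    efficiency score, then pack clusters from best to worst."""
    n, m = len(values), len(capacities)
    # efficiency: value per unit of capacity-normalized aggregate weight
    # (an assumed scoring rule, not taken from the paper)
    eff = [values[i] / (1e-9 + sum(weights[j][i] / capacities[j] for j in range(m)))
           for i in range(n)]
    centers, labels = kmeans_1d(eff, k)
    order = sorted(range(n), key=lambda i: (-centers[labels[i]], -eff[i]))
    x, used = [0] * n, [0.0] * m
    for i in order:  # add an item only if every resource constraint stays satisfied
        if all(used[j] + weights[j][i] <= capacities[j] for j in range(m)):
            x[i] = 1
            for j in range(m):
                used[j] += weights[j][i]
    return x
```

By construction the returned vector is feasible for all resource constraints, which is the property the DRL agent's training warm start needs.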
Pub Date: 2024-02-15 | DOI: 10.1007/s10898-024-01365-5
Zehao Xiao, Liwei Zhang
An online majorized semi-proximal alternating direction method of multipliers (Online-mspADMM) is proposed for a broad class of online linearly constrained composite optimization problems. A majorization technique is adopted to produce subproblems that can be easily solved. Under mild assumptions, we establish $\mathcal{O}(\sqrt{N})$ objective regret and $\mathcal{O}(\sqrt{N})$ constraint-violation regret at round $N$. We apply the Online-mspADMM to solve different types of online regularized logistic regression problems. The numerical results on synthetic data sets verify the theoretical results about regrets.
Title: "Regret analysis of an online majorized semi-proximal ADMM for online composite optimization"
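The Online-mspADMM itself is not reproduced here. As a much simpler stand-in, the notion of objective regret at round N can be illustrated with plain online gradient descent on logistic losses, comparing the algorithm's cumulative loss against the best fixed parameter in hindsight (the data stream, step sizes, and grid search are illustrative assumptions, not the paper's method):

```python
import math

def logistic_loss(w, x, y):
    """Loss of a scalar predictor w on example (x, y), y in {-1, +1}."""
    return math.log(1.0 + math.exp(-y * w * x))

def online_gradient_descent(stream):
    """Online learning of a scalar parameter with step size ~ 1/sqrt(t)."""
    w, iterates, losses = 0.0, [], []
    for t, (x, y) in enumerate(stream, start=1):
        iterates.append(w)
        losses.append(logistic_loss(w, x, y))
        g = -y * x / (1.0 + math.exp(y * w * x))  # d/dw log(1 + exp(-y*w*x))
        w -= g / math.sqrt(t)
    return iterates, losses

# a deterministic stream on which the product y*x is always positive
stream = [((-1.0) ** t * 0.5, (-1) ** t) for t in range(200)]
cumulative_loss = sum(online_gradient_descent(stream)[1])
# best fixed parameter in hindsight, via a coarse grid search over [-5, 5]
best_fixed = min(sum(logistic_loss(w / 10.0, x, y) for x, y in stream)
                 for w in range(-50, 51))
regret = cumulative_loss - best_fixed
```

The quantity `regret` is exactly the objective regret at round N = 200; the paper's contribution is bounding the analogous quantity (plus constraint violation) by $\mathcal{O}(\sqrt{N})$ for the ADMM iterates.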
Pub Date: 2024-02-15 | DOI: 10.1007/s10898-024-01369-1
Kathrin Klamroth, Michael Stiglmayr, Claudia Totzeck
We propose a multi-swarm approach to approximate the Pareto front of general multi-objective optimization problems, based on the consensus-based optimization (CBO) method. The algorithm is motivated step by step, beginning with a simple extension of CBO based on fixed scalarization weights. To overcome the issue of choosing the weights, we propose an adaptive weight strategy in the second modeling step. The modeling process is concluded with the incorporation of a penalty strategy that avoids clusters along the Pareto front and a diffusion term that prevents collapsing swarms. Altogether, the proposed K-swarm CBO algorithm is tailored for a diverse approximation of the Pareto front and, simultaneously, of the efficient set of general non-convex multi-objective problems. The feasibility of the approach is justified by analytic results, including convergence proofs, and by a performance comparison to the well-known non-dominated sorting genetic algorithms NSGA-II and NSGA-III, as well as to the recently proposed one-swarm CBO approach for multi-objective problems.
Title: "Consensus-based optimization for multi-objective problems: a multi-swarm approach"
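The single-swarm CBO building block that the multi-swarm method extends can be sketched as follows. This is a minimal single-objective 1-D version: particles drift toward a softmin-weighted consensus point, with a diffusion term scaled by the distance to consensus (the parameters alpha, lam, sigma, and dt are chosen for illustration, not taken from the paper):

```python
import math
import random

def cbo_minimize(f, n_particles=60, steps=400, alpha=30.0, lam=1.0,
                 sigma=0.3, dt=0.05, seed=0):
    """Basic consensus-based optimization (CBO) in 1-D."""
    rng = random.Random(seed)
    xs = [rng.uniform(-3.0, 3.0) for _ in range(n_particles)]
    consensus = xs[0]
    for _ in range(steps):
        fmin = min(f(x) for x in xs)
        # Gibbs weights; subtracting fmin avoids overflow in exp
        ws = [math.exp(-alpha * (f(x) - fmin)) for x in xs]
        consensus = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
        # drift toward consensus plus diffusion proportional to |x - consensus|,
        # so the noise vanishes as the swarm collapses
        xs = [x - lam * (x - consensus) * dt
              + sigma * abs(x - consensus) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
              for x in xs]
    return consensus

minimizer = cbo_minimize(lambda x: (x - 1.0) ** 2)
```

The multi-swarm method of the paper runs several such swarms with adaptive scalarization weights and a penalty that spreads the swarms along the Pareto front.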
Pub Date: 2024-02-13 | DOI: 10.1007/s10898-024-01372-6
The problem of finding a globally optimal k-partition of a set $\mathcal{A}$ is a very intricate optimization problem for which, in general, no solution method exists, except in the case of one-dimensional data, i.e., data with one feature ($\mathcal{A}\subset \mathbb{R}$). Only in the one-dimensional case are there efficient methods, based on the fact that the search for a globally optimal k-partition is equivalent to solving a global optimization problem for a symmetric Lipschitz-continuous function using the global optimization algorithm DIRECT. In the present paper, we propose a method for finding a globally optimal k-partition in the general case ($\mathcal{A}\subset \mathbb{R}^n$, $n\ge 1$), generalizing an idea for solving the Lipschitz global optimization problem for symmetric functions. To do this, we propose a method that combines a global optimization algorithm with linear constraints and the k-means algorithm. The first of these two algorithms is used only to find a good initial approximation for the k-means algorithm. The method was tested on a number of artificial datasets and on several examples from the UCI Machine Learning Repository, and an application in spectral clustering for linearly non-separable datasets is also demonstrated. Our proposed method proved to be very efficient.
Title: "A method for searching for a globally optimal k-partition of higher-dimensional datasets"
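A toy version of the two-phase idea, with a coarse grid search standing in for the DIRECT-style global phase (the grid and the data are illustrative assumptions), might look like:

```python
from itertools import combinations, product

def sq_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans_objective(points, centers):
    return sum(min(sq_dist(p, c) for c in centers) for p in points)

def lloyd(points, centers, iters=20):
    """Standard k-means iterations from a given set of initial centers."""
    k = len(centers)
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda j: sq_dist(p, centers[j]))
        for j in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == j]
            if members:
                centers[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return centers, labels

def global_then_kmeans(points, k=2, grid=(-1.0, 2.0, 5.0)):
    # coarse global phase (a crude stand-in for the DIRECT-style search):
    # try every k-subset of 2-D grid points as initial centers, keep the best
    candidates = list(product(grid, repeat=2))
    best = min(combinations(candidates, k),
               key=lambda cs: kmeans_objective(points, cs))
    # local phase: refine the best candidate with k-means
    return lloyd(points, [list(c) for c in best])
```

The point of the construction, as in the paper, is that the global phase only needs to land k-means in the right basin; the local phase does the precise work.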
Interval branch-and-bound solvers provide reliable algorithms for handling non-convex optimization problems by ensuring the feasibility and the optimality of the computed solutions, i.e., independently of floating-point rounding errors. Moreover, these solvers handle a wide variety of mathematical operators. However, they are not dedicated to quadratic optimization and do not exploit nonlinear convex relaxations in their framework. We present an interval branch-and-bound method that can efficiently solve quadratic optimization problems. At each node explored by the algorithm, our solver uses a quadratic convex relaxation that is as strong as a semi-definite programming relaxation, together with a variable selection strategy dedicated to quadratic problems. The interval features can then efficiently propagate this information to contract all variable domains. We also propose to make our algorithm rigorous by certifying, first, the convexity of the objective function of our relaxation and, second, the validity of the lower bound calculated at each node. In the non-rigorous case, our experiments show significant speedups on general integer quadratic instances; when reliability is required, our first results show that we are able to handle medium-sized instances in a reasonable running time.
Pub Date: 2024-02-12 | DOI: 10.1007/s10898-024-01370-8
Title: "Global solution of quadratic problems using interval methods and convex relaxations"
Authors: Sourour Elloumi, Amélie Lambert, Bertrand Neveu, Gilles Trombettoni
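The branch-and-bound skeleton with interval lower bounds can be sketched in one dimension with naive interval arithmetic; the quartic test function, the bounding rule, and the tolerance below are illustrative assumptions, not the paper's quadratic convex relaxation:

```python
def isq(lo, hi):
    """Interval extension of x -> x**2 on [lo, hi]."""
    if lo <= 0.0 <= hi:
        return 0.0, max(lo * lo, hi * hi)
    return min(lo * lo, hi * hi), max(lo * lo, hi * hi)

def f(x):
    """f(x) = x^4 - 3x^2 + 1, global minimum -1.25 at x = +/- sqrt(1.5)."""
    return x ** 4 - 3.0 * x ** 2 + 1.0

def f_lower(lo, hi):
    """A valid interval lower bound of f on [lo, hi]."""
    s_lo, s_hi = isq(lo, hi)      # enclosure of x^2
    q_lo, _ = isq(s_lo, s_hi)     # enclosure of x^4 = (x^2)^2
    return q_lo - 3.0 * s_hi + 1.0

def interval_branch_and_bound(lo, hi, tol=1e-3):
    best = f((lo + hi) / 2.0)     # incumbent from the midpoint
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if f_lower(a, b) > best:  # prune: the box cannot contain a better point
            continue
        mid = (a + b) / 2.0
        best = min(best, f(mid))  # incumbents are true function values
        if b - a > tol:           # bisect until boxes are narrow
            stack += [(a, mid), (mid, b)]
    return best
```

Because incumbents are evaluated at real points and pruning uses a valid interval lower bound, the returned value is a rigorous upper bound on the global minimum that converges to it as `tol` shrinks; the paper replaces the naive `f_lower` with a certified quadratic convex relaxation.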
This paper is devoted to the study of a class of multiobjective semi-infinite programming problems on Hadamard manifolds (in short, MOSIP-HM). We derive alternative theorems analogous to Tucker's theorem, Tucker's first and second existence theorems, and Motzkin's theorem of the alternative in the framework of Hadamard manifolds. We employ Motzkin's theorem of the alternative to establish necessary and sufficient conditions that characterize KKT pseudoconvex functions using strong KKT vector critical points and efficient solutions of MOSIP-HM. Moreover, we formulate the Mond-Weir and Wolfe-type dual problems related to MOSIP-HM and derive weak and converse duality theorems relating MOSIP-HM and the dual problems. Several non-trivial numerical examples are provided to illustrate the significance of the derived results. The results deduced in the paper extend and generalize several notable works existing in the literature.
Pub Date: 2024-01-31 | DOI: 10.1007/s10898-024-01367-3
Title: "Efficiency conditions and duality for multiobjective semi-infinite programming problems on Hadamard manifolds"
Authors: Balendu Bhooshan Upadhyay, Arnav Ghosh, Savin Treanţă
Pub Date: 2024-01-28 | DOI: 10.1007/s10898-023-01361-1
Marc C. Robini, Lihui Wang, Yuemin Zhu
Majorization–minimization (MM) is a versatile optimization technique that operates on surrogate functions satisfying tangency and domination conditions. Our focus is on differentiable optimization using inexact MM with quadratic surrogates, which amounts to approximately solving a sequence of symmetric positive definite systems. We begin by investigating the convergence properties of this process, from subconvergence to R-linear convergence, with emphasis on tame objectives. Then we provide a numerically stable implementation based on truncated conjugate gradient. Applications to multidimensional scaling and regularized inversion are discussed and illustrated through numerical experiments on graph layout and X-ray tomography. In the end, quadratic MM not only offers solid guarantees of convergence and stability, but is robust to the choice of its control parameters.
Title: "The appeals of quadratic majorization–minimization"
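A minimal instance of quadratic MM is robust regression with the pseudo-Huber loss: since the loss has curvature at most 1 in the residuals, f(w + d) is majorized by f(w) + grad(w)'d + (1/2) d'X'Xd, and each surrogate is minimized by solving an SPD system with conjugate gradient (the loss and data are illustrative; the paper's truncated-CG safeguards are omitted):

```python
import numpy as np

def f(w, X, y):
    """Pseudo-Huber robust regression objective (curvature <= 1 in residuals)."""
    r = X @ w - y
    return float(np.sum(np.sqrt(1.0 + r ** 2) - 1.0))

def grad_f(w, X, y):
    r = X @ w - y
    return X.T @ (r / np.sqrt(1.0 + r ** 2))

def cg(A, b, tol=1e-12, maxit=100):
    """Plain conjugate gradient for the SPD system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = float(r @ r)
    for _ in range(maxit):
        if rs < tol:
            break
        Ap = A @ p
        alpha = rs / float(p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = float(r @ r)
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def mm_quadratic(X, y, iters=300):
    """MM with the fixed quadratic surrogate Hessian H = X^T X:
    each step solves H d = -grad via CG, guaranteeing monotone descent."""
    H = X.T @ X
    w = np.zeros(X.shape[1])
    vals = [f(w, X, y)]
    for _ in range(iters):
        w = w + cg(H, -grad_f(w, X, y))
        vals.append(f(w, X, y))
    return w, vals
```

The tangency and domination conditions hold by construction here, so the objective values are non-increasing at every iteration, which is the stability property the article analyzes in the inexact setting.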
Pub Date: 2024-01-25 | DOI: 10.1007/s10898-023-01359-9
Yingkai Song, Paul I. Barton
This article proposes new practical methods for furnishing generalized derivative information of optimal-value functions with embedded parameterized convex programs, with potential applications in nonsmooth equation-solving and optimization. We consider three cases of parameterized convex programs: (1) partial convexity, where the functions in the convex programs are convex with respect to the decision variables for fixed values of the parameters; (2) joint convexity, where the functions are convex with respect to both the decision variables and the parameters; and (3) linear programs where the parameters appear in the objective function. These new methods calculate an LD-derivative, a recently established and useful generalized derivative concept, by constructing and solving a sequence of auxiliary linear programs. In the general partial convexity case, our new method requires that the strong Slater conditions be satisfied for the embedded convex program's decision space and that the convex program have a unique optimal solution. We show that these conditions are essentially less stringent than the regularity conditions required by certain established methods, while our new method is at the same time computationally preferable. In the joint convexity case, the uniqueness requirement on the optimal solution is further relaxed, and to our knowledge there was no established method for computing generalized derivatives prior to this work.
In the linear program case, neither the Slater conditions nor the uniqueness of an optimal solution is required by our new method.
Title: "Generalized derivatives of optimal-value functions with parameterized convex programs embedded"
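The smooth special case underlying such sensitivity results, namely that the gradient of an LP's optimal value with respect to the objective coefficients equals the unique optimal solution (a Danskin-type fact), can be checked on a toy LP over the simplex. The instance is illustrative and far simpler than the paper's LD-derivative machinery, which is precisely about the cases where this gradient fails to exist:

```python
def lp_over_simplex(c):
    """min c^T x  subject to  x >= 0, sum(x) = 1.
    The optimum sits at a vertex: all mass on the smallest cost coefficient."""
    j = min(range(len(c)), key=lambda i: c[i])
    x = [0.0] * len(c)
    x[j] = 1.0
    value = sum(ci * xi for ci, xi in zip(c, x))
    return value, x

def optimal_value_gradient(c):
    """With a unique optimal solution, the gradient of the optimal value
    with respect to c is that solution (Danskin-type sensitivity)."""
    return lp_over_simplex(c)[1]

c = [3.0, 1.0, 2.0]
v0, _ = lp_over_simplex(c)
grad = optimal_value_gradient(c)
h = 1e-6  # finite-difference check of dv/dc_j
fd = [(lp_over_simplex(c[:j] + [c[j] + h] + c[j + 1:])[0] - v0) / h
      for j in range(len(c))]
```

When the argmin is not unique the optimal-value function is nonsmooth in c, which is where LD-derivatives and the auxiliary LPs of the article take over.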
Pub Date: 2024-01-25 | DOI: 10.1007/s10898-023-01363-z
Xin Cheng, Xiang Li
The problem of finding the optimal flow allocation within an industrial water-using and treatment network can be formulated as a nonconvex nonlinear program or a nonconvex mixed-integer nonlinear program. The efficiency of global optimization of the nonconvex program relies heavily on the strength of the problem formulation. In this paper, we propose a variant of the commonly used P-formulation, called the P$^*$-formulation, for the water treatment network (WTN) and for the total water network (TWN), which includes water-using and water treatment units. For either type of network, we prove that the P$^*$-formulation is at least as strong as the P-formulation under mild bound consistency conditions. We also prove, for either type of network, that the P$^*$-formulation is at least as strong as the split-fraction-based formulation (called the SF-formulation) under certain bound consistency conditions. The computational study shows that the P$^*$-formulation significantly outperforms the P- and SF-formulations. For some problem instances, the P$^*$-formulation is faster than the other two formulations by several orders of magnitude.
Title: "A strong P-formulation for global optimization of industrial water-using and treatment networks"
Pub Date: 2024-01-22 | DOI: 10.1007/s10898-023-01357-x
Jiawei Chen, Huasheng Su, Xiaoqing Ou, Yibing Lv
In this paper, we investigate optimality conditions for the nonsmooth sparsity multiobjective optimization problem (SMOP for short) via advanced variational analysis. We present variational-analysis characterizations of the sparse set, such as its tangent cones, normal cones, dual cones, and second-order tangent set, and give the relationships among the sparse set, its tangent cones, and its second-order tangent set. First-order necessary conditions for local weakly Pareto efficient solutions of SMOP are established under suitable assumptions. We also obtain the equivalence between basic feasible points and stationary points defined by the Fréchet normal cone of SMOP. Sufficient optimality conditions for SMOP are derived under pseudoconvexity. Moreover, second-order necessary and sufficient optimality conditions for SMOP are established via the Dini directional derivatives of the objective function and the Bouligand tangent cone and second-order tangent set of the sparse set.
Title: "First- and second-order optimality conditions of nonsmooth sparsity multiobjective optimization via variational analysis"