A projected-search interior-point method for nonlinearly constrained optimization
Pub Date: 2024-02-21, DOI: 10.1007/s10589-023-00549-1
Philip E. Gill, Minxin Zhang
This paper concerns the formulation and analysis of a new interior-point method for constrained optimization that combines a shifted primal-dual interior-point method with a projected-search method for bound-constrained optimization. The method involves the computation of an approximate Newton direction for a primal-dual penalty-barrier function that incorporates shifts on both the primal and dual variables. Shifts on the dual variables allow the method to be safely “warm started” from a good approximate solution and avoid the possibility of very large solutions of the associated path-following equations. The approximate Newton direction is used in conjunction with a new projected-search line-search algorithm that employs a flexible non-monotone quasi-Armijo line search for the minimization of each penalty-barrier function. Numerical results are presented for a large set of constrained optimization problems. For comparison purposes, results are also given for two primal-dual interior-point methods that do not use projection. The first is a method that shifts both the primal and dual variables. The second is a method that involves shifts on the primal variables only. The results show that the use of both primal and dual shifts in conjunction with projection gives a method that is more robust and requires significantly fewer iterations. In particular, the number of times that the search direction must be computed is substantially reduced. Results from a set of quadratic programming test problems indicate that the method is particularly well-suited to solving the quadratic programming subproblem in a sequential quadratic programming method for nonlinear optimization.
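As a concrete illustration of the projected-search ingredient, the sketch below combines a projection onto simple bounds with a non-monotone backtracking line search. It is a minimal sketch only: the acceptance test and parameter names are illustrative assumptions, not the quasi-Armijo condition defined in the paper.

```python
import numpy as np

def projected_search_step(f, grad_f, x, d, lower, upper, f_history,
                          eta=1e-4, beta=0.5, max_backtracks=30):
    """One non-monotone projected backtracking step along the piecewise-linear
    path alpha -> P(x + alpha*d), with P the projection onto [lower, upper]."""
    f_ref = max(f_history)            # non-monotone reference value
    slope = grad_f(x).dot(d)          # directional derivative at alpha = 0
    alpha = 1.0
    for _ in range(max_backtracks):
        x_new = np.clip(x + alpha * d, lower, upper)  # projection onto the bounds
        if f(x_new) <= f_ref + eta * alpha * slope:   # Armijo-type acceptance
            return x_new, alpha
        alpha *= beta                 # backtrack
    return np.clip(x + alpha * d, lower, upper), alpha
```

Because the search path is piecewise linear, components of x + alpha*d that hit their bounds stay fixed there, which is what allows a single search to change the active set.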
An infeasible interior-point arc-search method with Nesterov’s restarting strategy for linear programming problems
Pub Date: 2024-02-20, DOI: 10.1007/s10589-024-00561-z
Einosuke Iida, Makoto Yamashita
An arc-search interior-point method is a type of interior-point method that approximates the central path by an ellipsoidal arc, and it can often reduce the number of iterations. In this work, to further reduce the number of iterations and the computation time for solving linear programming problems, we propose two arc-search interior-point methods using Nesterov’s restarting strategy, a well-known technique for accelerating the gradient method with a momentum term. The first method generates a sequence of iterates in a neighborhood of the central path, and we prove that it converges to an optimal solution and that it is a polynomial-time method. The second method incorporates the concept of the Mehrotra-type interior-point method to improve numerical performance. Numerical experiments demonstrate that the second method reduces the number of iterations and the computational time compared to existing interior-point methods, owing to the momentum term.
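The restarting mechanism itself fits in a few lines. The sketch below applies Nesterov momentum with a gradient-based restart to plain gradient descent; it is only meant to illustrate the strategy the paper transplants to the arc-search interior-point iterates, and the restart test shown (a standard gradient-based one) is an assumption, not necessarily the variant used by the authors.

```python
import numpy as np

def nesterov_restart_gd(grad, x0, step, iters=500):
    """Gradient descent with Nesterov momentum; the momentum weight is
    reset whenever the momentum points against the direction of progress."""
    x_prev, x = x0.copy(), x0.copy()
    t = 1.0
    for _ in range(iters):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # momentum extrapolation
        g = grad(y)
        x_next = y - step * g                          # forward step from y
        if g.dot(x_next - x) > 0.0:                    # restart test
            t_next = 1.0                               # drop the momentum
        x_prev, x, t = x, x_next, t_next
    return x
```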
Convex mixed-integer nonlinear programs derived from generalized disjunctive programming using cones
Pub Date: 2024-02-20, DOI: 10.1007/s10589-024-00557-9
David E. Bernal Neira, Ignacio E. Grossmann
We propose the formulation of convex Generalized Disjunctive Programming (GDP) problems using conic inequalities, leading to conic GDP problems. We then show the reformulation of conic GDPs into Mixed-Integer Conic Programming (MICP) problems through both the big-M and hull reformulations. These reformulations have the advantage that they are representable using the same cones as the original conic GDP. In the case of the hull reformulation, they require no approximation of the perspective function. Moreover, the MICP problems derived can be solved by specialized conic solvers and offer a natural extended formulation amenable to both conic and gradient-based solvers. We present the closed form of several convex functions and their respective perspectives in conic sets, allowing users to formulate their conic GDP problems easily. Finally, we implement a large set of conic GDP examples and solve them via the scalar nonlinear and conic mixed-integer reformulations. These examples include applications from process systems engineering, machine learning, and randomly generated instances. Our results show that the conic structure can be exploited to solve these challenging MICP problems more efficiently. Our main contribution is providing the reformulations, examples, and computational results that support the claim that taking advantage of conic formulations of convex GDP, instead of their nonlinear algebraic descriptions, can lead to a more efficient solution of these problems.
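For orientation, the two reformulations have well-known algebraic prototypes; the display below gives the standard (non-conic) forms of the big-M and hull reformulations of a disjunction, with the conic analogues being the paper's contribution.

```latex
% Reformulations of the disjunction  \bigvee_{i} [\, g_i(x) \le 0 \,]:
\begin{align*}
  \text{big-M:} \quad & g_i(x) \le M_i\,(1 - y_i),
     && \textstyle\sum_i y_i = 1, \quad y_i \in \{0,1\}, \\
  \text{hull:}  \quad & x = \textstyle\sum_i \nu_i, \qquad
     y_i\, g_i(\nu_i / y_i) \le 0,
     && \textstyle\sum_i y_i = 1, \quad y_i \in \{0,1\},
\end{align*}
% where y_i g_i(\nu_i / y_i) denotes the closure of the perspective of g_i
% at y_i = 0. In the conic setting this perspective is representable with
% the same cones, so no smooth approximation of it is needed.
```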
An inexact regularized proximal Newton method for nonconvex and nonsmooth optimization
Pub Date: 2024-02-20, DOI: 10.1007/s10589-024-00560-0
Ruyu Liu, Shaohua Pan, Yuqia Wu, Xiaoqi Yang
This paper focuses on the minimization of a sum of a twice continuously differentiable function f and a nonsmooth convex function. An inexact regularized proximal Newton method is proposed by an approximation to the Hessian of f involving the $\varrho$th power of the KKT residual. For $\varrho = 0$, we justify the global convergence of the iterate sequence for the KL objective function and its R-linear convergence rate for the KL objective function of exponent 1/2. For $\varrho \in (0,1)$, by assuming that cluster points satisfy a locally Hölderian error bound of order $q$ on a second-order stationary point set and a local error bound of order $q > 1 + \varrho$ on the common stationary point set, respectively, we establish the global convergence of the iterate sequence and its superlinear convergence rate with order depending on $q$ and $\varrho$. A dual semismooth Newton augmented Lagrangian method is also developed for seeking an inexact minimizer of subproblems. Numerical comparisons with two state-of-the-art methods on $\ell_1$-regularized Student's t-regressions, group penalized Student's t-regressions, and nonconvex image restoration confirm the efficiency of the proposed method.
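A minimal sketch of one such step for the model problem f(x) + mu*||x||_1 follows. The regularization weight is the $\varrho$th power of a proximal-gradient residual, and the subproblem is solved here by plain proximal gradient; the paper instead uses a dual semismooth Newton augmented Lagrangian method and a more careful inexactness criterion, so everything below is illustrative.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal map of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def regularized_prox_newton_step(grad, hess, x, mu, varrho=0.5, inner_iters=200):
    g, H = grad(x), hess(x)
    residual = np.linalg.norm(x - prox_l1(x - g, mu))   # stationarity residual
    H_reg = H + residual**varrho * np.eye(len(x))       # regularized Hessian
    L = np.linalg.norm(H_reg, 2)                        # step size for the inner solver
    d = np.zeros_like(x)
    for _ in range(inner_iters):                        # inexact subproblem solve:
        model_grad = g + H_reg @ d                      # min_d g'd + d'H_reg d/2 + mu||x+d||_1
        d = prox_l1(x + d - model_grad / L, mu / L) - x
    return x + d
```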
Convex approximations of two-stage risk-averse mixed-integer recourse models
Pub Date: 2024-02-13, DOI: 10.1007/s10589-024-00555-x
E. Ruben van Beesten, Ward Romeijnders, Kees Jan Roodbergen
We consider two-stage risk-averse mixed-integer recourse models with law invariant coherent risk measures. As in the risk-neutral case, these models are generally non-convex as a result of the integer restrictions on the second-stage decision variables, and hence hard to solve. To overcome this issue, we propose a convex approximation approach. We derive a performance guarantee for this approximation in the form of an asymptotic error bound, which depends on the choice of risk measure. This error bound, which extends an existing error bound for the conditional value at risk, shows that our approximation method works particularly well if the distribution of the random parameters in the model is highly dispersed. For special cases we derive tighter, non-asymptotic error bounds. Whereas our error bounds are valid only for a continuously distributed second-stage right-hand side vector, practical optimization methods often require discrete distributions. In this context, we show that our error bounds provide statistical error bounds for the corresponding (discretized) sample average approximation (SAA) model. In addition, we construct a Benders’ decomposition algorithm that uses our convex approximations in an SAA framework, and we provide a performance guarantee for the resulting algorithm solution. Finally, we perform numerical experiments which show that for certain risk measures our approach works even better than our theoretical performance guarantees suggest.
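The discretization the statistical error bounds refer to is the standard sample average approximation, sketched below for a generic two-stage objective. The function names are placeholders; the paper's contribution lies in the convex approximation of the recourse value and the resulting guarantees, not in the SAA step itself.

```python
import numpy as np

def saa_objective(first_stage_cost, recourse_value, x, scenarios):
    """SAA estimate of c(x) + E[Q(x, xi)]: the expectation over xi is
    replaced by an average over a finite sample of scenarios."""
    return first_stage_cost(x) + np.mean([recourse_value(x, xi) for xi in scenarios])
```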
Coordinate descent methods beyond smoothness and separability
Pub Date: 2024-02-13, DOI: 10.1007/s10589-024-00556-w
Flavia Chorobura, Ion Necoara
This paper deals with convex nonsmooth optimization problems. We introduce a general smooth approximation framework for the original function and apply random (accelerated) coordinate descent methods for minimizing the corresponding smooth approximations. Our framework covers the most important classes of smoothing techniques from the literature. Based on this general framework for the smooth approximation and using coordinate descent type methods, we derive convergence rates in function values for the original objective. Moreover, if the original function satisfies a growth condition, then we prove that the smooth approximations also inherit this condition and consequently the convergence rates are improved in this case. We also present a relative randomized coordinate descent algorithm for solving nonseparable minimization problems whose objective function is relatively smooth along coordinates with respect to a (possibly nonseparable) differentiable function. For this algorithm we also derive convergence rates in the convex case and under the growth condition for the objective.
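As a small instance of the smooth-then-descend pattern, the sketch below runs randomized coordinate descent on a LASSO objective whose l1 term is replaced by its Huber smoothing. The smoothing choice and step sizes are illustrative assumptions; the paper's framework covers general smoothing techniques and accelerated variants.

```python
import numpy as np

def random_cd_smoothed_lasso(A, b, lam, mu=1e-3, iters=20000, seed=0):
    """Randomized coordinate descent on 0.5*||Ax - b||^2 + lam*huber_mu(x),
    where huber_mu is a mu-smoothing of the absolute value."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    lips = (A ** 2).sum(axis=0) + lam / mu   # per-coordinate Lipschitz constants
    x = np.zeros(n)
    r = A @ x - b                            # residual, kept up to date
    for _ in range(iters):
        i = rng.integers(n)                  # sample a coordinate
        g = A[:, i] @ r + lam * np.clip(x[i] / mu, -1.0, 1.0)  # smoothed gradient
        step = g / lips[i]
        x[i] -= step
        r -= step * A[:, i]                  # O(m) residual update
    return x
```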
Accelerated forward–backward algorithms for structured monotone inclusions
Pub Date: 2024-02-11, DOI: 10.1007/s10589-023-00547-3
Paul-Emile Maingé, André Weng-Law
In this paper, we develop rapidly convergent forward–backward algorithms for computing zeroes of the sum of two maximally monotone operators. A modification of the classical forward–backward method is considered, incorporating an inertial term (close to the acceleration techniques introduced by Nesterov), a constant relaxation factor and a correction term, along with a preconditioning process. In a Hilbert space setting, we prove the weak convergence to equilibria of the iterates $(x_n)$, with worst-case rates of $o(n^{-1})$ in terms of both the discrete velocity and the fixed point residual, instead of the rates of $\mathcal{O}(n^{-1/2})$ classically established for related algorithms. Our procedure can also be adapted to more general monotone inclusions. In particular, we propose a fast primal-dual algorithmic solution to some class of convex-concave saddle point problems. In addition, we provide a well-adapted framework for solving this class of problems by means of standard proximal-like algorithms dedicated to structured monotone inclusions. Numerical experiments are also performed to illustrate the efficiency of the proposed strategy.
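For the special case where the two operators are a gradient and a subdifferential, the basic inertial forward–backward iteration looks as follows. This is a minimal sketch of the classical scheme being modified, without the relaxation factor, correction term, or preconditioning that the paper adds to obtain the improved rates; the constant inertia parameter is an illustrative choice.

```python
import numpy as np

def inertial_forward_backward(grad_f, prox_g, x0, step, alpha=0.3, iters=1000):
    """Inertial forward-backward iteration for min f(x) + g(x)."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + alpha * (x - x_prev)                        # inertial extrapolation
        x_prev, x = x, prox_g(y - step * grad_f(y), step)   # forward-backward step
    return x

# e.g. for g = ||.||_1:
# prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```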
Efficient gradient-based optimization for reconstructing binary images in applications to electrical impedance tomography
Pub Date: 2024-02-08, DOI: 10.1007/s10589-024-00553-z
A novel and highly efficient computational framework for reconstructing binary-type images, suitable for models of various complexity seen in diverse biomedical applications, is developed and validated. Efficiency in computational speed and accuracy is achieved by combining the advantages of recently developed optimization methods that use sample solutions with customized geometry and multiscale control space reduction, all paired with gradient-based techniques. The control space is effectively reduced based on the geometry of the samples and their individual contributions. The entire 3-step computational procedure has an easy-to-follow design due to a nominal number of tuning parameters, making the approach simple for practical implementation in various settings. Fairly straightforward methods for computing gradients make the framework compatible with any optimization software, including black-box ones. The performance of the complete computational framework is tested in applications to 2D inverse problems of cancer detection by electrical impedance tomography (EIT), using data from models generated synthetically and obtained from medical images showing the natural development of cancerous regions of various sizes and shapes. The results demonstrate the superior performance of the new method and its high potential for improving the overall quality of EIT-based procedures.
A benchmark generator for scenario-based discrete optimization
Pub Date: 2024-02-06, DOI: 10.1007/s10589-024-00551-1
Matheus Bernardelli de Moraes, Guilherme Palermo Coelho
Multi-objective evolutionary algorithms (MOEAs) are a practical tool for solving non-linear problems with multiple objective functions. However, when applied to expensive black-box scenario-based optimization problems, their performance becomes constrained due to computational or time limitations. Scenario-based optimization refers to problems that are subject to uncertainty, where each solution is evaluated over an ensemble of scenarios to reduce risks. A primary reason for MOEA failure is that algorithm development is challenging in these cases, as many of these problems are black-box, high-dimensional, discrete, and computationally expensive. For this reason, this paper proposes a benchmark generator to create fast-to-compute scenario-based discrete test problems with different degrees of complexity. Our framework uses the structure of the Multi-Objective Knapsack Problem to create test problems that simulate characteristics of expensive scenario-based discrete problems. To validate our proposition, we tested four state-of-the-art MOEAs on 30 test instances generated with our framework, and the empirical results demonstrate that the suggested benchmark generator can be used to analyze the ability of MOEAs to tackle expensive scenario-based discrete optimization problems.
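The construction can be sketched in a few lines: uncertain profits are sampled per scenario, and a candidate knapsack solution is scored by its average objective vector over the ensemble. All distributions and parameters below are illustrative assumptions; the paper's generator controls the degree of complexity explicitly.

```python
import numpy as np

def make_scenario_knapsack(n_items=50, n_objectives=2, n_scenarios=100, seed=0):
    """Build a scenario-based multi-objective knapsack instance and return
    an evaluation function for 0/1 solution vectors."""
    rng = np.random.default_rng(seed)
    weights = rng.integers(1, 100, size=n_items)
    capacity = int(0.5 * weights.sum())
    nominal = rng.integers(1, 100, size=(n_objectives, n_items))
    # Each scenario perturbs the nominal profits multiplicatively.
    scenarios = nominal[None] * rng.lognormal(
        0.0, 0.3, size=(n_scenarios, n_objectives, n_items))

    def evaluate(x):
        """Mean profit per objective over all scenarios; None if infeasible."""
        if weights @ x > capacity:
            return None
        return (scenarios @ x).mean(axis=0)

    return evaluate, weights, capacity
```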
A note on the convergence of deterministic gradient sampling in nonsmooth optimization
Pub Date: 2024-02-06, DOI: 10.1007/s10589-024-00552-0
Bennet Gebken
Approximation of subdifferentials is one of the main tasks when computing descent directions for nonsmooth optimization problems. In this article, we propose a bisection method for weakly lower semismooth functions which is able to compute new subgradients that improve a given approximation whenever a direction with insufficient descent has been computed. Combined with a recently proposed deterministic gradient sampling approach, this yields a deterministic and provably convergent way to approximate subdifferentials for computing descent directions.
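The mechanism can be pictured as follows: if the direction d computed from the current approximation fails to give sufficient descent, some point on the segment [x, x + t*d] must carry a gradient g with g'd above the descent threshold, and bisection can locate it. The sketch below is a plausible rendering of that idea under standard Armijo-style tests; the paper's precise acceptance conditions for weakly lower semismooth functions differ in detail.

```python
import numpy as np

def find_improving_subgradient(f, grad, x, d, t, c=0.25, tol=1e-10):
    """Bisection on [x, x + t*d] for a gradient g with g.dot(d) > -c*||d||^2,
    i.e. a new subgradient explaining the insufficient descent of d."""
    threshold = -c * d.dot(d)
    a, b = 0.0, t
    while b - a > tol:
        m = 0.5 * (a + b)
        g = grad(x + m * d)
        if g.dot(d) > threshold:
            return g                             # improving subgradient found
        if f(x + m * d) <= f(x) + m * threshold:
            a = m                                # descent still sufficient at m
        else:
            b = m                                # failure occurs before m
    return grad(x + b * d)
```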