Full-low evaluation methods for derivative-free optimization

A. Berahas, Oumaima Sohab, L. N. Vicente
{"title":"Full-low evaluation methods for derivative-free optimization","authors":"A. Berahas, Oumaima Sohab, L. N. Vicente","doi":"10.1080/10556788.2022.2142582","DOIUrl":null,"url":null,"abstract":"We propose a new class of rigorous methods for derivative-free optimization with the aim of delivering efficient and robust numerical performance for functions of all types, from smooth to non-smooth, and under different noise regimes. To this end, we have developed a class of methods, called Full-Low Evaluation methods, organized around two main types of iterations. The first iteration type (called Full-Eval) is expensive in function evaluations, but exhibits good performance in the smooth and non-noisy cases. For the theory, we consider a line search based on an approximate gradient, backtracking until a sufficient decrease condition is satisfied. In practice, the gradient was approximated via finite differences, and the direction was calculated by a quasi-Newton step (BFGS). The second iteration type (called Low-Eval) is cheap in function evaluations, yet more robust in the presence of noise or non-smoothness. For the theory, we consider direct search, and in practice we use probabilistic direct search with one random direction and its negative. A switch condition from Full-Eval to Low-Eval iterations was developed based on the values of the line-search and direct-search stepsizes. If enough Full-Eval steps are taken, we derive a complexity result of gradient-descent type. Under failure of Full-Eval, the Low-Eval iterations become the drivers of convergence yielding non-smooth convergence. Full-Low Evaluation methods are shown to be efficient and robust in practice across problems with different levels of smoothness and noise.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"87 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optimization Methods and Software","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/10556788.2022.2142582","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

We propose a new class of rigorous methods for derivative-free optimization with the aim of delivering efficient and robust numerical performance for functions of all types, from smooth to non-smooth, and under different noise regimes. To this end, we have developed a class of methods, called Full-Low Evaluation methods, organized around two main types of iterations. The first iteration type (called Full-Eval) is expensive in function evaluations, but exhibits good performance in the smooth and non-noisy cases. For the theory, we consider a line search based on an approximate gradient, backtracking until a sufficient decrease condition is satisfied. In practice, the gradient is approximated via finite differences, and the direction is computed by a quasi-Newton (BFGS) step. The second iteration type (called Low-Eval) is cheap in function evaluations, yet more robust in the presence of noise or non-smoothness. For the theory, we consider direct search, and in practice we use probabilistic direct search with one random direction and its negative. A condition for switching from Full-Eval to Low-Eval iterations is based on the values of the line-search and direct-search stepsizes. If enough Full-Eval steps are taken, we derive a complexity result of the gradient-descent type. When Full-Eval fails, the Low-Eval iterations become the drivers of convergence, yielding convergence results of the non-smooth type. Full-Low Evaluation methods are shown to be efficient and robust in practice across problems with different levels of smoothness and noise.
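Since the abstract describes the whole algorithmic loop, a compact sketch may help fix ideas. The following Python sketch is not the authors' implementation: it interleaves the two iteration types under several stated simplifications, namely a forward-difference gradient, a textbook BFGS inverse-Hessian update, an Armijo backtracking line search, and direct-search polling on one random direction and its negative with a sufficient decrease of order alpha squared. The switch test (beta * ||g||^2 < c * alpha_ds^2) and the strict alternation back to Full-Eval are illustrative stand-ins for the paper's stepsize-based conditions; all names and constants (fd_gradient, full_low_eval, c_ds, ...) are hypothetical.

```python
import numpy as np

def fd_gradient(f, x, h=1e-7):
    """Forward-difference gradient estimate (costs n extra evaluations)."""
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def full_low_eval(f, x0, max_iters=200, tol=1e-8, seed=0):
    """Illustrative sketch of a Full-Low Evaluation loop (not the paper's code).

    Full-Eval: finite-difference gradient + BFGS direction, backtracking
    line search with an Armijo sufficient-decrease condition.
    Low-Eval:  direct search polling one random direction and its negative,
    accepted under a sufficient decrease of order alpha**2.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n = len(x)
    H = np.eye(n)                 # BFGS inverse-Hessian approximation
    g = fd_gradient(f, x)
    alpha_ds = 1.0                # direct-search stepsize
    mode = "full"
    c1, rho_ls = 1e-4, 0.5        # Armijo constant, backtracking factor
    c_ds = 1e-3                   # forcing constant for direct search
    for _ in range(max_iters):
        fx = f(x)
        if mode == "full":
            d = -H @ g                          # quasi-Newton direction
            beta = 1.0                          # line-search stepsize
            while f(x + beta * d) > fx + c1 * beta * (g @ d) and beta > 1e-12:
                beta *= rho_ls                  # backtrack
            if beta <= 1e-12:                   # line search failed:
                mode = "low"                    # hand over to Low-Eval
                continue
            s = beta * d
            x_new = x + s
            g_new = fd_gradient(f, x_new)
            y = g_new - g
            if y @ s > 1e-12:                   # BFGS update (curvature check)
                rho = 1.0 / (y @ s)
                V = np.eye(n) - rho * np.outer(s, y)
                H = V @ H @ V.T + rho * np.outer(s, s)
            x, g = x_new, g_new
            # simplified switch test: a tiny productive step suggests
            # noise or non-smoothness, so try the cheaper iteration type
            if beta * np.linalg.norm(g) ** 2 < c_ds * alpha_ds ** 2:
                mode = "low"
        else:
            v = rng.standard_normal(n)
            v /= np.linalg.norm(v)              # one random direction...
            decrease = c_ds * alpha_ds ** 2
            if f(x + alpha_ds * v) < fx - decrease:
                x = x + alpha_ds * v
                alpha_ds *= 2.0                 # success: expand the stepsize
            elif f(x - alpha_ds * v) < fx - decrease:   # ...and its negative
                x = x - alpha_ds * v
                alpha_ds *= 2.0
            else:
                alpha_ds *= 0.5                 # poll failed: contract
            # gradient refresh only to keep this sketch short; in the actual
            # method, Low-Eval iterations remain cheap in evaluations
            g = fd_gradient(f, x)
            mode = "full"                       # strict alternation (simplification)
        if np.linalg.norm(g) < tol and alpha_ds < tol:
            break
    return x

if __name__ == "__main__":
    # non-smooth test: a smooth quadratic plus an |.| kink at the origin
    f = lambda z: float(z @ z + 0.05 * abs(z[0]))
    print(full_low_eval(f, np.array([2.0, -1.5])))
```

On a smooth, noiseless function, the Armijo line search rarely fails and the sketch behaves like finite-difference BFGS; when the backtracking collapses near a kink or under noise, the polling step and its shrinking stepsize alpha_ds take over, mirroring the division of labor the abstract describes.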