Simple Hyper-Heuristics Control the Neighbourhood Size of Randomised Local Search Optimally for LeadingOnes*

IF 4.6 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Evolutionary Computation | Pub Date: 2020-09-02 | DOI: 10.1162/evco_a_00258
Andrei Lissovoi;Pietro S. Oliveto;John Alasdair Warwicker
{"title":"简单超启发式算法最优控制领先者随机局部搜索的邻域大小*","authors":"Andrei Lissovoi;Pietro S. Oliveto;John Alasdair Warwicker","doi":"10.1162/evco_a_00258","DOIUrl":null,"url":null,"abstract":"<para>Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics during the optimisation process from a set of low-level heuristics. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this article, we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end we consider the most simple HHs from the literature and rigorously analyse their performance for the <small>LeadingOnes</small> benchmark function. Our analysis shows that the standard Simple Random, Permutation, Greedy, and Random Gradient HHs show no signs of learning. While the former HHs do not attempt to learn from the past performance of low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it is embedded with a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the “simple” Random Gradient HH so success can be measured over a fixed period of time <inline-formula><mml:math><mml:mi>τ</mml:mi></mml:math></inline-formula>, instead of a single iteration. For <small>LeadingOnes</small> we prove that the <italic>Generalised Random Gradient (GRG)</i> HH can learn to adapt the neighbourhood size of Randomised Local Search to optimality during the run. As a result, we prove it has the best possible performance achievable with the low-level heuristics (Randomised Local Search with different neighbourhood sizes), up to lower-order terms. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. In particular, with access to <inline-formula><mml:math><mml:mi>k</mml:mi></mml:math></inline-formula> low-level local search heuristics, it outperforms the best-possible algorithm using any subset of the <inline-formula><mml:math><mml:mi>k</mml:mi></mml:math></inline-formula> heuristics. Finally, we show that the advantages of GRG over Randomised Local Search and Evolutionary Algorithms using standard bit mutation increase if the anytime performance is considered (i.e., the performance gap is larger if approximate solutions are sought rather than exact ones). 
Experimental analyses confirm these results for different problem sizes (up to <inline-formula><mml:math><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:msup><mml:mn>10</mml:mn><mml:mn>8</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula>) and shed some light on the best choices for the parameter <inline-formula><mml:math><mml:mi>τ</mml:mi></mml:math></inline-formula> in various situations.</para>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":null,"pages":null},"PeriodicalIF":4.6000,"publicationDate":"2020-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1162/evco_a_00258","citationCount":"31","resultStr":"{\"title\":\"Simple Hyper-Heuristics Control the Neighbourhood Size of Randomised Local Search Optimally for LeadingOnes*\",\"authors\":\"Andrei Lissovoi;Pietro S. Oliveto;John Alasdair Warwicker\",\"doi\":\"10.1162/evco_a_00258\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<para>Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics during the optimisation process from a set of low-level heuristics. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this article, we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end we consider the most simple HHs from the literature and rigorously analyse their performance for the <small>LeadingOnes</small> benchmark function. Our analysis shows that the standard Simple Random, Permutation, Greedy, and Random Gradient HHs show no signs of learning. While the former HHs do not attempt to learn from the past performance of low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it is embedded with a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the “simple” Random Gradient HH so success can be measured over a fixed period of time <inline-formula><mml:math><mml:mi>τ</mml:mi></mml:math></inline-formula>, instead of a single iteration. For <small>LeadingOnes</small> we prove that the <italic>Generalised Random Gradient (GRG)</i> HH can learn to adapt the neighbourhood size of Randomised Local Search to optimality during the run. As a result, we prove it has the best possible performance achievable with the low-level heuristics (Randomised Local Search with different neighbourhood sizes), up to lower-order terms. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. In particular, with access to <inline-formula><mml:math><mml:mi>k</mml:mi></mml:math></inline-formula> low-level local search heuristics, it outperforms the best-possible algorithm using any subset of the <inline-formula><mml:math><mml:mi>k</mml:mi></mml:math></inline-formula> heuristics. Finally, we show that the advantages of GRG over Randomised Local Search and Evolutionary Algorithms using standard bit mutation increase if the anytime performance is considered (i.e., the performance gap is larger if approximate solutions are sought rather than exact ones). 
Experimental analyses confirm these results for different problem sizes (up to <inline-formula><mml:math><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:msup><mml:mn>10</mml:mn><mml:mn>8</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula>) and shed some light on the best choices for the parameter <inline-formula><mml:math><mml:mi>τ</mml:mi></mml:math></inline-formula> in various situations.</para>\",\"PeriodicalId\":50470,\"journal\":{\"name\":\"Evolutionary Computation\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2020-09-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1162/evco_a_00258\",\"citationCount\":\"31\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Evolutionary Computation\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/9185166/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Evolutionary Computation","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/9185166/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 31

Abstract

Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics during the optimisation process from a set of low-level heuristics. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this article, we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end we consider the most simple HHs from the literature and rigorously analyse their performance for the LeadingOnes benchmark function. Our analysis shows that the standard Simple Random, Permutation, Greedy, and Random Gradient HHs show no signs of learning. While the former HHs do not attempt to learn from the past performance of low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it is embedded with a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the “simple” Random Gradient HH so success can be measured over a fixed period of time τ, instead of a single iteration. For LeadingOnes we prove that the Generalised Random Gradient (GRG) HH can learn to adapt the neighbourhood size of Randomised Local Search to optimality during the run. As a result, we prove it has the best possible performance achievable with the low-level heuristics (Randomised Local Search with different neighbourhood sizes), up to lower-order terms. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. In particular, with access to k low-level local search heuristics, it outperforms the best-possible algorithm using any subset of the k heuristics. Finally, we show that the advantages of GRG over Randomised Local Search and Evolutionary Algorithms using standard bit mutation increase if the anytime performance is considered (i.e., the performance gap is larger if approximate solutions are sought rather than exact ones). Experimental analyses confirm these results for different problem sizes (up to n = 10^8) and shed some light on the best choices for the parameter τ in various situations.
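The abstract describes the Generalised Random Gradient (GRG) hyper-heuristic only at a high level. The sketch below (Python, illustrative only) shows one plausible reading of the mechanism on LeadingOnes with the low-level heuristics RLS_1, ..., RLS_k (flip exactly k bits): a heuristic is drawn uniformly at random and run for a period of τ evaluations; a success anywhere in the period keeps the heuristic for another period, otherwise a new one is drawn. The exact period/acceptance details and all function and parameter names are assumptions, not the authors' precise specification.

```python
import random


def leading_ones(x):
    """LeadingOnes: the number of consecutive 1-bits counted from the left."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count


def rls_k(x, k):
    """Low-level heuristic RLS_k: flip exactly k distinct, uniformly chosen bits."""
    y = list(x)
    for i in random.sample(range(len(y)), k):
        y[i] = 1 - y[i]
    return y


def generalised_random_gradient(n, k_max, tau, max_evals=10**7):
    """Illustrative sketch of the Generalised Random Gradient (GRG) hyper-heuristic.

    A low-level heuristic (a neighbourhood size k in {1, ..., k_max}) is drawn
    uniformly at random and run for a period of tau evaluations. If at least one
    fitness improvement occurs within the period, the same heuristic is kept for
    another period; otherwise a new heuristic is drawn at random.
    """
    x = [random.randint(0, 1) for _ in range(n)]
    fx = leading_ones(x)
    evals = 0
    while fx < n and evals < max_evals:
        k = random.randint(1, k_max)            # draw a low-level heuristic u.a.r.
        successful = True
        while successful and fx < n and evals < max_evals:
            successful = False
            for _ in range(tau):                # run it for a period of tau steps
                evals += 1
                y = rls_k(x, k)
                fy = leading_ones(y)
                if fy > fx:                     # strict improvement counts as a success
                    x, fx = y, fy
                    successful = True
                if fx == n or evals >= max_evals:
                    break
    return x, fx, evals


if __name__ == "__main__":
    _, fitness, evals = generalised_random_gradient(n=100, k_max=2, tau=500)
    print(f"Reached LeadingOnes value {fitness} after {evals} evaluations")
```

The choice of τ is the trade-off the paper analyses: if the period is too short, a well-suited neighbourhood size is discarded before it has a realistic chance of producing an improvement; if it is too long, evaluations are wasted on an ill-suited one.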
Source journal
Evolutionary Computation (Engineering & Technology - Computer Science: Theory & Methods)
CiteScore: 6.40
Self-citation rate: 1.50%
Articles per year: 20
Review time: 3 months
About the journal: Evolutionary Computation is a leading journal in its field. It provides an international forum for facilitating and enhancing the exchange of information among researchers involved in both the theoretical and practical aspects of computational systems drawing their inspiration from nature, with particular emphasis on evolutionary models of computation such as genetic algorithms, evolutionary strategies, classifier systems, evolutionary programming, and genetic programming. It welcomes articles from related fields such as swarm intelligence (e.g. Ant Colony Optimization and Particle Swarm Optimization), and other nature-inspired computation paradigms (e.g. Artificial Immune Systems). As well as publishing articles describing theoretical and/or experimental work, the journal also welcomes application-focused papers describing breakthrough results in an application domain or methodological papers where the specificities of the real-world problem led to significant algorithmic improvements that could possibly be generalized to other areas.