Territorial Differential Meta-Evolution: An Algorithm for Seeking All the Desirable Optima of a Multivariable Function
Richard Wehr, Scott R Saleska
Territorial Differential Meta-Evolution (TDME) is an efficient, versatile, and reliable algorithm for seeking all the global or desirable local optima of a multivariable function. It employs a progressive niching mechanism to optimize even challenging, high-dimensional functions with multiple global optima and misleading local optima. This article introduces TDME and uses standard and novel benchmark problems to quantify its advantages over HillVallEA, the best-performing algorithm on the standard benchmark suite used by all major multimodal optimization competitions since 2013. TDME matches HillVallEA on that benchmark suite and categorically outperforms it on a more comprehensive suite that better reflects the potential diversity of optimization problems. TDME achieves that performance without any problem-specific parameter tuning.
{"title":"Territorial Differential Meta-Evolution: An Algorithm for Seeking All the Desirable Optima of a Multivariable Function.","authors":"Richard Wehr, Scott R Saleska","doi":"10.1162/evco_a_00337","DOIUrl":"https://doi.org/10.1162/evco_a_00337","url":null,"abstract":"<p><p>Territorial Differential Meta-Evolution (TDME) is an efficient, versatile, and reliable algorithm for seeking all the global or desirable local optima of a multivariable function. It employs a progressive niching mechanism to optimize even challenging, highdimensional functions with multiple global optima and misleading local optima. This article introduces TDME and uses standard and novel benchmark problems to quantify its advantages over HillVallEA, which is the best-performing algorithm on the standard benchmark suite that has been used by all major multimodal optimization competitions since 2013. TDME matches HillVallEA on that benchmark suite and categorically outperforms it on a more comprehensive suite that better reflects the potential diversity of optimization problems. TDME achieves that performance without any problem-specific parameter tuning.</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":" ","pages":"1-31"},"PeriodicalIF":6.8,"publicationDate":"2023-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9726877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parameterless Gene-pool Optimal Mixing Evolutionary Algorithms
Arkadiy Dushatskiy, Marco Virgolin, Anton Bouter, Dirk Thierens, Peter A N Bosman
When it comes to solving optimization problems with evolutionary algorithms (EAs) in a reliable and scalable manner, detecting and exploiting linkage information, i.e., dependencies between variables, can be key. In this article, we present the latest version of, and propose substantial enhancements to, the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA): an EA explicitly designed to estimate and exploit linkage information. We begin by performing a large-scale search over several GOMEA design choices to understand what matters most and to obtain a generally best-performing version of the algorithm. Next, we introduce a novel version of GOMEA, called CGOMEA, in which linkage-based variation is further improved by filtering solution mating based on conditional dependencies. We compare our latest version of GOMEA, the newly introduced CGOMEA, and another contending linkage-aware EA, DSMGA-II, in an extensive experimental evaluation involving a benchmark set of nine black-box problems that can only be solved efficiently if their inherent dependency structure is unveiled and exploited. Finally, in an attempt to make EAs more usable and resilient to parameter choices, we investigate the performance of different automatic population management schemes for GOMEA and CGOMEA, de facto making the EAs parameterless. Our results show that GOMEA and CGOMEA significantly outperform the original GOMEA and DSMGA-II on most problems, setting a new state of the art for the field.
{"title":"Parameterless Gene-pool Optimal Mixing Evolutionary Algorithms.","authors":"Arkadiy Dushatskiy, Marco Virgolin, Anton Bouter, Dirk Thierens, Peter A N Bosman","doi":"10.1162/evco_a_00338","DOIUrl":"https://doi.org/10.1162/evco_a_00338","url":null,"abstract":"<p><p>When it comes to solving optimization problems with evolutionary algorithms (EAs) in a reliable and scalable manner, detecting and exploiting linkage information, i.e., dependencies between variables, can be key. In this article, we present the latest version of, and propose substantial enhancements to, the Gene-pool Optimal Mixing Evoutionary Algorithm (GOMEA): an EA explicitly designed to estimate and exploit linkage information. We begin by performing a largescale search over several GOMEA design choices to understand what matters most and obtain a generally best-performing version of the algorithm. Next, we introduce a novel version of GOMEA, called CGOMEA, where linkage-based variation is further improved by filtering solution mating based on conditional dependencies. We compare our latest version of GOMEA, the newly introduced CGOMEA, and another contending linkage-aware EA, DSMGA-II, in an extensive experimental evaluation, involving a benchmark set of 9 black-box problems that can only be solved efficiently if their inherent dependency structure is unveiled and exploited. Finally, in an attempt to make EAs more usable and resilient to parameter choices, we investigate the performance of different automatic population management schemes for GOMEA and CGOMEA, de facto making the EAs parameterless. Our results show that GOMEA and CGOMEA significantly outperform the original GOMEA and DSMGA-II on most problems, setting a new state of the art for the field.</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":" ","pages":"1-28"},"PeriodicalIF":6.8,"publicationDate":"2023-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10104132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Personal Perspective on Evolutionary Computation: A 35-Year Journey
Zbigniew Michalewicz
This paper presents a personal account of the author's 35-year "adventure" with Evolutionary Computation, from the first encounter in 1988 and many years of academic research through to working full-time in business, successfully implementing evolutionary algorithms for some of the world's largest corporations. The paper concludes with some observations and insights.
{"title":"A Personal Perspective on Evolutionary Computation: A 35-Year Journey","authors":"Zbigniew Michalewicz","doi":"10.1162/evco_a_00323","DOIUrl":"10.1162/evco_a_00323","url":null,"abstract":"This paper presents a personal account of the author's 35 years “adventure” with Evolutionary Computation—from the first encounter in 1988 and many years of academic research through to working full-time in business—successfully implementing evolutionary algorithms for some of the world's largest corporations. The paper concludes with some observations and insights.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"31 2","pages":"123-155"},"PeriodicalIF":6.8,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9662509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evolutionary Algorithms for Parameter Optimization—Thirty Years Later
Thomas H. W. Bäck; Anna V. Kononova; Bas van Stein; Hao Wang; Kirill A. Antonov; Roman T. Kalkreuth; Jacob de Nobel; Diederick Vermetten; Roy de Winter; Furong Ye
Thirty years, 1993–2023, is a huge time frame in science. We address some major developments in the field of evolutionary algorithms, with applications in parameter optimization, over these 30 years. These include the covariance matrix adaptation evolution strategy and some fast-growing fields such as multimodal optimization, surrogate-assisted optimization, multiobjective optimization, and automated algorithm design. We also discuss particle swarm optimization and differential evolution, which likewise did not exist 30 years ago. One of the key arguments made in the paper is that we need fewer algorithms, not more; the current trend, however, is to keep proposing new optimization algorithms by claiming ever more paradigms from nature. Moreover, we argue that proper benchmarking procedures are needed to sort out whether a newly proposed algorithm is useful or not. We also briefly discuss automated algorithm design approaches, including configurable algorithm design frameworks, as the proposed next step toward designing optimization algorithms automatically rather than by hand.
{"title":"Evolutionary Algorithms for Parameter Optimization—Thirty Years Later","authors":"Thomas H. W. Bäck;Anna V. Kononova;Bas van Stein;Hao Wang;Kirill A. Antonov;Roman T. Kalkreuth;Jacob de Nobel;Diederick Vermetten;Roy de Winter;Furong Ye","doi":"10.1162/evco_a_00325","DOIUrl":"10.1162/evco_a_00325","url":null,"abstract":"Thirty years, 1993–2023, is a huge time frame in science. We address some major developments in the field of evolutionary algorithms, with applications in parameter optimization, over these 30 years. These include the covariance matrix adaptation evolution strategy and some fast-growing fields such as multimodal optimization, surrogate-assisted optimization, multiobjective optimization, and automated algorithm design. Moreover, we also discuss particle swarm optimization and differential evolution, which did not exist 30 years ago, either. One of the key arguments made in the paper is that we need fewer algorithms, not more, which, however, is the current trend through continuously claiming paradigms from nature that are suggested to be useful as new optimization algorithms. Moreover, we argue that we need proper benchmarking procedures to sort out whether a newly proposed algorithm is useful or not. We also briefly discuss automated algorithm design approaches, including configurable algorithm design frameworks, as the proposed next step toward designing optimization algorithms automatically, rather than by hand.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"31 2","pages":"81-122"},"PeriodicalIF":6.8,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9664596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Editorial: Reflecting on Thirty Years of ECJ
Kenneth De Jong; Emma Hart
We reflect on 30 years of the journal Evolutionary Computation. Taking the papers published in the first volume in 1993 as a springboard, as the founding and current Editors-in-Chief we comment on the beginnings of the field, evaluate the extent to which the field has both grown and itself evolved, and provide our own perspectives on where the future lies.
{"title":"Editorial: Reflecting on Thirty Years of ECJ","authors":"Kenneth De Jong;Emma Hart","doi":"10.1162/evco_e_00324","DOIUrl":"10.1162/evco_e_00324","url":null,"abstract":"We reflect on 30 years of the journal Evolutionary Computation. Taking the papers published in the first volume in 1993 as a springboard, as the founding and current Editors-in-Chief, we comment on the beginnings of the field, evaluate the extent to which the field has both grown and itself evolved, and provide our own perpectives on where the future lies.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"31 2","pages":"73-79"},"PeriodicalIF":6.8,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9670242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Personal Reflections on Some Early Work in Evolving Strategies in the Iterated Prisoner's Dilemma
David B. Fogel
On the occasion of the 30-year anniversary of the Evolutionary Computation journal, I was invited by Professor Hart to offer some reflections on the article on evolving behaviors in the iterated prisoner's dilemma that I contributed to its first issue in 1993. It's an honor to do so. I would like to thank Professor Ken De Jong, the journal's first editor-in-chief, for his vision in creating the journal, and the editors who have followed and maintained that vision. This article contains some personal reflections on the topic and the field as a whole.
{"title":"Personal Reflections on Some Early Work in Evolving Strategies in the Iterated Prisoner's Dilemma","authors":"David B. Fogel","doi":"10.1162/evco_a_00322","DOIUrl":"10.1162/evco_a_00322","url":null,"abstract":"On the occasion of the 30-year anniversary of the Evolutionary Computation journal, I was invited by Professor Hart to offer some reflections on the article on evolving behaviors in the iterated prisoner's dilemma that I contributed to its first issue in 1993. It's an honor to do so. I would like to thank Professor Ken De Jong, the journal's first editor-in-chief, for his vision in creating the journal, and the editors who have followed and maintained that vision. This article contains some personal reflections on the topic and the field as a whole.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"31 2","pages":"157-161"},"PeriodicalIF":6.8,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9670243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stagnation Detection with Randomized Local Search
Amirhossein Rajabi; Carsten Witt
Recently, a mechanism called stagnation detection was proposed that automatically adjusts the mutation rate of evolutionary algorithms when they encounter local optima. The SD-(1+1) EA introduced by Rajabi and Witt (2022) adds stagnation detection to the classical (1+1) EA with standard bit mutation, which flips each bit independently with some mutation rate; stagnation detection raises that rate when the algorithm is likely to have encountered a local optimum. In this article, we investigate stagnation detection in the context of the k-bit flip operator of randomized local search, which flips k bits chosen uniformly at random, and let stagnation detection adjust the parameter k. We obtain improved runtime results compared with the SD-(1+1) EA, amounting to a speedup of at least (1−o(1))√(2πm), where m is the so-called gap size, that is, the distance to the next improvement. Moreover, we propose additional schemes that prevent infinite optimization times even if the algorithm misses a working choice of k due to unlucky events. Finally, we present an example where standard bit mutation still outperforms the k-bit flip operator with stagnation detection.
{"title":"Stagnation Detection with Randomized Local Search*","authors":"Amirhossein Rajabi;Carsten Witt","doi":"10.1162/evco_a_00313","DOIUrl":"10.1162/evco_a_00313","url":null,"abstract":"Recently a mechanism called stagnation detection was proposed that automatically adjusts the mutation rate of evolutionary algorithms when they encounter local optima. The so-called SD-(1+1) EA introduced by Rajabi and Witt (2022) adds stagnation detection to the classical (1+1) EA with standard bit mutation. This algorithm flips each bit independently with some mutation rate, and stagnation detection raises the rate when the algorithm is likely to have encountered a local optimum. In this article, we investigate stagnation detection in the context of the k-bit flip operator of randomized local search that flips k bits chosen uniformly at random and let stagnation detection adjust the parameter k. We obtain improved runtime results compared with the SD-(1+1) EA amounting to a speedup of at least (1-o(1))2πm, where m is the so-called gap size, that is, the distance to the next improvement. Moreover, we propose additional schemes that prevent infinite optimization times even if the algorithm misses a working choice of k due to unlucky events. Finally, we present an example where standard bit mutation still outperforms the k-bit flip operator with stagnation detection.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"31 1","pages":"1-29"},"PeriodicalIF":6.8,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9359552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Uncertainty Measure for Prediction of Non-Gaussian Process Surrogates
Caie Hu; Sanyou Zeng; Changhe Li
Model management is an essential component of data-driven surrogate-assisted evolutionary optimization. In model management, solutions whose approximation carries a large degree of uncertainty play an important role: they can strengthen the exploration ability of algorithms and improve the accuracy of surrogates. However, there is no theoretical method for measuring the prediction uncertainty of non-Gaussian process surrogates. To address this issue, this article proposes such a measure, in which a stationary random field with a known zero mean is used to quantify the prediction uncertainty of non-Gaussian process surrogates. Experimental analyses show that the method can measure this uncertainty, and its effectiveness is demonstrated on a set of benchmark problems in both the single-surrogate and ensemble-surrogate cases.
{"title":"An Uncertainty Measure for Prediction of Non-Gaussian Process Surrogates","authors":"Caie Hu;Sanyou Zeng;Changhe Li","doi":"10.1162/evco_a_00316","DOIUrl":"10.1162/evco_a_00316","url":null,"abstract":"Model management is an essential component in data-driven surrogate-assisted evolutionary optimization. In model management, the solutions with a large degree of uncertainty in approximation play an important role. They can strengthen the exploration ability of algorithms and improve the accuracy of surrogates. However, there is no theoretical method to measure the uncertainty of prediction of Non-Gaussian process surrogates. To address this issue, this article proposes a method to measure the uncertainty. In this method, a stationary random field with a known zero mean is used to measure the uncertainty of prediction of Non-Gaussian process surrogates. Based on experimental analyses, this method is able to measure the uncertainty of prediction of Non-Gaussian process surrogates. The method's effectiveness is demonstrated on a set of benchmark problems in single surrogate and ensemble surrogates cases.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"31 1","pages":"53-71"},"PeriodicalIF":6.8,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10801408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybridization of Evolutionary Operators with Elitist Iterated Racing for the Simulation Optimization of Traffic Lights Programs
Christian Cintrano; Javier Ferrer; Manuel López-Ibáñez; Enrique Alba
In the traffic light scheduling problem, the evaluation of candidate solutions requires the simulation of a process under various (traffic) scenarios. Thus, good solutions should not only achieve good objective function values, but must also be robust (low variance) across all scenarios. Previous work has shown that combining IRACE with evolutionary operators is effective for this task due to the power of evolutionary operators in numerical optimization. In this article, we further explore the hybridization of evolutionary operators and the elitist iterated racing of IRACE for the simulation-optimization of traffic light programs. We review previous work from the literature to identify the best-performing evolutionary operators for this problem and use them to propose new hybrid algorithms. We evaluate our approach on a realistic case study derived from the traffic network of Málaga (Spain), with 275 traffic lights to be scheduled optimally. The experimental analysis reveals that the hybrid algorithm combining IRACE with differential evolution offers statistically better results than the other algorithms when the simulation budget is low. In contrast, IRACE performs better than the hybrids for a high simulation budget, although the optimization time is much longer.
{"title":"Hybridization of Evolutionary Operators with Elitist Iterated Racing for the Simulation Optimization of Traffic Lights Programs","authors":"Christian Cintrano;Javier Ferrer;Manuel López-Ibáñez;Enrique Alba","doi":"10.1162/evco_a_00314","DOIUrl":"10.1162/evco_a_00314","url":null,"abstract":"In the traffic light scheduling problem, the evaluation of candidate solutions requires the simulation of a process under various (traffic) scenarios. Thus, good solutions should not only achieve good objective function values, but they must be robust (low variance) across all different scenarios. Previous work has shown that combining IRACE with evolutionary operators is effective for this task due to the power of evolutionary operators in numerical optimization. In this article, we further explore the hybridization of evolutionary operators and the elitist iterated racing of IRACE for the simulation–optimization of traffic light programs. We review previous works from the literature to find the evolutionary operators performing the best when facing this problem to propose new hybrid algorithms. We evaluate our approach over a realistic case study derived from the traffic network of Málaga (Spain) with 275 traffic lights that should be scheduled optimally. The experimental analysis reveals that the hybrid algorithm comprising IRACE plus differential evolution offers statistically better results than the other algorithms when the budget of simulations is low. In contrast, IRACE performs better than the hybrids for a high simulations budget, although the optimization time is much longer.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"31 1","pages":"31-51"},"PeriodicalIF":6.8,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10814046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Active Sets for Explicitly Constrained Evolutionary Optimization
Patrick Spettel; Zehao Ba; Dirk V. Arnold
Active-set approaches are commonly used in algorithms for constrained numerical optimization. We propose that active-set techniques can beneficially be employed for evolutionary black-box optimization with explicit constraints and present an active-set evolution strategy. We experimentally evaluate its performance relative to those of several algorithms for constrained optimization and find that the active-set evolution strategy compares favourably for the problem set under consideration.
{"title":"Active Sets for Explicitly Constrained Evolutionary Optimization","authors":"Patrick Spettel;Zehao Ba;Dirk V. Arnold","doi":"10.1162/evco_a_00311","DOIUrl":"10.1162/evco_a_00311","url":null,"abstract":"Active-set approaches are commonly used in algorithms for constrained numerical optimization. We propose that active-set techniques can beneficially be employed for evolutionary black-box optimization with explicit constraints and present an active-set evolution strategy. We experimentally evaluate its performance relative to those of several algorithms for constrained optimization and find that the active-set evolution strategy compares favourably for the problem set under consideration.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"30 4","pages":"531-553"},"PeriodicalIF":6.8,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41589162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}