Towards Intelligently Designed Evolvable Processors
Benedict A. H. Jones; John L. P. Chouard; Bianca C. C. Branco; Eléonore G. B. Vissol-Gaudin; Christopher Pearson; Michael C. Petty; Noura Al Moubayed; Dagou A. Zeze; Chris Groves
Evolution-in-Materio is a computational paradigm in which an algorithm reconfigures a material's properties to achieve a specific computational function. This article addresses the question of how successful, well-performing Evolution-in-Materio processors can be designed through the selection of nanomaterials and an evolutionary algorithm for a target application. A physical model of a nanomaterial network is developed which allows for both the randomness and the possibility of Ohmic and non-Ohmic conduction that are characteristic of such materials. These differing networks are then exploited by differential evolution, which optimises several configuration parameters (e.g., configuration voltages, weights, etc.) to solve different classification problems. We show that the ideal nanomaterial choice depends upon problem complexity, with more complex problems being favoured by a complex voltage dependence of conductivity and vice versa. Furthermore, we highlight how intrinsic nanomaterial electrical properties can be exploited by differing configuration parameters, clarifying the role and limitations of these techniques. These findings provide guidance for the rational design of nanomaterials and algorithms for future Evolution-in-Materio processors.
{"title":"Towards Intelligently Designed Evolvable Processors","authors":"Benedict A. H. Jones;John L. P. Chouard;Bianca C. C. Branco;Eléonore G. B. Vissol-Gaudin;Christopher Pearson;Michael C. Petty;Noura Al Moubayed;Dagou A. Zeze;Chris Groves","doi":"10.1162/evco_a_00309","DOIUrl":"10.1162/evco_a_00309","url":null,"abstract":"Evolution-in-Materio is a computational paradigm in which an algorithm reconfigures a material's properties to achieve a specific computational function. This article addresses the question of how successful and well performing Evolution-in-Materio processors can be designed through the selection of nanomaterials and an evolutionary algorithm for a target application. A physical model of a nanomaterial network is developed which allows for both randomness, and the possibility of Ohmic and non-Ohmic conduction, that are characteristic of such materials. These differing networks are then exploited by differential evolution, which optimises several configuration parameters (e.g., configuration voltages, weights, etc.), to solve different classification problems. We show that ideal nanomaterial choice depends upon problem complexity, with more complex problems being favoured by complex voltage dependence of conductivity and vice versa. Furthermore, we highlight how intrinsic nanomaterial electrical properties can be exploited by differing configuration parameters, clarifying the role and limitations of these techniques. These findings provide guidance for the rational design of nanomaterials and algorithms for future Evolution-in-Materio processors.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"30 4","pages":"479-501"},"PeriodicalIF":6.8,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48671640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When Hillclimbers Beat Genetic Algorithms in Multimodal Optimization
Fernando G. Lobo; Mosab Bazargani
This article investigates the performance of multistart next ascent hillclimbing and of well-known evolutionary algorithms incorporating diversity preservation techniques on instances of the multimodal problem generator. This generator induces a class of problems in the bitstring domain that is interesting to study from a theoretical perspective in the context of multimodal optimization, as it is a generalization of the classical OneMax and TwoMax functions to an arbitrary number of peaks. An average-case runtime analysis of multistart next ascent hillclimbing is presented for uniformly distributed, equal-height instances of this class of problems. It is shown empirically that conventional niching and mating restriction techniques incorporated in an evolutionary algorithm are not sufficient to make it competitive with the hillclimbing strategy. We conjecture that the reason for this behavior is the lack of structure in the space of local optima on instances of this problem class, which makes an optimization algorithm unable to exploit information from one optimum to infer where another optimum might be. When no such structure exists, it seems that the best strategy for discovering all optima is a brute-force one. Overall, our study gives insights into the adequacy of hillclimbers and evolutionary algorithms for multimodal optimization, depending on properties of the fitness landscape.
{"title":"When Hillclimbers Beat Genetic Algorithms in Multimodal Optimization","authors":"Fernando G. Lobo;Mosab Bazargani","doi":"10.1162/evco_a_00312","DOIUrl":"10.1162/evco_a_00312","url":null,"abstract":"This article investigates the performance of multistart next ascent hillclimbing and well-known evolutionary algorithms incorporating diversity preservation techniques on instances of the multimodal problem generator. This generator induces a class of problems in the bitstring domain which is interesting to study from a theoretical perspective in the context of multimodal optimization, as it is a generalization of the classical OneMax and TwoMax functions for an arbitrary number of peaks. An average-case runtime analysis for multistart next ascent hillclimbing is presented for uniformly distributed equal-height instances of this class of problems. It is shown empirically that conventional niching and mating restriction techniques incorporated in an evolutionary algorithm are not sufficient to make them competitive with the hillclimbing strategy. We conjecture the reason for this behavior is the lack of structure in the space of local optima on instances of this problem class, which makes an optimization algorithm unable to exploit information from one optimum to infer where another optimum might be. When no such structure exists, it seems that the best strategy for discovering all optima is a brute-force one. Overall, our study gives insights with respect to the adequacy of hillclimbers and evolutionary algorithms for multimodal optimization, depending on properties of the fitness landscape.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"30 4","pages":"535-559"},"PeriodicalIF":6.8,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41240529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Ranking-Based Constraint Handling for Explicitly Constrained Black-Box Optimization
Naoki Sakamoto; Youhei Akimoto
We propose a novel constraint-handling technique for the covariance matrix adaptation evolution strategy (CMA-ES). The proposed technique is aimed at solving explicitly constrained black-box continuous optimization problems, in which an explicit constraint is one whose violation and (numerical) gradient can be computed in negligible time compared to an evaluation of the objective function. The method is designed to realize two invariance properties: invariance to affine transformations of the search space, and invariance to increasing transformations of the objective and constraint functions. The CMA-ES is designed to possess these properties for handling difficulties that appear in black-box optimization problems, such as non-separability, ill-conditioning, ruggedness, and differing orders of magnitude in the objective. The proposed constraint-handling technique (CHT), known as ARCH, modifies the underlying CMA-ES only in terms of the ranking of the candidate solutions. It employs a repair operator and an adaptive ranking aggregation strategy to compute the ranking. We developed test problems to evaluate the effects of the invariance properties, and performed experiments to empirically verify the invariance of the algorithm. We compared the proposed method with other CHTs on the CEC 2006 constrained optimization benchmark suite to demonstrate its efficacy. Empirical studies reveal that ARCH exploits the explicitness of the constraint functions effectively, sometimes even more efficiently than an existing box-constraint handling technique on box-constrained problems, while exhibiting the invariance properties. Moreover, ARCH overwhelmingly outperforms CHTs that do not exploit the explicit constraints in terms of the number of objective function calls.
{"title":"Adaptive Ranking-Based Constraint Handling for Explicitly Constrained Black-Box Optimization","authors":"Naoki Sakamoto;Youhei Akimoto","doi":"10.1162/evco_a_00310","DOIUrl":"https://doi.org/10.1162/evco_a_00310","url":null,"abstract":"We propose a novel constraint-handling technique for the covariance matrix adaptation evolution strategy (CMA-ES). The proposed technique is aimed at solving explicitly constrained black-box continuous optimization problems, in which the explicit constraint is a constraint whereby the computational time for the constraint violation and its (numerical) gradient are negligible compared to that for the objective function. This method is designed to realize two invariance properties: invariance to the affine transformation of the search space, and invariance to the increasing transformation of the objective and constraint functions. The CMA-ES is designed to possess these properties for handling difficulties that appear in black-box optimization problems, such as non-separability, ill-conditioning, ruggedness, and the different orders of magnitude in the objective. The proposed constraint-handling technique (CHT), known as ARCH, modifies the underlying CMA-ES only in terms of the ranking of the candidate solutions. It employs a repair operator and an adaptive ranking aggregation strategy to compute the ranking. We developed test problems to evaluate the effects of the invariance properties, and performed experiments to empirically verify the invariance of the algorithm. We compared the proposed method with other CHTs on the CEC 2006 constrained optimization benchmark suite to demonstrate its efficacy. Empirical studies reveal that ARCH is able to exploit the explicitness of the constraint functions effectively, sometimes even more efficiently than an existing box-constraint handling technique on box-constrained problems, while exhibiting the invariance properties. Moreover, ARCH overwhelmingly outperforms CHTs by not exploiting the explicit constraints in terms of the number of objective function calls.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"30 4","pages":"503-529"},"PeriodicalIF":6.8,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71903200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynastic Potential Crossover Operator
Francisco Chicano; Gabriela Ochoa; L. Darrell Whitley; Renato Tinós
An optimal recombination operator for two-parent solutions provides the best solution among those that take the value for each variable from one of the parents (gene transmission property). If the solutions are bit strings, the offspring of an optimal recombination operator is optimal in the smallest hyperplane containing the two parent solutions. Exploring this hyperplane is computationally costly in general, requiring exponential time in the worst case. However, when the variable interaction graph of the objective function is sparse, exploration can be done in polynomial time. In this article, we present a recombination operator, called Dynastic Potential Crossover (DPX), that runs in polynomial time and behaves like an optimal recombination operator for low-epistasis combinatorial problems. We compare this operator, both theoretically and experimentally, with traditional crossover operators, like uniform crossover and network crossover, and with two recently defined efficient recombination operators: partition crossover and articulation points partition crossover. The empirical comparison uses NKQ Landscapes and MAX-SAT instances. DPX outperforms the other crossover operators in terms of the quality of the offspring and provides better results when included in a trajectory-based and a population-based metaheuristic, but it requires more time and memory to compute the offspring.
{"title":"Dynastic Potential Crossover Operator","authors":"Francisco Chicano;Gabriela Ochoa;L. Darrell Whitley;Renato Tinós","doi":"10.1162/evco_a_00305","DOIUrl":"10.1162/evco_a_00305","url":null,"abstract":"An optimal recombination operator for two-parent solutions provides the best solution among those that take the value for each variable from one of the parents (gene transmission property). If the solutions are bit strings, the offspring of an optimal recombination operator is optimal in the smallest hyperplane containing the two parent solutions. Exploring this hyperplane is computationally costly, in general, requiring exponential time in the worst case. However, when the variable interaction graph of the objective function is sparse, exploration can be done in polynomial time. In this article, we present a recombination operator, called Dynastic Potential Crossover (DPX), that runs in polynomial time and behaves like an optimal recombination operator for low-epistasis combinatorial problems. We compare this operator, both theoretically and experimentally, with traditional crossover operators, like uniform crossover and network crossover, and with two recently defined efficient recombination operators: partition crossover and articulation points partition crossover. The empirical comparison uses NKQ Landscapes and MAX-SAT instances. DPX outperforms the other crossover operators in terms of quality of the offspring and provides better results included in a trajectory and a population-based metaheuristic, but it requires more time and memory to compute the offspring.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"30 3","pages":"409-446"},"PeriodicalIF":6.8,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/6720222/9931026/09931097.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39583646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncrowded Hypervolume-Based Multiobjective Optimization with Gene-Pool Optimal Mixing
S.C. Maree; T. Alderliesten; P.A.N. Bosman
Domination-based multiobjective (MO) evolutionary algorithms (EAs) are today arguably the most frequently used type of MOEA. These methods, however, stagnate when the majority of the population becomes nondominated, preventing further convergence to the Pareto set. Hypervolume-based MO optimization has shown promising results to overcome this. Direct use of the hypervolume, however, results in no selection pressure for dominated solutions. The recently introduced Sofomore framework overcomes this by solving multiple interleaved single-objective dynamic problems that iteratively improve a single approximation set, based on the uncrowded hypervolume improvement (UHVI). However, it thereby loses many advantages of population-based MO optimization, such as the handling of multimodality. Here, we reformulate the UHVI as a quality measure for approximation sets, called the uncrowded hypervolume (UHV), which can be used to directly solve MO optimization problems with a single-objective optimizer. We use the state-of-the-art gene-pool optimal mixing evolutionary algorithm (GOMEA) that is capable of efficiently exploiting the intrinsically available grey-box properties of this problem. The resulting algorithm, UHV-GOMEA, is compared with Sofomore equipped with GOMEA, and with the domination-based MO-GOMEA. In doing so, we investigate in which scenarios either domination-based or hypervolume-based methods are preferred. Finally, we construct a simple hybrid approach that combines MO-GOMEA with UHV-GOMEA and outperforms both.
{"title":"Uncrowded Hypervolume-Based Multiobjective Optimization with Gene-Pool Optimal Mixing","authors":"S.C. Maree;T. Alderliesten;P.A.N. Bosman","doi":"10.1162/evco_a_00303","DOIUrl":"10.1162/evco_a_00303","url":null,"abstract":"Domination-based multiobjective (MO) evolutionary algorithms (EAs) are today arguably the most frequently used type of MOEA. These methods, however, stagnate when the majority of the population becomes nondominated, preventing further convergence to the Pareto set. Hypervolume-based MO optimization has shown promising results to overcome this. Direct use of the hypervolume, however, results in no selection pressure for dominated solutions. The recently introduced Sofomore framework overcomes this by solving multiple interleaved single-objective dynamic problems that iteratively improve a single approximation set, based on the uncrowded hypervolume improvement (UHVI). It thereby however loses many advantages of population-based MO optimization, such as handling multimodality. Here, we reformulate the UHVI as a quality measure for approximation sets, called the uncrowded hypervolume (UHV), which can be used to directly solve MO optimization problems with a single-objective optimizer. We use the state-of-the-art gene-pool optimal mixing evolutionary algorithm (GOMEA) that is capable of efficiently exploiting the intrinsically available grey-box properties of this problem. The resulting algorithm, UHV-GOMEA, is compared with Sofomore equipped with GOMEA, and the domination-based MO-GOMEA. In doing so, we investigate in which scenarios either domination-based or hypervolume-based methods are preferred. Finally, we construct a simple hybrid approach that combines MO-GOMEA with UHV-GOMEA and outperforms both.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"30 3","pages":"329-353"},"PeriodicalIF":6.8,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39702735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Regret-Based Nash Equilibrium Sorting Genetic Algorithm for Combinatorial Game Theory Problems with Multiple Players
Abdullah Konak; Sadan Kulturel-Konak
We introduce a regret-based fitness assignment strategy for evolutionary algorithms to find Nash equilibria in noncooperative simultaneous combinatorial game theory problems where it is computationally intractable to enumerate all decision options of the players involved in the game. Applications of evolutionary algorithms to noncooperative simultaneous games have been limited due to challenges in guiding the evolutionary search toward equilibria, which are usually inferior points in the objective space. We propose a regret-based approach to select candidate decision options of the players for the next generation in a multipopulation genetic algorithm called Regret-Based Nash Equilibrium Sorting Genetic Algorithm (RNESGA). We show that RNESGA can converge to multiple Nash equilibria in a single run using two- and three-player competitive knapsack games and other games from the literature. We also show that pure payoff-based fitness assignment strategies perform poorly in three-player games.
{"title":"Regret-Based Nash Equilibrium Sorting Genetic Algorithm for Combinatorial Game Theory Problems with Multiple Players","authors":"Abdullah Konak;Sadan Kulturel-Konak","doi":"10.1162/evco_a_00308","DOIUrl":"10.1162/evco_a_00308","url":null,"abstract":"We introduce a regret-based fitness assignment strategy for evolutionary algorithms to find Nash equilibria in noncooperative simultaneous combinatorial game theory problems where it is computationally intractable to enumerate all decision options of the players involved in the game. Applications of evolutionary algorithms to non-cooperative simultaneous games have been limited due to challenges in guiding the evolutionary search toward equilibria, which are usually inferior points in the objective space. We propose a regret-based approach to select candidate decision options of the players for the next generation in a multipopulation genetic algorithm called Regret-Based Nash Equilibrium Sorting Genetic Algorithm (RNESGA). We show that RNESGA can converge to multiple Nash equilibria in a single run using two- and three- player competitive knapsack games and other games from the literature. We also show that pure payoff-based fitness assignment strategies perform poorly in three-player games.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"30 3","pages":"447-478"},"PeriodicalIF":6.8,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45325151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Construction of Pareto-Compliant Combined Indicators
J. G. Falcón-Cardona; M. T. M. Emmerich; C. A. Coello Coello
The most relevant property that a quality indicator (QI) is expected to have is Pareto compliance, which means that every time an approximation set strictly dominates another in a Pareto sense, the indicator must reflect this. The hypervolume indicator and its variants are the only unary QIs known to be Pareto-compliant, but there are many commonly used weakly Pareto-compliant indicators such as R2, IGD+, and ε+. An open research area is thus the search for new Pareto-compliant indicators whose preferences differ from those of the hypervolume indicator. In this article, we propose a theoretical basis for combining existing weakly Pareto-compliant indicators with at least one Pareto-compliant indicator, such that the resulting combined indicator is Pareto-compliant as well. Most importantly, we show that the combination of Pareto-compliant QIs with weakly Pareto-compliant indicators leads to indicators that inherit properties of the weakly compliant indicators in terms of optimal point distributions. The consequences of these new combined indicators are threefold: (1) to increase the variety of available Pareto-compliant QIs by correcting weakly Pareto-compliant indicators, (2) to introduce a general framework for the combination of QIs, and (3) to generate new selection mechanisms for multiobjective evolutionary algorithms where it is possible to achieve/adjust desired distributions on the Pareto front.
{"title":"On the Construction of Pareto-Compliant Combined Indicators","authors":"J. G. Falcón-Cardona;M. T. M. Emmerich;C. A. Coello Coello","doi":"10.1162/evco_a_00307","DOIUrl":"10.1162/evco_a_00307","url":null,"abstract":"The most relevant property that a quality indicator (QI) is expected to have is Pareto compliance, which means that every time an approximation set strictly dominates another in a Pareto sense, the indicator must reflect this. The hypervolume indicator and its variants are the only unary QIs known to be Pareto-compliant but there are many commonly used weakly Pareto-compliant indicators such as R2, IGD+, and ε+. Currently, an open research area is related to finding new Pareto-compliant indicators whose preferences are different from those of the hypervolume indicator. In this article, we propose a theoretical basis to combine existing weakly Pareto-compliant indicators with at least one being Pareto-compliant, such that the resulting combined indicator is Pareto-compliant as well. Most importantly, we show that the combination of Pareto-compliant QIs with weakly Pareto-compliant indicators leads to indicators that inherit properties of the weakly compliant indicators in terms of optimal point distributions. The consequences of these new combined indicators are threefold: (1) to increase the variety of available Pareto-compliant QIs by correcting weakly Pareto-compliant indicators, (2) to introduce a general framework for the combination of QIs, and (3) to generate new selection mechanisms for multiobjective evolutionary algorithms where it is possible to achieve/adjust desired distributions on the Pareto front.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"30 3","pages":"381-408"},"PeriodicalIF":6.8,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39934482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Faster Convergence in Multiobjective Optimization Algorithms Based on Decomposition
Yuri Lavinas; Marcelo Ladeira; Claus Aranha
The Resource Allocation (RA) approach improves the performance of MOEA/D by maintaining a large population and updating only a few solutions each generation. However, most studies of RA have focused on the properties of different Resource Allocation metrics, so it remains unclear which factors are mainly responsible for the performance gains of MOEA/D with RA. This study investigates the effects of the Partial Update Strategy (PS) in MOEA/D on an extensive set of MOPs, to generate insights into how MOEA/D with partial updates relates to MOEA/D with small and with large population sizes. We undertake an in-depth analysis of the population dynamics, considering the final Pareto approximation sets, anytime hypervolume performance, attained regions, and number of unique nondominated solutions. Our results indicate that MOEA/D with partial updates progresses through the search as fast as MOEA/D with a small population while exploring the search space like MOEA/D with a large population. MOEA/D with partial updates can thus mitigate common problems related to the choice of population size while converging faster on most MOPs, as shown by the hypervolume and the number of unique nondominated solutions, and as indicated by the anytime performance and the Empirical Attainment Function.
{"title":"Faster Convergence in Multiobjective Optimization Algorithms Based on Decomposition","authors":"Yuri Lavinas;Marcelo Ladeira;Claus Aranha","doi":"10.1162/evco_a_00306","DOIUrl":"10.1162/evco_a_00306","url":null,"abstract":"The Resource Allocation approach (RA) improves the performance of MOEA/D by maintaining a big population and updating few solutions each generation. However, most of the studies on RA generally focused on the properties of different Resource Allocation metrics. Thus, it is still uncertain what the main factors are that lead to increments in performance of MOEA/D with RA. This study investigates the effects of MOEA/D with the Partial Update Strategy (PS) in an extensive set of MOPs to generate insights into correspondences of MOEA/D with the partial update and MOEA/D with small population size and big population size. Our work undertakes an in-depth analysis of the populational dynamics behaviour considering their final approximation Pareto sets, anytime hypervolume performance, attained regions, and number of unique nondominated solutions. Our results indicate that MOEA/D with partial update progresses with the search as fast as MOEA/D with small population size and explores the search space as MOEA/D with big population size. MOEA/D with partial update can mitigate common problems related to population size choice with better convergence speed in most MOPs, as shown by the results of hypervolume and number of unique nondominated solutions, and as the anytime performance and Empirical Attainment Function indicate.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"30 3","pages":"355-380"},"PeriodicalIF":6.8,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39906957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Well-Understood Single-Objective Functions in Multiobjective Black-Box Optimization Test Suites
Dimo Brockhoff; Anne Auger; Nikolaus Hansen; Tea Tušar
Several test function suites are being used for numerical benchmarking of multiobjective optimization algorithms. While they have some desirable properties, such as well-understood Pareto sets and Pareto fronts of various shapes, most of the currently used functions possess characteristics that are arguably underrepresented in real-world problems, such as separability, optima located exactly at the boundary constraints, and the existence of variables that solely control the distance between a solution and the Pareto front. Via the alternative construction of combining existing single-objective problems from the literature, we describe the bbob-biobj test suite of 55 bi-objective functions in the continuous domain, and its extended version with 92 bi-objective functions (bbob-biobj-ext). Both test suites have been implemented in the COCO platform for black-box optimization benchmarking, and various visualizations of the test functions are shown to reveal their properties. Besides providing details on the construction of these problems and presenting their (known) properties, this article also aims at giving the rationale behind our approach in terms of groups of functions with similar properties, objective space normalization, and problem instances. The latter allows us to easily compare the performance of deterministic and stochastic solvers, which is an often overlooked issue in benchmarking.
{"title":"Using Well-Understood Single-Objective Functions in Multiobjective Black-Box Optimization Test Suites","authors":"Dimo Brockhoff;Anne Auger;Nikolaus Hansen;Tea Tušar","doi":"10.1162/evco_a_00298","DOIUrl":"10.1162/evco_a_00298","url":null,"abstract":"Several test function suites are being used for numerical benchmarking of multiobjective optimization algorithms. While they have some desirable properties, such as well-understood Pareto sets and Pareto fronts of various shapes, most of the currently used functions possess characteristics that are arguably underrepresented in real-world problems such as separability, optima located exactly at the boundary constraints, and the existence of variables that solely control the distance between a solution and the Pareto front. Via the alternative construction of combining existing single-objective problems from the literature, we describe the bbob-biobj test suite with 55 bi-objective functions in continuous domain, and its extended version with 92 bi-objective functions (bbob-biobj-ext). Both test suites have been implemented in the COCO platform for black-box optimization benchmarking and various visualizations of the test functions are shown to reveal their properties. Besides providing details on the construction of these problems and presenting their (known) properties, this article also aims at giving the rationale behind our approach in terms of groups of functions with similar properties, objective space normalization, and problem instances. The latter allows us to easily compare the performance of deterministic and stochastic solvers, which is an often overlooked issue in benchmarking.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"30 2","pages":"165-193"},"PeriodicalIF":6.8,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39555350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Selection Heuristics on Semantic Genetic Programming for Classification Problems
Claudia N. Sánchez; Mario Graff
Individual semantics have been used for guiding the learning process of Genetic Programming. Novel genetic operators and different ways of performing parent selection have been proposed with the use of semantics. The latter is the focus of this contribution: we propose three heuristics for parent selection that measure the similarity among individuals' semantics in order to choose parents that enhance the addition, Naive Bayes, and Nearest Centroid functions. To the best of our knowledge, this is the first time that functions' properties are used for guiding the learning process. As the heuristics were created based on the properties of these functions, we apply them only when those functions are used to create offspring. The similarity functions considered are the cosine similarity, Pearson's correlation, and agreement. We analyze these heuristics' performance against random selection, state-of-the-art selection schemes, and 18 classifiers, including auto-machine-learning techniques, on 30 classification problems with a variable number of samples, variables, and classes. The results indicate that the combination of parent selection based on agreement and random selection to replace an individual in the population produces statistically better results than the classical selection and state-of-the-art schemes, and it is competitive with state-of-the-art classifiers. Finally, the code is released as open-source software.
{"title":"Selection Heuristics on Semantic Genetic Programming for Classification Problems","authors":"Claudia N. Sánchez;Mario Graff","doi":"10.1162/evco_a_00297","DOIUrl":"10.1162/evco_a_00297","url":null,"abstract":"Individual semantics have been used for guiding the learning process of Genetic Programming. Novel genetic operators and different ways of performing parent selection have been proposed with the use of semantics. The latter is the focus of this contribution by proposing three heuristics for parent selection that measure the similarity among individuals' semantics for choosing parents that enhance the addition, Naive Bayes, and Nearest Centroid. To the best of our knowledge, this is the first time that functions' properties are used for guiding the learning process. As the heuristics were created based on the properties of these functions, we apply them only when they are used to create offspring. The similarity functions considered are the cosine similarity, Pearson's correlation, and agreement. We analyze these heuristics' performance against random selection, state-of-the-art selection schemes, and 18 classifiers, including auto-machine-learning techniques, on 30 classification problems with a variable number of samples, variables, and classes. The result indicated that the combination of parent selection based on agreement and random selection to replace an individual in the population produces statistically better results than the classical selection and state-of-the-art schemes, and it is competitive with state-of-the-art classifiers. Finally, the code is released as open-source software.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"30 2","pages":"253-289"},"PeriodicalIF":6.8,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39555351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}