An improved artificial bee colony algorithm for clustering
Qiuhan Tan, Hejun Wu, Biao Hu, Xingcheng Liu
DOI: 10.1145/2598394.2598464
The Artificial Bee Colony (ABC) algorithm, initially proposed for numerical function optimization, has been increasingly used for clustering. However, when ABC is applied directly to clustering, its performance is lower than expected. This paper proposes an improved ABC algorithm for clustering, denoted EABC. EABC uses a key initialization method to accommodate the special solution space of clustering. Experimental results show that clustering evaluation scores are significantly improved and clustering latency is sharply reduced. Furthermore, EABC outperforms two ABC variants on benchmark clustering data sets.
A study on the configuration of migratory flows in island model differential evolution
R. A. Lopes, R. Silva, A. Freitas, F. Campelo, F. Guimarães
DOI: 10.1145/2598394.2605439
The Island Model (IM) is a well-known multi-population approach for Evolutionary Algorithms (EAs). One of the critical parameters in defining a suitable IM is the migration topology, which determines the Migratory Flows (MFs) between the islands and can improve the convergence rate and speed of the EAs coupled with the IM. Although many approaches exist for configuring MFs, little is known about their actual performance within the IM. To fill this gap, this paper presents a thorough experimental analysis of these approaches coupled with the state-of-the-art EA Differential Evolution. Experiments on well-known benchmark functions show a trade-off between convergence speed and convergence rate among the different approaches. With respect to computational time, the results indicate that increased implementation complexity does not necessarily translate into increased overall execution time.
NM landscapes: beyond NK
N. Manukyan, M. Eppstein, J. Buzas
DOI: 10.1145/2598394.2598403
For the past 25 years, NK landscapes have been the classic benchmarks for modeling combinatorial fitness landscapes with epistatic interactions between up to K+1 of N binary features. However, the ruggedness of NK landscapes grows in large discrete jumps as K increases, and computing the global optimum of unrestricted NK landscapes is an NP-complete problem. Walsh polynomials are a superset of NK landscapes that address some of these problems. In this paper, we propose a new class of benchmarks called NM landscapes, where M refers to the Maximum order of epistatic interactions between the N features. NM landscapes are much more smoothly tunable in ruggedness than NK landscapes, and the location and value of their global optima are trivially known. For a subset of NM landscapes, the location and magnitude of the global minima are also easily computed, enabling proper normalization of fitnesses. NM landscapes are simpler than Walsh polynomials and can be used with alphabets of any arity, from binary to real-valued. We discuss several advantages of NM landscapes over NK landscapes and Walsh polynomials as benchmark problems for evaluating search strategies.
Selecting evolutionary operators using reinforcement learning: initial explorations
Arina Buzdalova, V. Kononov, M. Buzdalov
DOI: 10.1145/2598394.2605681
In evolutionary optimization, it is important to use efficient evolutionary operators, such as mutation and crossover, but it is often difficult to decide which operator should be used for a specific optimization problem, so an automatic approach is needed. We propose an adaptive method for selecting evolutionary operators that takes a set of candidate operators as input and learns which of them are efficient for the problem at hand. A single evolutionary algorithm run should suffice for both learning and achieving suitable performance. The proposed EA+RL(O) method is based on reinforcement learning. We test it on the H-IFF and Travelling Salesman optimization problems. The results show that the proposed method significantly outperforms random selection, since it manages to select efficient evolutionary operators and ignore inefficient ones.
Flood evolution: changing the evolutionary substrate from a path of stepping stones to a field of rocks
D. Shorten, G. Nitschke
DOI: 10.1145/2598394.2605675
We present ongoing research on flood evolution, an extension of novelty search. This technique aims to improve evolutionary algorithms by presenting them with large sets of problems, as opposed to individual ones. If the older approach of incremental evolution is analogous to moving over a path of stepping stones, then this approach is similar to navigating a field of rocks. The method is discussed and preliminary results are presented.
GPU-based massively parallel quantum inspired genetic algorithm for detection of communities in complex networks
Shikha Gupta, Naveen Kumar
DOI: 10.1145/2598394.2598437
This paper presents a parallel implementation of a variant of the quantum inspired genetic algorithm (QIGA) for community structure detection in complex networks, using NVIDIA® Compute Unified Device Architecture (CUDA®) technology, and explores the feasibility of the approach in the domain of complex networks. The approach requires no prior knowledge of the number of communities and works well for both directed and undirected networks. Experiments on benchmark networks show that the method successfully reveals community structure with high modularity.
Windmill farm pattern optimization using evolutionary algorithms
C. Vanaret, N. Durand, J. Alliot
DOI: 10.1145/2598394.2598506
When designing a wind farm layout, we can reduce the number of variables by optimizing a pattern instead of considering the position of each turbine individually. In this paper we show that, by reducing the problem to only two variables defining a grid, we can gain up to 3% of energy output on simple examples of wind farms with many turbines (up to 1000), while dramatically reducing the computation time.
Constraint-handling techniques used with evolutionary algorithms
C. C. Coello Coello
DOI: 10.1145/2598394.2605348
Evolutionary Algorithms (EAs), when used for global optimization, can be seen as unconstrained optimization techniques. Therefore, they require an additional mechanism to incorporate constraints of any kind (i.e., inequality, equality, linear, nonlinear) into their fitness function. Although the use of penalty functions (very popular with mathematical programming techniques) may seem an obvious choice, this sort of approach requires careful fine-tuning of the penalty factors. Otherwise, an EA may be unable to reach the feasible region (if the penalty is too low) or may reach the feasible region quickly but be unable to locate solutions that lie on the boundary with the infeasible region (if the penalty is too severe). This has motivated the development of a number of approaches to incorporating constraints into the fitness function of an EA. This tutorial will cover the main proposals in current use, including novel approaches such as the use of tournament rules based on feasibility, multiobjective optimization concepts, hybrids with mathematical programming techniques (e.g., Lagrange multipliers), cultural algorithms, and artificial immune systems, among others. Other topics, such as the importance of maintaining diversity, current benchmarks, and the use of alternative search engines (e.g., particle swarm optimization, differential evolution, evolution strategies, etc.), will also be discussed (as time allows).
ABC+ES: a novel hybrid artificial bee colony algorithm with evolution strategies
M. A. F. Mollinetti, Daniel Leal Souza, O. N. Teixeira
DOI: 10.1145/2598394.2602277
This paper presents a new hybridization of the Artificial Bee Colony (ABC) algorithm based on the evolution strategies (ES) found in Evolutionary Particle Swarm Optimization (EPSO). The main motivation of this approach is to augment the original ABC in a way that combines the effectiveness and simplicity of ABC with the robustness and increased exploitation of evolution strategies. The algorithm is to be tested on two large-scale engineering design problems and its results compared with those of other optimization techniques.
The 'representative' metaheuristic design pattern
J. Swan, Zoltan A. Kocsis, A. Lisitsa
DOI: 10.1145/2598394.2609842
1. PROBLEM STATEMENT
The 'Representative' pattern is applicable when it is desirable to eliminate redundancy in the search process:
• It is often the case that some function f of interest in optimization gives a many-to-one mapping, i.e. it induces equivalence classes over its domain, one per point in the image of f. If f is a fitness function, this can lead to plateaus in the fitness landscape.
• It may be that the elimination of redundancy allows search to be performed in a smaller ('quotient') space that can be searched using methods (possibly even exact techniques) not applicable to the original space.
• In the case of GP trees, syntactically inequivalent but semantically equivalent representations (e.g. x + x, 2*x) can lead to a lack of gradient in genotype-to-phenotype mappings, which may make the space of programs harder to search effectively.