This paper presents an adaptive memetic algorithm (AMA) to minimize the total travel distance in the NP-hard vehicle routing problem with time windows (VRPTW). Although memetic algorithms (MAs) have proven to be very efficient in solving the VRPTW, their main drawback is the unclear tuning of their numerous parameters. Here, we introduce the AMA, in which the selection scheme and the population size are adjusted during the search. We propose a new adaptive selection scheme to balance the exploration and exploitation of the search space. An extensive experimental study confirms that the AMA outperforms a standard MA in terms of convergence capabilities.
{"title":"Adaptive memetic algorithm for the vehicle routing problem with time windows","authors":"J. Nalepa","doi":"10.1145/2598394.2602273","DOIUrl":"https://doi.org/10.1145/2598394.2602273","url":null,"abstract":"This paper presents an adaptive memetic algorithm (AMA) to minimize the total travel distance in the NP-hard vehicle routing problem with time windows (VRPTW). Although memetic algorithms (MAs) have been proven to be very efficient in solving the VRPTW, their main drawback is an unclear tuning of their numerous parameters. Here, we introduce the AMA in which the selection scheme and the population size are adjusted during the search. We propose a new adaptive selection scheme to balance the exploration and exploitation of the search space. An extensive experimental study confirms that the AMA outperforms a standard MA in terms of the convergence capabilities.","PeriodicalId":298232,"journal":{"name":"Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130792363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feature selection has two main conflicting objectives: to minimise the number of features and to maximise the classification accuracy. Evolutionary computation techniques are particularly suitable for solving multi-objective tasks. Based on differential evolution (DE), this paper develops a multi-objective feature selection algorithm (DEMOFS). DEMOFS is examined and compared with two traditional feature selection algorithms and a DE-based single-objective feature selection algorithm (DEFS), which aims to minimise the classification error rate of the selected features. Experiments on nine benchmark datasets show that DEMOFS can successfully obtain a set of non-dominated feature subsets, which include a smaller number of features and maintain or improve the classification performance over using all features. In almost all cases, DEMOFS outperforms DEFS and the two traditional feature selection methods in terms of both the number of features and the classification performance.
{"title":"Differential evolution (DE) for multi-objective feature selection in classification","authors":"Bing Xue, Wenlong Fu, Mengjie Zhang","doi":"10.1145/2598394.2598493","DOIUrl":"https://doi.org/10.1145/2598394.2598493","url":null,"abstract":"Feature selection has two main conflicting objectives, which are to minimise the number of features and maximise the classification accuracy. Evolutionary computation techniques are particularly suitable for solving mult-objective tasks. Based on differential evolution (DE), this paper develops a multi-objective feature selection algorithm (DEMOFS). DEMOFS is examined and compared with two traditional feature selection algorithms and a DE based single objective feature selection algorithm. DEFS aims to minimise the classification error rate of the selected features. Experiments on nine benchmark datasets show that DEMOFS can successfully obtain a set of non-dominated feature subsets, which include a smaller number of features and maintain or improve the classification performance over using all features. In almost all cases, DEMOFS outperforms DEFS and the two traditional feature selection methods in terms of both the number of features and the classification performance.","PeriodicalId":298232,"journal":{"name":"Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132282830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
There exist optimization problems with a target objective, which is to be optimized, and several extra objectives, which can be helpful in the optimization process. The previously proposed EA+RL method is designed to adaptively select objectives during the run of an optimization algorithm in order to reduce the number of evaluations needed to reach an optimum of the target objective. The case when the extra objective is a fine-grained version of the target one is probably the simplest case in which using an extra objective actually helps. We define a coarse-grained version of OneMax called XdivK as follows: XdivK(x) = ⌊OneMax(x)/k⌋ for a parameter k that is a divisor of n, the length of the bit vector. We also define XdivK+OneMax, a problem where the target objective is XdivK and the single extra objective is OneMax. In this paper, randomized local search (RLS) is used as the optimization algorithm within the EA+RL method. We construct exact expressions for the expected running time of RLS solving the XdivK problem and of the EA+RL method solving the XdivK+OneMax problem. It is shown that the EA+RL method makes optimization faster, and the speedup is exponential in k.
{"title":"Onemax helps optimizing XdivK:: theoretical runtime analysis for RLS and EA+RL","authors":"M. Buzdalov, Arina Buzdalova","doi":"10.1145/2598394.2598442","DOIUrl":"https://doi.org/10.1145/2598394.2598442","url":null,"abstract":"There exist optimization problems with the target objective, which is to be optimized, and several extra objectives, which can be helpful in the optimization process. The previously proposed EA+RL method is designed to adaptively select objectives during the run of an optimization algorithm in order to reduce the number of evaluations needed to reach an optimum of the target objective. The case when the extra objective is a fine-grained version of the target one is probably the simplest case when using an extra objective actually helps. We define a coarse-grained version of OneMax called XdivK as follows: XdivK(x)= [OneMax(x)/k] for a parameter k which is a divisor of n- the length of a bit vector. We also define XdivK+OneMax, which is a problem where the target objective is XdivK and a single extra objective is OneMax. In this paper, the randomized local search (RLS) is used in the EA+RL method as an optimization algorithm. We construct exact expressions for the expected running time of RLS solving the XdivK problem and of the EA+RL method solving the XdivK+OneMax problem. It is shown that the EA+RL method makes optimization faster, and the speedup is exponential in k.","PeriodicalId":298232,"journal":{"name":"Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122143504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MOEA/D decomposes a multi-objective optimization problem (MOP) into a set of scalar subproblems with evenly spread weight vectors. Recent studies have shown that the fixed weight vectors used in MOEA/D might not cover the whole Pareto front (PF) very well. To address this, we developed an adaptive weight adjustment method in our previous work that removes subproblems from the crowded parts of the PF and adds new ones into the sparse parts. Although it performs well, we found that the sparseness measure of a subproblem, which is determined by the m nearest neighbors of its solution (where m is the dimension of the objective space), can be more appropriately defined. In this work, the neighborhood relationship between subproblems is defined using a Delaunay triangulation (DT) of the points in the population.
{"title":"MOEA/D with a delaunay triangulation based weight adjustment","authors":"Yutao Qi, Xiaoliang Ma, Minglei Yin, Fang Liu, Jingxuan Wei","doi":"10.1145/2598394.2598416","DOIUrl":"https://doi.org/10.1145/2598394.2598416","url":null,"abstract":"MOEA/D decomposes a multi-objective optimization problem (MOP) into a set of scalar sub-problems with evenly spread weight vectors. Recent studies have shown that the fixed weight vectors used in MOEA/D might not be able to cover the whole Pareto front (PF) very well. Due to this, we developed an adaptive weight adjustment method in our previous work by removing subproblems from the crowded parts of the PF and adding new ones into the sparse parts. Although it performs well, we found that the sparse measurement of a subproblem which is determined by the m-nearest (m is the dimensional of the object space) neighbors of its solution can be more appropriately defined. In this work, the neighborhood relationship between subproblems is defined by using Delaunay triangulation (DT) of the points in the population.","PeriodicalId":298232,"journal":{"name":"Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124858341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Genetic programming: a tutorial introduction","authors":"Una-May O’Reilly","doi":"10.1145/2598394.2605336","DOIUrl":"https://doi.org/10.1145/2598394.2605336","url":null,"abstract":"","PeriodicalId":298232,"journal":{"name":"Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124969202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper deals with the problem of scheduling a mixed workload of on-line and off-line periodic tasks on a homogeneous multiprocessor in a critical reconfigurable real-time environment using a genetic algorithm. Two forms of automatic reconfiguration are assumed to be applied at run-time: addition/removal of tasks, or modification of their temporal parameters, namely the worst-case execution time (WCET) and/or the deadlines. When such a scenario is applied to save the system after hardware-software faults, or to improve its performance, some real-time properties can be violated at run-time. We define an Intelligent Agent that automatically checks the system's feasibility after any reconfiguration scenario, verifying that all tasks still meet their deadlines once the scenario has been applied to the multiprocessor embedded real-time system. If the system is infeasible, the proposed genetic algorithm dynamically provides a solution that meets the real-time constraints. This genetic algorithm, based on a highly efficient decoding procedure, strongly improves the quality of real-time scheduling in a critical environment. The effectiveness and performance of the designed approach are evaluated through simulation studies on Hopper's benchmarks.
{"title":"A genetic based scheduling approach of real-time reconfigurable embedded systems","authors":"H. Gharsellaoui, Hamadi Hasni, S. Ahmed","doi":"10.1145/2598394.2605440","DOIUrl":"https://doi.org/10.1145/2598394.2605440","url":null,"abstract":"This paper deals with the problem of scheduling the mixed workload of both homogeneous multiprocessor on-line and off-line periodic tasks in a critical reconfigurable real-time environment by a genetic algorithm. Two forms of automatic reconfigurations which are assumed to be applied at run-time: Addition-Removal of tasks or just modifications of their temporal parameters: worst case execution time (WCET) and/or deadlines. Nevertheless, when such a scenario is applied to save the system at the occurrence of hardware-software faults, or to improve its performance, some real-time properties can be violated at run-time. We define an Intelligent Agent that automatically checks the system's feasibility after any reconfiguration scenario to verify if all tasks meet the required deadlines after a reconfiguration scenario was applied on a multiprocessor embedded real-time system. Indeed, if the system is unfeasible, then the proposed genetic algorithm dynamically provides a solution that meets real-time constraints. This genetic algorithm based on a highly efficient decoding procedure, strongly improves the quality of real-time scheduling in a critical environment. The effectiveness and the performance of the designed approach is evaluated through simulation studies illustrated by testing Hopper's benchmark results.","PeriodicalId":298232,"journal":{"name":"Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129413959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hypervolume has frequently been used as an indicator to evaluate a solution set in indicator-based evolutionary algorithms (IBEAs). One important issue in such an IBEA is the choice of a reference point. A different solution set is often obtained from a different reference point, since the hypervolume calculation depends on the location of the reference point. In this paper, we propose utilizing this dependency to formulate a meta-level multi-objective set optimization problem, in which hypervolume maximization for a different reference point is used as a different objective.
{"title":"Meta-level multi-objective formulations of set optimization for multi-objective optimization problems: multi-reference point approach to hypervolume maximization","authors":"H. Ishibuchi, Hiroyuki Masuda, Y. Nojima","doi":"10.1145/2598394.2598484","DOIUrl":"https://doi.org/10.1145/2598394.2598484","url":null,"abstract":"Hypervolume has been frequently used as an indicator to evaluate a solution set in indicator-based evolutionary algorithms (IBEAs). One important issue in such an IBEA is the choice of a reference point. A different solution set is often obtained from a different reference point since the hypervolume calculation depends on the location of the reference point. In this paper, we propose an idea of utilizing this dependency to formulate a meta-level multi-objective set optimization problem. Hypervolume maximization for a different reference point is used as a different objective.","PeriodicalId":298232,"journal":{"name":"Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127333830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Agent-Based Modeling (ABM) is a bottom-up approach that has been used to study adaptive group (collective) behavior. ABM is an analogical system that aids ethologists in constructing novel hypotheses, and it allows the investigation of emergent phenomena in experiments that could not be conducted in nature [15], [2], [12], [11]. Many studies in ethology have formalized mathematical models of collective migration behavior [1], but few have examined the impact of phenotypic traits (such as lifetime length) on the learning and evolution of collective migration behavior [9], [4]. The first objective of this research is to test the impact of agent lifetime length on the adaptation of collective migration behaviors in a virtual environment. Agent behavior is adapted with a hybrid Particle Swarm Optimization (PSO) method that integrates learning and evolution. Learning (lifetime learning) refers to a process whereby agents learn new behaviors during their lifetime [13], [3]. Evolution (genetic learning) refers to behavioral adaptation over successive lifetimes (generations) of an agent population [5]. The second objective is to demonstrate that these hybrid PSO methods are appropriate for modeling the adaptation of collective migration behaviors in an ABM. The motivation is that PSO methods combined with evolution and learning approaches have received little attention as ABMs for potentially addressing (supporting or refuting) hypotheses posited in the ethological literature. The task was for an agent group (flock) to locate a migration point during a simulated season in a virtual environment, where a season consisted of X simulation iterations.
{"title":"Lifetimes of migration","authors":"Faith Agwang, Willem S. van Heerden, G. Nitschke","doi":"10.1145/2598394.2598450","DOIUrl":"https://doi.org/10.1145/2598394.2598450","url":null,"abstract":"Agent Based Modeling (ABM) is a bottom-up approach that has been used to study adaptive group (collective) behavior. ABM is an analogical system that aids ethologists in constructing novel hypotheses, and allows the investigation of emergent phenomena in experiments that could not be conducted in nature [15], [2], [12], [11]. Many studies in ethology have formalized mathematical models of collective migration behavior [1], but few have examined the impact of phenotypic traits (such as lifetime length) on the learning and evolution of collective migration behavior [9], [4]. The first objective of this research is to test the impact of agent lifetime length on the adaptation of collective migration behaviors in a virtual environment. Agent behavior is adapted with a hybrid Particle Swarm Optimization (PSO) method that integrates learning and evolution. Learning (lifetime learning) refers to a process whereby agents learn new behaviors during their lifetime [13], [3]. Evolution (genetic learning) refers to behavioral adaptation over successive lifetimes (generations) of an agent population [5]. The second objective is to demonstrate these hybrid PSO methods are appropriate for modeling the adaptation of collective migration behaviors in an ABM. The motivation is that PSO methods combined with evolution and learning approaches have received little attention as ABMs for potentially addressing (supporting or refuting) hypotheses posited in ethological literature. The task was for an agent group (flock) to locate a migration point during a simulated season in a virtual environment, where a season consisted of X simulation iterations.","PeriodicalId":298232,"journal":{"name":"Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126972311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we tackle the distribution network expansion planning (DNEP) problem by employing two evolutionary algorithms (EAs): the classical Genetic Algorithm (GA) and a linkage-learning EA, specifically the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA). We furthermore develop two efficiency-enhancement techniques for these EAs when solving the DNEP problem: a restricted initialization mechanism that reduces the size of the explorable search space, and a linkage filtering mechanism (for GOMEA) that disregards linkage groups that are likely not useful during genetic variation. Experimental results on a benchmark network show that if we may assume that the optimal network will be very similar to the starting network, restricted initialization is generally useful for solving DNEP, and it then becomes more beneficial to use the simple GA. However, in the more general setting, where the closeness assumption cannot be made and the explorable search space becomes much larger, GOMEA outperforms the classical GA.
{"title":"Efficiency enhancements for evolutionary capacity planning in distribution grids","authors":"N. H. Luong, M. Grond, H. L. Poutré, P. Bosman","doi":"10.1145/2598394.2605696","DOIUrl":"https://doi.org/10.1145/2598394.2605696","url":null,"abstract":"In this paper, we tackle the distribution network expansion planning (DNEP) problem by employing two evolutionary algorithms (EAs): the classical Genetic Algorithm (GA) and a linkage-learning EA, specifically a Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA). We furthermore develop two efficiency-enhancement techniques for these two EAs for solving the DNEP problem: a restricted initialization mechanism to reduce the size of the explorable search space and a means to filter linkages (for GOMEA) to disregard linkage groups during genetic variation that are likely not useful. Experimental results on a benchmark network show that if we may assume that the optimal network will be very similar to the starting network, restricted initialization is generally useful for solving DNEP and moreover it becomes more beneficial to use the simple GA. However, in the more general setting where we cannot make the closeness assumption and the explorable search space becomes much larger, GOMEA outperforms the classical GA.","PeriodicalId":298232,"journal":{"name":"Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation","volume":"35 9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130780924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, deep learning methods that apply unsupervised learning to train deep layers of neural networks have achieved remarkable results in numerous fields. In the past, many genetic algorithm-based methods have been successfully applied to training neural networks. In this paper, we extend previous work and propose a GA-assisted method for deep learning. Our experimental results indicate that this GA-assisted approach improves the performance of a deep autoencoder, producing a sparser neural network.
{"title":"Genetic algorithms for evolving deep neural networks","authors":"E. David, Iddo Greental","doi":"10.1145/2598394.2602287","DOIUrl":"https://doi.org/10.1145/2598394.2602287","url":null,"abstract":"In recent years, deep learning methods applying unsupervised learning to train deep layers of neural networks have achieved remarkable results in numerous fields. In the past, many genetic algorithms based methods have been successfully applied to training neural networks. In this paper, we extend previous work and propose a GA-assisted method for deep learning. Our experimental results indicate that this GA-assisted approach improves the performance of a deep autoencoder, producing a sparser neural network.","PeriodicalId":298232,"journal":{"name":"Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130794688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}