"On some difficulties in local evolutionary search" by H. Voigt. Proceedings of the 1999 Congress on Evolutionary Computation (CEC99, Cat. No. 99TH8406), 6 July 1999. DOI: 10.1109/CEC.1999.782012

We consider the very simple problem of optimizing a stationary unimodal function over R^n without using analytical gradient information. Numerous algorithms exist for this problem, from mathematical programming to evolutionary algorithms. We take a closer look at advanced evolution strategies (GSA, CMA), the evolutionary gradient search algorithm (EGS), local search enhancement by random memorizing (LSERM), and the simple (1+1)-evolution strategy. These approaches show different problem-solving capabilities on different test functions. We introduce measures that reflect certain aspects of what might be seen as problem difficulty. Based on these measures it is possible to characterize the weak and strong points of the approaches, which may lead to even more advanced algorithms.
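The (1+1)-evolution strategy named in the abstract above is the simplest of the compared methods. The abstract does not spell out a formulation, so this is a minimal sketch of the standard variant: one parent, one Gaussian-mutated offspring per generation, with step-size adaptation by Rechenberg's 1/5 success rule (the window length and adaptation factors here are illustrative choices).

```python
import random

def one_plus_one_es(f, x, sigma=1.0, iters=2000, seed=0):
    """Minimise f over R^n with a (1+1)-evolution strategy.

    Each generation mutates the parent with isotropic Gaussian noise and
    keeps the offspring only if it is no worse.  Step size sigma is
    adapted with the 1/5 success rule: grow it when more than one fifth
    of recent offspring succeed, shrink it otherwise.
    """
    rng = random.Random(seed)
    fx = f(x)
    successes, window = 0, 50
    for t in range(1, iters + 1):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:              # accept a non-worsening offspring
            x, fx = y, fy
            successes += 1
        if t % window == 0:       # 1/5 success rule: adapt sigma
            rate = successes / window
            sigma *= 1.22 if rate > 0.2 else 0.82
            successes = 0
    return x, fx

# Minimise the sphere function from a distant start.
best, fbest = one_plus_one_es(lambda v: sum(c * c for c in v), [5.0, -3.0])
```

On a unimodal function such as the sphere, the 1/5 rule keeps the mutation strength near its optimal scale, giving log-linear convergence.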
"Adaptive genetic algorithms-modeling and convergence" by Alexandru Agapie. Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), 6 July 1999. DOI: 10.1109/CEC.1999.782005

The paper presents a new mathematical analysis of genetic algorithms (GAs): we propose the use of random systems with complete connections (RSCC), a non-trivial extension of the Markovian dependence that accounts for the complete, rather than only the recent, history of a stochastic evolution. As far as we know, this is the first theoretical modeling of an adaptive GA. We first introduce the RSCC model of a p_m-adaptive GA, then prove that a "classification of states" is still valid for our model, and finally derive a convergence condition for the algorithm.
"Realization of robust controllers in evolutionary robotics: a dynamically-rearranging neural network approach" by T. Kondo, A. Ishiguro, S. Tokura, Y. Uchikawa, P. E. Hotz. Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), 6 July 1999. DOI: 10.1109/CEC.1999.781948

The evolutionary robotics approach has been attracting much attention in the fields of robotics and artificial life. In this approach, neural networks are widely used to construct controllers for autonomous mobile agents, since they intrinsically offer generalization, noise tolerance, and related abilities. However, open questions remain: (1) the gap between simulated and real environments, (2) the complete separation of the evolutionary and learning phases, and (3) the conflict between stability and evolvability/adaptability. In this paper, we try to overcome these problems by incorporating the dynamic-rearrangement function of biological neural networks, realized through neuromodulators. Simulation results show that the proposed approach is highly promising.
"Evolutionary design of time-way charts for plating machines" by Georges E. Matile, A. Tettamanzi, M. Tomassini. Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), 6 July 1999. DOI: 10.1109/CEC.1999.782557

Scheduling production for plating machines is a tedious and difficult task of critical importance for the economic exploitation of this equipment. The paper describes a promising approach, based on evolutionary algorithms, to solving a simple version of this problem, namely cyclical hoist scheduling. The issues of solution encoding and specialised genetic operators are discussed, and some preliminary results are presented.
"Hypernetwork model of biological information processing" by Jose L. Segovia-Juarez, M. Conrad. Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), 6 July 1999. DOI: 10.1109/CEC.1999.781976

A hierarchical architecture for information processing, the hypernetwork model, has recently been implemented. This is a three-level model, inspired by biological systems, that includes representation of scale, vertical flow of information, and feedback control. All interactions are based on complementary relationships between molecular subunits. The system is molded to perform desired tasks through a variation-selection algorithm acting on the structure of the molecular subunits. The design of the system, the learning algorithm, and preliminary pattern-classification results are presented.
"Optimal sampling strategies for learning a fitness model" by A. Ratle. Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), 6 July 1999. DOI: 10.1109/CEC.1999.785531

The paper investigates the use of kriging interpolation and estimation as a function-approximation tool for the optimization of computationally expensive functions. A model of the fitness function is built from a small number of samples of this function and used in a model-based learning strategy as an auxiliary fitness function. The kriging approach represents a compromise between global and local models: the model is initially a global approximation of the entire domain, and successive updates during the optimization process transform it into a more precise local approximation. Several strategies for sampling the true fitness function are investigated in order to update the fitness model efficiently and at low computational cost.
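The model-based loop described in this abstract — build a cheap model from a few true evaluations, search the model, then spend true evaluations only where the model looks promising — can be sketched as follows. For brevity the sketch substitutes inverse-distance-weighted interpolation for the paper's kriging model; the function names and the initial-design size are illustrative assumptions, not the paper's method.

```python
import random

def surrogate(samples):
    """Inverse-distance-weighted interpolator over (x, f(x)) samples --
    a cheap stand-in here for the kriging model used in the paper."""
    def model(x):
        num = den = 0.0
        for xs, fs in samples:
            d2 = sum((a - b) ** 2 for a, b in zip(x, xs))
            if d2 == 0.0:
                return fs          # exact hit on a sample point
            w = 1.0 / d2
            num += w * fs
            den += w
        return num / den
    return model

def model_based_search(f, lo, hi, dim=2, true_evals=30, seed=1):
    """Alternate cheap search on the surrogate with sparse sampling of
    the expensive true function f; each new sample refines the model."""
    rng = random.Random(seed)
    samples = []
    for _ in range(true_evals):
        if len(samples) < 5:       # small initial global design
            x = [rng.uniform(lo, hi) for _ in range(dim)]
        else:                      # search the model, then verify on f
            model = surrogate(samples)
            cands = [[rng.uniform(lo, hi) for _ in range(dim)]
                     for _ in range(200)]
            x = min(cands, key=model)
        samples.append((x, f(x)))  # one true (expensive) evaluation
    return min(samples, key=lambda s: s[1])

best_x, best_f = model_based_search(lambda v: sum(c * c for c in v), -5.0, 5.0)
```

The budget of true evaluations stays fixed at `true_evals`; all the extra search effort is spent on the inexpensive model, which is the point of the approach.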
"Hamiltonian(t)-an ant-inspired heuristic for recognizing Hamiltonian graphs" by I. Wagner, A. Bruckstein. Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), 6 July 1999. DOI: 10.1109/CEC.1999.782656

Given a graph G(V,E), we consider the problem of deciding whether G is Hamiltonian, that is, whether or not there is a simple cycle in E spanning all vertices in V. This problem is NP-complete, and hence cannot be solved in time polynomial in |V| unless P=NP. It is a special case of the Travelling Salesperson Problem (TSP), which has been extensively studied in the literature and has recently been attacked by various ant-colony methods. We address the Hamiltonian cycle problem using a new ant-inspired approach based on repeated covering of the graph: an ant traverses the graph by moving from vertex to vertex along the edges, leaving traces in the vertices and deciding on its next step according to the level of traces in the surrounding neighborhood. We show that Hamiltonian cycles are limit cycles of this process, and we investigate the average time our ant process needs to recognize a Hamiltonian graph, based on simulations over large samples of random graphs with varying edge density.
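The abstract describes the trace-following walk only in outline, so the following is a sketch under explicit assumptions: the ant greedily moves to the least-marked neighbour, ties broken by lowest vertex index, and a window of the last |V| moves that returns to its start while visiting every vertex exactly once witnesses a Hamiltonian cycle. The actual decision rule in the paper may differ.

```python
def ant_walk(adj, steps=500):
    """Trace-following walk on a graph given as {vertex: [neighbours]}.

    The ant increments the trace at the vertex it leaves, then steps to
    the neighbour with the lowest trace (lowest index on ties).  If the
    last |V| moves form a closed tour covering all vertices, that tour
    is a Hamiltonian cycle and is returned; otherwise None.
    """
    n = len(adj)
    trace = [0] * n
    v = 0
    path = [v]
    for _ in range(steps):
        trace[v] += 1
        low = min(trace[u] for u in adj[v])
        v = min(u for u in adj[v] if trace[u] == low)
        path.append(v)
        window = path[-(n + 1):]
        if (len(window) == n + 1 and window[0] == window[-1]
                and len(set(window[:-1])) == n):
            return window          # closed tour through every vertex
    return None

# A 6-cycle with a chord (2-5): Hamiltonian, so the walk finds a tour.
adj = {0: [1, 5], 1: [0, 2], 2: [1, 3, 5], 3: [2, 4], 4: [3, 5], 5: [4, 0, 2]}
tour = ant_walk(adj)  # → [0, 1, 2, 3, 4, 5, 0]
```

The greedy least-trace rule pushes the ant toward vertices it has covered least, which is what makes a Hamiltonian tour a natural limit cycle of the process.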
"Evolutionary learning of fuzzy logic controllers over a region of initial states" by R. Stonier. Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), 6 July 1999. DOI: 10.1109/CEC.1999.785538

In this paper we discuss two evolutionary methods for learning a fuzzy knowledge base that must control a system not just from a single initial configuration (open-loop control), but over a region of configuration states (closed-loop control). The methods are applied to three systems: control of an inverted pendulum (a nonlinear dynamic model); control of a simulated point-mass mobile robot to a fixed target in a two-robot collision-avoidance problem; and control of a simulated point-mass robot to a target moving with constant speed in a fixed direction (kinematic models). The first method amalgamates, through averaging, fuzzy knowledge bases learnt across a grid of initial configurations representative of states in the region. The second method learns over the region directly, without amalgamation, by incorporating operators that pass fuzzy logic knowledge in a local region from generation to generation while accumulating this knowledge across the entire region.
"Fuzzy evolutionary programming for hidden Markov modelling in speaker identification" by T. V. Le, D. Tran, M. Wagner. Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), 6 July 1999. DOI: 10.1109/CEC.1999.782505

A Gibbs distribution is used to represent fuzzy codebooks of individual speakers. Fuzzy evolutionary programming is employed both to create the fuzzy codebooks and to train hidden Markov models of the speakers. This method increases the chance of attaining global maxima in the Baum-Welch algorithm for hidden Markov model re-estimation. The experiments show very encouraging speaker-identification results.
"Generalizations of intermediate recombination in evolution strategies" by Thomas Bäck, A. Eiben. Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), 6 July 1999. DOI: 10.1109/CEC.1999.782670

In this paper two different generalizations of intermediate recombination in evolution strategies are investigated. Both generalizations allow an arbitrary number ρ of parents to be recombined. However, the so-called ρ/ρ-mechanism averages all ρ parents, while the so-called ρ/2-mechanism repeatedly (for each object variable anew) selects two out of the ρ parents and averages the corresponding object variables to create an offspring individual. Results for the spherical function demonstrate that these two operators can produce significantly different convergence velocities. Both operators are applied to a range of objective functions (separable and non-separable, unimodal and multimodal, regular and irregular topologies), and the impact of the number of parents ρ is investigated. The results illustrate that important differences between the operators are not consistent with the canonical topology classification of objective functions, but can be explained to some extent by Beyer's "genetic repair" hypothesis combined with a reasoning about the success region.
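The two recombination mechanisms contrasted in the abstract above are simple enough to state directly. This is a minimal sketch of the operators as described (ρ/ρ averages each variable over all parents; ρ/2 draws a fresh pair of parents for each variable); the function names are ours, not the paper's.

```python
import random

def rho_rho(parents):
    """ρ/ρ intermediate recombination: each offspring variable is the
    mean of that variable over all ρ parents."""
    rho = len(parents)
    return [sum(p[i] for p in parents) / rho
            for i in range(len(parents[0]))]

def rho_2(parents, rng):
    """ρ/2 intermediate recombination: for each object variable anew,
    draw two of the ρ parents at random (without replacement) and
    average that variable."""
    child = []
    for i in range(len(parents[0])):
        a, b = rng.sample(parents, 2)
        child.append((a[i] + b[i]) / 2.0)
    return child

rng = random.Random(0)
parents = [[0.0, 4.0], [2.0, 0.0], [4.0, 2.0]]
mean_child = rho_rho(parents)   # → [2.0, 2.0], the parental centroid
pair_child = rho_2(parents, rng)
```

The ρ/ρ child is always the parental centroid, while each ρ/2 variable is a pairwise average — which is why the two operators concentrate offspring around the centroid to different degrees and can behave differently on the same function.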