Genetic programming (GP) has proved useful in optimization problems. Its way of representing individuals is particularly well suited to constructing decision trees. Decision trees, in turn, are good at representing explicit information and relationships among the parameters studied, and a set of decision trees can make up a decision support system. In this paper we set out a methodology for developing decision support systems as an aid to medical decision making. In particular, we apply it to diagnosing the evolution of a burn, a difficult task even for specialists. A learning classifier system is developed by means of multipopulation genetic programming (MGP). It uses a set of parameters, obtained from specialist doctors, to predict the evolution of a burn from its initial stages. The system is first trained with the parameters and recorded outcomes of a set of clinical cases. Once trained, it can predict how new cases will probably evolve. Thanks to the use of GP, an explicit expression of the input parameters is provided. This explicit expression takes the form of a decision tree which can be incorporated into software tools that help physicians in their everyday work.
{"title":"Multipopulation genetic programming applied to burn diagnosing","authors":"F. F. Vega, L. Roa, M. Tomassini, J. M. Sánchez","doi":"10.1109/CEC.2000.870800","DOIUrl":"https://doi.org/10.1109/CEC.2000.870800","url":null,"abstract":"Genetic programming (GP) has proved useful in optimization problems. The way of representing individuals in this methodology is particularly good when we want to construct decision trees. Decision trees are well suited to representing explicit information and relationships among parameters studied. A set of decision trees could make up a decision support system. In this paper we set out a methodology for developing decision support systems as an aid to medical decision making. Above all, we apply it to diagnosing the evolution of a burn, which is a really difficult task even for specialists. A learning classifier system is developed by means of multipopulation genetic programming (MGP). It uses a set of parameters, obtained by specialist doctors, to predict the evolution of a burn according to its initial stages. The system is first trained with a set of parameters and results of evolutions which have been recorded over a set of clinic cases. Once the system is trained, it is useful for deciding how new cases will probably evolve. Thanks to the use of GP, an explicit expression of the input parameter is provided. This explicit expression takes the form of a decision tree which will be incorporated into software tools that help physicians In their everyday work.","PeriodicalId":218136,"journal":{"name":"Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. 
No.00TH8512)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121523123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces a hybrid neural structure using radial-basis function (RBF) and multilayer perceptron (MLP) networks. The hybrid network is composed of one RBF network and a number of MLPs, and is trained using a combined genetic/unsupervised/supervised learning algorithm. The genetic and unsupervised learning algorithms are used to locate the centres of the RBF part of the hybrid network, while the supervised learning algorithm, based on back-propagation, is used to train the connection weights of the MLP part. The performance of the hybrid network is first tested on the two-spiral benchmark problem. Several simulation results are then reported for applying the algorithm to the classification of myoelectric or electromyographic (EMG) signals, where the GA-based network proved most efficient.
{"title":"Myoelectric signal classification using evolutionary hybrid RBF-MLP networks","authors":"A. Zalzala, N. Chaiyaratana","doi":"10.1109/CEC.2000.870365","DOIUrl":"https://doi.org/10.1109/CEC.2000.870365","url":null,"abstract":"This paper introduces a hybrid neural structure using radial-basis functions (RBF) and multilayer perceptron (MLP) networks. The hybrid network is composed of one RBF network and a number of MLPs, and is trained using a combined genetic/unsupervised/supervised learning algorithm. The genetic and unsupervised learning algorithms are used to locate the centres of the RBF part in the hybrid network. In addition, the supervised learning algorithm, based on a back-propagation algorithm, is used to train the connection weights of the MLP part in the hybrid network. Performances of the hybrid network are initially tested using a two-spiral benchmark problem. Several simulation results are reported for applying the algorithm in the classification of myoelectric or electromyographic (EMG) signals where the GA-based network proved most efficient.","PeriodicalId":218136,"journal":{"name":"Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No.00TH8512)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117344738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The NK model introduced by Kauffman (1993) has been widely accepted as a formal model of rugged fitness landscapes. It is shown that the NK model is incapable of accurately modeling an important class of combinatorial optimization problems. Most notable is its limitation in modeling the epistatic relationships that exist in many real-world constrained optimization problems. In addition to introducing a new method of graphically depicting high-dimensional fitness landscapes, an extension to the NK model is proposed.
{"title":"Modeling epistatic interactions in fitness landscapes","authors":"Xiaobo Hu, G. Greenwood, S. Ravichandran","doi":"10.1109/CEC.2000.870743","DOIUrl":"https://doi.org/10.1109/CEC.2000.870743","url":null,"abstract":"The NK model introduced by Kauffman (1993) has been widely accepted as a formal model of rugged fitness landscapes. It is shown that the NK model is incapable of accurately modeling an important class of combinatorial optimization problems. Most notable is the limitation in modeling the epistatic relationships that exist in many real-world constrained optimization problems. In addition to introducing a new method of graphically depicting all high dimension fitness landscapes, an extension to the NK model is proposed.","PeriodicalId":218136,"journal":{"name":"Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No.00TH8512)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127734175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces a new weight evolution algorithm for finding the global minimum of the error function in a multi-layered neural network. During the learning phase of backpropagation, the network weights are adjusted deliberately so as to improve system performance. By examining the outputs of the nodes, it is possible to adjust some of the network weights deterministically so as to achieve an overall reduction in system error. The idea is to work backward from the error components and the system outputs to deduce a deterministic perturbation of particular network weights for optimization purposes. Using the new algorithm, it is found that weight evolution between the hidden and output layers can accelerate convergence, whereas weight evolution between the input and hidden layers can help overcome the local minima problem.
{"title":"A weight evolution algorithm for finding the global minimum of error function in neural networks","authors":"S. Ng, S. Leung","doi":"10.1109/CEC.2000.870289","DOIUrl":"https://doi.org/10.1109/CEC.2000.870289","url":null,"abstract":"This paper introduces a new weight evolution algorithm to find the global minimum of the error function in a multi-layered neural network. During the learning phase of backpropagation, the network weights are adjusted intentionally in order to have an improvement in system performance. By looking at the system outputs of the nodes, it is possible to adjust some of the network weights deterministically so as to achieve an overall reduction in system error. The idea is to work backward from the error components and the system outputs to deduce a deterministic perturbation on particular network weights for optimization purposes. Using the new algorithm, it is found that the weight evolution between the hidden and output layer can accelerate the convergence speed, whereas the weight evolution between the input layer and the hidden layer can assist in solving the local minima problem.","PeriodicalId":218136,"journal":{"name":"Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No.00TH8512)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129028167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The performance of particle swarm optimization using an inertia weight is compared with performance using a constriction factor. Five benchmark functions are used for the comparison. It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension. This approach provides performance on the benchmark functions superior to any other published results known by the authors.
{"title":"Comparing inertia weights and constriction factors in particle swarm optimization","authors":"R. Eberhart, Yuhui Shi","doi":"10.1109/CEC.2000.870279","DOIUrl":"https://doi.org/10.1109/CEC.2000.870279","url":null,"abstract":"The performance of particle swarm optimization using an inertia weight is compared with performance using a constriction factor. Five benchmark functions are used for the comparison. It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension. This approach provides performance on the benchmark functions superior to any other published results known by the authors.","PeriodicalId":218136,"journal":{"name":"Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No.00TH8512)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132814447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Under a species-level abstraction of classical evolutionary programming, the standard tournament selection model is not appropriate. When viewed in this manner, it is more appropriate to consider two modes of life histories: background evolution and extinction. The utility of this approach as an optimization procedure is evaluated on a series of test functions relative to the performance of classical evolutionary programming and fast evolutionary programming. The results indicate that on some smooth, convex landscapes and over noisy, highly multimodal landscapes, extinction evolutionary programming can outperform classical and fast evolutionary programming. On other landscapes, however, extinction evolutionary programming performs considerably worse than classical and fast evolutionary programming. Potential reasons for this variability in performance are indicated.
{"title":"Evolutionary computation with extinction: experiments and analysis","authors":"G. Fogel, G. Greenwood, K. Chellapilla","doi":"10.1109/CEC.2000.870818","DOIUrl":"https://doi.org/10.1109/CEC.2000.870818","url":null,"abstract":"Under a species-level abstraction of classical evolutionary programming, the standard tournament selection model is not appropriate. When viewed in this manner, it is more appropriate to consider two modes of life histories: background evolution and extinction. The utility of this approach as an optimization procedure is evaluated on a series of test functions relative to the performance of classical evolutionary programming and fast evolutionary programming. The results indicate that on some smooth, convex landscapes and over noisy, highly multimodal landscapes, extinction evolutionary programming can outperform classical and fast evolutionary programming. On other landscapes, however, extinction evolutionary programming performs considerably worse than classical and fast evolutionary programming. Potential reasons for this variability in performance are indicated.","PeriodicalId":218136,"journal":{"name":"Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No.00TH8512)","volume":"350 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134371569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial neural networks (ANNs) are used extensively with continuous data. However, their application in many domains is hampered because it is not clear how they partition continuous data for classification. The extraction of rules from ANNs trained on continuous data is therefore of great importance. The system described in this paper uses a genetic algorithm to generate input patterns which are presented to the network, and the output from the ANN is then used to calculate the fitness function for the algorithm. These patterns can contain null characters, which represent a zero input to the ANN; this allows the genetic algorithm to find patterns which can be converted into additive rules with few antecedent clauses. These antecedents indicate where and how the neural network has partitioned the continuous data, and can be combined to make rules. These rules compare favourably with those generated by See5 (a decision-tree-based data mining tool) when executed on a data set consisting of continuous attributes.
{"title":"Evolving rules from neural networks trained on continuous data","authors":"E. Keedwell, A. Narayanan, D. Savić","doi":"10.1109/CEC.2000.870358","DOIUrl":"https://doi.org/10.1109/CEC.2000.870358","url":null,"abstract":"Artificial neural networks (ANNs) are used extensively involving continuous data. However, their application in many domains is hampered because it is not clear how they partition continuous data for classification. The extraction of rules, therefore, from ANNs trained on continuous data is of great importance. The system described in this paper uses a genetic algorithm to generate input patterns which are presented to the network, and the output from the ANN is then used to calculate the fitness function for the algorithm. These patterns can contain null characters which represent a zero input to the ANN, and this allows the genetic algorithm to find patterns which can be converted into additive rules with few antecedent clauses. These antecedents provide information as to where and how the neural network has partitioned the continuous data and can be combined together to make rules. These rules compare favourably with the results of those generated by See5 (a decision tree-based data mining tool) when executed on a data set consisting of continuous attributes.","PeriodicalId":218136,"journal":{"name":"Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No.00TH8512)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128991584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In a quarter of a century of evolutionary computing, nature still seems to be teasing us with its complexity and flexibility, whilst we struggle to apply our artificial creations, which perform so beautifully in blocks-world, to the real world. We discuss some of the ways in which the biological world has seemed to defy the curse of dimensionality, and present the results of an experiment to evolve neural network pattern detectors based on a pre-emptive 'phylogeny'. The strategies discussed are: congruent graduation of objective-function and genome complexity; relaxation of objective-function specificity; pre-evolved niche recombination; and fractal-like ontogenesis. A phyletic evolutionary architecture is proposed that combines these principles with three novel neural-net transformations that preserve node-function integrity at different levels of complexity. Using a simple genetic algorithm, a number of 81-node fully recurrent neural nets were evolved to detect intermediate-level features in 9×9 subimages. It is shown that by seeding the population with transformations of pre-evolved 3×3 detectors of constituent low-level features, evolution converged faster, and to a more accurate and general solution, than when starting from a random population.
{"title":"Phyletic evolution of neural feature detectors","authors":"Peter R. W. Harvey, J. Boyce","doi":"10.1109/CEC.2000.870321","DOIUrl":"https://doi.org/10.1109/CEC.2000.870321","url":null,"abstract":"In a quarter of a century of evolutionary computing, nature still seems to be teasing us with its complexity and flexibility whilst we struggle to apply our artificial creations, that perform so beautifully in blocks-world to the real world. We discuss some of the ways in which the biological world has seemed to defy the curse of dimensionality and present the results of an experiment to evolve neural network pattern detectors based on a pre-emptive 'phylogeny'. Strategies discussed are: congruent graduation of objective function and genome complexity; relaxation of objective function specificity; pre-evolved niche recombination; and fractal-like ontogenesis. A phyletic evolutionary architecture is proposed that combines these principles, together with three novel neural net transformations that preserve node-function integrity at different levels of complexity. Using a simple genetic algorithm, a number of 81-node fully recurrent neural nets were evolved to detect intermediate level features in 9/spl times/9 subimages. It is shown that by seeding the population with transformations of pre-evolved 3/spl times/3 detectors of constituent low-level features, evolution converged faster and to a more accurate and general solution than when they were evolved from a random population.","PeriodicalId":218136,"journal":{"name":"Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. 
No.00TH8512)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131600048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper a non-generational genetic algorithm for multiobjective optimization problems is proposed. For each element in the population a domination count is defined together with a neighborhood density measure based on a sharing function. Those two measures are then nonlinearly combined in order to define the individual's fitness. Numerical experiments with four test-problems taken from the evolutionary multiobjective literature are performed and the results are compared with those obtained by other evolutionary techniques.
{"title":"A non-generational genetic algorithm for multiobjective optimization","authors":"C. Borges, H. Barbosa","doi":"10.1109/CEC.2000.870292","DOIUrl":"https://doi.org/10.1109/CEC.2000.870292","url":null,"abstract":"In this paper a non-generational genetic algorithm for multiobjective optimization problems is proposed. For each element in the population a domination count is defined together with a neighborhood density measure based on a sharing function. Those two measures are then nonlinearly combined in order to define the individual's fitness. Numerical experiments with four test-problems taken from the evolutionary multiobjective literature are performed and the results are compared with those obtained by other evolutionary techniques.","PeriodicalId":218136,"journal":{"name":"Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No.00TH8512)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122612987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The nurse scheduling problem (NSP) represents a difficult class of multi-objective optimisation problems, consisting of a number of interfering objectives between the hospital and the individual nurses. The objective of this research is to investigate difficulties that occur when solving the NSP with evolutionary algorithms, in particular genetic algorithms (GAs). As the solution method, a population-less cooperative genetic algorithm (CGA) is considered because, in contrast to competitive GAs, we must simultaneously optimize the fitness of the individual nurses and the entire schedule as the final solution to the problem at hand. To confirm the search ability of the CGA, a simplified version of the NSP is examined first; we then report on a more complex and useful version of the problem. We also compare the CGA with another multi-agent evolutionary algorithm that uses the pheromone-style communication of real ants. Finally, we report the results of computer simulations carried out throughout the experiments.
{"title":"Evolutionary algorithms for nurse scheduling problem","authors":"Ahmad Jan, Masahito Yamamoto, A. Ohuchi","doi":"10.1109/CEC.2000.870295","DOIUrl":"https://doi.org/10.1109/CEC.2000.870295","url":null,"abstract":"The nurse scheduling problem (NSPs) represents a difficult class of multi-objective optimisation problems consisting of a number of interfering objectives between the hospitals and individual nurses. The objective of this research is to investigate difficulties that occur during the solution of NSP using evolutionary algorithms, in particular genetic algorithms (GA). As the solution method a population-less cooperative genetic algorithm (CGA) is taken into consideration. Because contrary to competitive GAs, we have to simultaneously deal with the optimization of the fitness of the individual nurses and also optimization of the entire schedule as the final solution to the problem in hand. To confirm the search ability of CGA, first a simplified version of NSP is examined. Later we report a more complex and useful version of the problem. We also compare CGA with another multi-agent evolutionary algorithm using pheromone style communication of real ants. Finally, we report the results of computer simulations acquired throughout the experiments.","PeriodicalId":218136,"journal":{"name":"Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No.00TH8512)","volume":"141 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128818895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}