Title: Myoelectric signal classification using evolutionary hybrid RBF-MLP networks
Authors: A. Zalzala, N. Chaiyaratana
DOI: https://doi.org/10.1109/CEC.2000.870365
Published in: Proceedings of the 2000 Congress on Evolutionary Computation (CEC 2000), 16 July 2000
Abstract: This paper introduces a hybrid neural structure combining radial-basis function (RBF) and multilayer perceptron (MLP) networks. The hybrid network is composed of one RBF network and a number of MLPs, and is trained using a combined genetic/unsupervised/supervised learning algorithm. The genetic and unsupervised learning algorithms locate the centres of the RBF part of the hybrid network, while the supervised learning algorithm, based on back-propagation, trains the connection weights of the MLP part. The performance of the hybrid network is first tested on the two-spiral benchmark problem. Simulation results are then reported for the classification of myoelectric (electromyographic, EMG) signals, where the GA-based network proved most efficient.
Title: Jitter reduction in a real-time message transmission system using genetic algorithms
Authors: J. Barreiros, E. Costa, J. Fonseca, F. Coutinho
DOI: https://doi.org/10.1109/CEC.2000.870769
Published in: Proceedings of the 2000 Congress on Evolutionary Computation (CEC 2000), 16 July 2000
Abstract: The wide use of field-bus-based distributed systems in embedded control applications has triggered research on the problem of transmission-network-induced jitter in control variables. This paper introduces a variant of the classical genetic algorithm, called the progressive genetic algorithm, and shows how it can be used to reduce the jitter suffered by periodic messages. The approach can be applied either in centrally controlled field buses or in synchronised ones. The algorithm was tested on two well-known and widely used benchmarks: the PSA benchmark from the automotive industry and the SAE benchmark from automated guided vehicles. It is shown that jitter can be eliminated completely if an adequate transmission rate is available and that, if not, a satisfactory jitter reduction can still be obtained.
Title: The need for improving the exploration operators for constrained optimization problems
Authors: S. B. Hamida, A. Pétrowski
DOI: https://doi.org/10.1109/CEC.2000.870781
Published in: Proceedings of the 2000 Congress on Evolutionary Computation (CEC 2000), 16 July 2000
Abstract: Several specific methods have been proposed for handling nonlinear constraints. These methods have to bring individuals into the feasible space and help to explore and exploit the feasible domain efficiently. However, even when this domain is not sparse, this paper demonstrates that the exploration capacity of standard reproduction operators is not optimal for constrained problems. The logarithmic mutation operator presented in this paper is designed to explore the search space both locally and globally. As expected, it exhibits robust and efficient behaviour on a constrained version of the Sphere problem compared with other standard operators. Combined with BLX-0.5 crossover and a ranking selection that takes the constraints into account, logarithmic mutation often allows a GA to outperform several well-known methods on a set of classical test cases.
Title: Fuzzy genes: improving the effectiveness of information retrieval
Authors: M. Martín-Bautista, M. Vila, D. Sánchez, H. Larsen
DOI: https://doi.org/10.1109/CEC.2000.870334
Published in: Proceedings of the 2000 Congress on Evolutionary Computation (CEC 2000), 16 July 2000
Abstract: An improvement in the effectiveness of information retrieval using genetic algorithms (GAs) and fuzzy logic is demonstrated. A new classification of information retrieval models within the framework of GAs is given, based on the target of the chosen fitness function. When the aim of the optimisation is document classification, we deal with document-oriented models. Term-oriented models, on the other hand, attempt to find the terms that are most discriminatory and best suited to user preferences in order to build a profile. A new weighting scheme based on fuzzy logic is presented for the first class of models, together with a comparison against other classical weighting schemes and a study of the best operators for aggregating each gene's local fitness into the overall fitness of a chromosome. A deeper study of this new scheme in term-oriented models is the main objective of future work.
Title: GA with fuzzy inference system
Authors: R. Matousek, P. Osmera, J. Roupec
DOI: https://doi.org/10.1109/CEC.2000.870359
Published in: Proceedings of the 2000 Congress on Evolutionary Computation (CEC 2000), 16 July 2000
Abstract: Applications of genetic algorithms (GAs) to optimisation problems are widely known, as are their advantages and disadvantages compared with classical numerical methods. In practical tests the GA appears to be a robust method with a broad range of applications. However, determining GA parameters can be complicated, and for some real-life applications the empirical observations of an experienced expert are needed to define them. This limits the applicability of GAs for many real-world problems and users. This article therefore discusses some possibilities for setting GA parameters. The proposed method is based on fuzzy control of the GA parameter values, with feedback provided by the observed behaviour of several GA characteristics. The goal of this article is to present the concept of the solution and some new ideas.
Title: Multipopulation genetic programming applied to burn diagnosing
Authors: F. F. Vega, L. Roa, M. Tomassini, J. M. Sánchez
DOI: https://doi.org/10.1109/CEC.2000.870800
Published in: Proceedings of the 2000 Congress on Evolutionary Computation (CEC 2000), 16 July 2000
Abstract: Genetic programming (GP) has proved useful in optimisation problems, and its way of representing individuals is particularly well suited to constructing decision trees. Decision trees can represent explicit information and relationships among the parameters studied, and a set of decision trees can make up a decision support system. In this paper we set out a methodology for developing decision support systems as an aid to medical decision making and apply it to diagnosing the evolution of a burn, a genuinely difficult task even for specialists. A learning classifier system is developed by means of multipopulation genetic programming (MGP). It uses a set of parameters, obtained from specialist doctors, to predict the evolution of a burn from its initial stages. The system is first trained on parameters and evolution outcomes recorded over a collection of clinical cases; once trained, it can be used to predict how new cases will probably evolve. Thanks to the use of GP, an explicit expression of the input parameters is provided. This expression takes the form of a decision tree that will be incorporated into software tools to help physicians in their everyday work.
Title: Performing classification with an environment manipulating mutable automata (EMMA)
Authors: K. Benson
DOI: https://doi.org/10.1109/CEC.2000.870305
Published in: Proceedings of the 2000 Congress on Evolutionary Computation (CEC 2000), 16 July 2000
Abstract: In this paper a novel approach to classification is presented: hypersurface discriminant functions are evolved using genetic programming. These discriminant functions reside in the states of finite state automata, which have the ability to reason about and logically combine the hypersurfaces to generate a complex decision space. An object may be classified by one or many of the discriminant functions; this is decided by the automaton. During the evolution of this symbiotic architecture, feature selection for each discriminant function is achieved implicitly, a task normally performed before a classification algorithm is trained. Since each discriminant function uses different features, and objects may be classified by one or more discriminant functions, no two objects from the same class need be classified using the same features; instead, the most appropriate features for a given object are used.
Title: A Distributed Resource Evolutionary Algorithm Machine (DREAM)
Authors: B. Paechter, T. Back, Marc Schoenauer, M. Sebag, A. Eiben, J. Merelo, T. Fogarty
DOI: https://doi.org/10.1109/CEC.2000.870746
Published in: Proceedings of the 2000 Congress on Evolutionary Computation (CEC 2000), 16 July 2000
Abstract: This paper describes a project funded by the European Commission that seeks to provide the technology and software infrastructure necessary to support the next generation of evolving infohabitants in a way that makes that infrastructure universal, open and scalable. The Distributed Resource Evolutionary Algorithm Machine (DREAM) will use existing hardware infrastructure more efficiently by utilising otherwise unused CPU time. It will allow infohabitants to co-operate, communicate, negotiate and trade, and emergent behaviour is expected to result. In particular, an emergent economy is expected to arise from the provision and use of CPU cycles by infohabitants and their owners. The DREAM infrastructure will be evaluated with new work on distributed data mining, distributed scheduling, and the modelling of economic and social behaviour.
Title: A weight evolution algorithm for finding the global minimum of error function in neural networks
Authors: S. Ng, S. Leung
DOI: https://doi.org/10.1109/CEC.2000.870289
Published in: Proceedings of the 2000 Congress on Evolutionary Computation (CEC 2000), 16 July 2000
Abstract: This paper introduces a new weight evolution algorithm for finding the global minimum of the error function in a multilayer neural network. During the backpropagation learning phase, the network weights are adjusted deliberately to improve system performance. By examining the outputs of the nodes, it is possible to adjust some of the network weights deterministically so as to achieve an overall reduction in system error. The idea is to work backward from the error components and the system outputs to deduce a deterministic perturbation of particular network weights. Using the new algorithm, it is found that weight evolution between the hidden and output layers accelerates convergence, whereas weight evolution between the input and hidden layers helps the network escape local minima.
Title: Evolving rules from neural networks trained on continuous data
Authors: E. Keedwell, A. Narayanan, D. Savić
DOI: https://doi.org/10.1109/CEC.2000.870358
Published in: Proceedings of the 2000 Congress on Evolutionary Computation (CEC 2000), 16 July 2000
Abstract: Artificial neural networks (ANNs) are used extensively with continuous data. However, their application in many domains is hampered because it is not clear how they partition continuous data for classification, so the extraction of rules from ANNs trained on continuous data is of great importance. The system described in this paper uses a genetic algorithm to generate input patterns that are presented to the network; the output from the ANN is then used to calculate the fitness function for the algorithm. These patterns can contain null characters, which represent a zero input to the ANN and allow the genetic algorithm to find patterns that can be converted into additive rules with few antecedent clauses. These antecedents indicate where and how the neural network has partitioned the continuous data and can be combined to form rules. The rules compare favourably with those generated by See5 (a decision-tree-based data mining tool) when run on a data set consisting of continuous attributes.