Decomposition Based Multi-Objective Evolutionary Algorithm in XCS for Multi-Objective Reinforcement Learning
Xiu Cheng, Will N. Browne, Mengjie Zhang
2018 IEEE Congress on Evolutionary Computation (CEC). Pub Date: 2018-07-01. DOI: 10.1109/CEC.2018.8477931
Learning Classifier Systems (LCSs) have been widely used to tackle Reinforcement Learning (RL) problems, as they generalize well and produce simple, understandable rule-based solutions. The accuracy-based LCS, XCS, is the most popular variant for single-objective RL problems. As many real-world problems exhibit multiple conflicting objectives, recent work has sought to adapt XCS to Multi-Objective Reinforcement Learning (MORL) tasks. However, many of these algorithms require large storage or cannot discover the Pareto-optimal solutions, owing to the complexity of finding policies that take multiple steps toward multiple possible objectives. This paper employs a decomposition strategy based on MOEA/D within XCS to approximate complex Pareto fronts. To achieve multi-objective learning, a new MORL algorithm has been developed based on XCS and MOEA/D. Experimental results on complex bi-objective maze problems show that the algorithm learns a group of Pareto-optimal solutions without requiring large storage. Analysis of the learned policies shows successful trade-offs between the distance to a reward and the amount of the reward itself.
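The decomposition idea the abstract borrows from MOEA/D can be sketched in a few lines: a set of weight vectors turns the bi-objective task into scalar subproblems, each favouring a different trade-off. The weights, candidate values, and Tchebycheff scalarization below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def tchebycheff(objs, weight, ideal):
    """Tchebycheff scalarization: worst weighted distance to the ideal point."""
    return np.max(weight * np.abs(objs - ideal))

# Spread weight vectors decompose the bi-objective task into N scalar
# subproblems; each subproblem favours a different reward trade-off.
N = 5
weights = np.array([[i / (N - 1), 1 - i / (N - 1)] for i in range(N)])
ideal = np.zeros(2)  # best value observed so far per objective

# Two hypothetical policy outcomes: (distance cost, reward shortfall)
candidates = np.array([[0.2, 0.9], [0.8, 0.1]])

# Each subproblem keeps whichever candidate minimizes its scalar score.
winners = [int(np.argmin([tchebycheff(c, w, ideal) for c in candidates]))
           for w in weights]
```

Because each subproblem stores only its own best rule set, a population of scalar learners can cover the front without the large archives the abstract warns about.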
Cluster-Guided Genetic Algorithm for Distributed Data-intensive Web Service Composition
Soheila Sadeghiram, Hui Ma, Gang Chen
2018 IEEE Congress on Evolutionary Computation (CEC). Pub Date: 2018-07-01. DOI: 10.1109/CEC.2018.8477729
Automatic Web service composition has received much interest in recent decades, and data-intensive computing provides a promising paradigm for data-intensive Web service composition. Due to the complexity of the problem, metaheuristics, in particular Evolutionary Computing (EC) techniques, have been used to solve this composition problem. However, most current work neglects the distributed nature of data-intensive Web services. In this paper, we study the problem of distributed data-intensive service composition and propose a model that integrates attributes of the constituent data-intensive Web services with attributes of the network. The core idea is a communication cost and time model for a composed Web service that accounts for communication delay and cost. We then propose a novel method based on a Genetic Algorithm (GA) that uses a variation of the K-means clustering algorithm.
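The clustering step can be pictured as grouping services by network location so that a GA favours combinations with low communication cost. The coordinates and the minimal K-means below are illustrative assumptions, not the paper's variant:

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Minimal K-means over service 'network coordinates' (illustrative)."""
    # Deterministic spread-out initialization for the sketch.
    centers = points[np.linspace(0, len(points) - 1, k).astype(int)]
    for _ in range(iters):
        # Assign each service to its nearest cluster centre.
        labels = np.argmin(
            np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        # Recompute centres, keeping old ones for empty clusters.
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

# Hypothetical 2-D network locations of eight services: two distant groups.
coords = np.array([[0, 0], [0, 1], [1, 0], [1, 1],
                   [9, 9], [9, 10], [10, 9], [10, 10]], dtype=float)
labels = kmeans(coords, k=2)
# A GA can then bias crossover toward exchanging services within a cluster,
# keeping communication delay between composed services low.
```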
A Many-Objective Configuration Optimization for Building Energy Management
Tobias Rodemann
2018 IEEE Congress on Evolutionary Computation (CEC). Pub Date: 2018-07-01. DOI: 10.1109/CEC.2018.8477966
For a commercial building or campus, the management of local energy production, storage, and consumption promises substantial gains in efficiency and reductions in costs and emissions. Facility managers planning updates to an existing building complex face a variety of investment options. This work supports that investment decision by performing a many-objective optimization (MAO) of the system configuration, considering initial investment cost, running costs, CO2 emissions, and system resilience. In our specific example, the potential investment covers a photovoltaic (PV) system, a stationary battery, and heat storage. We also consider changes to the operation of an existing co-generator for heat and power (CHP) by optimizing its controller parameters. The proposed system is simulated in a Modelica-based software environment. We present the results of our configuration optimization using the well-known NSGA-III algorithm, and we also examine the effect of variable simulator run-times on the optimization process, especially for parallel execution of fitness evaluations on a computing cluster.
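The variable run-time issue raised at the end of the abstract is essentially a scheduling problem: if fitness evaluations are collected generation-synchronously, fast simulations idle behind slow ones. A minimal sketch with Python's standard library (the simulator, configurations, and fitness values are placeholders) shows the asynchronous collection pattern:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def evaluate(cfg):
    """Stand-in for a building-energy simulation with variable run-time."""
    time.sleep(cfg["runtime"])            # simulated solver cost
    return cfg["id"], sum(cfg["params"])  # dummy fitness proxy

# Hypothetical candidate configurations with uneven simulation times.
configs = [{"id": i, "runtime": 0.01 * (i % 3), "params": [i, 2 * i]}
           for i in range(6)]

# Collecting results as they complete keeps workers busy even when some
# simulations run much longer than others, which matters on a cluster.
results = {}
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(evaluate, c) for c in configs]
    for fut in as_completed(futures):
        cid, fitness = fut.result()
        results[cid] = fitness
```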
Hybrid Population Based MVMO for Solving CEC 2018 Test Bed of Single-Objective Problems
J. Rueda, I. Erlich
2018 IEEE Congress on Evolutionary Computation (CEC). Pub Date: 2018-07-01. DOI: 10.1109/CEC.2018.8477822
The MVMO algorithm (Mean-Variance Mapping Optimization) has two main features: i) a normalized search range for each dimension (associated with each optimization variable); and ii) a mapping function that generates a new value for a selected optimization variable based on the mean and variance of the best solutions achieved so far. The current version of MVMO offers several variants. The single parent-offspring version is designed for cases where the evaluation budget is small and the optimization task is not too challenging. The population-based MVMO requires more function evaluations, but usually yields better results. Both variants can be improved considerably if separate local search algorithms are additionally incorporated; in that case, MVMO is essentially responsible for the initial global search. This paper presents a study of the hybrid version of MVMO, called MVMO-PH (population-based, hybrid), on the IEEE CEC 2018 test suite for single-objective optimization with continuous (real-valued) decision variables. Additionally, two new mapping functions embodying MVMO's unique feature are presented.
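For readers unfamiliar with MVMO, one commonly published form of its mapping function is sketched below. This is taken from the general MVMO literature, not from the two new functions this paper introduces; the archive statistics are hypothetical values:

```python
import math, random

def mvmo_mapping(u, mean, s1, s2):
    """One published form of the MVMO mapping: a uniform sample u in [0,1]
    is transformed toward the mean of the best solutions found so far."""
    h = lambda x: (mean * (1 - math.exp(-x * s1))
                   + (1 - mean) * math.exp(-(1 - x) * s2))
    h0, h1 = h(0.0), h(1.0)
    # Affine correction keeps the output in [0, 1] with x(0)=0 and x(1)=1.
    return h(u) + (1 - h1 + h0) * u - h0

random.seed(1)
mean, var = 0.7, 0.01            # archive statistics (hypothetical values)
s = -math.log(var)               # shape factor derived from the variance
samples = [mvmo_mapping(random.random(), mean, s, s) for _ in range(1000)]
avg = sum(samples) / len(samples)  # pulled toward `mean`; 0.5 for uniform
```

The smaller the archive variance, the larger the shape factor and the more strongly new samples concentrate around the mean, all within the normalized [0, 1] search range.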
Pareto Improving Selection of the Global Best in Particle Swarm Optimization
Stephyn G. W. Butcher, John W. Sheppard, S. Strasser
2018 IEEE Congress on Evolutionary Computation (CEC). Pub Date: 2018-07-01. DOI: 10.1109/CEC.2018.8477683
Particle Swarm Optimization (PSO) is an effective stochastic optimization technique that simulates a swarm of particles flying through a problem space. While searching the problem space for a solution, the individual variables of a candidate solution often take on inferior values, a phenomenon characterized as “two steps forward, one step back.” Several approaches to this problem have introduced varying notions of cooperation and competition. We instead characterize the success of these multi-swarm techniques as reconciling conflicting information through a mechanism that makes successive candidates Pareto improvements. We use this analysis to construct a variation of PSO that applies the same mechanism to gbest selection. Experiments show that this algorithm outperforms the standard gbest PSO algorithm.
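The notion of a per-dimension Pareto improvement can be made concrete for separable objectives, where each variable contributes its own partial fitness. The sketch below is an illustrative test, not the paper's gbest update rule:

```python
def pareto_improves(old, new, partial_fitness):
    """True if `new` is no worse than `old` in every dimension's partial
    fitness and strictly better in at least one (minimization)."""
    olds = [partial_fitness(i, v) for i, v in enumerate(old)]
    news = [partial_fitness(i, v) for i, v in enumerate(new)]
    return (all(n <= o for n, o in zip(news, olds))
            and any(n < o for n, o in zip(news, olds)))

# Sphere function: each dimension contributes x_i^2 independently, so a
# per-dimension partial fitness is well defined (illustrative assumption).
partial = lambda i, x: x * x

improved = pareto_improves([2.0, 1.0], [1.0, 1.0], partial)      # accepted
stepped_back = pareto_improves([2.0, 1.0], [0.0, 3.0], partial)  # rejected
```

The second candidate has a better total fitness (9 vs. 5) yet worsens dimension 1 — exactly the "one step back" a Pareto-improving gbest selection refuses to propagate.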
Evolving Robust Solutions for Stochastically Varying Problems
J. T. Carvalho, Nicola Milano, S. Nolfi
2018 IEEE Congress on Evolutionary Computation (CEC). Pub Date: 2018-07-01. DOI: 10.1109/CEC.2018.8477811
We demonstrate that evaluating candidate solutions in a limited number of stochastically varying conditions, which change over generations at a moderate rate, is an effective method for developing high-quality robust solutions. Agents evolved with this method to solve an extended version of the double-pole balancing problem, in which both the agents' initial state and the characteristics of the environment vary, can solve the problem in a wide variety of environmental circumstances and for prolonged periods without needing to re-adapt. The combinatorial explosion of possible environmental conditions does not prevent the evolution of robust solutions: exposing evolving agents to a limited number of conditions that vary over generations is sufficient, and it yields better results than control experiments in which the number of experienced conditions is greater. Interestingly, exposure to environmental variation promotes the evolution of convergent strategies in which the agents act so as to exhibit the required functionality while reducing the complexity of the control problem.
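The evaluation scheme described above — a small condition sample that drifts gradually across generations — can be sketched as follows; the scalar "agent", the condition range, and the resampling rate are all placeholder assumptions:

```python
import random

def robust_fitness(agent, conditions, evaluate):
    """Average performance over the generation's current condition sample."""
    return sum(evaluate(agent, c) for c in conditions) / len(conditions)

def refresh_conditions(conditions, rate, rng):
    """Resample a fraction of the conditions each generation, so the
    sample drifts at a moderate rate rather than changing wholesale."""
    return [rng.uniform(-1, 1) if rng.random() < rate else c
            for c in conditions]

rng = random.Random(0)
conditions = [rng.uniform(-1, 1) for _ in range(5)]   # limited sample

# Toy evaluation: an 'agent' is a scalar set-point penalized by its
# squared distance to the condition (purely illustrative).
evaluate = lambda agent, c: -(agent - c) ** 2
f0 = robust_fitness(0.0, conditions, evaluate)
conditions = refresh_conditions(conditions, rate=0.2, rng=rng)
```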
Discovery of Unstructured Business Processes Through Genetic Algorithms Using Activity Transitions-Based Completeness and Precision
G. A. D. Silva, M. Fantinato, S. M. Peres, H. Reijers
2018 IEEE Congress on Evolutionary Computation (CEC). Pub Date: 2018-07-01. DOI: 10.1109/CEC.2018.8477795
Process model discovery can be approached as an optimization problem, for which genetic algorithms have been used previously. However, the fitness functions used so far, which consider full log traces, have proven inadequate for discovering unstructured processes. We propose a solution based on a local analysis of activity transitions, which proves effective for unstructured processes, the most common kind in organizations. Our solution bases the fitness function on completeness and precision calculations.
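A local transition analysis replaces full trace replay with pairwise "directly-follows" relations. The sketch below is a generic directly-follows version of completeness and precision, with a hypothetical event log; the paper's exact definitions may differ:

```python
def transitions(traces):
    """Set of directly-follows activity pairs observed in a set of traces."""
    return {(a, b) for t in traces for a, b in zip(t, t[1:])}

def completeness(log_traces, model_traces):
    """Fraction of the log's transitions the model can reproduce."""
    log_t, model_t = transitions(log_traces), transitions(model_traces)
    return len(log_t & model_t) / len(log_t)

def precision(log_traces, model_traces):
    """Fraction of the model's transitions actually seen in the log."""
    log_t, model_t = transitions(log_traces), transitions(model_traces)
    return len(log_t & model_t) / len(model_t)

# Hypothetical event log and candidate model behaviour (activity traces).
event_log = [("a", "b", "c"), ("a", "c")]
model = [("a", "b", "c")]
```

Because the measures are computed over a bounded set of activity pairs rather than whole traces, fitness evaluation stays cheap even when unstructured processes produce highly varied traces.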
Indexing Discrete Sets in a Label Setting Algorithm for Solving the Elementary Shortest Path Problem with Resource Constraints
M. Polnik, A. Riccardi
2018 IEEE Congress on Evolutionary Computation (CEC). Pub Date: 2018-07-01. DOI: 10.1109/CEC.2018.8478414
Stopping exploration of search-space regions that can be proven to contain only inferior solutions is an important acceleration technique in optimization algorithms. This study focuses on the utility of trie-based data structures for indexing discrete sets, which allow such regions to be detected faster. An empirical evaluation is performed on the index operations executed by a label setting algorithm for solving the Elementary Shortest Path Problem with Resource Constraints (ESPPRC). Numerical simulations compare a trie with a HAT-trie, a trie variant widely considered the fastest in-memory data structure for storing text in sorted order, further optimized for efficient cache use in modern processors. The results indicate that a HAT-trie is better suited to indexing sparse multi-dimensional data, such as sets with high cardinality, offering superior performance with a lower memory footprint. HAT-tries therefore remain practical when tries reach their scalability limits due to an expensive memory allocation pattern. The authors conclude with a note on comparing and reporting credible time benchmarks for the ESPPRC.
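Indexing discrete sets in a trie supports the dominance queries a label setting algorithm relies on: "is any stored set a subset of this label's set?" The sketch below is an illustrative set-trie over plain dicts, not the paper's implementation; a HAT-trie would replace the dict children with cache-friendly bucket arrays:

```python
class SetTrie:
    """Minimal trie over sorted element sets for subset queries (sketch)."""

    def __init__(self):
        self.children = {}
        self.terminal = False

    def insert(self, items):
        node = self
        for x in sorted(items):
            node = node.children.setdefault(x, SetTrie())
        node.terminal = True

    def contains_subset_of(self, items):
        """True if some stored set is a subset of `items` - the test a
        label setting algorithm runs to prune dominated labels."""
        items = sorted(items)

        def walk(node, i):
            if node.terminal:
                return True
            # Try every remaining element as the next step down the trie.
            return any(items[j] in node.children
                       and walk(node.children[items[j]], j + 1)
                       for j in range(i, len(items)))

        return walk(self, 0)

index = SetTrie()
index.insert({1, 3})        # e.g. a stored label's set of visited nodes
dominated = index.contains_subset_of({1, 2, 3})   # {1,3} is a subset
kept = not index.contains_subset_of({2, 3})       # no stored subset applies
```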
Hybrid Sampling Evolution Strategy for Solving Single Objective Bound Constrained Problems
Geng Zhang, Yuhui Shi
2018 IEEE Congress on Evolutionary Computation (CEC). Pub Date: 2018-07-01. DOI: 10.1109/CEC.2018.8477908
This paper proposes an evolution strategy (ES) called the hybrid sampling evolution strategy (HS-ES), which combines the covariance matrix adaptation evolution strategy (CMA-ES) with a univariate sampling method. Although univariate sampling has been widely regarded as suited only to separable problems, our analysis and experiments show that it is actually very effective for solving multimodal nonseparable problems. Since univariate sampling complements CMA-ES, which has clear advantages on unimodal nonseparable problems, the proposed HS-ES combines the strengths of the two algorithms to improve its search performance. Experimental results on the CEC 2018 benchmarks demonstrate the effectiveness of the proposed HS-ES.
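Univariate sampling in this context means fitting each coordinate independently, in contrast to the full covariance model of CMA-ES. A minimal sketch (the archive values are hypothetical, and this is a generic diagonal sampler rather than HS-ES's exact procedure):

```python
import numpy as np

def univariate_sample(archive, n, rng):
    """Sample each dimension independently from a normal distribution
    fitted to an archive of good solutions (diagonal model only)."""
    mean = archive.mean(axis=0)
    std = archive.std(axis=0) + 1e-12   # avoid zero spread
    return rng.normal(mean, std, size=(n, archive.shape[1]))

rng = np.random.default_rng(0)
# Hypothetical archive of elite solutions in two dimensions.
archive = np.array([[0.0, 5.0], [0.2, 5.2], [-0.2, 4.8]])
offspring = univariate_sample(archive, 100, rng)
# Correlations between dimensions are deliberately ignored; CMA-ES's full
# covariance model covers that case, which is why the two methods pair well.
```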
Confidence Measures for Carbon-Nanotube / Liquid Crystals Classifiers
E. Vissol-Gaudin, A. Kotsialos, C. Groves, C. Pearson, D. Zeze, M. Petty, N. A. Moubayed
2018 IEEE Congress on Evolutionary Computation (CEC). Pub Date: 2018-07-01. DOI: 10.1109/CEC.2018.8477779
This paper presents a performance analysis of single-walled carbon-nanotube / liquid crystal classifiers produced by evolution in materio. A new confidence measure is proposed; unlike the statistical tools commonly used to evaluate classifier performance, it is based on physical quantities extracted from the composite and related to its state. Using this measure, we confirm that in an untrained state, i.e., before being subjected to algorithm-controlled evolution, the carbon-nanotube-based composites classify data at random. The training, or evolution, process brings the composites into a state where classification is no longer random: the classifiers generalise well to unseen data, and classification accuracy remains stable across tests. The confidence measure associated with the resulting classifier's accuracy is relatively high at the class boundaries, which is consistent with the problem formulation.