Using Optimization, Learning, and Drone Reflexes to Maximize Safety of Swarms of Drones
Pub Date: 2018-07-01 | DOI: 10.1109/CEC.2018.8477920
Amin Majd, A. Ashraf, E. Troubitsyna, M. Daneshtalab
Despite the growing popularity of swarm-based applications of drones, there is still a lack of approaches to maximize the safety of swarms of drones by minimizing the risks of drone collisions. In this paper, we present an approach that uses optimization, learning, and automatic immediate responses (reflexes) of drones to ensure safe operations of swarms of drones. The proposed approach integrates a high-performance dynamic evolutionary algorithm and a reinforcement learning algorithm to generate safe and efficient drone routes and then augments the generated routes with dynamically computed drone reflexes to prevent collisions with unforeseen obstacles in the flying zone. We also present a parallel implementation of the proposed approach and evaluate it against two benchmarks. The results show that the proposed approach maximizes safety and generates highly efficient drone routes.
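As a rough, hypothetical illustration of the reflex layer described above (not the authors' implementation), the sketch below substitutes an evasive waypoint whenever an unforeseen obstacle falls within an assumed safety radius of a drone's planned waypoint; the radius value, the 3-D position representation, and the radial push-away rule are illustrative assumptions.

```python
import numpy as np

SAFETY_RADIUS = 5.0  # assumed minimum clearance in metres (illustrative value)

def reflex_waypoint(position, planned_waypoint, obstacles):
    """Return the planned waypoint, or an evasive one if an unforeseen
    obstacle lies within SAFETY_RADIUS of the planned segment endpoint."""
    position = np.asarray(position, dtype=float)
    planned = np.asarray(planned_waypoint, dtype=float)
    for obs in obstacles:
        obs = np.asarray(obs, dtype=float)
        offset = planned - obs
        dist = np.linalg.norm(offset)
        if dist < SAFETY_RADIUS:
            if dist > 0:
                direction = offset / dist          # push radially away from the obstacle
            else:
                direction = np.array([0.0, 0.0, 1.0])  # degenerate case: climb
            return obs + direction * SAFETY_RADIUS
    return planned

# Example: a drone heading to (10, 0, 5) with an obstacle appearing at (9, 1, 5)
print(reflex_waypoint([0, 0, 5], [10, 0, 5], [[9, 1, 5]]))
```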
{"title":"Using Optimization, Learning, and Drone Reflexes to Maximize Safety of Swarms of Drones","authors":"Amin Majd, A. Ashraf, E. Troubitsyna, M. Daneshtalab","doi":"10.1109/CEC.2018.8477920","DOIUrl":"https://doi.org/10.1109/CEC.2018.8477920","url":null,"abstract":"Despite the growing popularity of swarm-based applications of drones, there is still a lack of approaches to maximize the safety of swarms of drones by minimizing the risks of drone collisions. In this paper, we present an approach that uses optimization, learning, and automatic immediate responses (reflexes) of drones to ensure safe operations of swarms of drones. The proposed approach integrates a high-performance dynamic evolutionary algorithm and a reinforcement learning algorithm to generate safe and efficient drone routes and then augments the generated routes with dynamically computed drone reflexes to prevent collisions with unforeseen obstacles in the flying zone. We also present a parallel implementation of the proposed approach and evaluate it against two benchmarks. The results show that the proposed approach maximizes safety and generates highly efficient drone routes.","PeriodicalId":212677,"journal":{"name":"2018 IEEE Congress on Evolutionary Computation (CEC)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130964978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pareto Improving Selection of the Global Best in Particle Swarm Optimization
Pub Date: 2018-07-01 | DOI: 10.1109/CEC.2018.8477683
Stephyn G. W. Butcher, John W. Sheppard, S. Strasser
Particle Swarm Optimization is an effective stochastic optimization technique that simulates a swarm of particles that fly through a problem space. In the process of searching the problem space for a solution, the individual variables of a candidate solution will often take on inferior values characterized as “Two Steps Forward, One Step Back.” Several approaches to solving this problem have introduced varying notions of cooperation and competition. Instead we characterize the success of these multi-swarm techniques as reconciling conflicting information through a mechanism that makes successive candidates Pareto improvements. We use this analysis to construct a variation of PSO that applies this mechanism to gbest selection. Experiments show that this algorithm performs better than the standard gbest PSO algorithm.
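The abstract frames the multi-swarm mechanism as making successive candidates Pareto improvements. Below is a minimal sketch of that notion only, assuming a separable benchmark where each decision variable contributes its own error term; the per-component representation and the acceptance rule are assumptions for illustration, not the paper's algorithm.

```python
from typing import Sequence

def pareto_improves(new_parts: Sequence[float], old_parts: Sequence[float]) -> bool:
    """True if new_parts is no worse in every component and strictly
    better in at least one (minimisation)."""
    no_worse = all(n <= o for n, o in zip(new_parts, old_parts))
    strictly_better = any(n < o for n, o in zip(new_parts, old_parts))
    return no_worse and strictly_better

def update_gbest(gbest, gbest_parts, candidate, candidate_parts):
    """Replace the global best only when the candidate is a Pareto
    improvement over it, component by component."""
    if pareto_improves(candidate_parts, gbest_parts):
        return candidate, candidate_parts
    return gbest, gbest_parts

# Example with a separable objective f(x) = sum(x_i ** 2):
gbest, cand = [2.0, 1.0], [1.0, 1.0]
gbest, _ = update_gbest(gbest, [x * x for x in gbest], cand, [x * x for x in cand])
print(gbest)  # [1.0, 1.0] -- no variable got worse, one improved
```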
{"title":"Pareto Improving Selection of the Global Best in Particle Swarm Optimization","authors":"Stephyn G. W. Butcher, John W. Sheppard, S. Strasser","doi":"10.1109/CEC.2018.8477683","DOIUrl":"https://doi.org/10.1109/CEC.2018.8477683","url":null,"abstract":"Particle Swarm Optimization is an effective stochastic optimization technique that simulates a swarm of particles that fly through a problem space. In the process of searching the problem space for a solution, the individual variables of a candidate solution will often take on inferior values characterized as “Two Steps Forward, One Step Back.” Several approaches to solving this problem have introduced varying notions of cooperation and competition. Instead we characterize the success of these multi-swarm techniques as reconciling conflicting information through a mechanism that makes successive candidates Pareto improvements. We use this analysis to construct a variation of PSO that applies this mechanism to gbest selection. Experiments show that this algorithm performs better than the standard gbest PSO algorithm.","PeriodicalId":212677,"journal":{"name":"2018 IEEE Congress on Evolutionary Computation (CEC)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131419326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid Population Based MVMO for Solving CEC 2018 Test Bed of Single-Objective Problems
Pub Date: 2018-07-01 | DOI: 10.1109/CEC.2018.8477822
J. Rueda, I. Erlich
The MVMO algorithm (Mean-Variance Mapping Optimization) has two main features: i) a normalized search range for each dimension (associated with each optimization variable); ii) a mapping function that generates a new value of a selected optimization variable from the mean and variance of the best solutions found so far. The current version of MVMO offers several alternatives. The single parent-offspring version is intended for cases where the evaluation budget is small and the optimization task is not too challenging. The population-based MVMO requires more function evaluations, but usually yields better results. Both variants can be improved considerably when separate local search algorithms are additionally incorporated; in this case, MVMO is essentially responsible for the initial global search. This paper presents the results of a study on the hybrid version of MVMO, called MVMO-PH (population-based, hybrid), applied to the IEEE CEC 2018 test suite for single-objective optimization with continuous (real-valued) decision variables. Additionally, two new mapping functions representing this unique feature of MVMO are presented.
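For readers unfamiliar with the mapping feature, the sketch below follows the commonly published form of the MVMO transformation, in which a uniformly drawn value is reshaped using the mean and variance of a variable over the archive of best solutions; the single shaping factor fs and the use of equal shape factors s1 = s2 are simplifying assumptions, and the new mapping functions proposed in this paper are not reproduced here.

```python
import math, random

def mvmo_map(u, mean, variance, fs=1.0):
    """Map a uniform sample u in [0, 1] through an MVMO-style transformation,
    parameterised by the archive mean and variance of one normalised variable
    (fs is a shaping factor; s1 = s2 = s in this simplified form)."""
    s = -math.log(max(variance, 1e-12)) * fs          # small variance -> strong pull to the mean
    h = lambda x: mean * (1.0 - math.exp(-x * s)) + (1.0 - mean) * math.exp(-(1.0 - x) * s)
    # Boundary correction keeps the output inside [0, 1]: u = 0 maps to 0, u = 1 maps to 1.
    return h(u) + (1.0 - h(1.0) + h(0.0)) * u - h(0.0)

# Example: archive statistics for one normalised variable
archive = [0.42, 0.47, 0.40, 0.45]
mean = sum(archive) / len(archive)
var = sum((a - mean) ** 2 for a in archive) / len(archive)
print(mvmo_map(random.random(), mean, var))
```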
{"title":"Hybrid Population Based MVMO for Solving CEC 2018 Test Bed of Single-Objective Problems","authors":"J. Rueda, I. Erlich","doi":"10.1109/CEC.2018.8477822","DOIUrl":"https://doi.org/10.1109/CEC.2018.8477822","url":null,"abstract":"The MVMO algorithm (Mean-Variance Mapping Optimization) has two main features: i) normalized search range for each dimension (associated to each optimization variable); ii) use of a mapping function to generate a new value of a selected optimization variable based on the mean and variance derived from the best solutions achieved so far. The current version of MVMO offers several alternatives. The single parent-offspring version is designed for use in case the evaluation budget is small and the optimization task is not too challenging. The population based MVMO requires more function evaluations, but the results are usually better. Both variants of MVMO can be improved considerably if additionally separate local search algorithms are incorporated. In this case, MVMO is basically responsible for the initial global search. This paper presents the results of a study on the use of the hybrid version of MVMO, called MVMO-PH (population based, hybrid), to solve the IEEE-CEC 2018 test suite for single objective optimization with continuous (real-number) decision variables. Additionally, two new mapping functions representing the unique feature of MVMO are presented.","PeriodicalId":212677,"journal":{"name":"2018 IEEE Congress on Evolutionary Computation (CEC)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132828790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bare Bones Fireworks Algorithm for the RFID Network Planning Problem
Pub Date: 2018-07-01 | DOI: 10.1109/CEC.2018.8477990
I. Strumberger, Eva Tuba, N. Bačanin, M. Beko, M. Tuba
In this paper we present the bare bones fireworks algorithm implemented and adapted for solving the radio frequency identification (RFID) network planning problem. The bare bones fireworks algorithm is a new, simplified version of the fireworks metaheuristic which, according to our literature survey, has not previously been applied to the RFID network planning problem. RFID network planning is a well-known hard optimization problem and poses one of the most fundamental challenges in the deployment of an RFID network. We tested the bare bones fireworks algorithm on a problem model from the literature and performed a comparative analysis with approaches evaluated on the same formulation. We also performed an additional set of experiments in which the number of readers is treated as a parameter of the algorithm. The empirical results demonstrate the robustness and efficiency of the bare bones fireworks metaheuristic for the RFID network planning problem and establish this new version of the fireworks algorithm as a state-of-the-art method for such NP-hard tasks.
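A minimal sketch of a bare bones fireworks loop as it is usually described (one firework, uniform sparks inside an amplitude box, amplitude grown on improvement and shrunk otherwise), applied here to a toy objective rather than the paper's RFID coverage/interference model; all coefficient values are illustrative assumptions.

```python
import random

def bare_bones_fireworks(objective, lower, upper, n_sparks=20, iters=200,
                         amp_grow=1.2, amp_shrink=0.9):
    """Minimal bare bones fireworks loop: keep a single firework, sample sparks
    uniformly within an amplitude box around it, and adapt the amplitude."""
    dim = len(lower)
    x = [random.uniform(lower[i], upper[i]) for i in range(dim)]
    fx = objective(x)
    amp = [upper[i] - lower[i] for i in range(dim)]
    for _ in range(iters):
        sparks = [[min(max(x[i] + random.uniform(-amp[i], amp[i]), lower[i]), upper[i])
                   for i in range(dim)] for _ in range(n_sparks)]
        best = min(sparks, key=objective)
        if objective(best) < fx:
            x, fx = best, objective(best)
            amp = [a * amp_grow for a in amp]    # reward improvement with a wider box
        else:
            amp = [a * amp_shrink for a in amp]  # otherwise contract around the firework
    return x, fx

# Toy stand-in for a planning objective (e.g. sum of squared distances):
print(bare_bones_fireworks(lambda v: sum(t * t for t in v), [-5, -5], [5, 5]))
```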
{"title":"Bare Bones Fireworks Algorithm for the RFID Network Planning Problem","authors":"I. Strumberger, Eva Tuba, N. Bačanin, M. Beko, M. Tuba","doi":"10.1109/CEC.2018.8477990","DOIUrl":"https://doi.org/10.1109/CEC.2018.8477990","url":null,"abstract":"In this paper we present bare bones fireworks algorithm implemented and adjusted for solving radio frequency identification (RFID) network planning problem. Bare bones fireworks algorithm is new and simplified version of the fireworks metaheuristic. This approach for the RFID network planning problem was not implemented before according to the literature survey. RFID network planning problem is a well known hard optimization problem and it poses one of the most fundamental challenges in the process of deployment of the RFID network. We tested bare bones fireworks algorithm on one problem model found in the literature and performed comparative analysis with approaches tested on the same problem formulation. We also performed additional set of experiments where the number of readers is considered as the algorithm's parameter. Results obtained from empirical tests prove the robustness and efficiency of the bare bones fireworks metaheuristic for tackling the RFID network planning problem and categorize this new version of the fireworks algorithm as state-of-the-art method for dealing with NP-hard tasks.","PeriodicalId":212677,"journal":{"name":"2018 IEEE Congress on Evolutionary Computation (CEC)","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133275175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance Analysis on Knee Point Selection Methods for Multi-Objective Sparse Optimization Problems
Pub Date: 2018-07-01 | DOI: 10.1109/CEC.2018.8477915
Jing J. Liang, X. Zhu, C. Yue, Zhihui Li, B. Qu
Several multi-objective evolutionary algorithms have been introduced in recent years to solve sparse optimization problems. These multi-objective sparse optimization algorithms obtain a set of solutions with different sparsities. However, for a specific sparse optimization problem, a single sparse solution must be selected from the whole Pareto Set (PS). Usually, the knee point of the Pareto Front (PF) is the preferred solution when the decision maker has no particular preference, so an effective knee point selection method plays a pivotal role in multi-objective sparse optimization. In this paper, we study knee point selection methods for multi-objective sparse optimization problems. Three methods are compared: an angle-based method, a method based on the weighted sum of objective values, and a method based on the distance to the extreme line. The experimental results indicate that the second method outperforms the others. Finally, we analyze the parameter of the best knee point selection method and give a recommended setting range.
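As an illustration of the method the experiments favour, the sketch below selects a knee point from a bi-objective front by minimising a weighted sum of the normalised objective values; the trade-off weight w stands in for the parameter analysed in the paper, and its default value is an assumption.

```python
def knee_by_weighted_sum(front, w=0.5):
    """Pick a knee point from a bi-objective front (minimisation) by
    minimising a weighted sum of the normalised objectives.
    `front` is a list of (f1, f2) pairs; `w` is the trade-off parameter."""
    f1 = [p[0] for p in front]
    f2 = [p[1] for p in front]
    span1 = (max(f1) - min(f1)) or 1.0
    span2 = (max(f2) - min(f2)) or 1.0
    def score(p):
        return w * (p[0] - min(f1)) / span1 + (1.0 - w) * (p[1] - min(f2)) / span2
    return min(front, key=score)

# Example front: (sparsity level, residual error) pairs
front = [(1, 0.90), (2, 0.40), (3, 0.15), (4, 0.12), (5, 0.11)]
print(knee_by_weighted_sum(front))  # (3, 0.15) sits at the bend of this front
```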
{"title":"Performance Analysis on Knee Point Selection Methods for Multi-Objective Sparse Optimization Problems","authors":"Jing J. Liang, X. Zhu, C. Yue, Zhihui Li, B. Qu","doi":"10.1109/CEC.2018.8477915","DOIUrl":"https://doi.org/10.1109/CEC.2018.8477915","url":null,"abstract":"Some multi-objective evolutionary algorithms have been introduced to solve sparse optimization problems in recent years. These multi-objective sparse optimization algorithms obtain a set of solutions with different sparsities. However, for a specific sparse optimization problem, a unique sparse solution should be selected from the whole Pareto Set (PS). Usually, knee point in the PF is a preferred solution if the decision maker has no special preference. An effective knee point selection method plays a pivotal role in multi-objective sparse optimization. In this paper, a study on the knee point selection methods in multiobjective sparse optimization problems has been done. Three knee point selection methods, which are angle-based method, the weighted sum of objective values method and the distance to the extreme line method, are compared and the experimental results indicate that the second method is better than the others. Finally, an analysis of parameter in the best knee point selection method is conducted and an optimal setting range of parameters is given.","PeriodicalId":212677,"journal":{"name":"2018 IEEE Congress on Evolutionary Computation (CEC)","volume":"126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132172740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evolving Robust Solutions for Stochastically Varying Problems
Pub Date: 2018-07-01 | DOI: 10.1109/CEC.2018.8477811
J. T. Carvalho, Nicola Milano, S. Nolfi
We demonstrate that evaluating candidate solutions in a limited number of stochastically varying conditions, which change over generations at a moderate rate, is an effective method for developing high-quality robust solutions. Agents evolved with this method to solve an extended version of the double-pole balancing problem, in which the initial state of the agents and the characteristics of the environment vary, are able to solve the problem in a wide variety of environmental circumstances and for prolonged periods of time without the need to re-adapt. The combinatorial explosion of possible environmental conditions does not prevent the evolution of robust solutions: exposing evolving agents to a limited number of different environmental conditions that vary over generations is sufficient, and it leads to better results than control experiments in which the number of experienced environmental conditions is larger. Interestingly, exposure to environmental variation promotes the evolution of convergent strategies in which the agents act so as to exhibit the required functionality while reducing the complexity of the control problem.
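The evaluation scheme the abstract describes can be sketched as follows: each candidate is scored on a small set of environmental conditions, and that set is resampled only every few generations. The condition parameterisation, the population update, and all constants below are placeholder assumptions rather than the paper's exact setup.

```python
import random

K_CONDITIONS = 5      # conditions each candidate is evaluated in (assumed small)
RESAMPLE_EVERY = 10   # generations between changes of the condition set (moderate rate)

def sample_conditions(k):
    """Placeholder: each condition is an (initial pole angle, cart offset) pair."""
    return [(random.uniform(-0.1, 0.1), random.uniform(-1.0, 1.0)) for _ in range(k)]

def robust_fitness(candidate, conditions, evaluate):
    """Average performance over the current, limited set of varying conditions."""
    return sum(evaluate(candidate, c) for c in conditions) / len(conditions)

def evolve(population, evaluate, vary, generations=100):
    conditions = sample_conditions(K_CONDITIONS)
    for gen in range(generations):
        if gen > 0 and gen % RESAMPLE_EVERY == 0:
            conditions = sample_conditions(K_CONDITIONS)  # conditions drift over generations
        ranked = sorted(population, key=lambda ind: -robust_fitness(ind, conditions, evaluate))
        parents = ranked[: len(ranked) // 2]
        population = parents + [vary(p) for p in parents]  # keep parents, add mutated offspring
    return population
```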
{"title":"Evolving Robust Solutions for Stochastically Varying Problems","authors":"J. T. Carvalho, Nicola Milano, S. Nolfi","doi":"10.1109/CEC.2018.8477811","DOIUrl":"https://doi.org/10.1109/CEC.2018.8477811","url":null,"abstract":"We demonstrate how evaluating candidate solutions in a limited number of stochastically varying conditions that vary over generations at a moderate rate is an effective method for developing high quality robust solutions. Indeed, agents evolved with this method for the ability to solve an extended version of the double-pole balancing problem, in which the initial state of the agents and the characteristics of the environment in which the agents are situated vary, show the ability to solve the problem in a wide variety of environmental circumstances and for prolonged periods of time without the need to readapt. The combinatorial explosion of possible environmental conditions does not prevent the evolution of robust solutions. Indeed, exposing evolving agents to a limited number of different environmental conditions that vary over generations is sufficient and leads to better results with respect to control experiments in which the number of experienced environmental conditions is greater. Interestingly the exposure to environmental variations promotes the evolution of convergent strategies in which the agents act so to exhibit the required functionality and so to reduce the complexity of the control problem.","PeriodicalId":212677,"journal":{"name":"2018 IEEE Congress on Evolutionary Computation (CEC)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127682026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discovery of Unstructured Business Processes Through Genetic Algorithms Using Activity Transitions-Based Completeness and Precision
Pub Date: 2018-07-01 | DOI: 10.1109/CEC.2018.8477795
G. A. D. Silva, M. Fantinato, S. M. Peres, H. Reijers
Process model discovery can be approached as an optimization problem, for which genetic algorithms have been used previously. However, the fitness functions used so far, which consider full log traces, have not proved adequate for discovering unstructured processes. We propose a solution based on a local analysis of activity transitions, which is effective for unstructured processes, the kind most common in organizations. Our solution bases the fitness function on completeness and accuracy measures computed from these activity transitions.
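In the spirit of the activity-transition-based completeness and precision named in the title (the paper's exact definitions may differ), here is a sketch of a fitness built from directly-follows relations extracted from the event log and from the behaviour a candidate model allows; the weighting scheme is an assumption.

```python
def directly_follows(traces):
    """Set of activity transitions (a, b) where b directly follows a in some trace."""
    pairs = set()
    for trace in traces:
        pairs.update(zip(trace, trace[1:]))
    return pairs

def transition_fitness(log_traces, model_traces, alpha=0.5):
    """Completeness: share of observed transitions the model can replay.
    Precision: share of the model's transitions actually observed in the log.
    The linear combination with weight alpha is an illustrative choice."""
    log_df, model_df = directly_follows(log_traces), directly_follows(model_traces)
    completeness = len(log_df & model_df) / len(log_df) if log_df else 1.0
    precision = len(log_df & model_df) / len(model_df) if model_df else 1.0
    return alpha * completeness + (1 - alpha) * precision

log = [["a", "b", "c"], ["a", "c", "b"]]
model_behaviour = [["a", "b", "c"]]
print(transition_fitness(log, model_behaviour))  # 0.75: two log transitions are unmodelled
```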
{"title":"Discovery of Unstructured Business Processes Through Genetic Algorithms Using Activity Transitions-Based Completeness and Precision","authors":"G. A. D. Silva, M. Fantinato, S. M. Peres, H. Reijers","doi":"10.1109/CEC.2018.8477795","DOIUrl":"https://doi.org/10.1109/CEC.2018.8477795","url":null,"abstract":"Process model discovery can be approached as an optimization problem, for which genetic algorithms have been used previously. However, the fitness functions used, which consider full log traces, have not been found adequate to discover unstructured processes. We propose a solution based on a local analysis of activity transitions, which proves effective for unstructured processes, most common in organizations. Our solution considers completeness and accuracy calculation for the fitness function.","PeriodicalId":212677,"journal":{"name":"2018 IEEE Congress on Evolutionary Computation (CEC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128534747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Indexing Discrete Sets in a Label Setting Algorithm for Solving the Elementary Shortest Path Problem with Resource Constraints
Pub Date: 2018-07-01 | DOI: 10.1109/CEC.2018.8478414
M. Polnik, A. Riccardi
Stopping exploration of search-space regions that can be proven to contain only inferior solutions is an important acceleration technique in optimization algorithms. This study focuses on the utility of trie-based data structures for indexing discrete sets so that such regions can be detected faster. An empirical evaluation is performed in the context of the index operations executed by a label setting algorithm for solving the Elementary Shortest Path Problem with Resource Constraints. Numerical simulations compare a plain trie with a HAT-trie, a trie variant considered the fastest in-memory data structure for storing text in sorted order, further optimized for efficient cache use in modern processors. The results indicate that the HAT-trie is better suited to indexing sparse multi-dimensional data, such as sets with high cardinality, offering superior performance at a lower memory footprint. HAT-tries therefore remain practical where plain tries reach their scalability limits due to an expensive memory allocation pattern. The authors close with a note on comparing and reporting credible time benchmarks for the Elementary Shortest Path Problem with Resource Constraints.
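To make the indexing idea concrete, here is a minimal set-trie sketch supporting the query "is any stored set a subset of this one?", which is the set part of a label-dominance test in the ESPPRC; the resource comparisons and the HAT-trie's cache-conscious bucket layout are deliberately not shown, and the data structure below is a generic illustration rather than the authors' implementation.

```python
class SetTrie:
    """Trie over sorted element sequences; answers whether any stored set
    is a subset of a query set (one ingredient of pruning dominated labels)."""
    def __init__(self):
        self.children = {}
        self.terminal = False   # marks the end of a stored set

    def insert(self, items):
        node = self
        for item in sorted(items):
            node = node.children.setdefault(item, SetTrie())
        node.terminal = True

    def contains_subset_of(self, query):
        return self._subset(sorted(set(query)), 0)

    def _subset(self, query, start):
        if self.terminal:
            return True          # a complete stored set has been matched
        for i in range(start, len(query)):
            child = self.children.get(query[i])
            if child and child._subset(query, i + 1):
                return True
        return False

index = SetTrie()
index.insert({1, 3})
index.insert({2, 5, 7})
print(index.contains_subset_of({1, 3, 4}))  # True: {1, 3} is indexed
print(index.contains_subset_of({4, 5}))     # False
```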
{"title":"Indexing Discrete Sets in a Label Setting Algorithm for Solving the Elementary Shortest Path Problem with Resource Constraints","authors":"M. Polnik, A. Riccardi","doi":"10.1109/CEC.2018.8478414","DOIUrl":"https://doi.org/10.1109/CEC.2018.8478414","url":null,"abstract":"Stopping exploration of the search space regions that can be proven to contain only inferior solutions is an important acceleration technique in optimization algorithms. This study is focused on the utility of trie-based data structures for indexing discrete sets that allow to detect such a state faster. An empirical evaluation is performed in the context of index operations executed by a label setting algorithm for solving the Elementary Shortest Path Problem with Resource Constraints. Numerical simulations are run to compare a trie with a HAT-trie, a variant of a trie, which is considered as the fastest in-memory data structure for storing text in sorted order, further optimized for efficient use of cache in modern processors. Results indicate that a HAT-trie is better suited for indexing sparse multi dimensional data, such as sets with high cardinality, offering superior performance at a lower memory footprint. Therefore, HAT-tries remain practical when tries reach their scalability limits due to an expensive memory allocation pattern. Authors leave a final note on comparing and reporting credible time benchmarks for the Elementary Shortest Path Problem with Resource Constraints.","PeriodicalId":212677,"journal":{"name":"2018 IEEE Congress on Evolutionary Computation (CEC)","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128772452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid Sampling Evolution Strategy for Solving Single Objective Bound Constrained Problems
Pub Date: 2018-07-01 | DOI: 10.1109/CEC.2018.8477908
Geng Zhang, Yuhui Shi
This paper proposes an evolution strategy (ES) called the hybrid sampling evolution strategy (HS-ES), which combines the covariance matrix adaptation evolution strategy (CMA-ES) with a univariate sampling method. Although univariate sampling has been widely regarded as suitable only for separable problems, our analysis and experiments show that it is in fact very effective for solving multimodal nonseparable problems. Since univariate sampling complements CMA-ES, which has clear advantages on unimodal nonseparable problems, the proposed HS-ES exploits both algorithms to improve its search performance. Experimental results on the CEC 2018 benchmark demonstrate the effectiveness of the proposed HS-ES.
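A sketch of the univariate-sampling component only, assuming offspring are drawn dimension by dimension from a Gaussian fitted to an elite set (no covariance, in contrast to CMA-ES); how HS-ES allocates the evaluation budget between this component and CMA-ES is not reproduced here.

```python
import random

def univariate_sampling_step(elite, n_offspring, sigma_floor=1e-8):
    """Generate offspring by sampling every dimension independently from a
    Gaussian whose mean and standard deviation come from the elite solutions."""
    dim = len(elite[0])
    means = [sum(e[i] for e in elite) / len(elite) for i in range(dim)]
    stds = [max((sum((e[i] - means[i]) ** 2 for e in elite) / len(elite)) ** 0.5, sigma_floor)
            for i in range(dim)]
    return [[random.gauss(means[i], stds[i]) for i in range(dim)]
            for _ in range(n_offspring)]

# Example: three elite points in 2-D produce ten new candidates
elite = [[0.9, -1.1], [1.1, -0.9], [1.0, -1.0]]
print(univariate_sampling_step(elite, 10)[0])
```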
{"title":"Hybrid Sampling Evolution Strategy for Solving Single Objective Bound Constrained Problems","authors":"Geng Zhang, Yuhui Shi","doi":"10.1109/CEC.2018.8477908","DOIUrl":"https://doi.org/10.1109/CEC.2018.8477908","url":null,"abstract":"This paper proposes an evolution strategy (ES) algorithm called hybrid sampling-evolution strategy (HS-ES) that combines the covariance matrix adaptation-evolution strategy (CMA-ES) and univariate sampling method. In spite that the univariate sampling has been widely thought as a method only to separable problems, the analysis and experimental tests show that it is actually very effective for solving multimodal nonseparable problems. As the univariate sampling is a complementary algorithm to the CMA-ES which has obvious advantages for solving unimodal nonseparable problems, the proposed HS-ES tries to take advantages of these two algorithms to improve its searching performance. Experimental results on CEC-2018 demonstrate the effectiveness of the proposed HS-ES.","PeriodicalId":212677,"journal":{"name":"2018 IEEE Congress on Evolutionary Computation (CEC)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129108283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Confidence Measures for Carbon-Nanotube / Liquid Crystals Classifiers
Pub Date: 2018-07-01 | DOI: 10.1109/CEC.2018.8477779
E. Vissol-Gaudin, A. Kotsialos, C. Groves, C. Pearson, D. Zeze, M. Petty, N. A. Moubayed
This paper presents a performance analysis of single-walled-carbon-nanotube / liquid crystal classifiers produced by evolution in materio. A new confidence measure is proposed that differs from the statistical tools commonly used to evaluate classifier performance in that it is based on physical quantities extracted from the composite and related to its state. Using this measure, we confirm that in an untrained state, i.e., before being subjected to an algorithm-controlled evolution, the carbon-nanotube-based composites classify data at random. The training, or evolution, process brings these composites into a state where the classification is no longer random. Instead, the classifiers generalise well to unseen data and the classification accuracy remains stable across tests. The confidence associated with the resulting classifiers' accuracy is relatively high at the class boundaries, which is consistent with the problem formulation.
{"title":"Confidence Measures for Carbon-Nanotube / Liquid Crystals Classifiers","authors":"E. Vissol-Gaudin, A. Kotsialos, C. Groves, C. Pearson, D. Zeze, M. Petty, N. A. Moubayed","doi":"10.1109/CEC.2018.8477779","DOIUrl":"https://doi.org/10.1109/CEC.2018.8477779","url":null,"abstract":"This paper focuses on a performance analysis of single-walled-carbon-nanotube / liquid crystal classifiers produced by evolution in materio. A new confidence measure is proposed in this paper. It is different from statistical tools commonly used to evaluate the performance of classifiers in that it is based on physical quantities extracted from the composite and related to its state. Using this measure, it is confirmed that in an untrained state, ie: before being subjected to an algorithm-controlled evolution, the carbon-nanotube-based composites classify data at random. The training, or evolution, process brings these composites into a state where the classification is no longer random. Instead, the classifiers generalise well to unseen data and the classification accuracy remains stable across tests. The confidence measure associated with the resulting classifier's accuracy is relatively high at the classes' boundaries, which is consistent with the problem formulation.","PeriodicalId":212677,"journal":{"name":"2018 IEEE Congress on Evolutionary Computation (CEC)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115847388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}