Dynamic Population Structures-Based Differential Evolution Algorithm
Jiaru Yang;Kaiyu Wang;Yirui Wang;Jiahai Wang;Zhenyu Lei;Shangce Gao
Pub Date : 2024-03-08 DOI: 10.1109/TETCI.2024.3367809
The coordination of population structure is the foundation for the effective functioning of evolutionary algorithms. An efficient population evolution structure can guide individuals toward successful and robust exploitative and exploratory behaviors. However, due to the black-box nature of the search process, it is challenging to assess the current state of the population and apply targeted measures. In this paper, we propose a dynamic population structures-based differential evolution algorithm (DPSDE) to uncover the real-time state of the population during continuous optimization. According to the population's exploitation and exploration state, we introduce four structural modules to address premature convergence and search stagnation. To utilize these modules effectively, we propose a real-time discernment mechanism that judges the population's current state. Based on this feedback, suitable structural modules are dynamically invoked, ensuring that the population undergoes continuous and beneficial evolution and ultimately explores the optimal population structure. Comparative results against numerous cutting-edge algorithms on the IEEE Congress on Evolutionary Computation (CEC) 2017 benchmark functions and the CEC 2011 real-world problems verify the superiority of DPSDE. Furthermore, parameters, the population state, and an ablation study of the modules are discussed.
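The dynamic invocation of structural modules that DPSDE describes can be sketched as a monitored differential evolution loop. The discernment rule and the single restart module below are illustrative stand-ins (the paper's actual mechanism uses four structural modules whose details are not given here), and `sphere` is a toy objective in place of the CEC 2017 benchmarks:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Toy objective standing in for the CEC 2017 benchmark functions.
    return float(np.sum(x ** 2))

def de_step(pop, fit, f_obj, F=0.5, CR=0.9):
    """One DE/rand/1/bin generation with greedy selection."""
    n, d = pop.shape
    for i in range(n):
        a, b, c = pop[rng.choice(n, 3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True  # guarantee at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        ft = f_obj(trial)
        if ft <= fit[i]:
            pop[i], fit[i] = trial, ft
    return pop, fit

def discern_state(best_history):
    """Hypothetical discernment rule: flag stagnation when the best
    fitness has not improved over the recent window."""
    if len(best_history) < 5:
        return "exploring"
    recent = best_history[-5:]
    return "stagnating" if recent[0] - recent[-1] < 1e-12 else "exploring"

def restart_module(pop, fit, f_obj, frac=0.3):
    """Hypothetical structural module: reinitialize the worst fraction
    of the population to escape stagnation."""
    k = max(1, int(frac * len(pop)))
    worst = np.argsort(fit)[-k:]
    pop[worst] = rng.uniform(-5, 5, (k, pop.shape[1]))
    fit[worst] = [f_obj(x) for x in pop[worst]]
    return pop, fit

def dpsde_like(f_obj, dim=10, n_pop=30, gens=200):
    pop = rng.uniform(-5, 5, (n_pop, dim))
    fit = np.array([f_obj(x) for x in pop])
    history = []
    for _ in range(gens):
        pop, fit = de_step(pop, fit, f_obj)
        history.append(fit.min())
        if discern_state(history) == "stagnating":
            pop, fit = restart_module(pop, fit, f_obj)
    return fit.min()
```

The point of the sketch is the control flow: state discernment runs every generation, and a corrective module is invoked only when the feedback indicates the population needs it.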
Dendritic Neural Network: A Novel Extension of Dendritic Neuron Model
Cheng Tang;Junkai Ji;Yuki Todo;Atsushi Shimada;Weiping Ding;Akimasa Hirata
Pub Date : 2024-03-08 DOI: 10.1109/TETCI.2024.3367819
The conventional dendritic neuron model (DNM) is a single-neuron model inspired by biological dendritic neurons that has been applied successfully in various fields. However, an increasing number of input features leads to inefficient learning and vanishing-gradient problems in the DNM. Thus, the DNM struggles with more complex tasks, including multiclass classification and multivariate time-series forecasting. In this study, we extend the conventional DNM to overcome these limitations. In the proposed dendritic neural network (DNN), the flexibility of both synapses and dendritic branches is considered and formulated, which improves the model's nonlinear capability on high-dimensional problems. Multiple output layers are then stacked to accommodate the various loss functions of complex tasks, and a dropout mechanism is implemented to strike a better balance between underfitting and overfitting, which enhances the network's generalizability. The performance and computational efficiency of the proposed DNN were verified against state-of-the-art machine learning algorithms on 10 multiclass classification and 2 high-dimensional binary classification datasets. The experimental results demonstrate that the proposed DNN is a promising and practical neural network architecture.
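For context, the single-neuron baseline that the DNN extends has a well-known layered form: a sigmoid synaptic layer, a multiplicative dendritic layer, an additive membrane layer, and a sigmoid soma. A minimal forward pass under that standard formulation might look as follows; the parameter values are illustrative, and the proposed DNN's extensions (flexible branches, stacked output layers, dropout) are not reproduced here:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dnm_forward(x, w, q, k=5.0, ks=5.0, qs=0.5):
    """Forward pass of a conventional dendritic neuron model.

    x    : (d,) input features
    w, q : (m, d) synaptic weights and thresholds for m dendritic branches
    k, ks, qs : illustrative steepness/threshold constants
    """
    # Synaptic layer: each connection squashes one feature through a sigmoid.
    Y = sigmoid(k * (w * x - q))   # shape (m, d)
    # Dendritic layer: multiplicative interaction of synapses on each branch.
    Z = np.prod(Y, axis=1)         # shape (m,)
    # Membrane layer: branch outputs are summed.
    V = np.sum(Z)
    # Soma layer: a final sigmoid produces the neuron's output in (0, 1).
    return sigmoid(ks * (V - qs))
```

The multiplicative dendritic layer is what gives the model its nonlinearity, and it is also why a growing feature dimension `d` shrinks the product toward zero, consistent with the vanishing-gradient issue the abstract describes.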
Ensemble Learning Through Evolutionary Multitasking: A Formulation and Case Study
Rung-Tzuo Liaw;Yu-Wei Wen
Pub Date : 2024-03-08 DOI: 10.1109/TETCI.2024.3369949
Evolutionary machine learning has drawn much attention to solving data-driven learning problems over the past decades, and classification is a major branch of such problems. To improve the quality of the obtained classifier, ensembling is a simple yet powerful strategy. However, gathering classifiers for an ensemble requires multiple runs of the learning process, which incurs additional evaluation cost on the data. This study proposes an innovative framework for ensemble learning through evolutionary multitasking, namely evolutionary multitasking for ensemble learning (EMTEL). The EMTEL has four main features. First, it formulates a classification problem as a dynamic multitask optimization problem. Second, it utilizes evolutionary multitasking to resolve the dynamic multitask optimization problem, achieving better convergence through the synergy of common properties hidden in the tasks. Third, it incorporates evolutionary instance selection to save evaluation cost. Finally, it formulates the ensemble learning problem as a numerical optimization problem and proposes an online ensemble aggregation approach that simultaneously selects appropriate ensemble candidates from the learning history and optimizes ensemble weights for aggregating predictions. A case study is investigated by integrating two state-of-the-art methods for evolutionary multitasking and evolutionary instance selection, respectively: the symbiosis in biocoenosis optimization and cooperative evolutionary learning and instance selection. For online ensemble aggregation, this study adopts the well-known covariance matrix adaptation evolution strategy. Experiments validate the effectiveness of the EMTEL over conventional and advanced evolutionary machine learning algorithms, including genetic programming, self-learning gene expression programming, and multi-dimensional genetic programming. Experimental results show that the proposed framework improves on state-of-the-art methods, with gains in multiclass classification quality ranging from 8.48% to 56.35% in macro F-score. For convergence speed, the speedups achieved by the proposed framework range from 7.85 to 100.53 on multiclass classification.
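The final feature, casting ensemble aggregation as numerical optimization over prediction weights, can be sketched compactly. EMTEL optimizes the weights with CMA-ES; to keep the sketch dependency-free, a simple (1+1) evolution strategy over softmax-parameterized weights stands in for it, and the synthetic shapes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ensemble_accuracy(weights, probs, labels):
    """Accuracy of a weighted average of candidate class-probability
    predictions. probs has shape (n_candidates, n_samples, n_classes)."""
    mix = np.tensordot(weights, probs, axes=1)   # (n_samples, n_classes)
    return float(np.mean(mix.argmax(axis=1) == labels))

def optimize_weights(probs, labels, iters=300, sigma=0.3):
    """Numerically optimize ensemble weights. EMTEL uses CMA-ES for this
    step; a (1+1) evolution strategy is used here as a minimal stand-in."""
    z = np.zeros(probs.shape[0])                 # start from uniform weights
    best = ensemble_accuracy(softmax(z), probs, labels)
    for _ in range(iters):
        cand = z + sigma * rng.standard_normal(z.shape)
        acc = ensemble_accuracy(softmax(cand), probs, labels)
        if acc >= best:                          # greedy (1+1)-ES acceptance
            z, best = cand, acc
    return softmax(z), best
```

Because the weights live on a simplex (via softmax), the optimizer can effectively deselect a poor candidate by driving its weight toward zero, which is how weight optimization doubles as candidate selection from the learning history.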
Pub Date : 2024-03-07 DOI: 10.1109/TETCI.2024.3369629
Songbai Liu;Jun Li;Qiuzhen Lin;Ye Tian;Jianqiang Li;Kay Chen Tan
Addressing the challenge of efficiently handling high-dimensional search spaces when solving large-scale multiobjective optimization problems (LMOPs) has become an emerging research topic in evolutionary computation. In response, this paper proposes a new evolutionary optimizer with a tactic of autoencoder-based problem transformation (APT). The APT creates an autoencoder that learns the relative importance of each variable by competitively reconstructing the dominated and non-dominated solutions. Using the learned importance, all variables are divided into multiple groups without consuming any function evaluations. The number of groups increases dynamically according to the population's evolutionary status. Each variable group has an associated autoencoder that transforms the search space into an adaptable small-scale representation space. Thus, the search process occurs within these dynamic representation spaces, leading to effective production of offspring solutions. To assess the effectiveness of APT, extensive testing is performed on benchmark suites and real-world LMOPs, encompassing variable sizes ranging from 10^3