Pub Date: 2024-09-18 | DOI: 10.1109/TETCI.2024.3446458
Jianfeng Lu;Hangjian Zhang;Pan Zhou;Xiong Wang;Chen Wang;Dapeng Oliver Wu
A long-standing problem in Federated Learning (FL) is that heterogeneous clients often have diverse gains from, and requirements for, the trained model, while their contributions are hard to evaluate because training is privacy-preserving. Existing works mainly rely on a single-dimension metric to compute clients' contributions as aggregation weights, which may damage social fairness, discouraging the cooperation of worse-off clients and destabilizing revenue. To tackle this issue, we propose a novel incentive mechanism named FedLaw to effectively evaluate clients' contributions and assign aggregation weights accordingly. Specifically, we reuse the local model updates and model the contribution evaluation process as a convex coalition game among multiple players with a non-empty core. By deriving a closed-form expression of the Shapley value, we solve the game core in quadratic time. Moreover, we theoretically prove that FedLaw guarantees individual fairness, coalition stability, computational efficiency, collective rationality, redundancy, symmetry, additivity, strict desirability, and individual monotonicity, and we show that FedLaw achieves a constant convergence bound. Extensive experiments on four real-world datasets validate the superiority of FedLaw over five state-of-the-art baselines in terms of model aggregation, fairness, and time overhead. Experimental results show that FedLaw reduces the computation time of contribution evaluation by a factor of about 12 and improves global model performance by about 2% while ensuring fairness.
{"title":"FedLaw: Value-Aware Federated Learning With Individual Fairness and Coalition Stability","authors":"Jianfeng Lu;Hangjian Zhang;Pan Zhou;Xiong Wang;Chen Wang;Dapeng Oliver Wu","doi":"10.1109/TETCI.2024.3446458","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3446458","url":null,"abstract":"A long-standing problem remains with the heterogeneous clients in Federated Learning (FL), who often have diverse gains and requirements for the trained model, while their contributions are hard to evaluate due to the privacy-preserving training. Existing works mainly rely on single-dimension metric to calculate clients' contributions as aggregation weights, which however may damage the social fairness, thus discouraging the cooperation willingness of worse-off clients and causing the revenue instability. To tackle this issue, we propose a novel incentive mechanism named <italic>FedLaw</i> to effectively evaluate clients' contributions and further assign aggregation weights. Specifically, we reuse the local model updates and model the contribution evaluation process as a convex coalition game among multiple players with a non-empty core. By deriving a closed-form expression of the Shapley value, we solve the game core in quadratic time. Moreover, we theoretically prove that <italic>FedLaw</i> guarantees <italic>individual fairness</i>, <italic>coalition stability</i>, <italic>computational efficiency</i>, <italic>collective rationality</i>, <italic>redundancy</i>, <italic>symmetry</i>, <italic>additivity</i>, <italic>strict desirability</i>, and <italic>individual monotonicity</i>, and also show that <italic>FedLaw</i> can achieve a constant convergence bound. Extensive experiments on four real-world datasets validate the superiority of <italic>FedLaw</i> in terms of model aggregation, fairness, and time overhead compared to the state-of-the-art five baselines. 
Experimental results show that <italic>FedLaw</i> is able to reduce the computation time of contribution evaluation by about 12 times and improve the global model performance by about 2% while ensuring fairness.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 1","pages":"1049-1062"},"PeriodicalIF":5.3,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143360994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
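The abstract reports a closed-form Shapley expression that makes contribution evaluation quadratic-time; that closed form is not given here, so the sketch below computes exact Shapley values the generic (exponential) way for a toy coalition game. The characteristic function `v` is a hypothetical additive "accuracy gain", not FedLaw's actual utility over reused local model updates.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values: phi_i = sum over coalitions S not containing i of
    |S|!(n-|S|-1)!/n! * (v(S + {i}) - v(S))."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phi[i] = total
    return phi

# Hypothetical per-client "accuracy gains"; the game is additive, so each
# client's Shapley value should equal its own gain.
gains = {"A": 3.0, "B": 1.0, "C": 2.0}
def v(S):
    return sum(gains[p] for p in S)

phi = shapley_values(list(gains), v)
```

For an additive game the Shapley value recovers each player's individual gain, and the values sum to the grand-coalition worth (collective rationality), two of the axioms the paper proves for FedLaw.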
Pub Date: 2024-09-16 | DOI: 10.1109/TETCI.2024.3442867
Shuijia Li;Rui Wang;Wenyin Gong;Zuowen Liao;Ling Wang
A nonlinear equation system often has multiple roots, and finding all of them simultaneously in a single run remains challenging in numerical optimization. Although many methods have been proposed for this problem, few utilize two algorithms with different characteristics to improve the root rate. To locate as many roots of nonlinear equation systems as possible, this paper develops a co-evolutionary dual niching differential evolution with information sharing and migration. Specifically, it first employs a dual niching algorithm in which neighborhood-based crowding and neighborhood-based speciation differential evolution co-evolve and search concurrently; second, a parameter adaptation strategy is employed to improve the capability of the dual algorithm; finally, the dual niching differential evolution adaptively performs information sharing and migration according to evolutionary experience, thereby balancing population diversity and convergence. To investigate the performance of the proposed approach, thirty nonlinear equation systems with diverse characteristics and a more complex test set are used as the test suite. A comprehensive comparison shows that the proposed method performs well in terms of root rate and success rate when compared with other advanced algorithms.
{"title":"A Co-Evolutionary Dual Niching Differential Evolution Algorithm for Nonlinear Equation Systems Optimization","authors":"Shuijia Li;Rui Wang;Wenyin Gong;Zuowen Liao;Ling Wang","doi":"10.1109/TETCI.2024.3442867","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3442867","url":null,"abstract":"A nonlinear equation system often has multiple roots, while finding all roots simultaneously in one run remains a challenging work in numerical optimization. Although many methods have been proposed to solve the problem, few have utilised two algorithms with different characteristics to improve the root rate. To locate as many roots as possible of nonlinear equation systems, in this paper, a co-evolutionary dual niching differential evolution with information sharing and migration is developed. To be specific, firstly it utilizes a dual niching algorithm namely neighborhood-based crowding/speciation differential evolution co-evolutionary to search concurrently; secondly, a parameter adaptation strategy is employed to ameliorate the capability of the dual algorithm; finally, the dual niching differential evolution adaptively performs information sharing and migration according to the evolutionary experience, thereby balancing the population diversity and convergence. To investigate the performance of the proposed approach, thirty nonlinear equation systems with diverse characteristics and a more complex test set are used as the test suite. 
A comprehensive comparison shows that the proposed method performs well in terms of root rate and success rate when compared with other advanced algorithms.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 1","pages":"109-118"},"PeriodicalIF":5.3,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143107208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
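To make the niching idea concrete, here is a minimal crowding differential evolution on a two-equation system with two roots. This is plain crowding DE (trial competes with its nearest population member, letting niches form around different roots), not the paper's dual co-evolutionary algorithm; the test system, population size, and DE parameters are illustrative choices.

```python
import numpy as np

def residual(p):
    # Nonlinear equation system: x^2 + y^2 = 1 and x = y,
    # with roots at (sqrt(2)/2, sqrt(2)/2) and (-sqrt(2)/2, -sqrt(2)/2).
    x, y = p
    return np.hypot(x * x + y * y - 1.0, x - y)

def crowding_de(pop_size=30, gens=150, F=0.5, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-2.0, 2.0, size=(pop_size, 2))
    fit = np.array([residual(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + F * (b - c)
            cross = rng.random(2) < CR
            cross[rng.integers(2)] = True
            trial = np.where(cross, mutant, pop[i])
            # Crowding: the trial competes with its NEAREST individual,
            # so separate niches can converge to separate roots in one run.
            j = int(np.argmin(np.linalg.norm(pop - trial, axis=1)))
            f_trial = residual(trial)
            if f_trial < fit[j]:
                pop[j], fit[j] = trial, f_trial
    return pop, fit

pop, fit = crowding_de()
```

Because replacement is local, the population does not collapse onto a single root, which is the property the dual niching design exploits.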
Incomplete Multi-View Clustering (IMVC) offers a way to analyze incomplete data by inferring unobserved and missing data points through completion techniques. However, existing IMVC methods, which predominantly depend on either data completion or similarity matrix completion, fail to uncover the inherent geometric structure and potential complementary information within and across views, so incomplete similarity matrices further tear apart the connections between views. To address this problem, we propose Dual Completion Learning for Incomplete Multi-view Clustering (DCIMC), which carefully designs data completion and similarity tensor completion and fuses both into a unified model to effectively recover missing samples and similarities. Concretely, in data completion, DCIMC utilizes subspace clustering to recover the missing and unknown instances directly. Meanwhile, in similarity tensor completion, DCIMC introduces tensor completion to better exploit the high-order complementary information in multi-view data. By fusing the dual completions, the missing and complementary information in each completion are fully explored, reciprocally enhancing one another to boost clustering accuracy. Experimental results on various datasets show the effectiveness of the proposed DCIMC. Moreover, DCIMC also achieves superior or comparable performance in an extended comparison with recent deep learning-based multi-view clustering algorithms.
{"title":"Dual Completion Learning for Incomplete Multi-View Clustering","authors":"Qiangqiang Shen;Xuanqi Zhang;Shuqin Wang;Yuanman Li;Yongsheng Liang;Yongyong Chen","doi":"10.1109/TETCI.2024.3451562","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3451562","url":null,"abstract":"Incomplete Multi-View Clustering (IMVC) offers a way to analyze incomplete data, facilitating the inference of unobserved and missing data points through completion techniques. However, existing IMVC methods, predominantly depending on either data completion or similarity matrix completion, failed to uncover the inherent geometric structure and potential complementary information between intra- and inter-views, causing incomplete similarity matrices to further tear apart the connections between views. To address this problem, we propose Dual Completion Learning for Incomplete Multi-view Clustering (DCIMC), which elaborately designs data completion and similarity tensor completion, and fuses both of them into a unified model to effectively recover the missing samples and similarities. Concretely, in data completion, DCIMC utilizes subspace clustering to recover the missing and unknown instances directly. Meanwhile, in similarity tensor completion, DCIMC introduces the idea of tensor completion to make better use of the high-order complementary information from multi-view data. By fusing the dual completions, missing information and complementary information in each completion are fully explored by each other, reciprocally enhancing one another to boost the accuracy of our clustering algorithm. Experimental results on various datasets show the effectiveness of the proposed DCIMC. 
Moreover, our DCIMC also achieved superior or comparable performance in an extended comparison with recent deep learning-based multi-view clustering algorithms.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 1","pages":"455-467"},"PeriodicalIF":5.3,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143361375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
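As a scaled-down illustration of similarity completion, the sketch below imputes missing entries of a single low-rank similarity matrix by iterating truncated-SVD fills (hard-impute style). DCIMC actually completes a stacked similarity *tensor* within a unified objective; this matrix toy, the rank-1 ground truth, and the iteration count are all simplifying assumptions.

```python
import numpy as np

def lowrank_impute(M, mask, rank=1, iters=200):
    """Iterative low-rank imputation: repeatedly refill the unobserved entries
    from a truncated SVD of the current estimate."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, M, L)  # keep observed entries, update missing ones
    return X

u = np.array([1.0, 2.0, 3.0])
M = np.outer(u, u)                   # a rank-1 "similarity" matrix
mask = np.ones_like(M, dtype=bool)
mask[0, 2] = mask[2, 0] = False      # two unobserved pairwise similarities
X = lowrank_impute(M, mask)
```

The observed entries pin down the rank-1 factorization, so the missing similarity M[0,2] = 3 is recovered; tensor completion generalizes this by sharing low-rank structure across views.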
Pub Date: 2024-09-09 | DOI: 10.1109/TETCI.2024.3449911
Jing Xu;Wentao Shi;Pan Gao;Qizhu Li;Zhengwei Wang
Recent work on semantic segmentation has focused heavily on designing and integrating transformer-based encoders, while less attention has been given to transformer-based decoders. We emphasize that the decoder stage is as vital as the encoder for superior segmentation performance: it disentangles and refines high-level cues, enabling precise object boundary delineation at the pixel level. In this paper, we introduce a novel transformer-based decoder called MUSTER, which integrates seamlessly with hierarchical encoders and consistently delivers high-quality segmentation results regardless of the encoder architecture. We also present a variant of MUSTER that reduces FLOPs while maintaining performance. MUSTER incorporates carefully designed multi-head skip attention (MSKA) units and introduces novel upsampling operations. The MSKA units fuse multi-scale features from the encoder and decoder, facilitating comprehensive information integration. The upsampling operation leverages encoder features to enhance object localization and surpasses traditional upsampling methods, improving mIoU (mean Intersection over Union) by 0.4% to 3.2%. On the challenging ADE20K dataset, our best model achieves a single-scale mIoU of 50.23 and a multi-scale mIoU of 51.88, on par with the current state-of-the-art model. Remarkably, we achieve this while reducing the number of FLOPs by 61.3%.
{"title":"MUSTER: A Multi-Scale Transformer-Based Decoder for Semantic Segmentation","authors":"Jing Xu;Wentao Shi;Pan Gao;Qizhu Li;Zhengwei Wang","doi":"10.1109/TETCI.2024.3449911","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3449911","url":null,"abstract":"In recent works on semantic segmentation, there has been a significant focus on designing and integrating transformer-based encoders. However, less attention has been given to transformer-based decoders. We emphasize that the decoder stage is equally vital as the encoder in achieving superior segmentation performance. It disentangles and refines high-level cues, enabling precise object boundary delineation at the pixel level. In this paper, we introduce a novel transformer-based decoder called MUSTER, which seamlessly integrates with hierarchical encoders and consistently delivers high-quality segmentation results, regardless of the encoder architecture. Furthermore, we present a variant of MUSTER that reduces FLOPS while maintaining performance. MUSTER incorporates carefully designed multi-head skip attention (MSKA) units and introduces innovative upsampling operations. The MSKA units enable the fusion of multi-scale features from the encoder and decoder, facilitating comprehensive information integration. The upsampling operation leverages encoder features to enhance object localization and surpasses traditional upsampling methods, improving mIoU (mean Intersection over Union) by 0.4% to 3.2%. On the challenging ADE20K dataset, our best model achieves a single-scale mIoU of 50.23 and a multi-scale mIoU of 51.88, which is on-par with the current state-of-the-art model. 
Remarkably, we achieve this while significantly reducing the number of FLOPs by 61.3%.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 1","pages":"202-212"},"PeriodicalIF":5.3,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143107209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
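The core fusion step inside a skip-attention unit can be sketched as scaled dot-product cross-attention, where decoder tokens query encoder skip features. This single-head numpy version omits the learned projections, multiple heads, and upsampling that the actual MSKA unit contains; token counts and dimensions are arbitrary.

```python
import numpy as np

def cross_attention(queries, keys_values, d_model):
    """Single-head scaled dot-product cross-attention: decoder tokens (queries)
    attend over encoder skip features (keys and values)."""
    scores = queries @ keys_values.T / np.sqrt(d_model)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over encoder tokens
    return weights @ keys_values

rng = np.random.default_rng(0)
dec = rng.standard_normal((16, 64))  # 16 decoder tokens (e.g. a 4x4 feature map)
enc = rng.standard_normal((64, 64))  # 64 encoder tokens from a skip connection
fused = cross_attention(dec, enc, d_model=64)
```

Each decoder token becomes a convex combination of encoder features, which is how multi-scale skip information is folded back into the decoding path.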
Pub Date: 2024-09-09 | DOI: 10.1109/TETCI.2024.3449881
Xiaoyong Tang;Juan Zhang;Ronghui Cao;Wenzheng Liu
In the new electricity market, accurate electricity demand prediction can yield high profits. However, electricity consumption data exhibit nonlinearity, high volatility, and susceptibility to many factors, and most existing prediction schemes inadequately account for these traits, resulting in weak performance. In view of this, we propose a collaborative multi-component optimization model (MCO-BHPSF) for high-accuracy electricity demand prediction. In this model, the original data are first decomposed into linear trend components and nonlinear residual components using a moving average filter. Then, an enhanced Pattern Sequence-based Forecasting (PSF) algorithm, which effectively captures data patterns with obvious changes, forecasts the trend component, while an embedded LightGBM forecasts the residual components. We further refine the prediction with an error optimization scheme based on online sequential extreme learning machines to reduce prediction errors. Extensive experiments on four real-world datasets demonstrate that the proposed MCO-BHPSF model outperforms four advanced baseline models. In day-ahead prediction, our model is on average 31% better than PSF baselines; for long-term prediction, it achieves an average improvement of 37% over PSF baselines.
{"title":"A Collaborative Multi-Component Optimization Model Based on Pattern Sequence Similarity for Electricity Demand Prediction","authors":"Xiaoyong Tang;Juan Zhang;Ronghui Cao;Wenzheng Liu","doi":"10.1109/TETCI.2024.3449881","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3449881","url":null,"abstract":"In the new electricity market, the accurate electricity demand prediction can make high possible profit. However, electricity consumption data exhibits nonlinearity, high volatility, and susceptibility to various factors. Most existing prediction schemes inadequately account for these traits, resulting in weak performance. In view of this, we propose a collaborative multi-component optimization model (MCO-BHPSF) to achieve high accuracy electricity demand prediction. For this model, the original data is first decomposed into linear trend components and nonlinear residual components using the Moving Average filter. Then, the enhanced Pattern Sequence-based Forecasting (PSF) algorithm that can effectively capture data patterns with obvious changes is used to accurately forecast the trend component and the embedded LightGBM for residual components. We further optimize the prediction results by using an error optimization scheme based on online sequence extreme learning machines to reduce prediction errors. The results of extensive experiments on four real-world datasets demonstrate that our proposed MCO-BHPSF model outperforms four advanced baseline models. In day-ahead prediction, our model is on average 31% better than PSF baselines. 
For long-term prediction, our proposed MCO-BHPSF model has an average improvement rate of 37% compared to PSF baselines.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 1","pages":"119-130"},"PeriodicalIF":5.3,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
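The first stage of the pipeline, splitting a demand series into a moving-average trend and a residual, can be sketched directly (the subsequent PSF and LightGBM forecasters are not shown; the window length and edge padding are illustrative choices):

```python
import numpy as np

def decompose(series, window=3):
    """Split a demand series into a moving-average trend and the residual.
    In MCO-BHPSF the trend goes to the PSF forecaster and the residual to
    LightGBM; here we only show the decomposition itself."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(series, pad, mode="edge")  # edge-pad so trend keeps length
    trend = np.convolve(padded, kernel, mode="valid")
    return trend, series - trend

demand = np.array([10.0, 12.0, 11.0, 15.0, 14.0, 18.0, 17.0])
trend, resid = decompose(demand)
```

By construction trend + residual reproduces the original series exactly, so the two component forecasts can simply be summed at prediction time.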
Pub Date: 2024-09-09 | DOI: 10.1109/TETCI.2024.3451335
Wenjing Li;Can Chen;Junfei Qiao
Well-documented evidence shows that integrating the small-world (SW) property into the design of feedforward neural networks improves network performance. To achieve structural self-adaptation of feedforward small-world neural networks (FSWNNs), this paper proposes a self-organizing FSWNN, namely SOFSWNN, based on a hub-based self-organizing algorithm. First, an FSWNN is constructed according to the Watts-Strogatz rule. Drawing on graph theory, the hub centrality of each hidden neuron is calculated and used as a measure of its importance. The self-organizing algorithm splits important neurons and merges unimportant neurons with their correlated neighbors, and its convergence is guaranteed theoretically. Extensive experiments validate the effectiveness and superiority of SOFSWNN for both classification and regression problems. SOFSWNN achieves improved generalization through the SW property and the self-organizing structure. Moreover, the hub-based self-organizing algorithm adaptively determines a compact and stable network structure even from different initial structures.
{"title":"A Hub-Based Self-Organizing Algorithm for Feedforward Small-World Neural Network","authors":"Wenjing Li;Can Chen;Junfei Qiao","doi":"10.1109/TETCI.2024.3451335","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3451335","url":null,"abstract":"By integrating the small-world (SW) property into the design of feedforward neural networks, the network performance would be improved by well-documented evidence. To achieve the structural self-adaptation of the feedforward small-world neural networks (FSWNNs), a self-organizing FSWNN, namely SOFSWNN, is proposed based on a hub-based self-organizing algorithm in this paper. Firstly, an FSWNN is constructed according to Watts-Strogatz's rule. Derived from the graph theory, the hub centrality is calculated for each hidden neuron and then used as a measurement for its importance. The self-organizing algorithm is designed by splitting important neurons and merging unimportant neurons with their correlated neurons, and the convergence of this algorithm can be guaranteed theoretically. Extensive experiments are conducted to validate the effectiveness and superiority of SOFSWNN for both classification and regression problems. SOFSWNN achieves an improved generalization performance by SW property and the self-organizing structure. 
Besides, the hub-based self-organizing algorithm would determine a compact and stable network structure adaptively even from different initial structure.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 1","pages":"160-175"},"PeriodicalIF":5.3,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143107207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
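A minimal sketch of the split/merge idea: each hidden neuron gets an importance score (here simply its total absolute connection strength, a stand-in for the paper's graph-theoretic hub centrality), the top-scoring neuron is split into two, and the lowest-scoring one is pruned. Splitting duplicates the input column and halves the outgoing weights, which preserves the network function exactly for a linear activation (only approximately otherwise); the pruning step here just deletes rather than merging into correlated neighbors as the paper does.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 6, 2
W_in = rng.standard_normal((n_in, n_hidden))
W_out = rng.standard_normal((n_hidden, n_out))

# Stand-in "hub" score: total absolute connection strength per hidden neuron.
score = np.abs(W_in).sum(axis=0) + np.abs(W_out).sum(axis=1)
split_idx = int(np.argmax(score))   # most important neuron: split in two
merge_idx = int(np.argmin(score))   # least important neuron: remove

# Split: duplicate the hub neuron's input column and give each copy half of
# the original outgoing weights.
W_in2 = np.concatenate([W_in, W_in[:, [split_idx]]], axis=1)
W_out2 = np.concatenate([W_out, W_out[[split_idx]] / 2], axis=0)
W_out2[split_idx] /= 2

# Prune the least important neuron (the paper merges it instead).
W_in3 = np.delete(W_in2, merge_idx, axis=1)
W_out3 = np.delete(W_out2, merge_idx, axis=0)
```

The function-preserving split is what lets the structure grow without disturbing what the network has already learned.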
This paper studies the event-triggered prescribed-time optimal consensus control problem for switched stochastic nonlinear multi-agent systems under switching topologies. Notably, system stability may be affected by changes in the information transmission channels between agents. To surmount this obstacle, we present a reconstruction mechanism that rebuilds the consensus error at each topology-switching instant. Combining optimal control theory with a reinforcement learning strategy, an identifier neural network is utilized to approximate the unknown function, with an updating law independent of the switching duration of the system dynamics. In addition, an event-triggered mechanism is adopted to improve resource utilization. With the aid of the Lyapunov stability principle, sufficient conditions are established ensuring that all signals in the closed-loop system are cooperatively semi-globally uniformly ultimately bounded in probability and that the consensus error converges to a specified interval within a prescribed time. Finally, a simulation example validates the feasibility of the presented control scheme.
{"title":"Prescribed-Time Optimal Consensus for Switched Stochastic Multiagent Systems: Reinforcement Learning Strategy","authors":"Weiwei Guang;Xin Wang;Lihua Tan;Jian Sun;Tingwen Huang","doi":"10.1109/TETCI.2024.3451334","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3451334","url":null,"abstract":"This paper focuses on the event-triggered-based prescribed-time optimal consensus control issue for switched stochastic nonlinear multi–agent systems under switching topologies. Notably, the system stability may be affected owing to the change in information transmission channels between agents. To surmount this obstacle, this paper presents a reconstruction mechanism to rebuild the consensus error at the switching topology instant. Combining optimal control theory and reinforcement learning strategy, the identifier neural network is utilized to approximate the unknown function, with its corresponding updating law being independent of the switching duration of system dynamics. In addition, an event-triggered mechanism is adopted to enhance the efficiency of resource utilization. With the assistance of the Lyapunov stability principle, sufficient conditions are established to ensure that all signals in the closed-loop system are cooperatively semi-globally uniformly ultimately bounded in probability and the consensus error is capable of converging to the specified interval in a prescribed time. 
At last, a simulation example is carried out to validate the feasibility of the presented control scheme.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 1","pages":"75-86"},"PeriodicalIF":5.3,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
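The switching-topology setting can be illustrated with a deterministic toy: agents run the discrete consensus update x ← x − εLx while the communication graph (and hence the Laplacian L) switches periodically between two connected topologies. This strips out the stochastic dynamics, the neural identifier, and the prescribed-time transformation; the two graphs, step size, and switching period are illustrative choices.

```python
import numpy as np

def laplacian(edges, n):
    """Graph Laplacian of an undirected graph on n nodes."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

n = 4
topologies = [laplacian([(0, 1), (1, 2), (2, 3)], n),  # path graph
              laplacian([(0, 2), (1, 3), (0, 3)], n)]  # another connected graph
x = np.array([1.0, -2.0, 0.5, 3.0])
eps = 0.2  # small enough that I - eps*L contracts the disagreement
for k in range(200):
    L = topologies[(k // 10) % 2]  # switch the topology every 10 steps
    x = x - eps * L @ x
spread = x.max() - x.min()
```

Because every active topology is connected, the disagreement shrinks under switching while the average state is preserved, which is the baseline behavior the paper's reconstruction mechanism protects at switching instants.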
Pub Date: 2024-09-09 | DOI: 10.1109/TETCI.2024.3451309
Hu Peng;Zhongtian Luo;Tian Fang;Qingfu Zhang
When evolutionary algorithms run on low-power microprocessors to solve real-world problems, computational effectiveness and limited resources must be handled jointly, particularly in many-objective evolutionary algorithms (MaOEAs). Algorithms with a normal-sized population upset this balance, whereas a micro population can preserve it. To tackle this issue, this paper proposes a micro many-objective evolutionary algorithm with knowledge transfer (μMaOEA). To address the common oversight that knowledge is insufficiently shared between niches, a knowledge-transfer strategy bolsters each unoptimized niche by optimizing adjacent niches, enabling niches to generate better individuals. Meanwhile, a two-stage mechanism based on fuzzy logic is designed to settle the conflict between convergence and diversity in many-objective optimization problems: through efficient fuzzy-logic decision-making, the mechanism maintains different properties of the population at different stages. Comparisons with different MaOEAs and micro multi-objective evolutionary algorithms on the DTLZ, MaF, and WFG benchmark problems show that μMaOEA performs excellently. In addition, simulations on two real-world problems, MPDMP and MLDMP, based on a low-power microprocessor indicate the applicability of μMaOEA to low-power microprocessor optimization.
{"title":"Micro Many-Objective Evolutionary Algorithm With Knowledge Transfer","authors":"Hu Peng;Zhongtian Luo;Tian Fang;Qingfu Zhang","doi":"10.1109/TETCI.2024.3451309","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3451309","url":null,"abstract":"Computational effectiveness and limited resources in evolutionary algorithms are interdependently handled during the working of low-power microprocessors for real-world problems, particularly in many-objective evolutionary algorithms (MaOEAs). In this respect, the balance between them will be broken by evolutionary algorithms with a normal-sized population, but which doesn't include a micro population. To tackle this issue, this paper proposes a micro many-objective evolutionary algorithm with knowledge transfer (<inline-formula><tex-math>$mu$</tex-math></inline-formula>MaOEA). To address the oversight that knowledge is often not considered enough between niches, the knowledge-transfer strategy is proposed to bolster each unoptimized niche through optimizing adjacent niches, which enables niches to generate better individuals. Meanwhile, a two-stage mechanism based on fuzzy logic is designed to settle the conflict between convergence and diversity in many-objective optimization problems. Through efficient fuzzy logic decision-making, the mechanism maintains different properties of the population at different stages. Different MaOEAs and micro multi-objective evolutionary algorithms were compared on benchmark test problems DTLZ, MaF, and WFG, and the results showed that <inline-formula><tex-math>$mu$</tex-math></inline-formula>MaOEA has an excellent performance. In addition, it also conducted simulation on two real-world problems, MPDMP and MLDMP, based on a low-power microprocessor. 
The results indicated the applicability of <inline-formula><tex-math>$mu$</tex-math></inline-formula>MaOEA for low-power microprocessor optimization.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 1","pages":"43-56"},"PeriodicalIF":5.3,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
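The two-stage convergence/diversity trade-off can be sketched as a fuzzy blend of two selection criteria: early in the run a convergence proxy dominates, late in the run a diversity proxy does. The triangular memberships, the proxies (distance to the ideal point, nearest-neighbor distance in objective space), and the blending rule are all hypothetical stand-ins for the paper's fuzzy rule base.

```python
import numpy as np

def dominates(a, b):
    """Pareto dominance for minimization: a dominates b."""
    return np.all(a <= b) and np.any(a < b)

def stage_weights(progress):
    """Hypothetical triangular fuzzy memberships for an 'early' stage
    (convergence first) and a 'late' stage (diversity first)."""
    early = max(0.0, 1.0 - progress / 0.6)
    late = max(0.0, (progress - 0.4) / 0.6)
    s = early + late
    return early / s, late / s

def score(objs, progress):
    # Convergence proxy: distance to the ideal point.
    conv = np.linalg.norm(objs - objs.min(axis=0), axis=1)
    # Diversity proxy: distance to the nearest neighbor (larger is better).
    d = np.linalg.norm(objs[:, None] - objs[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    div = d.min(axis=1)
    w_c, w_d = stage_weights(progress)
    return w_c * conv - w_d * div  # lower is better

objs = np.array([[0.1, 0.9], [0.5, 0.5], [0.9, 0.1], [0.6, 0.6]])
s_mid = score(objs, progress=0.5)
```

At progress 0 the weights are (1, 0) and at progress 1 they are (0, 1), so selection pressure shifts smoothly from convergence to diversity across the run.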
Pub Date: 2024-09-05 | DOI: 10.1109/TETCI.2024.3451566
Lei Zhang;Chaofan Qin;Haipeng Yang;Zishan Xiong;Renzhi Cao;Fan Cheng
Dynamic community detection, which reveals changes in community structure over time, has garnered increasing research attention. While evolutionary clustering methods have proven effective for this task, they tend to favor so-called elite solutions and inadvertently neglect the potential value of non-elite alternatives. Elite solutions ensure population convergence, but their lack of diversity can cause negative population migration when the network changes. In contrast, non-elite solutions can better adapt to a changed network and thus help the algorithm find accurate community structures in the new environment. To this end, we propose a diversified population migration strategy consisting of two stages: solution selection and solution migration. In the first stage, we use elite solutions to ensure convergence and non-elite solutions to maintain diversity and cope with network changes. In the second stage, the migrated solutions are refined using the incremental changes between two consecutive network snapshots. Based on this strategy, we propose a diversified population migration-based multiobjective evolutionary algorithm named DPMOEA, in which new genetic operators utilize the incremental changes between networks to steer the population in the right direction. Experimental results demonstrate that the proposed method outperforms state-of-the-art baseline algorithms and effectively solves the dynamic community detection problem.
"A Diversified Population Migration-Based Multiobjective Evolutionary Algorithm for Dynamic Community Detection," IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 9, no. 1, pp. 145-159.
Pub Date : 2024-09-05DOI: 10.1109/TETCI.2024.3451709
Jiabin Lin;Qi Chen;Bing Xue;Mengjie Zhang
Over the past decades, evolutionary multi-objective algorithms have proven their efficacy in feature selection. Nevertheless, a prevalent approach addresses feature selection tasks in isolation, even when these tasks share common knowledge and interdependencies. In response, the emerging field of evolutionary sequential transfer learning is gaining attention for feature selection. This approach transfers knowledge gleaned by evolutionary algorithms in a source domain and applies it to enhance feature selection outcomes in a target domain. Despite its promising potential to exploit shared insights, adoption of this transfer learning paradigm for feature selection remains limited because existing methods, which learn a mapping between the source and target search spaces, are computationally expensive. This paper introduces a multi-objective feature selection approach grounded in evolutionary sequential transfer learning, designed to tackle interconnected feature selection tasks with overlapping features. Our framework integrates probabilistic models to capture high-order information within feature selection solutions, extracting and preserving knowledge from the source domain without incurring a high computational cost. It also provides a better way to transfer source knowledge when the feature spaces of the source and target domains diverge. We evaluate our proposed method against four prominent single-task feature selection approaches and a cutting-edge evolutionary transfer learning feature selection method. Through empirical evaluation, our proposed approach shows superior performance across the majority of datasets, surpassing the effectiveness of the compared methods.
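The transfer idea, learning a probabilistic model over source-domain feature selection solutions and reusing it on the overlapping features of the target task, can be illustrated with a deliberately simplified model. The paper's model captures high-order information; the sketch below uses only univariate marginals (the per-feature selection frequency), and the names (`learn_feature_marginals`, `transfer_sample`) and the 0.5 prior for non-overlapping features are my own assumptions for illustration.

```python
import random

def learn_feature_marginals(source_solutions):
    """Estimate P(feature selected) from source bit-string solutions.

    A univariate stand-in for a richer probabilistic model: each entry is
    the fraction of source solutions that include that feature.
    """
    n = len(source_solutions[0])
    return [sum(sol[i] for sol in source_solutions) / len(source_solutions)
            for i in range(n)]

def transfer_sample(marginals, shared, target_size, rng=random.Random(0)):
    """Sample a target-domain solution, reusing source marginals where possible.

    shared : dict mapping target feature index -> source feature index
             (the overlapping features of the two tasks)
    Non-overlapping target features fall back to an uninformative 0.5 prior.
    """
    sol = []
    for t in range(target_size):
        p = marginals[shared[t]] if t in shared else 0.5
        sol.append(1 if rng.random() < p else 0)
    return sol

# Source task: 4 evolved solutions over 3 features.
source = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [1, 1, 0]]
m = learn_feature_marginals(source)            # [1.0, 0.75, 0.25]
# Target task has 4 features; its features 0 and 1 overlap with source 0 and 1.
seed = transfer_sample(m, shared={0: 0, 1: 1}, target_size=4)
```

Sampled solutions like `seed` could seed the target population, biasing the search toward features the source task found useful while leaving non-overlapping features unconstrained.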
"Evolutionary Sequential Transfer Learning for Multi-Objective Feature Selection in Classification," IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 9, no. 1, pp. 1019-1033.