IEEE Transactions on Emerging Topics in Computational Intelligence Publication Information
IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 8, no. 6, pp. C2-C2
Pub Date: 2024-11-22 | DOI: 10.1109/TETCI.2024.3501715
IEEE Computational Intelligence Society Information
IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 8, no. 6, pp. C3-C3
Pub Date: 2024-11-22 | DOI: 10.1109/TETCI.2024.3501717
Enhancing Accuracy-Privacy Trade-Off in Differentially Private Split Learning
Ngoc Duy Pham;Khoa T. Phan;Naveen Chilamkurti
IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 9, no. 1, pp. 988-1000
Pub Date: 2024-10-31 | DOI: 10.1109/TETCI.2024.3485723
Split learning (SL) aims to protect user data privacy by splitting a deep model between client and server and keeping private data local. Only processed, or 'smashed', data is transmitted from the clients to the server during the SL process. However, recently proposed model inversion attacks can recover the original data from smashed data. One strategy to strengthen privacy against such attacks is to adopt differential privacy (DP), which safeguards the smashed data at the expense of some accuracy loss. This paper presents the first investigation into the impact on accuracy of training multiple clients in SL under varying privacy requirements. We then propose an approach in which each client reviews the DP noise distributions of the other clients during training to counter the identified accuracy degradation. We also examine how applying DP to different parts of the local model affects the accuracy-privacy trade-off; the findings reveal that injecting noise into the later local layers offers the most favorable balance. Drawing on our insights into the shallower layers, we further propose reducing the size of the smashed data to minimize leakage while maintaining higher accuracy, optimizing the accuracy-privacy trade-off. Smaller smashed data also reduces communication overhead on the client side, mitigating one of the notable drawbacks of SL. Extensive experiments on various datasets demonstrate that our approaches provide an optimal trade-off for incorporating DP into SL, ultimately improving training accuracy for multi-client SL with varying privacy requirements.
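The DP protection of smashed data that the abstract describes can be sketched as a clip-then-noise step at the cut layer. This is a minimal illustration, not the paper's actual mechanism: the function name, clipping bound, and noise scale are all assumptions.

```python
import numpy as np

def smash_with_dp(activations, clip_norm=1.0, sigma=0.5, seed=None):
    """Illustrative DP protection for 'smashed' cut-layer activations:
    clip each sample's L2 norm, then add Gaussian noise scaled to the
    clipping bound before transmission to the server."""
    rng = np.random.default_rng(seed)
    # Per-sample L2 clipping bounds each sample's sensitivity.
    norms = np.linalg.norm(activations, axis=1, keepdims=True)
    clipped = activations * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Gaussian noise calibrated to the clipping bound.
    noise = rng.normal(0.0, sigma * clip_norm, size=clipped.shape)
    return clipped + noise

# 4 samples of a hypothetical 16-dimensional cut-layer output.
batch = np.random.default_rng(1).normal(size=(4, 16))
smashed = smash_with_dp(batch, clip_norm=1.0, sigma=0.5, seed=0)
```

A larger `sigma` gives stronger privacy but costs accuracy, which is exactly the trade-off the paper tunes, including per-client when privacy requirements differ.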
Decentralized Triggering and Event-Based Integral Reinforcement Learning for Multiplayer Differential Game Systems
Chaoxu Mu;Ke Wang;Song Zhu;Guangbin Cai
IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 8, no. 6, pp. 3727-3741
Pub Date: 2024-10-07 | DOI: 10.1109/TETCI.2024.3372389
Multiplayer differential games typically involve multiple control loops in which states are transmitted periodically over communication channels and control policies are updated in a time-triggered manner. In this paper, two event-triggered mechanisms are proposed for a class of multiplayer nonzero-sum differential game systems. First, by defining a global sampled state, a centralized triggering rule is devised that manages state sampling and control updating synchronously. Second, accounting for each player's preferences, a decentralized triggering rule is devised in which a local event generator produces each player's triggering sequence independently. Building on experience replay and integral reinforcement learning, an event-based adaptive learning scheme is then developed that is implemented with critic neural networks and requires only partial knowledge of the system dynamics. The theoretical results show that both triggering mechanisms guarantee asymptotic stability and weight convergence. Finally, simulation results on a three-player numerical system and a two-player supersonic transport system substantiate the effectiveness of the two learning-based triggering mechanisms.
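The flavor of a decentralized triggering rule can be sketched as a local event generator that re-samples only when the gap since the last triggering instant exceeds a state-dependent bound. Everything here is a toy stand-in: the system matrices, the gain `K`, and the threshold parameters `alpha`, `beta` are hypothetical, not from the paper.

```python
import numpy as np

def should_trigger(x, x_hat, alpha=0.2, beta=1e-3):
    """Fire an event when the sampling error exceeds a state-dependent bound."""
    return np.linalg.norm(x - x_hat) > alpha * np.linalg.norm(x) + beta

# Toy closed loop: x_{k+1} = A x_k + B u_k, with u computed from the last
# *sampled* state x_hat (zero-order hold between events).
A = np.array([[0.95, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
K = np.array([[0.5, 0.8]])          # hypothetical stabilizing gain

x = np.array([1.0, -1.0])
x_hat = x.copy()
events = 0
for _ in range(200):
    if should_trigger(x, x_hat):
        x_hat = x.copy()            # sample and transmit only at events
        events += 1
    u = -K @ x_hat
    x = A @ x + (B @ u).ravel()
```

The point of the rule is that `events` stays well below the 200 time-triggered updates a periodic scheme would use, while the state still converges.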
IEEE Transactions on Emerging Topics in Computational Intelligence Publication Information
IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 8, no. 5, pp. C2-C2
Pub Date: 2024-10-02 | DOI: 10.1109/TETCI.2024.3465291
Guest Editorial Special Issue on Resource Sustainable Computational and Artificial Intelligence
Joey Tianyi Zhou;Ivor W. Tsang;Yew Soon Ong
IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 8, no. 5, pp. 3196-3198
Pub Date: 2024-10-02 | DOI: 10.1109/TETCI.2024.3463048
In recent years, rapid advances in computational and artificial intelligence (C/AI), driven by neural networks and powerful computing hardware, have led to successful applications across many disciplines. These achievements, however, come with a significant challenge: the resource-intensive nature of current AI systems, particularly deep learning models, results in substantial energy consumption and carbon emissions throughout their lifecycle. This resource demand underscores the urgent need to develop resource-constrained AI and computational intelligence methods. Sustainable C/AI approaches are crucial not only for mitigating the environmental impact of AI systems but also for enhancing their role as tools for promoting sustainability in industries such as reliability engineering, material design, and manufacturing.
Pub Date : 2024-10-02DOI: 10.1109/TETCI.2024.3465295
{"title":"IEEE Transactions on Emerging Topics in Computational Intelligence Information for Authors","authors":"","doi":"10.1109/TETCI.2024.3465295","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3465295","url":null,"abstract":"","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"8 5","pages":"C4-C4"},"PeriodicalIF":5.3,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10703869","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142377136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IEEE Computational Intelligence Society Information
IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 8, no. 5, pp. C3-C3
Pub Date: 2024-10-02 | DOI: 10.1109/TETCI.2024.3465293
Tensorlized Multi-Kernel Clustering via Consensus Tensor Decomposition
Fei Qi;Junyu Li;Yue Zhang;Weitian Huang;Bin Hu;Hongmin Cai
IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 9, no. 1, pp. 406-418
Pub Date: 2024-09-19 | DOI: 10.1109/TETCI.2024.3425329
Multi-kernel clustering aims to learn a fused kernel from a set of base kernels. Conventional multi-kernel clustering methods, however, suffer from inherent limitations in exploiting the interrelations and complementarity between kernels, and noise and redundant information in the original base kernels contaminate the fused kernel. To address these issues, this paper presents a Tensorlized Multi-Kernel Clustering (TensorMKC) method. TensorMKC stacks the kernel matrices into a kernel tensor along the kernel dimension. To extract consensus while mitigating the impact of noise, we incorporate a tensor low-rank constraint into the learning of the base kernels. A tensor-based weighted fusion strategy then integrates the refined base kernels, yielding an optimized fused kernel for clustering. The kernel learning process is formulated as a joint minimization problem that seeks a promising fusion solution. Extensive comparative experiments against fifteen popular methods on ten benchmark datasets from various fields demonstrate that TensorMKC delivers superior performance.
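The stack-denoise-fuse pipeline the abstract outlines can be sketched with a simple low-rank surrogate: soft-threshold the eigenvalues of each kernel slice, then take a weighted sum. This is only an illustration of the idea; the function name, the eigenvalue-thresholding surrogate, and the parameters `tau` and `weights` are assumptions, not TensorMKC's actual joint optimization.

```python
import numpy as np

def fuse_kernels(kernels, tau=0.1, weights=None):
    """Illustrative fusion: stack base kernels into an (m, n, n) tensor,
    denoise each slice by soft-thresholding its eigenvalues (a simple
    low-rank surrogate), then return a weighted sum as the fused kernel."""
    K = np.stack(kernels)                       # (m, n, n) kernel tensor
    m = K.shape[0]
    w = np.full(m, 1.0 / m) if weights is None else np.asarray(weights, float)
    refined = []
    for Ki in K:
        # Symmetrize for numerical stability; for PSD kernels the
        # eigenvalues coincide with the singular values.
        evals, evecs = np.linalg.eigh((Ki + Ki.T) / 2)
        refined.append(evecs @ np.diag(np.maximum(evals - tau, 0.0)) @ evecs.T)
    return sum(wi * Ri for wi, Ri in zip(w, refined))

rng = np.random.default_rng(0)
# Two hypothetical PSD base kernels built from random feature maps.
X1, X2 = rng.normal(size=(20, 5)), rng.normal(size=(20, 8))
fused = fuse_kernels([X1 @ X1.T, X2 @ X2.T], tau=0.5)
```

The fused output stays symmetric positive semidefinite, so it can be handed directly to a kernel clustering routine such as kernel k-means or spectral clustering.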
Neuromorphic Auditory Perception by Neural Spiketrum
Huajin Tang;Pengjie Gu;Jayawan Wijekoon;MHD Anas Alsakkal;Ziming Wang;Jiangrong Shen;Rui Yan;Gang Pan
IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 9, no. 1, pp. 292-303
Pub Date: 2024-09-18 | DOI: 10.1109/TETCI.2024.3419711
Neuromorphic computing holds the promise of achieving the energy efficiency and robust learning performance of biological neural systems. Realizing this promised brain-like intelligence requires solving two challenges: designing neuromorphic hardware architectures that reflect the biological neural substrate, and devising hardware-friendly algorithms with spike-based encoding and learning. Here we introduce a neural spike coding model, termed the spiketrum, that characterizes and transforms time-varying analog signals, typically auditory signals, into computationally efficient spatiotemporal spike patterns. It minimizes the information loss incurred in the analog-to-spike transformation and is robust to neural fluctuations and spike losses. The model provides a sparse and efficient coding scheme with a precisely controllable spike rate that facilitates the training of spiking neural networks on various auditory perception tasks. We further investigate algorithm-hardware co-design through a neuromorphic cochlear prototype, demonstrating that our approach can provide a systematic solution for spike-based artificial intelligence that fully exploits the advantages of spike-based computation.
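To make the analog-to-spike idea concrete, here is a send-on-delta encoder used purely as a stand-in for spike coding; it is not the spiketrum model itself, and the function name and `delta` parameter are assumptions. It shows the two properties the abstract emphasizes: sparsity and a controllable spike rate.

```python
import numpy as np

def encode_spikes(signal, delta=0.1):
    """Send-on-delta encoder (a stand-in for spike coding): emit a +1/-1
    spike each time the signal drifts by `delta` from the last encoded
    level. Shrinking `delta` raises the spike rate, giving the rate
    controllability the abstract describes."""
    level = float(signal[0])
    spikes = []                     # list of (time index, +1 or -1)
    for t, x in enumerate(signal):
        while x - level >= delta:   # signal rose past the next level
            spikes.append((t, +1))
            level += delta
        while level - x >= delta:   # signal fell past the next level
            spikes.append((t, -1))
            level -= delta
    return spikes

wave = np.sin(np.linspace(0.0, 2 * np.pi, 200))   # toy 'auditory' signal
coarse = encode_spikes(wave, delta=0.2)
fine = encode_spikes(wave, delta=0.05)
```

Because the encoder tracks the signal to within `delta`, the original waveform can be approximately reconstructed by accumulating the signed spikes, which mirrors the low-information-loss property claimed for the spiketrum.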