A game-theoretic approach to fair and grid-aware load flexibility allocation in residential distribution networks
Pub Date : 2026-01-15 DOI: 10.1016/j.compeleceng.2026.110976
Gabriel Gómez-Ruiz, Jesús Clavijo-Camacho, Reyes Sánchez-Herrera, José M. Andújar
This article evaluates the potential of thermostatically controlled loads (TCL) as flexible resources to improve power quality, particularly phase unbalance, in low-voltage residential distribution networks while ensuring fair consumer participation. To address both grid-level and social objectives, the adaptive fairness and grid-aware allocation (AFGA) algorithm is proposed. This algorithm integrates cooperative game theory and Nash bargaining principles to jointly optimize phase balancing and consumer utility. The proposed approach dynamically allocates residential consumer flexibility by accounting for phase-level constraints, individual flexibility capacity, and historical participation, thereby preventing the persistent overuse of specific consumers and promoting equitable long-term engagement. Simulation results on a representative residential network with 100 households demonstrate that, with only 20% participation, the AFGA algorithm reduces the unbalance load factor (ULF) to below 10%, achieves a highly equitable distribution of benefits (Gini index = 0.065), and effectively enforces adaptive fairness through penalty-feedback mechanisms. Furthermore, the algorithm completes a full-day simulation in 102 s with only 0.24 MB of peak memory usage. These findings position the AFGA algorithm as an effective and scalable solution for integrating fairness-aware residential flexibility into the operation of low-voltage residential distribution networks.
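The AFGA equations themselves are not given in the abstract; the Python sketch below only illustrates how a fairness-discounted allocation and the two reported metrics (ULF and Gini index) fit together. The weighting rule, curtailment fraction, load values, and the particular ULF definition (maximum phase deviation over the mean) are all assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def gini(x):
    """Gini index of a vector (0 = perfectly equal distribution)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return 2.0 * np.sum(i * x) / (n * np.sum(x)) - (n + 1.0) / n

def ulf(phase_loads):
    """Unbalance load factor: maximum phase deviation over the mean (one common definition)."""
    m = phase_loads.mean()
    return np.max(np.abs(phase_loads - m)) / m

rng = np.random.default_rng(0)
n_households = 100
phase_of = rng.integers(0, 3, n_households)      # phase each household is connected to
capacity = rng.uniform(0.2, 1.5, n_households)   # kW of TCL flexibility per household
history = np.zeros(n_households)                 # cumulative past participation
load = np.array([40.0, 55.0, 48.0])              # assumed per-phase load, kW

for step in range(5):
    p = int(np.argmax(load))                     # most loaded phase
    cand = np.where(phase_of == p)[0]
    # fairness weight: capacity discounted by how often a household already participated
    w = capacity[cand] / (1.0 + history[cand])
    k = max(1, int(0.2 * cand.size))             # ~20% participation, as in the study
    chosen = cand[np.argsort(w)[-k:]]
    load[p] -= 0.5 * capacity[chosen].sum()      # partial curtailment of the chosen TCLs
    history[chosen] += 1.0
    print(f"step {step}: ULF = {ulf(load):.3f}")

print("Gini index of participation:", round(gini(history + 1e-9), 3))
```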
{"title":"A game-theoretic approach to fair and grid-aware load flexibility allocation in residential distribution networks","authors":"Gabriel Gómez-Ruiz, Jesús Clavijo-Camacho, Reyes Sánchez-Herrera, José M. Andújar","doi":"10.1016/j.compeleceng.2026.110976","DOIUrl":"10.1016/j.compeleceng.2026.110976","url":null,"abstract":"<div><div>This article evaluates the potential of thermostatically controlled loads (TCL) as flexible resources to improve power quality―particularly phase unbalance―in low-voltage residential distribution networks while ensuring fair consumer participation. To address both grid-level and social objectives, the adaptive fairness and grid-aware allocation (AFGA) algorithm is proposed. This algorithm integrates cooperative game theory and Nash bargaining principles to jointly optimize phase balancing and consumer utility. The proposed approach dynamically allocates residential consumer flexibility by accounting for phase-level constraints, individual flexibility capacity, and historical participation, thereby preventing the persistent overuse of specific consumers and promoting equitable long-term engagement. Simulation results on a representative residential network with 100 households demonstrate that, with only 20% participation, the AFGA algorithm reduces the unbalance load factor (ULF) to below 10%, achieves a highly equitable distribution of benefits (Gini index = 0.065), and effectively enforces adaptive fairness through penalty-feedback mechanisms. Furthermore, the algorithm completes a full-day simulation in 102 s with only 0.24 MB of peak memory usage. These findings position the AFGA algorithm as an effective and scalable solution for integrating fairness-aware residential flexibility into the operation of low-voltage residential distribution networks.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110976"},"PeriodicalIF":4.9,"publicationDate":"2026-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the widespread application of edge collaborative inference, concerns regarding data privacy and model interpretability are increasingly prominent. Gradient inversion attacks can reconstruct sensitive input data from leaked gradients, posing a significant threat to image confidentiality. Meanwhile, traditional image steganography techniques do not fully consider the semantic structures inherent in visual content, which can lead to suboptimal embedding locations and limited resilience to semantic perturbations, ultimately reducing robustness and concealment performance. To address the dual challenge of preserving semantic fidelity and resisting gradient-based inversion attacks in image steganography, this paper proposes an image steganography framework, named CamDWT, which integrates semantic attention, frequency-domain embedding, and adversarial reversibility optimization. The proposed method combines Grad-CAM, the Discrete Wavelet Transform (DWT), and gradient inversion to achieve semantically aware and robust image steganography. Grad-CAM is used to identify salient regions in the image based on class-specific activations, and secret information is embedded into the high-frequency components of these regions using the DWT. During the inversion process, a dual-loss strategy ensures both gradient consistency and frequency-domain alignment, enhancing the fidelity and recoverability of the hidden content. Experimental results show a high degree of consistency among the salient regions of the original, stego, and reconstructed images, validated by four metrics (PCC, cosine similarity, IoU, and Top-K overlap), all of which meet the required thresholds. The proposed method achieves an information extraction accuracy of over 98%, a 7.3% improvement over existing approaches. Moreover, the method exhibits robustness in embedding fidelity and ensures reliable recovery under inversion attacks.
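As a rough sketch of the saliency-guided, frequency-domain embedding step (not the authors' implementation): a random map stands in for the Grad-CAM heat map, the embedding sites are shared between embedder and extractor rather than re-derived, and quantization-index modulation on the diagonal detail band is a stand-in for the paper's embedding rule. The snippet assumes the PyWavelets (pywt) package.

```python
import numpy as np
import pywt

def embed_bits(img, saliency, bits, q=8.0):
    """Hide bits in the diagonal high-frequency DWT band, restricted to salient regions."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    # saliency downsampled to coefficient resolution; thresholding picks embedding sites
    sal = saliency[::2, ::2][: cD.shape[0], : cD.shape[1]]
    sites = np.argwhere(sal > sal.mean())[: len(bits)]
    for (r, c), b in zip(sites, bits):
        # quantization-index modulation: snap the coefficient to a multiple of q
        # whose parity encodes the bit
        k = np.round(cD[r, c] / q)
        if int(k) % 2 != b:
            k += 1
        cD[r, c] = k * q
    return pywt.idwt2((cA, (cH, cV, cD)), "haar"), sites

def extract_bits(stego, sites, q=8.0):
    _, (_, _, cD) = pywt.dwt2(stego.astype(float), "haar")
    return [int(np.round(cD[r, c] / q)) % 2 for r, c in sites]

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, (64, 64))
saliency = rng.uniform(0, 1, (64, 64))   # stand-in for a Grad-CAM heat map
bits = list(rng.integers(0, 2, 32))
stego, sites = embed_bits(img, saliency, bits)
print("recovered OK:", extract_bits(stego, sites) == bits)
```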
{"title":"A reversible image steganography framework against gradient inversion attacks via saliency-guided embedding","authors":"Chen Liang , Yuxin Zhou , Ziqi Wang , Jiamin Zheng","doi":"10.1016/j.compeleceng.2026.110951","DOIUrl":"10.1016/j.compeleceng.2026.110951","url":null,"abstract":"<div><div>With the widespread application of edge collaborative inference, concerns regarding data privacy and model interpretability are increasingly prominent. Gradient inversion attacks can reconstruct sensitive input data from leaked gradients, posing a significant threat to image confidentiality. Meanwhile, traditional image steganography techniques do not take into full consideration the semantic structures inherent in visual content. This can lead to suboptimal embedding locations and limited resilience to semantic perturbations, ultimately resulting in reduced robustness and concealment performance. To address the dual challenge of preserving semantic fidelity and resisting gradient-based inversion attacks in image steganography, this paper proposes an image steganography framework,named CamDWT which integrates semantic attention, frequency-domain embedding, and adversarial reversibility optimization. The proposed method combines Grad-CAM, Discrete Wavelet Transform (DWT), and gradient inversion to achieve semantically aware and robust image steganography. Grad-CAM is used to identify salient regions in the image based on class-specific activations, and secret information is embedded into the high-frequency components of these regions using DWT. During the inversion process, a dual-loss strategy is employed to ensure both gradient consistency and frequency-domain alignment, enhancing the fidelity and recoverability of the hidden content. Experimental results show a high degree of consistency in the salient regions of the original, stego, and reconstructed images. This is validated by four metrics — PCC, cosine similarity, IoU, and Top-K overlap — all meeting the required thresholds. The proposed method achieves an information extraction accuracy of over 98%, representing a 7.3% improvement compared to existing approaches. Moreover, the method exhibits robustness in embedding fidelity and ensures reliable recovery under inversion attacks.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110951"},"PeriodicalIF":4.9,"publicationDate":"2026-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NGCF-RVFL: Next Generation Convolutional Feature with Random Vector Functional Link for multi-grade diabetic retinopathy detection
Pub Date : 2026-01-15 DOI: 10.1016/j.compeleceng.2026.110972
Imtiyaz Ahmad, Vibhav Prakash Singh, Manoj Madhava Gore
Diabetic Retinopathy (DR) is one of the leading causes of vision impairment and blindness globally, necessitating early and accurate detection for timely clinical intervention. This paper proposes NGCF-RVFL, a novel computer-aided diagnosis system for multi-grade DR detection from retinal fundus images. The system begins with an enhanced preprocessing pipeline that includes median filtering, Gaussian filtering, and contrast-limited adaptive histogram equalization to reduce noise and improve the contrast of the fundus images. Next, we introduce an adaptive image augmentation technique to address class imbalance: minority-class samples are augmented adaptively to match the size of the majority class. We then propose a Next Generation Convolutional Feature (NGCF) based on the fine-tuned ConvNeXt architecture, whose hierarchical design comprises four feature extraction stages built on depthwise separable convolutions. The NGCF effectively encodes the intricate retinal structures and disease patterns crucial for accurate DR grading, and discriminative analysis with Principal Component Analysis confirms the significance and effectiveness of the extracted NGCF feature in representing relevant retinal information. Furthermore, a lightweight network, the Random Vector Functional Link (RVFL), is employed to evaluate the grade-wise detection performance of the proposed NGCF feature. Unlike traditional iterative learning models, the RVFL uses a single-pass training mechanism, significantly reducing computation time while maintaining high detection performance. Finally, we evaluate the effectiveness and detection performance of the NGCF feature with other machine learning classifiers, including support vector machines, multilayer perceptrons, random forests, and decision trees. Comprehensive experiments on a benchmark dataset demonstrate that NGCF-RVFL achieves competitive scores across all DR grades with minimal training time, outperforming state-of-the-art approaches.
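The RVFL's single-pass training is standard enough to sketch: a random, untrained hidden layer plus direct input links, with only the output weights solved in closed form by ridge regression. The toy data, hidden width, and regularization value below are assumptions; in the paper the inputs would be NGCF feature vectors.

```python
import numpy as np

def rvfl_train(X, y, n_hidden=256, reg=1e-2, rng=None):
    """Single-pass RVFL training: random hidden layer, closed-form output weights."""
    if rng is None:
        rng = np.random.default_rng(0)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.hstack([X, np.tanh(X @ W + b)])          # direct links + random features
    Y = np.eye(int(y.max()) + 1)[y]                 # one-hot targets
    # ridge-regularized least squares: beta = (H^T H + reg I)^-1 H^T Y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ Y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    H = np.hstack([X, np.tanh(X @ W + b)])
    return (H @ beta).argmax(axis=1)

# toy demo standing in for NGCF feature vectors
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
W, b, beta = rvfl_train(X, y, rng=rng)
print("train accuracy:", (rvfl_predict(X, W, b, beta) == y).mean())
```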
{"title":"NGCF-RVFL: Next Generation Convolutional Feature with Random Vector Functional Link for multi-grade diabetic retinopathy detection","authors":"Imtiyaz Ahmad, Vibhav Prakash Singh, Manoj Madhava Gore","doi":"10.1016/j.compeleceng.2026.110972","DOIUrl":"10.1016/j.compeleceng.2026.110972","url":null,"abstract":"<div><div>Diabetic Retinopathy (DR) is one of the leading causes of vision impairment and blindness globally, necessitating early and accurate detection for timely clinical intervention. This paper proposes NGCF-RVFL, a novel Computer-aided-diagnosis system for multi-grade DR detection from retinal fundus images. The working of this system begins with an enhanced preprocessing pipeline that includes median filtering, Gaussian filtering, and Contrast-limited adaptive histogram equalization to reduce noise and improve contrast of the fundus images. Next, we introduce an adaptive image augmentation technique to address the issue of class imbalance. Minority class samples are increased using an augmentation that adapts the size of majority class samples. After that, we propose a Next Generation Convolutional Feature (NGCF) based on the fine-tuned ConvNeXt architecture, consisting of a hierarchical design with four feature extraction stages utilizing depthwise separable convolutions. The NGCF feature effectively encodes intricate retinal structures and disease patterns crucial for accurate DR grading. Further, the discriminative analysis with Principal Component Analysis confirms the significance and effectiveness of the extracted NGC feature in representing relevant retinal information. Furthermore, a lightweight network, Random Vector Functional Link (RVFL), is employed to evaluate the grade-wise detection performance of the proposed NGCF feature. Unlike traditional iterative learning models, the RVFL utilizes a single-pass training mechanism, significantly reducing computation time while maintaining high detection performance. Finally, we evaluate the effectiveness and detection performance of the NGCF feature on other machine learning classifiers such as Support vector machine, Multilayer perceptron, Random forest, and Decision tree. Comprehensive experiments on a benchmark dataset demonstrate that NGCF-RVFL achieves competitive scores across all DR grades with minimal training time, outperforming the state-of-the-art approaches.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110972"},"PeriodicalIF":4.9,"publicationDate":"2026-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Component alignment-aware sparse time–frequency distribution reconstruction for complex signals with coexisting oscillatory and transient components
Pub Date : 2026-01-14 DOI: 10.1016/j.compeleceng.2026.110986
Vedran Jurdana
Compressive sensing (CS) enables high-resolution reconstruction of time–frequency distributions (TFDs) for non-stationary signals. The Rényi entropy-based two-step iterative shrinkage/thresholding (RTwIST) algorithm addresses regularization challenges through component-wise shrinkage guided by local Rényi entropy (LRE). However, RTwIST exhibits reconstruction inaccuracies for signals whose components have differing time–frequency orientations, owing to its global shrinkage and imprecise LRE estimation. This study proposes an enhanced RTwIST framework incorporating a component alignment map (CAM), which uses orientation estimation to segment the TFD into regions dominated by time- or frequency-aligned components. This localized segmentation enables adaptive shrinkage tailored to each region and automates LRE parameter selection. Experiments on synthetic signals and real-world datasets, including gravitational-wave and electroencephalogram (EEG) seizure signals, demonstrate improved auto-term resolution, reduced cross-term interference, and lower tuning complexity compared with standard RTwIST and state-of-the-art methods. These improvements support more accurate analysis of the complex oscillatory and transient signals found in astrophysics, biomedical engineering, and beyond.
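RTwIST's Rényi-entropy guidance and two-step acceleration go beyond a short snippet, so the sketch below shows only the underlying shrinkage iteration with a per-coefficient threshold map, to make concrete what region-tailored adaptive shrinkage means mechanically. The measurement operator, threshold values, and two-region split are arbitrary assumptions.

```python
import numpy as np

def soft(v, lam):
    """Elementwise soft-thresholding: the shrinkage step of IST-type solvers."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ist_region_adaptive(y, A, lam_map, iters=300):
    """Plain iterative shrinkage with a per-coefficient threshold map,
    i.e. shrinkage strength that differs between regions."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x + A.T @ (y - A @ x), lam_map)
    return x

rng = np.random.default_rng(2)
m, n, k = 96, 256, 8
A = rng.standard_normal((m, n))
A /= 1.01 * np.linalg.norm(A, 2)          # spectral norm < 1 so the plain step is stable
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 * rng.standard_normal(k)
y = A @ x_true
lam_map = np.where(np.arange(n) < n // 2, 0.02, 0.08)   # two "regions" shrunk differently
x_hat = ist_region_adaptive(y, A, lam_map)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```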
{"title":"Component alignment-aware sparse time–frequency distribution reconstruction for complex signals with coexisting oscillatory and transient components","authors":"Vedran Jurdana","doi":"10.1016/j.compeleceng.2026.110986","DOIUrl":"10.1016/j.compeleceng.2026.110986","url":null,"abstract":"<div><div>Compressive sensing (CS) enables high-resolution reconstruction of time–frequency distributions (TFDs) for non-stationary signals. The Rényi entropy-based two-step iterative shrinkage/thresholding (RTwIST) algorithm addresses regularization challenges through component-wise shrinkage guided by local Rényi entropy (LRE). However, RTwIST exhibits reconstruction inaccuracies for signals with components of differing time–frequency orientations due to global shrinkage and imprecise LRE estimation. This study proposes an enhanced RTwIST framework incorporating a component alignment map (CAM), which utilizes orientation estimation to segment the TFD into regions dominated by time- or frequency-aligned components. This localized segmentation enables adaptive shrinkage tailored to each region, and automates LRE parameter selection. Experiments on synthetic signals and real-world datasets, including gravitational wave and electroencephalogram (EEG) seizure signals, demonstrate improved auto-term resolution, reduced cross-term interference, and lower tuning complexity compared to standard RTwIST and state-of-the-art methods. These improvements support more accurate analysis of complex oscillatory and transient signals found in astrophysics, biomedical engineering, and beyond.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110986"},"PeriodicalIF":4.9,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A real-time smart energy management system for greenhouses using a hybrid optimization algorithm: Experimental implementation for efficient and sustainable operation
Pub Date : 2026-01-12 DOI: 10.1016/j.compeleceng.2026.110948
Mohamed W. Haggag, Asmaa H. Rabie, Islam Ismael, Waleed Shaaban
The increasing global demand for food and energy, together with climate change and resource scarcity, poses serious challenges to sustainable agriculture. Smart greenhouses create controlled environments that optimize crop production and minimize resource use, especially in arid regions. This paper introduces a Real-Time Smart Greenhouse Energy Management (SGEM) system that combines IoT-based sensing, renewable energy sources, and a Hybrid Optimization Algorithm (HOA). The HOA combines Particle Swarm Optimization (PSO) for global exploration with the Coati Optimization Algorithm (COA) for local exploitation. To optimize operating costs, battery state of charge (SoC), and the use of renewable energy, the HOA dynamically schedules energy from photovoltaic (PV) panels, battery storage, and the electrical grid. The system is designed as a hybrid PV-battery-grid configuration and validated through both simulation and experimental implementation, guaranteeing a steady supply of energy while minimizing grid dependency. Experimental validation shows that the SGEM system reduces costs by 49.98%, lowers daily grid consumption by 50.24%, cuts CO₂ emissions by 50.5%, and extends battery life by 14.7%. These results demonstrate the system's capability for adaptive, efficient, and sustainable greenhouse energy management, providing a scalable solution for modern smart agriculture.
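A minimal sketch of the PSO-plus-local-exploitation pattern the HOA is built on, under heavy assumptions: the toy cost function stands in for the real operating-cost/SoC objective, and COA's actual digging and chasing equations are collapsed into a generic perturbation around the incumbent best.

```python
import numpy as np

def cost(x):
    """Toy stand-in for the SGEM objective: grid-import cost plus an SoC-deviation penalty."""
    grid_kw, soc = x
    return 0.25 * max(grid_kw, 0.0) + 2.0 * (soc - 0.6) ** 2

rng = np.random.default_rng(3)
lo, hi = np.array([0.0, 0.2]), np.array([10.0, 0.9])   # bounds on grid power and SoC
n = 20
pos = rng.uniform(lo, hi, (n, 2))
vel = np.zeros((n, 2))
pbest, pbest_f = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(60):
    # PSO phase: global exploration toward personal and global bests
    r1, r2 = rng.uniform(size=(2, n, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    # COA-style phase (heavily simplified): local exploitation around the incumbent best
    trial = np.clip(gbest + rng.normal(0.0, 0.05 * (hi - lo), (n, 2)), lo, hi)
    improved = np.array([cost(a) < cost(b) for a, b in zip(trial, pos)])
    pos[improved] = trial[improved]
    f = np.array([cost(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best (grid_kw, SoC):", gbest.round(3), "cost:", round(cost(gbest), 4))
```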
{"title":"A real-time smart energy management system for greenhouses using a hybrid optimization algorithm: Experimental implementation for efficient and sustainable operation","authors":"Mohamed W. Haggag , Asmaa H. Rabie , Islam Ismael , Waleed Shaaban","doi":"10.1016/j.compeleceng.2026.110948","DOIUrl":"10.1016/j.compeleceng.2026.110948","url":null,"abstract":"<div><div>The increasing global demand for food and energy, along with climate change and resource scarcity, causes potential challenges to sustainable agriculture. Smart greenhouses create controlled environments that optimize crop production and minimize resource use, especially in arid regions. This paper introduces a Real-Time Smart Greenhouse Energy Management (SGEM) system that combines IoT-based sensing, renewable energy sources, and a Hybrid Optimization Algorithm (HOA). The HOA combines Particle Swarm Optimization (PSO) for global exploration with the Coati Optimization Algorithm (COA) for local exploitation. To optimize operating costs, battery State of Charge (SoC), and the use of renewable energy, the HOA dynamically gathers energy from photovoltaic panels (PV), battery storage, and the electrical grid. The system is designed as a hybrid PV-battery-grid configuration, validated through both simulation and experimental implementation, guaranteeing a steady supply of energy while minimizing grid dependency. Experimental validation shows that the SGEM system reduces costs by 49.98%, lowers daily grid consumption by 50.24%, cuts CO₂ emissions by 50.5%, and extends battery life by 14.7%. The results obtained demonstrate the system’s capability for adaptive, efficient, and sustainable greenhouse energy management, providing a scalable solution for modern smart agriculture.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110948"},"PeriodicalIF":4.9,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comprehensive review of computational techniques for obscenity detection: Past, present, and future
Pub Date : 2026-01-12 DOI: 10.1016/j.compeleceng.2026.110960
Pundreekaksha Sharma, Dr. Vijay Kumar, Dr. Neeraj Kumar
With the rapid proliferation of obscene content on the internet, detecting and preventing obscenity has become essential to maintaining a safe digital environment. The accessibility of obscene content has significant psychological, social, ethical, and technological impacts. To overcome these challenges, it is essential to develop obscenity detection systems that use advanced artificial intelligence techniques and incorporate the social and ethical considerations needed to prevent the spread of obscenity. This research presents a comprehensive literature analysis covering traditional to advanced computational techniques for obscenity detection, and serves as a valuable resource for researchers improving such techniques. It analyses computer vision techniques for obscenity detection, featuring hybrid deep learning methods including Transformers, vision transformers, and diffusion models, and discusses the strengths and limitations of these techniques. It examines the mathematical formulations of the models and the impact of input and additional parameters, compares the performance of models on various datasets, and discusses how to develop a diverse dataset. It also provides an overview of the social and ethical considerations involved in obscenity detection, and highlights challenges and potential future research directions. In conclusion, this research provides a gap analysis that helps researchers enhance computational techniques for obscenity detection.
{"title":"A comprehensive review of computational techniques for obscenity detection: Past, present, and future","authors":"Pundreekaksha Sharma , Dr. Vijay Kumar , Dr. Neeraj Kumar","doi":"10.1016/j.compeleceng.2026.110960","DOIUrl":"10.1016/j.compeleceng.2026.110960","url":null,"abstract":"<div><div>With the rapid proliferation of obscene content over the internet, detecting and preventing obscenity has become the most prominent way of maintaining a safe digital environment. The accessibility of obscene content has significant psychological, social, ethical, and technological impacts. To overcome these challenges, it is essential to develop an obscenity detection system using advanced artificial intelligence techniques, including social and ethical considerations that prevent the spread of obscenity. This research has presented a comprehensive literature analysis covering traditional to advanced computational techniques for obscenity detection. It also serves as a valuable resource for researchers improving obscenity detection techniques. Analyse computer vision techniques for obscenity detection, featuring hybrid deep learning methods including Transformers, vision transformers, diffusion models, and other techniques. Additionally, this research discusses the strengths and limitations of these techniques. Examines the mathematical formulations and equations of the models, and the impact of input and additional parameters. Compare the performance of models on various datasets and discuss how to develop a diverse dataset. A significant overview of social and ethical considerations included in obscenity detection. The research paper also highlights challenges and potential future research directions in obscenity detection. In conclusion, this research provides a gap analysis that helps researchers enhance computational techniques for obscenity detection.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110960"},"PeriodicalIF":4.9,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multi-feature distance measure for time series classification
Pub Date : 2026-01-12 DOI: 10.1016/j.compeleceng.2025.110925
Sai Zhang, Wu Le, Zhen-Hong Jia, Hao Wu
Existing time series similarity measures are often difficult to apply to large-scale datasets due to their high computational complexity. Some solutions that pursue linear complexity usually come at the expense of fine-grained analysis of sequence dynamics, resulting in insufficient discriminative ability in complex scenarios. In this paper, we propose a multi-feature fusion algorithm that can achieve a fine-grained measure of sequence similarity while maintaining linear complexity. First, this paper introduces a novel subsequence trend encoding mechanism, which provides a new perspective beyond the traditional structural features for similarity judgment by quantifying the dynamic direction within the subsequence. Second, the algorithm comprehensively evaluates candidate subsequences from both complexity and trend perspectives, and forms a more robust distance metric by weighted fusion of the two features, thus effectively reducing the misjudgments that a single perspective may cause. Experimental results on 70 UCR benchmark datasets validate our approach, which not only achieves the #1 average rank in classification accuracy among 17 state-of-the-art algorithms but also demonstrates exceptional efficiency, proving to be orders of magnitude faster in single sequence prediction than many traditional, computationally intensive distance measures.
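The paper's exact encoding and fusion weights are not given in the abstract; the sketch below combines, in linear time, a complexity-corrected Euclidean distance (in the spirit of the complexity-invariant distance) with a step-direction mismatch rate, which is one plausible reading of fusing complexity and trend features. The weight w and the lack of scale normalization are simplifying assumptions.

```python
import numpy as np

def complexity(x):
    """Complexity estimate: length of the line the series traces (as in CID)."""
    return np.sqrt(np.sum(np.diff(x) ** 2))

def trend_code(x):
    """Encode the dynamic direction of each step: -1 down, 0 flat, +1 up."""
    return np.sign(np.diff(x))

def fused_distance(x, y, w=0.7):
    """Weighted fusion of a complexity-corrected distance and a trend-mismatch rate.
    Both terms cost O(n) in the series length."""
    ed = np.linalg.norm(x - y)
    cx, cy = complexity(x), complexity(y)
    cid = ed * max(cx, cy) / max(min(cx, cy), 1e-12)
    trend_mismatch = np.mean(trend_code(x) != trend_code(y))
    return w * cid + (1.0 - w) * trend_mismatch

t = np.linspace(0.0, 1.0, 50)
a = np.sin(2 * np.pi * t)
b = np.sin(2 * np.pi * t) + 0.1   # same shape and trend, small offset
c = -np.sin(2 * np.pi * t)        # mirrored trend
print("d(a, b) =", round(fused_distance(a, b), 3))   # small: trends agree
print("d(a, c) =", round(fused_distance(a, c), 3))   # large: trends disagree
```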
{"title":"A multi-feature distance measure for time series classification","authors":"Sai Zhang , Wu Le , Zhen-Hong Jia , Hao Wu","doi":"10.1016/j.compeleceng.2025.110925","DOIUrl":"10.1016/j.compeleceng.2025.110925","url":null,"abstract":"<div><div>Existing time series similarity measures are often difficult to apply to large-scale datasets due to their high computational complexity. Some solutions that pursue linear complexity usually come at the expense of fine-grained analysis of sequence dynamics, resulting in insufficient discriminative ability in complex scenarios. In this paper, we propose a multi-feature fusion algorithm that can achieve a fine-grained measure of sequence similarity while maintaining linear complexity. First, this paper introduces a novel subsequence trend encoding mechanism, which provides a new perspective beyond the traditional structural features for similarity judgment by quantifying the dynamic direction within the subsequence. Second, the algorithm comprehensively evaluates candidate subsequences from both complexity and trend perspectives, and forms a more robust distance metric by weighted fusion of the two features, thus effectively reducing the misjudgments that a single perspective may cause. Experimental results on 70 UCR benchmark datasets validate our approach, which not only achieves the #1 average rank in classification accuracy among 17 state-of-the-art algorithms but also demonstrates exceptional efficiency, proving to be orders of magnitude faster in single sequence prediction than many traditional, computationally intensive distance measures.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110925"},"PeriodicalIF":4.9,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Blockchain-based federated learning with metric and imbalanced learning for visual classification
Pub Date : 2026-01-11 DOI: 10.1016/j.compeleceng.2026.110961
Fei Wu, Jiahuan Lu, Hao Jin, Yibo Song, Guangwei Gao, Xiao-Yuan Jing
Federated learning (FL) allows multiple parties to collectively train deep learning models without disclosing their local data. The data distributions among parties are usually non-independent and identically distributed (non-IID), and class imbalance often exists both locally and globally; these are the main challenges of FL. Although some FL methods have been proposed to address this issue, there is still much room to improve image classification performance with deep learning models. In addition, under the non-IID setting, how to secure FL methods against attacks by malicious clients or central servers has not been well studied. We develop a novel decentralized FL approach in this paper, namely Blockchain-based Federated learning with Metric and Imbalanced Learning (BFMIL). A triplet loss is introduced to promote consistency between the feature representations of the client and server models. To address class imbalance, a cost-sensitive semantic discrimination loss is designed to fully exploit discriminative information, and the data in each party is divided into majority and minority classes for unequal training. To mitigate malicious attacks, we use the blockchain to store local updates and the global model, and a novel voting mechanism selects the parties with better model parameters for aggregation in each round of FL. The effectiveness of BFMIL is demonstrated by experiments on four imbalanced datasets.
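Two ingredients named above, the triplet consistency loss and cost-sensitive class weighting, plus vote-weighted aggregation, can be sketched with standard PyTorch pieces; the class counts, loss trade-off, and aggregation rule are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

# Feature-consistency term: client features as anchors, the server's features of the
# same samples as positives, and shuffled server features as negatives.
triplet = nn.TripletMarginLoss(margin=1.0)
client_feat = torch.randn(32, 128)
server_feat = torch.randn(32, 128)
neg_feat = server_feat[torch.randperm(32)]   # may rarely collide with a positive; fine for a sketch
l_consist = triplet(client_feat, server_feat, neg_feat)

# Cost-sensitive classification: minority classes receive larger loss weights.
counts = torch.tensor([900.0, 80.0, 20.0])          # assumed per-class sample counts
weights = counts.sum() / (len(counts) * counts)     # inverse-frequency weighting
ce = nn.CrossEntropyLoss(weight=weights)
logits = torch.randn(32, 3)
labels = torch.randint(0, 3, (32,))
loss = ce(logits, labels) + 0.1 * l_consist         # the 0.1 trade-off is an assumed value

# Voting-based aggregation: average client state dicts weighted by the votes received.
def aggregate(state_dicts, votes):
    w = torch.tensor(votes, dtype=torch.float32)
    w = w / w.sum()
    return {k: sum(wi * sd[k] for wi, sd in zip(w, state_dicts)) for k in state_dicts[0]}
```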
{"title":"Blockchain-based federated learning with metric and imbalanced learning for visual classification","authors":"Fei Wu , Jiahuan Lu , Hao Jin , Yibo Song , Guangwei Gao , Xiao-Yuan Jing","doi":"10.1016/j.compeleceng.2026.110961","DOIUrl":"10.1016/j.compeleceng.2026.110961","url":null,"abstract":"<div><div>Federated learning (FL) allows multiple parties to collectively train deep learning models without the need to disclose their local data. The data distributions among various parties are usually non-independently and identically distributed (non-IID), and simultaneously the class imbalance problem often exits locally and globally, which is the main challenge of FL. Although some FL works have been presented aiming to solve this issue, there still exist much room to enhance the image classification effect by using deep learning models. In addition, under the non-IID setting, how to ensure the security of FL methods against the attack of malicious clients or central servers has not been well researched. We develop a novel decentralized FL approach in this paper, namely Blockchain-based Federated learning with Metric and Imbalanced Learning (BFMIL). The triplet loss is introduced to promote the consistency of feature representations between the client model and server model. To address the class imbalance problem, a cost-sensitive semantic discrimination loss is designed to fully explore the discriminative information, and data in each party is divided into the majority classes and the minority classes for unequal training. To reduce malicious attack, we utilize the blockchain to store the local update and the global model, and a novel voting mechanism is used to select parties with better model parameters for aggregation in each round of FL. The effectiveness of BFMIL is demonstrated by experiments conducted on four imbalanced datasets.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110961"},"PeriodicalIF":4.9,"publicationDate":"2026-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A hybrid HBA-tuned DDPG reinforcement learning strategy for intelligent load frequency control in multi-area hybrid power systems
Pub Date : 2026-01-10 DOI: 10.1016/j.compeleceng.2026.110945
Shasya Shukla, S.K. Jha
This study presents an advanced intelligent control strategy for Load Frequency Control (LFC) in a multi-area hybrid power system (HPS) comprising reheat thermal units, nuclear generation, and renewable energy sources (RESs) such as wind power, supported by a Battery Energy Storage System (BESS). The study proposes a novel HBA-tuned Deep Deterministic Policy Gradient Reinforcement Learning (DDPG-RL) controller designed to enhance dynamic frequency regulation under varying operating conditions. In the proposed approach, a reinforcement learning agent adaptively modulates governor setpoints and coordinates auxiliary energy resources to suppress frequency deviations. To further improve policy convergence and optimization quality, the critical hyperparameters of the agent are fine-tuned using the Honey Badger Algorithm (HBA), a recent nature-inspired metaheuristic based on the foraging intelligence and digging behavior of honey badgers. The hybrid HBA-DDPG framework enables robust adaptation to load fluctuations, renewable intermittency, and inter-area disturbances while maintaining tie-line power balance. Simulation studies demonstrate significant improvements over conventional controllers and standalone metaheuristic-based methods, achieving a settling time of 7.6 s, a maximum overshoot of 1.4%, and overall error indices of 0.0022 (ISE) and 0.566 (ITAE). These results highlight the effectiveness of combining reinforcement learning with metaheuristic optimization, offering a scalable, resilient, and high-performance solution for next-generation smart grids.
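Training DDPG end-to-end is out of scope for a snippet, so the sketch below treats it as a black box: a stub fitness function stands in for training an agent and scoring its ITAE, while a simplified HBA loop (digging and honey phases reduced to their essentials) searches over actor/critic learning rates and the discount factor. Bounds, population size, and the synthetic fitness surface are all assumptions.

```python
import numpy as np

def lfc_fitness(hp):
    """Stand-in for training a DDPG agent with hyperparameters hp = (actor_lr,
    critic_lr, discount) and returning its ITAE on the LFC benchmark (lower is
    better). A real run would train the agent and simulate the multi-area system;
    this synthetic bowl with an optimum at plausible values is for demonstration only."""
    lr_a, lr_c, gamma = hp
    return (np.log10(lr_a) + 3.5) ** 2 + (np.log10(lr_c) + 3.0) ** 2 + 50 * (gamma - 0.99) ** 2

rng = np.random.default_rng(4)
lo = np.array([1e-5, 1e-5, 0.9])
hi = np.array([1e-2, 1e-2, 0.999])
n, iters, beta, C = 12, 40, 6.0, 2.0
pop = rng.uniform(lo, hi, (n, 3))
fit = np.array([lfc_fitness(p) for p in pop])
best, best_f = pop[fit.argmin()].copy(), fit.min()

for t in range(iters):
    alpha = C * np.exp(-t / iters)          # decreasing exploration factor, as in HBA
    for i in range(n):
        I = rng.uniform()                   # simplified "smell intensity"
        F = 1.0 if rng.uniform() < 0.5 else -1.0
        if rng.uniform() < 0.5:             # digging phase: search around the best (prey)
            cand = best + F * beta * I * best + F * alpha * rng.uniform() * (best - pop[i])
        else:                               # honey phase: random walk toward the best
            cand = best + F * alpha * rng.uniform() * (best - pop[i])
        cand = np.clip(cand, lo, hi)
        f = lfc_fitness(cand)
        if f < fit[i]:
            pop[i], fit[i] = cand, f
            if f < best_f:
                best, best_f = cand.copy(), f

print("tuned (actor_lr, critic_lr, gamma):", best.round(5), "fitness:", round(best_f, 4))
```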
{"title":"A hybrid HBA-tuned DDPG reinforcement learning strategy for intelligent load frequency control in multi-area hybrid power systems","authors":"Shasya Shukla, S.K. Jha","doi":"10.1016/j.compeleceng.2026.110945","DOIUrl":"10.1016/j.compeleceng.2026.110945","url":null,"abstract":"<div><div>This study presents an advanced intelligent control strategy for Load Frequency Control (LFC) in a multi-area hybrid power system (HPS) comprising reheat thermal units, nuclear generation, and renewable energy sources (RESs) such as wind power, supported by a Battery Energy Storage System (BESS). The study proposes a novel HBA-tuned Deep Deterministic Policy Gradient Reinforcement Learning (DDPG-RL) controller designed to enhance dynamic frequency regulation under varying operating conditions. In the proposed approach, a reinforcement learning agent adaptively modulates governor setpoints and coordinates auxiliary energy resources to suppress frequency deviations. To further improve policy convergence and optimization quality, the critical hyperparameters of the agent are fine-tuned using the Honey Badger Algorithm (HBA), a recent nature-inspired metaheuristic based on the foraging intelligence and digging behavior of honey badgers. The hybrid HBA-DDPG framework enables robust adaptation to load fluctuations, renewable intermittency, and inter-area disturbances while maintaining tie-line power balance. Simulation studies demonstrate significant improvements over conventional controllers and standalone metaheuristic-based methods showing settling time (7.6 s.), maximum overshoot (1.4%), and overall error indices (ISE as 0.0022 and ITAE as 0.566) hence highlighting the effectiveness of combining reinforcement learning with metaheuristic optimization, offering a scalable, resilient, and high-performance solution for next-generation smart grids.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110945"},"PeriodicalIF":4.9,"publicationDate":"2026-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autism Spectrum Disorder (ASD) affects approximately 1% of the global child population, yet current gold-standard diagnostic methods remain time-intensive and expertise-dependent. Electroencephalography (EEG) offers an objective and scalable approach for neurophysiological measurement, facilitating early detection.
Methods
This study evaluated three neural sequence architectures, Long Short-Term Memory (LSTM), Transformer, and Mamba (a selective state space model), for ASD classification using 47-channel, 150-second resting-state EEG recordings of 56 adults (28 with ASD, 28 controls) from the University of Sheffield dataset. Data were preprocessed using MNE-Python with band-pass filtering (0.50–50 Hz), Independent Component Analysis (ICA) artifact removal, and z-score normalization. Models were trained on epochs of varying durations (1 s, 2.50 s, 5 s) using stratified 5-fold cross-validation, with performance evaluated on a held-out test set (15%). Mixture-of-Experts (MoE) ensembles were constructed using performance-based weighted averaging. Regional classification and spectral analyses identified anatomical and frequency-specific biomarkers.
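The preprocessing chain described above maps onto well-known MNE-Python calls; the sketch below runs on synthetic data standing in for one Sheffield recording, and the sampling rate, ICA component count, and excluded-component choice are assumptions.

```python
import numpy as np
import mne

# Synthetic stand-in for one subject's 47-channel, 150 s recording (the real study
# loads the Sheffield dataset; channel names and sfreq here are assumptions).
sfreq = 256.0
info = mne.create_info(ch_names=47, sfreq=sfreq, ch_types="eeg")
data = np.random.default_rng(5).standard_normal((47, int(150 * sfreq))) * 1e-5
raw = mne.io.RawArray(data, info)

raw.filter(l_freq=0.5, h_freq=50.0)        # band-pass 0.50-50 Hz, as described

ica = mne.preprocessing.ICA(n_components=20, random_state=42)
ica.fit(raw)
ica.exclude = [0]                          # artifact components are chosen by inspection
ica.apply(raw)

# Fixed-length epochs at the best-performing duration (2.5 s), then per-channel z-scoring.
epochs = mne.make_fixed_length_epochs(raw, duration=2.5, preload=True)
X = epochs.get_data()                      # shape: (n_epochs, n_channels, n_times)
X = (X - X.mean(axis=-1, keepdims=True)) / X.std(axis=-1, keepdims=True)
print(X.shape)
```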
Results
The Mamba model achieved 98.18% accuracy with only 2972 parameters and a training time of 0.09 min at 2.50-second epochs. LSTM (144,578 parameters) reached 95.25% accuracy, while Transformer (38,946 parameters) attained 94.41%. The optimal Mamba+LSTM ensemble achieved 98.46% accuracy (Cohen's κ=0.97, ROC-AUC=99.84%) with only 11 misclassifications from 716 test samples. Regional analysis revealed frontal lobe dominance (76.81% accuracy, 25 channels) with theta-band (4–8 Hz) biomarkers. Spectral analysis confirmed characteristic ASD patterns: elevated delta/theta power, suppressed alpha rhythm, and increased beta/gamma activity. Single-channel analysis identified C5 (left central, 58.80% accuracy) as the most discriminative electrode.
Conclusions
Neural sequence models, particularly the parameter-efficient Mamba architecture and the Mamba+LSTM ensemble, demonstrate exceptional performance for EEG-based ASD classification, offering a clinically scalable and objective diagnostic tool. The frontal-central electrode configuration and theta-band biomarkers provide neurophysiologically interpretable features suitable for portable EEG systems and early screening applications.
{"title":"Quantitative EEG-based autism spectrum disorder detection using neural sequence models","authors":"Majid Nour , Ümit Şentürk , Alperen Akgül , Kemal Polat","doi":"10.1016/j.compeleceng.2026.110962","DOIUrl":"10.1016/j.compeleceng.2026.110962","url":null,"abstract":"<div><h3>Background</h3><div>Autism Spectrum Disorder (ASD) affects approximately 1% of the global child population, yet current gold-standard diagnostic methods remain time-intensive and expertise-dependent. Electroencephalography (EEG) offers an objective and scalable approach for neurophysiological measurement, facilitating early detection.</div></div><div><h3>Methods</h3><div>This study evaluated three neural sequence architectures —Long Short-Term Memory (LSTM), Transformer, and Mamba (Selective State Space Model) —for ASD classification using 47-channel, 150-second resting-state EEG recordings from 56 adults (28 with ASD, 28 controls) from the University of Sheffield dataset. Data were preprocessed using MNE-Python with band-pass filtering (0.50–50 Hz), Independent Component Analysis (ICA) artifact removal, and z-score normalization. Models were trained on epochs of varying durations (1 s, 2.50 s, 5 s) using stratified 5-fold cross-validation, with performance evaluated on a held-out test set (15%). Mixture-of-Experts (MoE) ensembles were constructed using performance-based weighted averaging. Regional classification and spectral analyses identified anatomical and frequency-specific biomarkers.</div></div><div><h3>Results</h3><div>The Mamba model achieved 98.18% accuracy with only 2972 parameters and a training time of 0.09 min at 2.50-second epochs. LSTM (144,578 parameters) reached 95.25% accuracy, while Transformer (38,946 parameters) attained 94.41%. The optimal Mamba+LSTM ensemble achieved 98.46% accuracy (Cohen's κ=0.97, ROC-AUC=99.84%) with only 11 misclassifications from 716 test samples. Regional analysis revealed frontal lobe dominance (76.81% accuracy, 25 channels) with theta-band (4–8 Hz) biomarkers. Spectral analysis confirmed characteristic ASD patterns: elevated delta/theta power, suppressed alpha rhythm, and increased beta/gamma activity. Single-channel analysis identified C5 (left central, 58.80% accuracy) as the most discriminative electrode.</div></div><div><h3>Conclusions</h3><div>Neural sequence models, particularly the parameter-efficient Mamba architecture and the Mamba+LSTM ensemble, demonstrate exceptional performance for EEG-based ASD classification, offering a clinically scalable and objective diagnostic tool. The frontal-central electrode configuration and theta-band biomarkers provide neurophysiologically interpretable features suitable for portable EEG systems and early screening applications.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110962"},"PeriodicalIF":4.9,"publicationDate":"2026-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}