Pub Date: 2026-03-01 | Epub Date: 2026-01-15 | DOI: 10.1016/j.compeleceng.2026.110976
Gabriel Gómez-Ruiz, Jesús Clavijo-Camacho, Reyes Sánchez-Herrera, José M. Andújar
This article evaluates the potential of thermostatically controlled loads (TCLs) as flexible resources to improve power quality, particularly phase unbalance, in low-voltage residential distribution networks while ensuring fair consumer participation. To address both grid-level and social objectives, the adaptive fairness and grid-aware allocation (AFGA) algorithm is proposed. This algorithm integrates cooperative game theory and Nash bargaining principles to jointly optimize phase balancing and consumer utility. The proposed approach dynamically allocates residential consumer flexibility by accounting for phase-level constraints, individual flexibility capacity, and historical participation, thereby preventing the persistent overuse of specific consumers and promoting equitable long-term engagement. Simulation results on a representative residential network with 100 households demonstrate that, with only 20% participation, the AFGA algorithm reduces the unbalance load factor (ULF) to below 10%, achieves a highly equitable distribution of benefits (Gini index = 0.065), and effectively enforces adaptive fairness through penalty-feedback mechanisms. Furthermore, the algorithm completes a full-day simulation in 102 s with only 0.24 MB of peak memory usage. These findings position the AFGA algorithm as an effective and scalable solution for integrating fairness-aware residential flexibility into the operation of low-voltage residential distribution networks.
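The abstract reports a Gini index of 0.065 for the benefit distribution. As a point of reference, below is a minimal sketch of how a Gini coefficient over per-consumer benefits is conventionally computed; the `gini` helper is illustrative, not the paper's implementation.

```python
def gini(benefits):
    """Gini coefficient of a list of non-negative benefit values.

    0.0 means a perfectly equal allocation; values near 1.0 mean a few
    consumers capture most of the benefit.
    """
    xs = sorted(benefits)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard closed form: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# A perfectly equal split scores 0; a fully concentrated one approaches 1.
print(round(gini([10, 10, 10, 10]), 3))  # 0.0
print(round(gini([0, 0, 0, 40]), 3))     # 0.75
```

A value as low as 0.065 therefore indicates that flexibility rewards were spread almost evenly across participating households.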
{"title":"A game-theoretic approach to fair and grid-aware load flexibility allocation in residential distribution networks","authors":"Gabriel Gómez-Ruiz, Jesús Clavijo-Camacho, Reyes Sánchez-Herrera, José M. Andújar","doi":"10.1016/j.compeleceng.2026.110976","DOIUrl":"10.1016/j.compeleceng.2026.110976","url":null,"abstract":"<div><div>This article evaluates the potential of thermostatically controlled loads (TCL) as flexible resources to improve power quality―particularly phase unbalance―in low-voltage residential distribution networks while ensuring fair consumer participation. To address both grid-level and social objectives, the adaptive fairness and grid-aware allocation (AFGA) algorithm is proposed. This algorithm integrates cooperative game theory and Nash bargaining principles to jointly optimize phase balancing and consumer utility. The proposed approach dynamically allocates residential consumer flexibility by accounting for phase-level constraints, individual flexibility capacity, and historical participation, thereby preventing the persistent overuse of specific consumers and promoting equitable long-term engagement. Simulation results on a representative residential network with 100 households demonstrate that, with only 20% participation, the AFGA algorithm reduces the unbalance load factor (ULF) to below 10%, achieves a highly equitable distribution of benefits (Gini index = 0.065), and effectively enforces adaptive fairness through penalty-feedback mechanisms. Furthermore, the algorithm completes a full-day simulation in 102 s with only 0.24 MB of peak memory usage. 
These findings position the AFGA algorithm as an effective and scalable solution for integrating fairness-aware residential flexibility into the operation of low-voltage residential distribution networks.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110976"},"PeriodicalIF":4.9,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-supervised learning (SSL) models are increasingly used in speech processing tasks, where they provide powerful pretrained representations of speech. Most existing methods utilize these models by either fine-tuning them on domain-specific data or using their output representations as input features in conventional ASR systems. However, the relationship between SSL layer representations and the severity level of dysarthric speech remains poorly understood, despite the potential for different layers to capture features that vary in relevance across severity levels. Furthermore, the high dimensionality of these representations, often reaching up to 1024 dimensions, imposes a heavy computational load, highlighting the need for optimized feature representations in downstream ASR and keyword spotting (KWS) tasks. This study proposes a severity-independent approach for dysarthric speech processing using SSL features, investigating three state-of-the-art pretrained models: Wav2Vec2, HuBERT, and Data2Vec. We propose: (1) selecting SSL layers based on severity level to extract the most useful features; (2) a Kaldi-based ASR system that uses an autoencoder to reduce the size of SSL features; and (3) validating the proposed SSL feature optimization in a KWS task. We evaluate the proposed method using a DNN–HMM model in Kaldi on two standard dysarthric speech datasets: TORGO and UAspeech. Our approach shows that selecting severity-specific SSL layers, combined with autoencoder (AE)-based feature optimization, leads to significant improvements over both zero-shot and fine-tuned SSL baselines. On TORGO, our method achieved a WER of 23.12%, outperforming the zero-shot (60.35%) and fine-tuned (40.48%) SSL baselines. On UAspeech, it reached 50.33% WER, surpassing both the fine-tuned (51.04%) and MFCC-based systems (58.67%).
Layer-wise analysis revealed consistent trends: lower layers were more effective for very high-severity speech, while mid-to-upper layers performed better for low/medium-severity cases. Further, in the KWS task, later SSL layers showed the best performance, with our proposed system outperforming the MFCC baseline. These findings highlight the generalization of our proposed method, which combines layer-specific selection and autoencoder-based optimization of SSL features, for dysarthric speech processing tasks.
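WER is the headline metric in the results above. For readers unfamiliar with it, here is a minimal word error rate computation via word-level Levenshtein distance; this is a generic sketch, not the Kaldi scoring pipeline used in the paper.

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein (edit) distance between
    reference and hypothesis, divided by the number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j].
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cost = 0 if r == h else 1
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + cost))  # substitution / match
        prev = cur
    return prev[-1] / max(len(ref), 1)

# One deletion ("the") plus one substitution ("on" -> "off") = 2 edits / 4 words.
print(wer("turn the lamp on", "turn lamp off"))  # 0.5
```

A WER of 23.12% thus means roughly one word in four needed correction relative to the reference transcript.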
{"title":"Role of SSL models: Finetuning and feature optimization for dysarthric speech recognition and keyword spotting","authors":"Paban Sapkota, Hemant Kumar Kathania, Subham Kutum","doi":"10.1016/j.compeleceng.2025.110921","DOIUrl":"10.1016/j.compeleceng.2025.110921","url":null,"abstract":"<div><div>Self-supervised learning (SSL) models are increasingly used in speech processing tasks, where they provide powerful pretrained representations of speech. Most existing methods utilize these models by either fine-tuning them on domain-specific data or using their output representations as input features in conventional ASR systems. However, the relationship between SSL layer representations and the severity level of dysarthric speech remains poorly understood, despite the potential for different layers to capture features that vary in relevance across severity levels. Furthermore, the high dimensionality of these representations, often reaching up to 1024 dimensions, imposes a heavy computational load, highlighting the need for optimized feature representations in downstream ASR and keyword spotting (KWS) tasks. This study proposes a severity-independent approach for dysarthric speech processing using SSL features, investigating three state-of-the-art pretrained models: Wav2Vec2, HuBERT, and Data2Vec. We propose: (1) selecting SSL layers based on severity level to extract the most useful features; (2) a Kaldi-based ASR system, that uses an autoencoder to reduce the size of SSL features; and (3) validating the proposed SSL feature optimization in a KWS task. We evaluate the proposed method using a DNN–HMM model in Kaldi on two standard dysarthric speech datasets: TORGO and UAspeech. Our approach shows that selecting severity-specific SSL layers, combined with autoencoder (AE)-based feature optimization, leads to significant improvements over both zero-shot and fine-tuned SSL baselines. 
On TORGO, our method achieved a WER of 23.12%, outperforming zero-shot (60.35%) and fine-tuned SSL model (40.48%). On UAspeech, it reached 50.33% WER, surpassing both the fine-tuned (51.04%) and MFCC-based systems (58.67%). Layer-wise analysis revealed consistent trends: lower layers were more effective for very high-severity speech, while mid-to-upper layers performed better for low/medium-severity cases. Further, in the KWS task, later SSL layers showed the best performance, with our proposed system outperforming the MFCC baseline. These findings highlight the generalization of our proposed method, which combines layer-specific selection and autoencoder-based optimization of SSL features, for dysarthric speech processing tasks.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110921"},"PeriodicalIF":4.9,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-01 | Epub Date: 2026-01-09 | DOI: 10.1016/j.compeleceng.2026.110950
Rong Zhou
This study conducts cryptanalysis of a Novel Image Cryptosystem based on Latin Squares (NIC-LS). NIC-LS adopts a multi-round encryption structure in which row or column scrambling alternates with diffusion. It leverages properties of Latin squares generated by the Coupled Map Lattice (CML) system to determine scrambling/diffusion selection modes, aiming for enhanced encryption performance. However, all diffusion operations in NIC-LS rely solely on simple modular addition, and this flaw gives rise to an equivalent algorithm for the cryptosystem. When a Differential Attack (DA) is applied to this equivalent scheme, the system degenerates into a linear one: all diffusion effects are eliminated, leaving only the scrambling component. Building on the superposition principle and the concept of a standard orthogonal basis, this study further breaks the equivalent algorithm (and thus NIC-LS) via a Chosen-Ciphertext Attack (CCA). Notably, the attack's computational complexity is extremely low, and some countermeasures are discussed based on the cryptanalysis. Both theoretical analysis and experimental results confirm that the proposed cryptanalysis is effective and practically feasible.
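The core weakness described, diffusion by modular addition only, can be illustrated with a toy scramble-then-add cipher: subtracting two ciphertexts cancels the additive keystream, leaving only a permutation of the plaintext difference. This is a simplified stand-in for the attacked structure, not the actual NIC-LS cryptosystem.

```python
import random

def toy_encrypt(plain, perm, keystream):
    """Toy stand-in for one scramble-then-modular-add round:
    permute the pixel vector, then add a keystream byte mod 256."""
    scrambled = [plain[p] for p in perm]
    return [(s + k) % 256 for s, k in zip(scrambled, keystream)]

rng = random.Random(0)
n = 16
perm = list(range(n)); rng.shuffle(perm)          # secret permutation
keystream = [rng.randrange(256) for _ in range(n)]  # secret diffusion key

p1 = [rng.randrange(256) for _ in range(n)]
p2 = [rng.randrange(256) for _ in range(n)]
c1 = toy_encrypt(p1, perm, keystream)
c2 = toy_encrypt(p2, perm, keystream)

# Differential step: subtracting the ciphertexts cancels the additive
# keystream entirely, leaving only the (still unknown) permutation applied
# to the plaintext difference -- the diffusion layer contributes nothing.
diff_c = [(a - b) % 256 for a, b in zip(c1, c2)]
diff_p_scrambled = [(p1[p] - p2[p]) % 256 for p in perm]
print(diff_c == diff_p_scrambled)  # True
```

Once diffusion is cancelled this way, recovering the permutation reduces to the kind of basis-vector chosen-ciphertext queries the paper describes.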
{"title":"Cryptanalysis of an image encryption algorithm using Latin squares","authors":"Rong Zhou","doi":"10.1016/j.compeleceng.2026.110950","DOIUrl":"10.1016/j.compeleceng.2026.110950","url":null,"abstract":"<div><div>This study conducts cryptanalysis on a Novel Image Cryptosystem based on Latin Squares (NIC-LS). The NIC-LS adopts a multi-round encryption structure, with row or column scrambling alternating with diffusion. It leverages properties of Latin squares generated by the Coupled Map Lattice (CML) system to determine scrambling/diffusion selection modes, aiming for enhanced encryption performance. However, all diffusion operations in NIC-LS rely solely on simple modular addition—this flaw gives rise to an equivalent algorithm for the cryptosystem. When a Differential Attack (DA) is applied to this equivalent scheme, the system degenerates into a linear one: all diffusion effects are eliminated, leaving only the scrambling component. Building on the superposition principle and standard orthogonal basis concept, this study further breaks the equivalent algorithm (and thus NIC-LS) via a Chosen-Ciphertext Attack (CCA). Notably, the attack’s computational complexity is extremely low and some countermeasures are discussed based on the cryptanalysis. 
Both theoretical analysis and experimental results confirm the proposed cryptanalysis is effective and practically feasible.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110950"},"PeriodicalIF":4.9,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-01 | Epub Date: 2026-01-17 | DOI: 10.1016/j.compeleceng.2026.110943
Ming-An Chung, Ting-Lan Lin, Ding-Yuan Chen, Bang-Hao Liu, Kun-Hu Jiang, Yangming Wen, Mohammad Shahid
Image sensors capture image signals in a color filter array (CFA) format. After demosaicking and RGB-to-YUV conversion, YUV 420 subsampling is performed for image/video compression. In recent work, YUV 420 subsampling is handled in one of two schemes: subsampling the chrominance while keeping the luminance values unchanged, or finding optimal luminance values given subsampled chrominance values. In this paper, we extend prior work by reducing the search space to a few Y candidates, based on the observation of multiple intervals in the pixel distortion curve, and by developing more flexible, structured cost functions to enable further optimization of the recovered pixels. The closed-form solution still requires a parameter set for each pixel location; therefore, several methods for reducing complexity are proposed. In comparison to previous methods evaluated on two benchmark datasets, IMAX and SCI, our approach consistently improves image quality (measured in dB) while incurring only minimal increases in computation time (in seconds). Specifically, for the SCI dataset, relative to the Unoptimized Luminance method, we achieve an average CPSNR increase of 3.69 to 7.15 dB, accompanied by an increase in computation time of 12.35 to 13.63 s. In contrast, the Optimized Luminance method yields an average CPSNR improvement of 2.84 to 5.67 dB, with a lower computation time of 0.24 to 3.94 s. For the IMAX dataset, when compared to the Unoptimized Luminance method, we note an average CPSNR enhancement of 1.66 to 4.58 dB, with a corresponding rise in computation time of 7.00 to 8.71 s. Meanwhile, the Optimized Luminance method results in an average CPSNR increase of 0.4 to 3.73 dB, with a modest computation time increase of 2.07 to 2.86 s.
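For context, here is a minimal sketch of the RGB-to-YUV conversion (BT.601-style coefficients, one common convention) and of 4:2:0 chroma subsampling by 2x2 averaging. The paper's optimized luminance/chrominance selection is far more sophisticated than this averaging baseline.

```python
def rgb_to_yuv(r, g, b):
    """Full-range RGB -> YUV using BT.601-style coefficients
    (one common convention; exact constants vary by standard)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    return y, u, v

def subsample_420(chroma_plane):
    """YUV 4:2:0 chroma subsampling: average each 2x2 block into one
    sample. The luminance plane stays at full resolution."""
    h, w = len(chroma_plane), len(chroma_plane[0])
    out = []
    for i in range(0, h, 2):
        row = []
        for j in range(0, w, 2):
            block = (chroma_plane[i][j] + chroma_plane[i][j + 1]
                     + chroma_plane[i + 1][j] + chroma_plane[i + 1][j + 1])
            row.append(block / 4.0)
        out.append(row)
    return out

u_plane = [[100, 104, 200, 200],
           [96, 100, 200, 200]]
print(subsample_420(u_plane))  # [[100.0, 200.0]]
```

The paper's contribution is, in effect, choosing the retained Y and subsampled U/V values jointly so the reconstructed pixels minimize a distortion cost, rather than averaging blindly as above.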
{"title":"Optimization of subsampled chrominance and luminance for color image signals","authors":"Ming-An Chung , Ting-Lan Lin , Ding-Yuan Chen , Bang-Hao Liu , Kun-Hu Jiang , Yangming Wen , Mohammad Shahid","doi":"10.1016/j.compeleceng.2026.110943","DOIUrl":"10.1016/j.compeleceng.2026.110943","url":null,"abstract":"<div><div>The image sensors capture image signals in a color filter array (CFA) format. After demosaicking and RGB-to-YUV conversion, YUV 420 subsampling is performed for image/video compression. In recent work, YUV 420 subsampling is considered in either of two schemes: subsampling the chrominance while keeping the luminance values the same, or finding optimal luminance values given subsampled chrominance values. In this paper, we extended prior work by reducing the search space to a few Y candidates by observing multiple intervals in the pixel distortion curve, and by developing more flexible, structured cost functions to enable further optimization of the recovered pixels. The closed-form solution still requires a parameter set for each pixel location. Therefore, several methods for reducing complexity are proposed. In comparison to previous methods evaluated on two benchmark datasets, IMAX and SCI, our approach consistently improves image quality (measured in dB) while incurring only minimal increases in computation time (in seconds). Specifically, for the SCI dataset, relative to the Unoptimized Luminance method, we achieve an average CPSNR increase of 3.69 to 7.15 dB, accompanied by an increase in computation time of 12.35 to 13.63 s. In contrast, the Optimized Luminance method yields an average CPSNR improvement of 2.84 to 5.67 dB, with a lower computation time of 0.24 to 3.94 s. For the IMAX dataset, when compared to the unoptimized Luminance method, we note an average CPSNR enhancement of 1.66 to 4.58 dB, with a corresponding rise in computation time of 7.00 to 8.71 s. 
Meanwhile, the Optimized Luminance method results in an average CPSNR increase of 0.4 to 3.73 dB, with a modest computation time increase of 2.07 to 2.86 s.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110943"},"PeriodicalIF":4.9,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-01 | Epub Date: 2025-12-29 | DOI: 10.1016/j.compeleceng.2025.110922
Mohamed Lahdeb, Ali Hennache, Bachir Bentouati, M.M.R. Ahmed, Ragab A. El-Sehiemy, M. Elzalik
The optimal power flow (OPF) problem is a highly nonlinear and complex multi-dimensional optimization problem, especially with the increased penetration of uncertain renewable energy sources (RES). In this line, this paper presents the Hybrid Brown-Bear and Hippopotamus Optimization Algorithms with Quasi-Opposition-Based Learning (HBOA-QOBL) to enhance multi-dimensional OPF solutions. The algorithm combines the strengths of the Brown-Bear optimizer, which excels in exploration and adaptive search mechanisms, and the Hippopotamus optimizer, known for its social behavior modeling and localized search strategies. By integrating QOBL, HBOA-QOBL improves exploration through the generation of quasi-opposite solutions, allowing a wider search of the solution space and reducing the risk of premature convergence. Adaptive search mechanisms embedded in HBOA-QOBL enhance exploitation by dynamically adjusting search behaviors during iterative power dispatch tuning, enabling finer tuning of generation schedules and voltage profiles. The effectiveness of the proposed method is evaluated on the IEEE 30-bus, 57-bus, and 118-bus test systems for multiple OPF objectives, including fuel cost minimization, emission reduction, power loss reduction, voltage deviation minimization, reactive power loss reduction, and the voltage stability indicator (L-index). Simulation results indicate faster convergence than conventional techniques, achieving near-optimal solutions within 200 iterations, with a reported standard deviation of 63.8%, demonstrating superior technical and economic performance relative to previous research. Key convergence parameters such as population size, maximum iterations, and learning factor are explicitly tuned to enhance both exploration and exploitation. Simulation results confirm that HBOA-QOBL outperforms conventional optimization techniques in terms of solution quality, convergence speed, and stability, delivering significant improvements in both technical and economic terms.
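Quasi-opposition-based learning has a standard definition: for a candidate x on an interval [lo, hi], a quasi-opposite point is drawn uniformly between the interval centre and the opposite point lo + hi - x. A minimal sketch follows; the function name `quasi_opposite` is ours, and this deliberately omits the hybrid Brown-Bear/Hippopotamus machinery.

```python
import random

def quasi_opposite(x, lo, hi, rng=random):
    """Quasi-opposition-based learning step: sample uniformly between
    the interval centre (lo + hi) / 2 and the opposite point lo + hi - x.
    Evaluating both x and its quasi-opposite widens exploration and
    lowers the risk of premature convergence."""
    centre = (lo + hi) / 2.0
    opposite = lo + hi - x
    a, b = min(centre, opposite), max(centre, opposite)
    return rng.uniform(a, b)

rng = random.Random(42)
x = 2.0
q = quasi_opposite(x, 0.0, 10.0, rng)
# For x = 2 on [0, 10]: opposite = 8, centre = 5, so q lies in [5, 8].
print(5.0 <= q <= 8.0)  # True
```

In an OPF setting, x would be one decision variable (e.g. a generator setpoint) bounded by its operating limits, and the better of x and q survives into the next population.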
{"title":"Hybrid Brown-Bear and Hippopotamus Optimization with Quasi-Opposition-Based Learning for Optimal Power Flow with Renewable Energy Integration","authors":"Mohamed Lahdeb , Ali Hennache , Bachir Bentouati , M.M.R. Ahmed , Ragab A. El-Sehiemy , M. Elzalik","doi":"10.1016/j.compeleceng.2025.110922","DOIUrl":"10.1016/j.compeleceng.2025.110922","url":null,"abstract":"<div><div>The optimal power flow (OPF) problem <strong>is</strong> a highly nonlinear and complex multi-dimension optimization problem, especially with the increased penetration of uncertain renewable energies (RES). In this line, this paper presents the Hybrid Brown-Bear and Hippopotamus Optimization Algorithms with Quasi-Opposition-Based Learning (HBOA-QOBL) to enhance multi-dimension OPF solution. The algorithm combines the strengths of Brown-Bear optimizer, which excels in exploration and adaptive search mechanisms, and the Hippopotamus optimizer, known for its social behavior modeling and localized search strategies. By integrating QOBL, the HBOA-QOBL improves exploration through the generation of quasi-opposite solutions, allowing for a wider search of the solution space and reducing the risk of premature convergence. Adaptive search mechanisms embedded in HBOA-QOBL enhance exploitation by dynamically adjusting search behaviors during iterative power dispatch tuning, enabling improved fine-tuning of generation schedules and voltage profiles. The effectiveness of the proposed method is evaluated on the IEEE 30-bus, 57-bus, and 118-bus test systems for multiple dimension OPF objectives, including fuel cost minimization, emission reduction, power loss reduction, voltage deviation minimization, reactive power loss reduction and the voltage stability indicator (L-index). 
Simulation results indicate faster convergence compared to conventional techniques, achieving near-optimal solutions within 200 iterations, with a standard deviation of 63.8%, demonstrating superior technical and economic performance relative to previous research. Key convergence parameters such as population size, maximum iterations, and learning factor are explicitly tuned to enhance both exploration and exploitation. Simulation results confirm that HBOA-QOBL outperforms conventional optimization techniques in terms of solution quality, convergence speed, and stability, establishing significant improvement in the technical and economic issues.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110922"},"PeriodicalIF":4.9,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145885934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-01 | Epub Date: 2026-01-07 | DOI: 10.1016/j.compeleceng.2025.110924
Amara Miloudi, Abdelkader Laouid, Ahcène Bounceur, Mostefa Kara, Mohammed Mounir Bouhamed, Mohammad Hamoudeh, Insaf Kraidia
Federated Learning (FL) has emerged as a transformative approach to collaborative model training in healthcare, enabling multiple institutions to develop robust Machine Learning models without compromising sensitive patient data. This review examines recent advances, applications, and challenges associated with FL in healthcare, focusing on its potential to enhance data security and privacy through the aggregation of decentralized models. A comprehensive literature review was conducted using databases including PubMed, Google Scholar, and Scopus, identifying 316 relevant publications, from which 23 were selected for detailed analysis. The findings highlight the applications of FL in critical healthcare areas, including oncology, infectious diseases, medical imaging, drug development, and personalized medicine. Although FL offers significant opportunities for precision medicine by managing fragmented and heterogeneous datasets, substantial challenges remain, particularly regarding data standardization, model convergence, and communication efficiency. This review also addresses crucial aspects such as privacy-preserving techniques, ethical compliance, and system scalability, emphasizing the need for interdisciplinary solutions. Ultimately, FL demonstrates significant potential to revolutionize healthcare by improving patient outcomes and accelerating medical research while maintaining strict regulatory compliance. Future research directions are discussed to overcome current barriers and advance the broader adoption of FL in healthcare applications.
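Most FL systems of the kind reviewed build on some variant of federated averaging, in which institutions share model parameters rather than raw patient records. A minimal sketch of size-weighted parameter averaging follows (illustrative only, with plain Python lists standing in for model tensors):

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: merge per-client model parameters into a
    global model, weighting each client by its local dataset size.
    Only parameters cross institutional boundaries, never raw data."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    merged = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            merged[i] += w * size / total
    return merged

# Two hospitals holding 300 and 100 local samples, two model parameters:
global_model = fed_avg([[1.0, 2.0], [5.0, 6.0]], [300, 100])
print(global_model)  # [2.0, 3.0]
```

The challenges the review highlights (heterogeneous data, convergence, communication cost) all arise from iterating this aggregate-and-redistribute loop across many unequal institutions.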
{"title":"Federated learning in healthcare: Recent progress and challenges","authors":"Amara Miloudi , Abdelkader Laouid , Ahcène Bounceur , Mostefa Kara , Mohammed Mounir Bouhamed , Mohammad Hamoudeh , Insaf Kraidia","doi":"10.1016/j.compeleceng.2025.110924","DOIUrl":"10.1016/j.compeleceng.2025.110924","url":null,"abstract":"<div><div>Federated Learning (FL) emerged as a transformative approach to collaborative model training in healthcare, enabling multiple institutions to develop robust Machine Learning models without compromising sensitive patient data. This review examines recent advances, applications, and challenges associated with FL in healthcare, focusing on its potential to enhance data security and privacy through the aggregation of decentralized models. A comprehensive literature review was conducted using databases including PubMed, Google Scholar, and Scopus, identifying 316 relevant publications, from which 23 were selected for detailed analysis. The findings highlight the applications of FL in critical healthcare areas, including oncology, infectious diseases, medical imaging, drug development, and personalized medicine. Although FL offers significant opportunities for precision medicine by managing fragmented and heterogeneous datasets, substantial challenges remain, particularly regarding data standardization, model convergence, and communication efficiency. This review also addresses crucial aspects such as privacy-preserving techniques, ethical compliance, and system scalability, emphasizing the need for interdisciplinary solutions. Ultimately, FL demonstrates significant potential to revolutionize healthcare by improving patient outcomes and accelerating medical research while maintaining strict regulatory compliance. 
Future research directions are discussed to overcome current barriers and advance the broader adoption of FL in healthcare applications.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110924"},"PeriodicalIF":4.9,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-01 | Epub Date: 2026-01-09 | DOI: 10.1016/j.compeleceng.2025.110932
Marriam Liaqat, Ali Raza, Muhammad Sajid Iqbal, Muhammad Adnan, Usman Abbasi, Maqsood Khan
The super smart grid (SSG) is a revolutionary grid that offers significant fossil fuel elimination, emissions reduction, renewable energy integration, and demand fulfillment. However, such mega grids remain at the strategic analysis stage due to the involvement of multiple countries and the associated complexities. Although the existing literature has performed different types of analysis for different SSGs around the world, there is a lack of studies on the strategic analysis of the SSG planned by the South Asian Association for Regional Cooperation (SAARC). For the first time, this review paper presents a hybrid PESTEL-SWOT analysis for the futuristic SAARC SSG and offers important insights and strategies for its implementation. For instance, a practical strategy towards the emergence of the SAARC SSG is the encouragement of peer-to-peer (P2P) trading at a very basic level through the hierarchical integration of thousands of prosumers, prosumer communities, and national grids.
{"title":"Empowering SAARC's energy future: A PESTEL-SWOT roadmap for super smart grids and P2P energy trading","authors":"Marriam Liaqat , Ali Raza , Muhammad Sajid Iqbal , Muhammad Adnan , Usman Abbasi , Maqsood Khan","doi":"10.1016/j.compeleceng.2025.110932","DOIUrl":"10.1016/j.compeleceng.2025.110932","url":null,"abstract":"<div><div>The super smart grid (SSG) is a revolutionary grid which offers significant fossil fuel elimination, emissions reduction, renewable energy integration, and demand fulfillment. However, such mega grids are in the strategic analysis stage due to the involvement of multiple countries and complexities. Although the existing literature has performed different types of analysis for the different SSGs around the world, there is a lack of studies on the strategic analysis of the SSG planned by the South Asian Association for Regional Cooperation (SAARC). For the first time, this review paper presents the hybrid PESTEL-SWOT analysis for the futuristic SAARC SSG. This paper offers important insights and strategies for the implementation of the futuristic SAARC SSG. For instance, a practical strategy towards the emergence of the SAARC SSG is the encouragement of the P2P trading at a very basic level through the hierarchical integration of thousands of prosumers, prosumer communities, and national grids.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110932"},"PeriodicalIF":4.9,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-01 | Epub Date: 2025-12-29 | DOI: 10.1016/j.compeleceng.2025.110918
Ande Bhargav, Mohamed Asan Basiri M.
Reversible digital image watermarking methods are crucial for embedding authentication information in applications such as medical imaging and military communication. Reversible data hiding (RDH) techniques embed auxiliary data or necessitate separate transmission of location maps to recover the data. These practices reduce the imperceptibility of the stego image and demand higher bandwidth. To overcome these limitations, this paper proposes histogram-based pixel sorting (HBPS) in Algorithm-I, which directly embeds data into the least significant bits (LSBs), improving the peak signal-to-noise ratio (PSNR) by 22.29%. The experimental results validate the superior visual quality of the recovered cover image, with average PSNR exceeding 50 dB. Algorithms-II and III incorporate preprocessing of the cover image using a Laplacian kernel and the proposed triplet linear pixel transformation (TLPT), respectively, to preserve the visual integrity of the cover image. The observed PSNR and latency gains over existing methods are statistically significant at the 95% confidence level using t-tests with Bonferroni correction. The preprocessing technique in Algorithm-IV refines the pixel value search algorithm (PVSA) with a sharpening filter to reduce latency by 52.82%. The multi-core implementation of PVSA to reduce latency is shown in Algorithm-V.
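As background for the LSB-embedding and PSNR figures above, here is a minimal sketch of plain LSB embedding and PSNR measurement. This is the textbook baseline, not the proposed HBPS/TLPT/PVSA algorithms.

```python
import math

def embed_lsb(pixels, bits):
    """Embed one payload bit per pixel into the least significant bit."""
    stego = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return stego + pixels[len(bits):]  # remaining pixels untouched

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits payload bits from the LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

def psnr(original, modified):
    """Peak signal-to-noise ratio (dB) for 8-bit pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, modified)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)

cover = [100, 101, 102, 103, 104, 105]
payload = [1, 0, 1, 1]
stego = embed_lsb(cover, payload)
print(extract_lsb(stego, 4))       # [1, 0, 1, 1]
print(psnr(cover, stego) > 50.0)   # True: per-pixel change is at most 1
```

Because each pixel changes by at most 1, PSNR stays high (above 50 dB even here); the paper's sorting and preprocessing steps aim to push that imperceptibility further while keeping the scheme reversible.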
{"title":"Digital image watermarking using histogram based pixel sorting and pixel value search techniques","authors":"Ande Bhargav, Mohamed Asan Basiri M.","doi":"10.1016/j.compeleceng.2025.110918","DOIUrl":"10.1016/j.compeleceng.2025.110918","url":null,"abstract":"<div><div>Reversible digital image watermarking methods are crucial for embedding authentication information in medical imaging, military communication, and etc. The reversible data hiding (RDH) techniques embed auxiliary data or necessitate separate transmission of location maps to recover the data. These practices reduce the imperceptibility of the stegano image and demand higher bandwidth. To overcome these limitations, this paper proposes histogram-based pixel sorting (HBPS) in Algorithm-I, which directly embeds data into the least significant bits (LSBs), improving the Peak Signal-to-Noise Ratio (PSNR) by 22.29%. The experimental results validate the superior visual quality of the recovered cover image with average PSNR exceeding 50 dB. Algorithms-II and III incorporate preprocessing of the cover image using Laplacian kernel and the proposed triplet linear pixel transformation (TLPT), respectively to preserve the visual integrity of the cover image. The observed PSNR and latency gains compared to existing methods are statistically significant at the 95% confidence level using t-tests with Bonferroni correction. The preprocessing technique in Algorithm-IV refines the pixel value search algorithm (PVSA) with a sharpening filter to reduce latency by 52.82%. 
The multi-core implementation of PVSA to reduce the latency is shown in Algorithm-V.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110918"},"PeriodicalIF":4.9,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145885935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
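As background for the abstract above: the LSB embedding it mentions can be sketched generically as overwriting each cover pixel's least significant bit with one payload bit. This is a minimal, hypothetical illustration of plain LSB hiding plus the PSNR quality metric, not the authors' HBPS algorithm; all function names are assumptions.

```python
import math

def embed_lsb(pixels, bits):
    """Embed a bit sequence into the least significant bits of pixel values.

    Generic LSB illustration (not the paper's HBPS algorithm): each
    pixel's LSB is overwritten with one payload bit.
    """
    if len(bits) > len(pixels):
        raise ValueError("payload exceeds cover capacity")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit  # clear LSB, then set payload bit
    return stego

def extract_lsb(stego, n_bits):
    """Recover the first n_bits payload bits from the stego pixels."""
    return [p & 1 for p in stego[:n_bits]]

def psnr(cover, stego, max_val=255):
    """Peak Signal-to-Noise Ratio in dB between cover and stego pixels."""
    mse = sum((a - b) ** 2 for a, b in zip(cover, stego)) / len(cover)
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)
```

Because at most one bit per pixel changes (by ±1 in value), plain LSB embedding keeps distortion small, which is why methods of this family can reach PSNR values above 50 dB on typical covers.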
Pub Date : 2026-03-01Epub Date: 2026-01-17DOI: 10.1016/j.compeleceng.2026.110977
Renjin, Liyunhe, Gongshenggao, Biantao
A microgrid is an advanced infrastructure that offers increased sustainability, dependability, and local energy autonomy by incorporating renewable and hybrid energy sources into the utility system. However, uncertainties arising from the intermittent nature of renewable sources, fluctuating loads, and dynamic electricity market prices present significant challenges for efficient operation. Traditional heuristic-based energy management systems (EMS) rely on forecasted data but often lack precision and adaptability under real-world variability. To address these limitations, this research proposes a novel Fuzzy Logic Controller-based EMS (FLC-EMS) for optimizing microgrid performance. Unlike rigid rule-based or computationally intensive linear programming (LP) methods, the proposed FLC-EMS combines intelligent decision-making with responsiveness and cost-effectiveness. Simulation results demonstrate that the FLC-EMS outperforms both heuristic and LP-based EMS strategies. Specifically, it achieves cost savings of approximately 8.1% on clear days and 16.6% on cloudy days compared to heuristic methods, while offering additional savings of 1.6–5.5% over LP-based optimization. Furthermore, FLC-EMS reduces grid energy usage and effectively manages state-of-charge (SoC) variations, resulting in enhanced utilization of renewable resources and lower reliance on grid power. The integrated microgrid model and EMS framework developed in this study serve as a robust platform for smart grid applications, offering scalability, real-time adaptability, and improved consumer economics. This work positions the FLC-EMS as a promising candidate for advanced microgrid control, paving the way for resilient and intelligent next-generation power systems.
{"title":"A hybrid fuzzy logic-based energy management strategy for grid-connected photovoltaic microgrids with energy storage optimization","authors":"Renjin , Liyunhe , Gongshenggao , Biantao","doi":"10.1016/j.compeleceng.2026.110977","DOIUrl":"10.1016/j.compeleceng.2026.110977","url":null,"abstract":"<div><div>A microgrid is an advanced infrastructure that offers increased sustainability, dependability, and local energy autonomy by incorporating renewable and hybrid energy sources into the utility system. However, uncertainties arising from the intermittent nature of renewable sources, fluctuating loads, and dynamic electricity market prices present significant challenges for efficient operation. Traditional heuristic-based energy management systems (EMS) rely on forecasted data but often lack precision and adaptability under real-world variability. To address these limitations, this research proposes a novel Fuzzy Logic Controller-based EMS (FLC-EMS) for optimizing microgrid performance. Unlike rigid rule-based or computationally intensive linear programming (LP) methods, the proposed FLC-EMS combines intelligent decision-making with responsiveness and cost-effectiveness. Simulation results demonstrate that the FLC-EMS outperforms both heuristic and LP-based EMS strategies. Specifically, it achieves cost savings of approximately 8.1% on clear days and 16.6% on cloudy days compared to heuristic methods, while offering additional savings of 1.6–5.5% over LP-based optimization. Furthermore, FLC-EMS reduces grid energy usage and effectively manages state-of-charge (SoC) variations, resulting in enhanced utilization of renewable resources and lower reliance on grid power. The integrated microgrid model and EMS framework developed in this study serve as a robust platform for smart grid applications, offering scalability, real-time adaptability, and improved consumer economics. 
This work positions the FLC-EMS as a promising candidate for advanced microgrid control, paving the way for resilient and intelligent next-generation power systems.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110977"},"PeriodicalIF":4.9,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
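As background for the abstract above: a fuzzy-logic EMS maps crisp inputs (e.g. battery state of charge and electricity price) through membership functions and a rule base to a control action. The sketch below is a hypothetical two-input Mamdani-style controller with weighted-average defuzzification; the paper's actual inputs, membership functions, and rule base are not published here, so every rule and breakpoint is an assumption.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def flc_battery_action(soc, price):
    """Return a normalized battery power setpoint in [-1, 1]
    (negative = charge, positive = discharge) from state of charge
    and electricity price, both normalized to [0, 1].

    Hypothetical rule base for illustration only.
    """
    soc_low, soc_high = tri(soc, -0.5, 0.0, 0.6), tri(soc, 0.4, 1.0, 1.5)
    price_low, price_high = tri(price, -0.5, 0.0, 0.6), tri(price, 0.4, 1.0, 1.5)
    # Each rule: (firing strength via min, consequent setpoint).
    rules = [
        (min(soc_low, price_low), -1.0),   # cheap power, empty battery -> charge
        (min(soc_high, price_high), 1.0),  # dear power, full battery -> discharge
        (min(soc_high, price_low), 0.0),   # cheap power, full battery -> idle
        (min(soc_low, price_high), 0.0),   # dear power, empty battery -> idle
    ]
    total = sum(w for w, _ in rules)
    # Weighted-average defuzzification over the fired rules.
    return sum(w * out for w, out in rules) / total if total else 0.0
```

In contrast to an LP formulation, such a controller needs no price or load forecast: it re-evaluates the rule base at each step from the current measurements, which is the responsiveness the abstract credits to the FLC-EMS.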
Pub Date : 2026-03-01Epub Date: 2026-01-11DOI: 10.1016/j.compeleceng.2026.110961
Fei Wu, Jiahuan Lu, Hao Jin, Yibo Song, Guangwei Gao, Xiao-Yuan Jing
Federated learning (FL) allows multiple parties to collectively train deep learning models without the need to disclose their local data. The data distributions among the parties are usually non-independently and identically distributed (non-IID), and the class imbalance problem often exists both locally and globally, which is a main challenge of FL. Although several FL methods have been proposed to address this issue, there is still considerable room to improve image classification performance with deep learning models. In addition, under the non-IID setting, how to secure FL methods against attacks by malicious clients or central servers has not been well researched. We develop a novel decentralized FL approach in this paper, namely Blockchain-based Federated learning with Metric and Imbalanced Learning (BFMIL). The triplet loss is introduced to promote the consistency of feature representations between the client model and the server model. To address the class imbalance problem, a cost-sensitive semantic discrimination loss is designed to fully exploit the discriminative information, and the data in each party is divided into majority classes and minority classes for unequal training. To mitigate malicious attacks, we utilize the blockchain to store the local updates and the global model, and a novel voting mechanism is used to select parties with better model parameters for aggregation in each round of FL. The effectiveness of BFMIL is demonstrated by experiments conducted on four imbalanced datasets.
{"title":"Blockchain-based federated learning with metric and imbalanced learning for visual classification","authors":"Fei Wu , Jiahuan Lu , Hao Jin , Yibo Song , Guangwei Gao , Xiao-Yuan Jing","doi":"10.1016/j.compeleceng.2026.110961","DOIUrl":"10.1016/j.compeleceng.2026.110961","url":null,"abstract":"<div><div>Federated learning (FL) allows multiple parties to collectively train deep learning models without the need to disclose their local data. The data distributions among various parties are usually non-independently and identically distributed (non-IID), and simultaneously the class imbalance problem often exits locally and globally, which is the main challenge of FL. Although some FL works have been presented aiming to solve this issue, there still exist much room to enhance the image classification effect by using deep learning models. In addition, under the non-IID setting, how to ensure the security of FL methods against the attack of malicious clients or central servers has not been well researched. We develop a novel decentralized FL approach in this paper, namely Blockchain-based Federated learning with Metric and Imbalanced Learning (BFMIL). The triplet loss is introduced to promote the consistency of feature representations between the client model and server model. To address the class imbalance problem, a cost-sensitive semantic discrimination loss is designed to fully explore the discriminative information, and data in each party is divided into the majority classes and the minority classes for unequal training. To reduce malicious attack, we utilize the blockchain to store the local update and the global model, and a novel voting mechanism is used to select parties with better model parameters for aggregation in each round of FL. 
The effectiveness of BFMIL is demonstrated by experiments conducted on four imbalanced datasets.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110961"},"PeriodicalIF":4.9,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
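As background for the abstract above: the triplet loss it introduces for aligning client and server feature representations has a standard form, sketched below in pure Python with squared Euclidean distances. This is the generic loss, not BFMIL's full objective; the feature dimensions and margin value are illustrative.

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: pull the anchor toward the positive and push
    it away from the negative by at least `margin`, using squared
    Euclidean distances. Returns 0 once the margin is satisfied.
    """
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    return max(d_pos - d_neg + margin, 0.0)
```

Applied as the abstract describes, the anchor and positive would be features of the same sample from the client and server models, so minimizing the loss drives the two models toward consistent representations.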