Pub Date: 2026-03-01 | Epub Date: 2026-01-10 | DOI: 10.1016/j.compeleceng.2026.110949
Rajakumar Ponnumani, Nisha Vasudeva, Thenmozhi Elumalai, Prabu Kaliyaperumal, Balamurugan Balusamy, Francesco Benedetto
The rapid proliferation of Internet of Things (IoT) devices in cloud environments has led to an expanded attack surface and increased susceptibility to diverse and evolving cyber threats. This study proposes a robust, multi-stage hybrid intrusion detection framework designed to address the challenges of high-dimensional data, class imbalance, and dynamic traffic in IoT ecosystems. The framework integrates a Variational AutoEncoder (VAE) for latent feature compression, an Isolation Forest (IF) for unsupervised anomaly detection, and a Graph Attention Network (GAT) for relational modeling and multi-class classification. The CIC IoT-DIAD 2024 dataset is utilized to evaluate performance across multiple attack categories. The VAE extracts compact latent representations, enabling effective anomaly detection through the IF. Detected anomalies are then structured into graph topologies and classified by the GAT based on node-level features and inter-node relations. Experimental results demonstrate superior detection performance with an overall accuracy of 99.08% and an F1-score of 98.03%, outperforming traditional and deep learning baselines. The proposed system exhibits strong scalability, generalization, and adaptability to dynamic IoT-cloud threat landscapes. Furthermore, its graph-based reasoning enhances interpretability and supports actionable insights for real-time threat response. Overall, this framework establishes a practical pathway toward intelligent, adaptive, and interpretable intrusion diagnosis in next-generation IoT-cloud ecosystems.
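As a rough illustration of the staged detect-then-classify idea (not the authors' implementation), the sketch below compresses synthetic flow features into a low-dimensional latent space and flags outliers with an Isolation Forest. PCA stands in for the VAE encoder, the data is invented, and the GAT classification stage is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "flow features": 500 benign samples plus 20 far-off attack samples
benign = rng.normal(0.0, 1.0, size=(500, 32))
attacks = rng.normal(6.0, 1.0, size=(20, 32))
X = np.vstack([benign, attacks])

# Stage 1 - latent compression (PCA is a lightweight stand-in for the VAE encoder)
latent = PCA(n_components=8, random_state=0).fit_transform(X)

# Stage 2 - unsupervised anomaly flagging on the latent representations
iso = IsolationForest(contamination=0.05, random_state=0).fit(latent)
flags = iso.predict(latent)                 # -1 = anomaly, +1 = normal
anomaly_idx = np.where(flags == -1)[0]      # indices handed to the classifier stage
print(len(anomaly_idx))
```

In the paper's pipeline the flagged samples would then be assembled into a graph and labeled by the GAT.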
Title: A multi-stage framework for scalable and context-aware intrusion detection in IoT-cloud systems using deep latent modeling and graph-based attack classification. Computers & Electrical Engineering, vol. 131, Article 110949.
Background
Autism Spectrum Disorder (ASD) affects approximately 1% of the global child population, yet current gold-standard diagnostic methods remain time-intensive and expertise-dependent. Electroencephalography (EEG) offers an objective and scalable approach for neurophysiological measurement, facilitating early detection.
Methods
This study evaluated three neural sequence architectures, Long Short-Term Memory (LSTM), Transformer, and Mamba (Selective State Space Model), for ASD classification using 47-channel, 150-second resting-state EEG recordings from 56 adults (28 with ASD, 28 controls) from the University of Sheffield dataset. Data were preprocessed using MNE-Python with band-pass filtering (0.50–50 Hz), Independent Component Analysis (ICA) artifact removal, and z-score normalization. Models were trained on epochs of varying durations (1 s, 2.50 s, 5 s) using stratified 5-fold cross-validation, with performance evaluated on a held-out test set (15%). Mixture-of-Experts (MoE) ensembles were constructed using performance-based weighted averaging. Regional classification and spectral analyses identified anatomical and frequency-specific biomarkers.
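The performance-based weighted averaging used to build such ensembles can be sketched as follows; the validation accuracies and probability tables below are invented for illustration, not taken from the study.

```python
import numpy as np

# Validation accuracies of the base models (illustrative numbers only)
val_acc = {"mamba": 0.98, "lstm": 0.95}
w = np.array(list(val_acc.values()))
w = w / w.sum()                          # performance-based ensemble weights

# Per-class probabilities from each model for 4 test epochs (made up)
p_mamba = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])
p_lstm  = np.array([[0.8, 0.2], [0.4, 0.6], [0.7, 0.3], [0.1, 0.9]])

p_ens = w[0] * p_mamba + w[1] * p_lstm   # weighted-average ensemble
pred = p_ens.argmax(axis=1)              # final class per epoch
print(pred)
```

The better-performing model thus gets a proportionally larger say in each prediction.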
Results
The Mamba model achieved 98.18% accuracy with only 2972 parameters and a training time of 0.09 min at 2.50-second epochs. LSTM (144,578 parameters) reached 95.25% accuracy, while Transformer (38,946 parameters) attained 94.41%. The optimal Mamba+LSTM ensemble achieved 98.46% accuracy (Cohen's κ=0.97, ROC-AUC=99.84%) with only 11 misclassifications from 716 test samples. Regional analysis revealed frontal lobe dominance (76.81% accuracy, 25 channels) with theta-band (4–8 Hz) biomarkers. Spectral analysis confirmed characteristic ASD patterns: elevated delta/theta power, suppressed alpha rhythm, and increased beta/gamma activity. Single-channel analysis identified C5 (left central, 58.80% accuracy) as the most discriminative electrode.
Conclusions
Neural sequence models, particularly the parameter-efficient Mamba architecture and the Mamba+LSTM ensemble, demonstrate exceptional performance for EEG-based ASD classification, offering a clinically scalable and objective diagnostic tool. The frontal-central electrode configuration and theta-band biomarkers provide neurophysiologically interpretable features suitable for portable EEG systems and early screening applications.
Title: Quantitative EEG-based autism spectrum disorder detection using neural sequence models. Authors: Majid Nour, Ümit Şentürk, Alperen Akgül, Kemal Polat. DOI: 10.1016/j.compeleceng.2026.110962. Computers & Electrical Engineering, vol. 131, Article 110962. Pub Date: 2026-03-01.
Recent developments in computational intelligence have produced a huge volume of multimodal data across different digital platforms. This data is a rich source of contextual, sentimental, and emotional information. Multimodal sentiment analysis (MMSA) is the process of inferring sentiments from multimodal data. MMSA has improved the effectiveness and accuracy of sentiment analysis by integrating heterogeneous modalities. However, combining multiple modalities raises several issues and challenges, such as high complexity, modality fusion, lack of explainability, and temporal synchronization. This paper presents a review of MMSA, discussing data modalities, fusion approaches, and open issues and challenges. It also presents a statistical analysis and an overview of the datasets and evaluation metrics used in the reviewed papers. Moreover, it identifies several future research opportunities for advancing MMSA. The article is intended to benefit researchers working in this field.
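One fusion strategy commonly covered in such reviews, decision-level (late) fusion, can be illustrated with a toy example: each modality produces its own polarity score and the final sentiment is a weighted combination. The scores, weights, and thresholds below are invented.

```python
# Late (decision-level) fusion: each modality yields a sentiment score in
# [-1, 1]; a weighted sum gives the final polarity. All numbers illustrative.
scores = {"text": 0.6, "audio": -0.1, "video": 0.3}
weights = {"text": 0.5, "audio": 0.2, "video": 0.3}

fused = sum(weights[m] * scores[m] for m in scores)
label = "positive" if fused > 0.05 else "negative" if fused < -0.05 else "neutral"
print(round(fused, 3), label)
```

Early fusion would instead concatenate modality features before a single classifier; the explainability and synchronization issues the review discusses arise in both settings.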
Title: A review of multimodal sentiment analysis: Taxonomy, issues, challenges, and future perspectives. Authors: Khalid Anwar, Shreya, Meghna Sharma, Kritika Saanvi. DOI: 10.1016/j.compeleceng.2026.110959. Computers & Electrical Engineering, vol. 131, Article 110959. Pub Date: 2026-03-01.
Pub Date: 2026-03-01 | Epub Date: 2026-01-17 | DOI: 10.1016/j.compeleceng.2026.110946
Ahmed Reda Mohamed, Abdulaziz Al-Khulaifi, Muneer A. Al Absi
This paper presents a high-efficiency complementary metal–oxide–semiconductor (CMOS) radio-frequency energy harvesting rectifier based on a novel three-phase architecture for self-powered Internet of Things nodes and implantable biomedical devices. The proposed architecture routes the received radio-frequency signal into three equal-amplitude paths with phase shifts of 0°, 120°, and 240°. It enables time-interleaved parallel rectification, thereby improving power conversion efficiency (PCE) and output voltage stability. Implemented in a 180 nm CMOS technology, the rectifier occupies a compact silicon area of 47.88 μm × 88.8 μm and operates at 920 MHz. Simulation results demonstrate a peak PCE of 81% at an input power of −25.8 dBm, a dynamic range of 21 dB, and a sensitivity of −10.5 dBm, delivering a regulated 1 V output across a 100 kΩ load. The effects of practical parasitic components, including bond wires, pads, and printed circuit board traces, are incorporated into the design of the input matching network, resulting in a reflection coefficient of approximately −20 dB at the operating frequency. Furthermore, statistical Monte Carlo and process–voltage–temperature analyses are performed to assess post-fabrication robustness. Compared with conventional single-phase rectifiers, the proposed three-phase architecture achieves higher efficiency and lower output voltage ripple for low-power energy-harvesting applications.
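For context, dBm figures convert to absolute power via P[W] = 1 mW · 10^(dBm/10). The helper below applies this arithmetic to the dBm operating points quoted in the abstract; the microwatt values are pure unit conversion, not additional measurements from the paper.

```python
def dbm_to_watts(p_dbm: float) -> float:
    """Convert power in dBm to watts: P[W] = 1e-3 * 10**(dBm / 10)."""
    return 1e-3 * 10 ** (p_dbm / 10)

# Operating points quoted in the abstract
peak_pce_input = dbm_to_watts(-25.8)   # input power at the 81% PCE peak
sensitivity    = dbm_to_watts(-10.5)   # sensitivity, expressed in watts
print(f"{peak_pce_input * 1e6:.2f} uW, {sensitivity * 1e6:.1f} uW")
```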
Title: A high-efficiency three-phase CMOS RF–DC rectifier for low-power IoT applications. Computers & Electrical Engineering, vol. 131, Article 110946.
Pub Date: 2026-03-01 | Epub Date: 2026-01-12 | DOI: 10.1016/j.compeleceng.2025.110925
Sai Zhang, Wu Le, Zhen-Hong Jia, Hao Wu
Existing time series similarity measures are often difficult to apply to large-scale datasets due to their high computational complexity. Some solutions that pursue linear complexity usually come at the expense of fine-grained analysis of sequence dynamics, resulting in insufficient discriminative ability in complex scenarios. In this paper, we propose a multi-feature fusion algorithm that can achieve a fine-grained measure of sequence similarity while maintaining linear complexity. First, this paper introduces a novel subsequence trend encoding mechanism, which provides a new perspective beyond the traditional structural features for similarity judgment by quantifying the dynamic direction within the subsequence. Second, the algorithm comprehensively evaluates candidate subsequences from both complexity and trend perspectives, and forms a more robust distance metric by weighted fusion of the two features, thus effectively reducing the misjudgments that a single perspective may cause. Experimental results on 70 UCR benchmark datasets validate our approach, which not only achieves the #1 average rank in classification accuracy among 17 state-of-the-art algorithms but also demonstrates exceptional efficiency, proving to be orders of magnitude faster in single sequence prediction than many traditional, computationally intensive distance measures.
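A minimal sketch of how a complexity term and a trend-encoding term might be fused into one distance is given below. This is an illustrative formulation under assumed definitions (the complexity estimate follows the common "line-length" form), not the paper's exact algorithm or weighting.

```python
import numpy as np

def complexity(x):
    # Complexity estimate: length of the line the series traces out
    return np.sqrt(np.sum(np.diff(x) ** 2))

def trend_code(x):
    # Encode the direction of movement at each step: -1 down, 0 flat, +1 up
    return np.sign(np.diff(x))

def fused_distance(x, y, alpha=0.5):
    """Weighted fusion of a complexity term and a trend-mismatch term.

    Both terms are O(n), so the overall measure stays linear in length."""
    d_complex = abs(complexity(x) - complexity(y))
    d_trend = np.mean(trend_code(x) != trend_code(y))
    return alpha * d_complex + (1 - alpha) * d_trend

a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])   # up-up-down-down
b = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # monotone up, same complexity as a
c = np.array([0.1, 1.1, 2.1, 1.1, 0.1])   # same shape as a, shifted
print(fused_distance(a, b), fused_distance(a, c))
```

Series a and b have identical complexity, so a complexity-only measure cannot separate them; the trend term does, which is the kind of misjudgment the fusion is meant to avoid.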
Title: A multi-feature distance measure for time series classification. Computers & Electrical Engineering, vol. 131, Article 110925.
Pub Date: 2026-03-01 | Epub Date: 2026-01-15 | DOI: 10.1016/j.compeleceng.2026.110972
Imtiyaz Ahmad, Vibhav Prakash Singh, Manoj Madhava Gore
Diabetic Retinopathy (DR) is one of the leading causes of vision impairment and blindness globally, necessitating early and accurate detection for timely clinical intervention. This paper proposes NGCF-RVFL, a novel Computer-aided-diagnosis system for multi-grade DR detection from retinal fundus images. The system begins with an enhanced preprocessing pipeline that includes median filtering, Gaussian filtering, and Contrast-limited adaptive histogram equalization to reduce noise and improve contrast of the fundus images. Next, we introduce an adaptive image augmentation technique to address the issue of class imbalance: minority-class samples are augmented adaptively to match the size of the majority class. After that, we propose a Next Generation Convolutional Feature (NGCF) based on the fine-tuned ConvNeXt architecture, consisting of a hierarchical design with four feature extraction stages utilizing depthwise separable convolutions. The NGCF feature effectively encodes intricate retinal structures and disease patterns crucial for accurate DR grading. Further, discriminative analysis with Principal Component Analysis confirms the significance and effectiveness of the extracted NGCF feature in representing relevant retinal information. Furthermore, a lightweight network, Random Vector Functional Link (RVFL), is employed to evaluate the grade-wise detection performance of the proposed NGCF feature. Unlike traditional iterative learning models, the RVFL utilizes a single-pass training mechanism, significantly reducing computation time while maintaining high detection performance. Finally, we evaluate the effectiveness and detection performance of the NGCF feature on other machine learning classifiers such as Support vector machine, Multilayer perceptron, Random forest, and Decision tree.
Comprehensive experiments on a benchmark dataset demonstrate that NGCF-RVFL achieves competitive scores across all DR grades with minimal training time, outperforming the state-of-the-art approaches.
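The single-pass RVFL training the system relies on can be sketched in a few lines: random hidden weights stay fixed, inputs are linked directly to the output layer, and only the output weights are solved in closed form via ridge regression. The data here is synthetic and the configuration generic, not the authors' tuned setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit_predict(X_tr, y_tr, X_te, hidden=64, lam=1e-3):
    """Random Vector Functional Link: random hidden layer + direct input
    links; output weights come from one ridge solve (no backprop)."""
    W = rng.normal(size=(X_tr.shape[1], hidden))
    b = rng.normal(size=hidden)
    H_tr = np.hstack([X_tr, np.tanh(X_tr @ W + b)])  # direct links + random features
    H_te = np.hstack([X_te, np.tanh(X_te @ W + b)])
    Y = np.eye(y_tr.max() + 1)[y_tr]                 # one-hot class targets
    beta = np.linalg.solve(H_tr.T @ H_tr + lam * np.eye(H_tr.shape[1]), H_tr.T @ Y)
    return (H_te @ beta).argmax(axis=1)

# Two well-separated synthetic "grades" stand in for DR classes
X0 = rng.normal(-2, 1, size=(100, 10))
X1 = rng.normal(2, 1, size=(100, 10))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)
pred = rvfl_fit_predict(X, y, X)
print((pred == y).mean())
```

Because training is one linear solve rather than many gradient epochs, the low training times reported for RVFL-style models follow naturally.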
Title: NGCF-RVFL: Next Generation Convolutional Feature with Random Vector Functional Link for multi-grade diabetic retinopathy detection. Computers & Electrical Engineering, vol. 131, Article 110972.
Pub Date: 2026-03-01 | Epub Date: 2025-12-27 | DOI: 10.1016/j.compeleceng.2025.110919
Himanshu Nandanwar, Rahul Katarya
The security and sustainability of Industrial Internet of Things (IIoT) systems are paramount to ensuring the safety of human lives during critical operations. Modern IIoT networks require robust security mechanisms encompassing safety, trust, privacy, reliability, and resilience to address the inadequacies of traditional security approaches, which are hindered by protocol incompatibilities, limited update capabilities, and outdated measures. These challenges are exacerbated in heterogeneous IoT environments, where intrusion detection systems (IDS) face significant obstacles in accuracy, scalability, and efficiency. This paper presents Alpha-Net, a unique and trustworthy Deep Learning (DL)-based IDS framework enhanced by a Quantum-Inspired Genetic Algorithm (QIGA) for optimized feature selection. By differentiating between benign and attack scenarios effectively, QIGA ensures superior feature representation, improving the model's transparency and reliability. The proposed Alpha-Net is evaluated on real-world IoT datasets, attaining an exceptional accuracy of 99.97 %, a true negative rate (TNR) of 99 %, and a recall of 99.94 %. Additionally, it achieves an accuracy of 99.75 % across ten classes, outperforming state-of-the-art techniques (SOTA) by a margin of 5 % to 15.93 %.
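Quantum-inspired genetic feature selection can be caricatured as maintaining per-feature selection probabilities (playing the role of qubit amplitudes), "measuring" them into binary masks, and rotating the probabilities toward the best mask found so far. The fitness function and update rule below are illustrative stand-ins, not Alpha-Net's QIGA.

```python
import numpy as np

rng = np.random.default_rng(1)
n_feat, pop, gens = 12, 8, 30
useful = np.zeros(n_feat, dtype=bool)
useful[:4] = True                      # pretend features 0-3 carry the signal

def fitness(mask):
    # Toy objective: reward selecting useful features, lightly penalize size
    return useful[mask].sum() - 0.1 * mask.sum()

# Per-feature selection probabilities stand in for qubit amplitudes
theta = np.full((pop, n_feat), 0.5)
best_mask, best_fit = np.zeros(n_feat, dtype=bool), -np.inf
for _ in range(gens):
    masks = rng.random((pop, n_feat)) < theta        # "measure" each chromosome
    fits = np.array([fitness(m) for m in masks])
    i = int(fits.argmax())
    if fits[i] > best_fit:
        best_fit, best_mask = fits[i], masks[i].copy()
    # "Rotation": nudge probabilities toward the best mask seen so far
    theta = np.clip(theta + 0.1 * (best_mask * 2 - 1), 0.05, 0.95)

print(np.flatnonzero(best_mask), best_fit)
```

In an IDS pipeline the fitness would instead score a detector trained on the masked features, steering the search toward compact, discriminative feature subsets.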
Title: Alpha-Net: A dependable and trustworthy deep learning framework for securing industrial internet of things networks against botnet attacks. Computers & Electrical Engineering, vol. 131, Article 110919.
In recent years, biometric systems have become integral to authentication, access control, and identification. However, the sensitive nature of biometric data raises significant privacy concerns. Homomorphic Encryption (HE) has emerged as a promising solution, allowing computations on encrypted data without decryption, thus preserving privacy. This survey provides a focused bibliometric analysis based on the Scopus dataset, highlighting the evolution and current state-of-the-art in HE techniques within the context of privacy-preserving biometrics. Key aspects explored include foundational principles, encryption schemes, biometric applications, and the patent landscape. The study analyzes 206 documents using bibliometric methods such as keyword co-occurrence networks, author co-citation analysis, thematic evolution, and Sankey diagrams. The findings highlight a notable increase in research and patent activity, with 30 publications and 12 patents in the past year alone, reflecting growing interest in the convergence of HE and biometrics. Emerging applications in Artificial Intelligence and Blockchain are identified, while potential future directions include healthcare, Industry 5.0, and the Metaverse. This survey offers valuable insights into current research trends, challenges, and future opportunities, contributing to the advancement of privacy-preserving technologies in biometric systems.
Title: A bibliometric analysis of Homomorphic Encryption for privacy-preserving biometrics. Authors: Shreyansh Sharma, Anurag Mudgil, Richa Dubey, Anil Saini, Santanu Chaudhury. DOI: 10.1016/j.compeleceng.2026.110969. Computers & Electrical Engineering, vol. 131, Article 110969. Pub Date: 2026-03-01.
Pub Date : 2026-03-01Epub Date: 2026-01-09DOI: 10.1016/j.compeleceng.2025.110931
Qiyuan Gao, Qianhong Wu, Qi Liu, Junxiang Nong
Verifiable computation is essential for ensuring correctness in decentralized systems, yet existing approaches rely heavily on circuit-based proofs, task decomposition, or trusted hardware, which introduce high overhead and limit generality. To address these challenges, we propose CleVer, a compute-and-leave anonymous verification framework for general-purpose computation.
CleVer avoids circuit-based proof generation by using snapshot-based state transitions, enabling single-step dispute resolution without task decomposition. We design a cumulative staking incentive mechanism that guarantees profitability for honest verifiers and enforces bounded finality under adversarial budgets. Furthermore, we introduce an anonymous verifier protocol to prevent targeted attacks and collusion. Security is analyzed under a formal threat model, and experiments demonstrate that CleVer significantly reduces verification rounds and on-chain burden compared with existing optimistic-verification frameworks. Our results show that CleVer provides an efficient, incentive-aligned, and privacy-preserving foundation for scalable off-chain computation.
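The snapshot-based, single-step dispute resolution the abstract describes can be sketched in miniature: the prover commits to a hash of its final state snapshot, and a dispute is settled by one re-execution from the agreed input rather than by bisecting the computation into sub-tasks. Everything below (function names, the stand-in computation, the hashing scheme) is illustrative and not taken from the paper's actual protocol:

```python
# Hypothetical sketch of snapshot-based single-step dispute resolution:
# commit to the output state with a hash, settle a dispute with a single
# re-execution and hash comparison, no task decomposition. Illustrative only.
import hashlib
import json

def snapshot_hash(state: dict) -> str:
    """Commit to a state snapshot via a canonical JSON encoding."""
    blob = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def execute(task_input: dict) -> dict:
    # Stand-in for the general-purpose off-chain computation.
    return {"result": sum(task_input["values"])}

def resolve_dispute(task_input: dict, claimed_hash: str) -> bool:
    """Single-step resolution: re-run once, compare snapshot hashes."""
    return snapshot_hash(execute(task_input)) == claimed_hash

inp = {"values": [1, 2, 3]}
honest = snapshot_hash(execute(inp))       # prover's honest commitment
assert resolve_dispute(inp, honest)        # honest claim accepted
assert not resolve_dispute(inp, "bogus")   # fraudulent claim rejected
```

The design point this illustrates is why snapshots avoid the multi-round bisection games of classic optimistic-verification schemes: one recomputation plus one hash comparison suffices to decide a dispute.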
{"title":"CleVer: A compute-and-leave anonymous verification framework for general purpose computation","authors":"Qiyuan Gao, Qianhong Wu, Qi Liu, Junxiang Nong","doi":"10.1016/j.compeleceng.2025.110931","DOIUrl":"10.1016/j.compeleceng.2025.110931","url":null,"abstract":"<div><div>Verifiable computation is essential for ensuring correctness in decentralized systems, yet existing approaches rely heavily on circuit-based proofs, task decomposition, or trusted hardware, which introduce high overhead and limit generality. To address these challenges, we propose CleVer, a compute-and-leave anonymous verification framework for general-purpose computation.</div><div>CleVer avoids circuit-based proof generation by using snapshot-based state transitions, enabling single-step dispute resolution without task decomposition. We design a cumulative staking incentive mechanism that guarantees profitability for honest verifiers and enforces bounded finality under adversarial budgets. Furthermore, we introduce an anonymous verifier protocol to prevent targeted attacks and collusion. Security is analyzed under a formal threat model, and experiments demonstrate that CleVer significantly reduces verification rounds and on-chain burden compared with existing optimistic-verification frameworks. 
Our results show that CleVer provides an efficient, incentive-aligned, and privacy-preserving foundation for scalable off-chain computation.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110931"},"PeriodicalIF":4.9,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-03-01Epub Date: 2026-01-17DOI: 10.1016/j.compeleceng.2026.110968
Rakesh Reddy Gurrala, Sampath Kumar Tallapally
The Internet of Things (IoT) revolution has resulted in massive data generation that requires effective processing. Because fog nodes sit close to the data source, tasks that demand a prompt response are sent to them, whereas complex tasks are transferred to the cloud for its massive processing capacity. Offloading tasks to the fog reduces transmission latency but increases energy consumption, whereas moving work to the cloud lowers energy consumption but increases transmission latency owing to the long distance. Therefore, to balance the trade-off between energy consumption and transmission delay, a hybrid Cheetah Dung Beetle Optimization Algorithm (CDBOA)-based job scheduling strategy is used in this work. This hybrid algorithm balances local exploitation and global exploration by integrating the dung beetle optimization algorithm (DBOA) with the cheetah optimization algorithm (COA). The method assigns jobs to fog and cloud resources according to their processing requirements and delay sensitivity, ensuring effective processing and energy conservation. The effectiveness of the proposed method has been evaluated using NASA iPSC and HPC2N workloads. The results show that the proposed approach outperforms other methods, with improvements of 12.64%, 27.60%, 21.55%, and 10.16% in makespan, energy consumption, cost, and delay, respectively, demonstrating its robustness.
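The latency-versus-energy trade-off that motivates CDBOA can be made concrete with a toy cost model. The greedy rule below merely stands in for the paper's metaheuristic search, and all tier parameters and weights are invented for illustration:

```python
# Illustrative fog-vs-cloud trade-off: fog offers low transmission delay but
# higher energy use per task; the cloud is the reverse. A scheduler picks the
# tier with the lower weighted cost. All numbers and weights are invented;
# a greedy rule stands in for CDBOA's metaheuristic search.

FOG   = {"delay": 5.0,  "energy": 8.0}    # ms per unit, J per unit
CLOUD = {"delay": 40.0, "energy": 2.0}

def weighted_cost(tier: dict, size: float, w_delay: float, w_energy: float) -> float:
    """Linear combination of transmission delay and energy for a task."""
    return w_delay * tier["delay"] * size + w_energy * tier["energy"] * size

def assign(task_size: float, delay_sensitive: bool) -> str:
    # Delay-sensitive tasks weight latency heavily; batch tasks weight energy.
    w_d, w_e = (0.9, 0.1) if delay_sensitive else (0.1, 0.9)
    fog_cost = weighted_cost(FOG, task_size, w_d, w_e)
    cloud_cost = weighted_cost(CLOUD, task_size, w_d, w_e)
    return "fog" if fog_cost <= cloud_cost else "cloud"

assert assign(1.0, delay_sensitive=True) == "fog"     # latency dominates
assert assign(1.0, delay_sensitive=False) == "cloud"  # energy dominates
```

A metaheuristic such as CDBOA searches over whole assignment vectors (and the weights implicit in its fitness function) rather than deciding each task greedily, which is how it trades off makespan, energy, cost, and delay jointly.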
{"title":"A novel hybrid cheetah dung beetle optimization algorithm to solve cloud-fog scheduling problems","authors":"Rakesh Reddy Gurrala, Sampath Kumar Tallapally","doi":"10.1016/j.compeleceng.2026.110968","DOIUrl":"10.1016/j.compeleceng.2026.110968","url":null,"abstract":"<div><div>The Internet of Things (IoT) revolution has resulted in massive data generation, requiring effective processing. Due to their proximity, tasks that demand a prompt response are sent to the fog node. In contrast, complex tasks are transferred to the cloud due to its massive processing capacity. Transferring tasks to the fog reduces the transmission latency while increasing energy consumption. In contrast, moving work to the cloud lowers energy consumption but increases transmission latency owing to long distances. Therefore, to balance the trade-offs between energy consumption and transmission delay, a hybrid Cheetah Dung Beetle Optimization Algorithm (CDBOA) based job scheduling strategy is used in this work. This hybrid algorithm balances local exploitation and global exploration by integrating the dung beetle optimization algorithm (DBOA) with the cheetah optimization algorithm (COA). This methodology effectively assigns jobs to fog and cloud resources according to their processing requirements and delay sensitivity, guaranteeing effective processing and energy conservation. The effectiveness of the proposed method has been evaluated using NASA iPSC and HPC2N workloads. 
The results show that the recommended approach performs better than other methods, with 12.64%, 27.60%, 21.55%, and 10.16% improvements for makespan, energy consumption, cost and delay, demonstrating the robustness of the suggested method.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110968"},"PeriodicalIF":4.9,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}