From hardness assumptions to energy-secure protocols: A systematic survey of Euclidean lattice-based cryptography
Mourad Yessef, Youness Hakam, Mohamed Tabaa, Lhoussaine Ahessab, Z.M.S. Elbarbary, Salman Arafath Mohammed, Naim Ahmad
Pub Date: 2026-01-17, DOI: 10.1016/j.compeleceng.2026.110971
The rapid development of quantum computing poses major challenges to classical cryptographic techniques, making post-quantum cryptography a necessity. This work systematically reviews lattice-based cryptographic systems, emphasizing their suitability for securing energy-critical infrastructure. Fundamental lattice problems that are both theoretically strong and practically relevant, including the Shortest Vector Problem (SVP), Learning with Errors (LWE), and Module-LWE, are examined with an emphasis on constrained environments such as smart grids and IoT devices. Key advances in hardware implementations, algorithmic optimizations, and cryptanalysis are surveyed, with particular attention to schemes such as Falcon, Dilithium, and CRYSTALS-Kyber. The efficacy and deployability of lattice-based cryptography are demonstrated for systems including Vehicle-to-Grid (V2G) networks and Supervisory Control and Data Acquisition (SCADA) systems. The review concludes with a discussion of emerging theoretical threats and future research directions to support long-term quantum-safe infrastructure security.
Hybrid tree-based indexing for efficient data retrieval in Smart Grids
Abdelbacet Brahmia, Zineddine Kouahla, Ala Eddine Benrazek, Brahim Farou, Hamid Seridi
Pub Date: 2026-01-17, DOI: 10.1016/j.compeleceng.2026.110973
The Smart Grid, a prominent IoT application, is experiencing rapid growth driven by the proliferation of connected embedded devices. This evolution has resulted in an exponential increase in time-series data, emphasizing the need for efficient data storage and retrieval mechanisms, particularly for real-time IoT environments. Existing indexing structures primarily focus on either time-based or consumption-based organization, often overlooking the interdependence between these dimensions, which limits their query efficiency. To address this limitation, this paper introduces a novel Temporal-Consumption Binary Tree (TCB-Tree), a hybrid tree-based indexing structure that jointly exploits temporal and consumption attributes for efficient data retrieval. The proposed method operates in three main phases: (i) horizontal segmentation, which applies clustering to identify key consumption levels; (ii) vertical segmentation, which groups temporally successive data within the same consumption range; and (iii) hybrid index construction, where internal nodes index time while leaf nodes index consumption patterns. Experimental evaluation using three real-world datasets demonstrates that the TCB-Tree achieves rapid construction times (under 0.20 s) and efficient hybrid query execution (under 0.9 s) on large datasets, while maintaining minimal storage overhead (below 18%). These results confirm the scalability, efficiency, and suitability of the proposed structure for Smart Grid and real-time IoT applications.
A high-efficiency three-phase CMOS RF–DC rectifier for low-power IoT applications
Pub Date: 2026-01-17, DOI: 10.1016/j.compeleceng.2026.110946
Ahmed Reda Mohamed, Abdulaziz Al-Khulaifi, Muneer A. Al Absi
This paper presents a high-efficiency complementary metal–oxide–semiconductor (CMOS) radio-frequency energy-harvesting rectifier based on a novel three-phase architecture for self-powered Internet of Things nodes and implantable biomedical devices. The proposed architecture routes the received radio-frequency signal into three equal-amplitude paths with phase shifts of 0°, 120°, and 240°, enabling time-interleaved parallel rectification and thereby improving power conversion efficiency (PCE) and output voltage stability. Implemented in a 180 nm CMOS technology, the rectifier occupies a compact silicon area of 47.88 μm × 88.8 μm and operates at 920 MHz. Simulation results demonstrate a peak PCE of 81% at an input power of −25.8 dBm, a dynamic range of 21 dB, and a sensitivity of −10.5 dBm, delivering a regulated 1 V output across a 100 kΩ load. The effects of practical parasitic components, including bond wires, pads, and printed circuit board traces, are incorporated into the design of the input matching network, resulting in a reflection coefficient of approximately −20 dB at the operating frequency. Furthermore, statistical Monte Carlo and process–voltage–temperature analyses are performed to assess post-fabrication robustness. Compared with conventional single-phase rectifiers, the proposed three-phase architecture achieves higher efficiency and lower output voltage ripple for low-power energy-harvesting applications.
{"title":"A high-efficiency three-phase CMOS RF–DC rectifier for low-power IoT applications","authors":"Ahmed Reda Mohamed , Abdulaziz Al-Khulaifi , Muneer A. Al Absi","doi":"10.1016/j.compeleceng.2026.110946","DOIUrl":"10.1016/j.compeleceng.2026.110946","url":null,"abstract":"<div><div>This paper presents a high-efficiency complementary metal–oxide–semiconductor (CMOS) radio-frequency energy harvesting rectifier based on a novel three-phase architecture for self-powered Internet of Things nodes and implantable biomedical devices. The proposed architecture routes the received radio-frequency signal into three equal-amplitude paths with phase shifts of 0°, 120°, and 240°. It enables time-interleaved parallel rectification thereby improving power conversion efficiency (PCE) and output voltage stability. Implemented in a 180 nm CMOS technology, the rectifier occupies a compact silicon area of <span><math><mrow><mn>47</mn><mo>.</mo><mn>88</mn><mspace></mspace><mi>μ</mi><mtext>m</mtext><mo>×</mo><mn>88</mn><mo>.</mo><mn>8</mn><mspace></mspace><mi>μ</mi><mtext>m</mtext></mrow></math></span> and operates at 920 MHz. Simulation results demonstrate a peak PCE of 81% at an input power of <span><math><mrow><mo>−</mo><mn>25</mn><mo>.</mo><mn>8</mn></mrow></math></span> dBm, a dynamic range of 21 dB, and a sensitivity of <span><math><mrow><mo>−</mo><mn>10</mn><mo>.</mo><mn>5</mn></mrow></math></span> dBm, delivering a regulated 1 V output across a 100 k<span><math><mi>Ω</mi></math></span> load. The effects of practical parasitic components, including bond wires, pads, and printed circuit board traces, are incorporated into the design of the input matching network, resulting in a reflection coefficient of approximately <span><math><mrow><mo>−</mo><mn>20</mn></mrow></math></span> dB at the operating frequency. Furthermore, statistical Monte Carlo and process–voltage–temperature analyses are performed to assess post-fabrication robustness. Compared with conventional single-phase rectifiers, the proposed three-phase architecture achieves higher efficiency and lower output voltage ripple for low-power energy-harvesting applications.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110946"},"PeriodicalIF":4.9,"publicationDate":"2026-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A bibliometric analysis of Homomorphic Encryption for privacy-preserving biometrics
Shreyansh Sharma, Anurag Mudgil, Richa Dubey, Anil Saini, Santanu Chaudhury
Pub Date: 2026-01-17, DOI: 10.1016/j.compeleceng.2026.110969
In recent years, biometric systems have become integral to authentication, access control, and identification. However, the sensitive nature of biometric data raises significant privacy concerns. Homomorphic Encryption (HE) has emerged as a promising solution, allowing computations on encrypted data without decryption, thus preserving privacy. This survey provides a focused bibliometric analysis based on the Scopus dataset, highlighting the evolution and current state-of-the-art in HE techniques within the context of privacy-preserving biometrics. Key aspects explored include foundational principles, encryption schemes, biometric applications, and the patent landscape. The study analyzes 206 documents using bibliometric methods such as keyword co-occurrence networks, author co-citation analysis, thematic evolution, and Sankey diagrams. The findings highlight a notable increase in research and patent activity, with 30 publications and 12 patents in the past year alone, reflecting growing interest in the convergence of HE and biometrics. Emerging applications in Artificial Intelligence and Blockchain are identified, while potential future directions include healthcare, Industry 5.0, and the Metaverse. This survey offers valuable insights into current research trends, challenges, and future opportunities, contributing to the advancement of privacy-preserving technologies in biometric systems.
A novel hybrid cheetah dung beetle optimization algorithm to solve cloud-fog scheduling problems
Pub Date: 2026-01-17, DOI: 10.1016/j.compeleceng.2026.110968
Rakesh Reddy Gurrala, Sampath Kumar Tallapally
The Internet of Things (IoT) revolution has resulted in massive data generation, requiring effective processing. Tasks that demand a prompt response are sent to the fog node because of its proximity, whereas complex tasks are transferred to the cloud because of its massive processing capacity. Offloading tasks to the fog reduces transmission latency but increases energy consumption, while offloading to the cloud lowers energy consumption but increases transmission latency owing to the longer distance. Therefore, to balance the trade-off between energy consumption and transmission delay, this work uses a hybrid Cheetah Dung Beetle Optimization Algorithm (CDBOA)-based job scheduling strategy. The hybrid algorithm balances local exploitation and global exploration by integrating the dung beetle optimization algorithm (DBOA) with the cheetah optimization algorithm (COA). This methodology effectively assigns jobs to fog and cloud resources according to their processing requirements and delay sensitivity, guaranteeing effective processing and energy conservation. The effectiveness of the proposed method has been evaluated using NASA iPSC and HPC2N workloads. The results show that the recommended approach performs better than other methods, with improvements of 12.64%, 27.60%, 21.55%, and 10.16% in makespan, energy consumption, cost, and delay, respectively, demonstrating the robustness of the suggested method.
{"title":"A novel hybrid cheetah dung beetle optimization algorithm to solve cloud-fog scheduling problems","authors":"Rakesh Reddy Gurrala, Sampath Kumar Tallapally","doi":"10.1016/j.compeleceng.2026.110968","DOIUrl":"10.1016/j.compeleceng.2026.110968","url":null,"abstract":"<div><div>The Internet of Things (IoT) revolution has resulted in massive data generation, requiring effective processing. Due to their proximity, tasks that demand a prompt response are sent to the fog node. In contrast, complex tasks are transferred to the cloud due to its massive processing capacity. Transferring tasks to the fog reduces the transmission latency while increasing energy consumption. In contrast, moving work to the cloud lowers energy consumption but increases transmission latency owing to long distances. Therefore, to balance the trade-offs between energy consumption and transmission delay, a hybrid Cheetah Dung Beetle Optimization Algorithm (CDBOA) based job scheduling strategy is used in this work. This hybrid algorithm balances local exploitation and global exploration by integrating the dung beetle optimization algorithm (DBOA) with the cheetah optimization algorithm (COA). This methodology effectively assigns jobs to fog and cloud resources according to their processing requirements and delay sensitivity, guaranteeing effective processing and energy conservation. The effectiveness of the proposed method has been evaluated using NASA iPSC and HPC2N workloads. The results show that the recommended approach performs better than other methods, with 12.64%, 27.60%, 21.55%, and 10.16% improvements for makespan, energy consumption, cost and delay, demonstrating the robustness of the suggested method.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110968"},"PeriodicalIF":4.9,"publicationDate":"2026-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design of lightweight image encryption scheme for saliency protection in autonomous control systems
Pub Date: 2026-01-17, DOI: 10.1016/j.compeleceng.2026.110966
Lal Said, Muhammad Amin
Autonomous and remotely operated systems rely on image data for critical decision-making, yet these images are often sent over insecure channels, making them vulnerable to interception or tampering. This paper presents a lightweight image encryption scheme that uses a Substitution–Permutation Network architecture with modular arithmetic-based block permutation and dynamically generated chaos-driven substitution boxes. The scheme employs dual key-dependent substitution and exclusive-OR operations, ensuring that even a single-bit key change produces a completely different encrypted output. Security analysis shows a large key space, strong resistance to brute-force attacks, high entropy, and desirable statistical properties. The proposed method achieves higher throughput than conventional ciphers while preserving salient image content even under pixel loss. These results demonstrate that the scheme provides secure and efficient image protection for resource-constrained environments.
{"title":"Design of lightweight image encryption scheme for saliency protection in autonomous control systems","authors":"Lal Said , Muhammad Amin","doi":"10.1016/j.compeleceng.2026.110966","DOIUrl":"10.1016/j.compeleceng.2026.110966","url":null,"abstract":"<div><div>Autonomous and remotely operated systems rely on image data for critical decision-making, yet these images are often sent over insecure channels, making them vulnerable to interception or tampering. This paper presents a lightweight image encryption scheme that uses Substitution–Permutation Network architecture with modular arithmetic-based block permutation and dynamically generated chaos-driven substitution boxes. The scheme employs dual key-dependent substitution and exclusive OR operations, ensuring that even a single-bit key change produce a completely different encrypted output. Security analysis shows a large key space, strong resistance to brute force attacks, high entropy, and desirable statistical properties. The proposed method achieves higher throughput than conventional ciphers while preserving salient image content even under pixel loss. These results demonstrate that the scheme provides secure and efficient image protection for resource-constrained environments.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110966"},"PeriodicalIF":4.9,"publicationDate":"2026-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hierarchical mobile-dense convolutional architecture for tampered image detection using focal optimization with quantized edge TPU deployment
Badam Shanmukha Venkata Vinayak, Rama Muni Reddy Yanamala, Rayappa David Amar Raj, Archana Pallakonda
Pub Date: 2026-01-16, DOI: 10.1016/j.compeleceng.2026.110979
The availability of powerful digital editing tools has made image tampering increasingly sophisticated, posing significant challenges to journalism, forensics, and social media authenticity. To address the limitations of conventional and transformer-based forgery detection approaches – which often suffer from feature redundancy, compressibility instability, and high computational demands – this study introduces a deep learning architecture for tampered image detection. The model integrates a MobileNetV2-based encoder for compact spatial feature extraction, multi-scale hierarchical feature reuse blocks inspired by DenseNet, and a U-Net-type decoder for precise forgery localization. Class imbalance is mitigated using an enhanced binary classifier with focal loss. The entire model is quantized and deployed on a Google Coral Edge TPU, achieving real-time classification performance (approximately 135 ms per image) in low-power, resource-limited environments. The model is trained and tested on four benchmark forgery datasets – CASIA v1, Columbia, MICC-F2000, and Defacto-Splicing – and demonstrates excellent results: AUC = 1.00 and accuracy = 99% on Defacto, AUC = 0.967 and F1-score = 0.915 on Columbia, and strong performance on both high-resolution (MICC-F2000) and compressed (CASIA v1) datasets. Comparative analyses show that the proposed approach outperforms recent CNN- and Transformer-based methods while using only 5.7 million parameters, confirming its efficacy, scalability, and suitability for embedded AI systems. Thus, the proposed method represents a lightweight, hardware-deployable, and interpretable solution for robust image forgery detection.
Optimized 2-D FIR filter bank architecture using various symmetries with parallel processing and DA
Pub Date: 2026-01-16, DOI: 10.1016/j.compeleceng.2026.110941
Venkata Krishna Odugu, P. Ramakrishna, T. Vasudeva Reddy, G Harish Babu, Janardhanarao S
This study presents a new Filter Bank (FB) architecture for 2-D FIR filters and its VLSI implementation based on symmetric processing, parallelism, and Distributed Arithmetic (DA). The work is motivated by the need for hardware-efficient 2-D FIR filter architectures that reduce computational complexity, Power Consumption (PC), and resource usage in real-time image processing applications. Parallel processing is incorporated into the design to boost throughput, and symmetry is introduced into the filter coefficients to decrease the number of multipliers. The remaining multipliers are replaced by Dual-Port Look-Up Table (DP-LUT)-based DA to reduce area and power. Four types of symmetries are considered, and each architecture is explored and implemented using the proposed DA approach. Finally, all of these filter structures are integrated around a common memory module and control logic. The FB design enables memory reuse and sharing and also allows for parallel processing, resulting in low resource requirements in terms of both memory and processing power. The hardware-utilization synthesis summary is reported for the target Field Programmable Gate Array (FPGA) device. The design is then synthesized in 45 nm CMOS technology using Cadence Genus for ASIC implementation. Existing 2-D FIR filter designs and traditional multiplier-based filter architectures are compared in terms of area, latency, and PC. The proposed FB architecture achieves up to a 98.04% reduction in area-delay product (ADP) and up to a 64.51% reduction in power-delay product (PDP) compared to existing designs, highlighting its efficiency in both area and power optimization. The physical layout of the proposed design is also presented, with place and route performed using Cadence Innovus.
{"title":"Optimized 2-D FIR filter bank architecture using various symmetries with parallel processing and DA","authors":"Venkata Krishna Odugu , P. Ramakrishna , T. Vasudeva Reddy , G Harish Babu , Janardhanarao S","doi":"10.1016/j.compeleceng.2026.110941","DOIUrl":"10.1016/j.compeleceng.2026.110941","url":null,"abstract":"<div><div>In this study, a new Filter Bank (FB) architecture for a 2-D FIR filter and implementation in VLSI design with the help of symmetric processing, parallelism, and Distributed Arithmetic (DA) ideas are presented. This work is motivated by the need for hardware-efficient 2D FIR filter architectures that reduce computational complexity, Power Consumption (PC), and resource usage in real-time image processing applications. Parallel processing is incorporated into the design to boost throughput and to decrease the quantity of multipliers, symmetry is introduced into the coefficients of the filter. In place of the remaining multipliers, Dual Port-Look-Up Table (DP-LUT)-based DA are proposed to reduce the area and power. Four types of symmetries are considered, and each architecture is explored and implemented using the proposed DA approach. Finally, all these filter structures are integrated by considering a common memory module and a control logic. Memory reuse and sharing are made possible by the FB design, which also allows for parallel processing. The suggested FB design has low resource requirements in terms of both memory and processing power. The hardware utilization synthesis summary is assessed for the target device of the Field Programmable Gate Array (FPGA). After that, the design is synthesized in 45 nm CMOS technology using Cadence's Genus tools for ASIC design. Existing 2-D FIR filter designs and traditional multiplier-based filter architectures are analyzed in terms of area, latency, and PC reports. The proposed FB architecture achieves up to 98.04% reduction in ADP and up to 64.51% reduction in PDP compared to existing designs, highlighting its efficiency in both area and power optimization. The proposed work's layout is then provided, including the Innovus tools used to determine the place and route.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110941"},"PeriodicalIF":4.9,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comprehensive analysis of the state of art on emotion recognition using EEG
Pub Date: 2026-01-16, DOI: 10.1016/j.compeleceng.2026.110958
Anju Mishra, Priya Ranjan
Emotion recognition from physiological signals is an emerging field due to its vast range of applications. The electroencephalogram (EEG) is gaining popularity as a physiological marker for automated emotion recognition systems because it captures the brain's electrical activity, providing a window into how emotional states are represented and processed. Building on this inherent capability of EEG recordings, this systematic review aims to give readers a comprehensive understanding of the state of the art in emotion recognition and of the tools and technologies used by contemporary researchers in this field. The review outlines the latest research and analyzes the available literature to identify the best tools and technologies used at every step of developing such models. The final section points out directions that researchers can pursue in future work.
{"title":"Comprehensive analysis of the state of art on emotion recognition using EEG","authors":"Anju Mishra , Priya Ranjan","doi":"10.1016/j.compeleceng.2026.110958","DOIUrl":"10.1016/j.compeleceng.2026.110958","url":null,"abstract":"<div><div>Emotion recognition from physiological signals is an emerging field due to its vast application areas. The electroencephalogram (EEG) as a physiological marker in developing automated emotion recognition systems is gaining popularity with its ability to capture the brain's electrical activity providing a window into understanding how these emotional states are represented and processed. Because of this inherent capability of EEG recordings, this systematic review intends to give the readers a comprehensive understanding of the state of the art of the emotion recognition domain and the tools and technologies used by other contemporary researchers in this field. The review outlines the latest research in the field and also performs a comprehensive analysis of available literature to identify the best tools and technologies used by researchers in the domain at every step of the development of such models. The final section of the review tries to point out some directions that can be worked out in the future by the researchers.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"131 ","pages":"Article 110958"},"PeriodicalIF":4.9,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A review of multimodal sentiment analysis: Taxonomy, issues, challenges, and future perspectives
Khalid Anwar, Shreya, Meghna Sharma, Kritika Saanvi
Pub Date: 2026-01-16, DOI: 10.1016/j.compeleceng.2026.110959
Recent developments in computational intelligence have produced a huge volume of multimodal data across different digital platforms. This data is a great source of contextual, sentimental, and emotional information. Multimodal sentiment analysis (MMSA) is the process of inferring sentiments from multimodal data. MMSA has improved the effectiveness and accuracy of sentiment analysis by integrating heterogeneous modalities. However, there are several issues and challenges in combining multiple modalities, like high complexity, modality fusion, lack of explainability, and temporal synchronization. This paper presents a review of MMSA, discussing data modalities, fusion approaches, issues and challenges. It also presents the statistical analysis and overview of datasets and evaluation metrics used in the reviewed papers. Moreover, it identifies several future research opportunities for the research advancements in MMSA. It is believed that the article will be beneficial for the researchers working in the relevant field.