Pub Date: 2026-03-01 | Epub Date: 2025-12-21 | DOI: 10.1016/j.vlsi.2025.102637
Manali Dhar , Chiradeep Mukherjee , Saradindu Panda , Bansibadan Maji , Aurpan Majumder
Quantum Cellular Automata (QCA) is a promising technology that offers an alternative to conventional Metal Oxide Semiconductor (MOS) approaches for designing efficient, high-performance logic circuits. In present quantum technologies, there is a growing demand for QCA circuits that meet the requirements of high speed, energy efficiency, and device density. However, owing to their nanoscale dimensions and complex fabrication processes, QCA circuits are inherently prone to defects, which significantly affect circuit reliability, energy efficiency, and design robustness. This paper presents research on predicting the energy dissipation of QCA Layered T (QCA LT) Ex-OR, Ex-NOR, and 4-bit Binary-to-Gray (BTG) converter circuits under single-cell displacement defects (SCDD) and cell polarization using machine learning models. First, QCA logic gates are realized with LT logic rather than the Majority Voter (MV), applying logic-reduction methodologies in the coherence vector (w/energy) simulation engine of QCADesigner-E. Both horizontal and vertical SCDDs are applied to the output cell of the LT designs, producing variations in polarization and energy dissipation from which the dataset scdd_polarization_energy (SPE Version 2) is acquired. Machine Learning (ML) models are then used to assess the energy dissipation of the QCA LT designs from this dataset. The best-fitting models for prediction are identified as K-Nearest Neighbour (KNN), Random Forest (RF), and Polynomial Regression (PR), evaluated on the R2 Score, mean absolute error (MAE), mean squared error (MSE), and root mean squared error (RMSE). Based on these evaluation metrics, the optimal machine learning model is identified for each SCDD direction.
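As a rough illustration of the model-evaluation step, the sketch below fits a minimal K-Nearest-Neighbour regressor on synthetic stand-in data and computes the four reported scores. The real SPE dataset, its features, and the tuned models are not reproduced here; the feature names and values are hypothetical.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Minimal K-Nearest-Neighbour regressor: predict the mean target of the
    k training samples closest (in Euclidean distance) to each query point."""
    preds = []
    for x in X_query:
        d = np.linalg.norm(X_train - x, axis=1)  # distance to every training point
        idx = np.argsort(d)[:k]                  # indices of the k closest samples
        preds.append(y_train[idx].mean())
    return np.array(preds)

def regression_metrics(y_true, y_pred):
    """R2, MAE, MSE, RMSE -- the four scores used to rank candidate models."""
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    return {
        "R2": 1.0 - mse / np.var(y_true),
        "MAE": np.mean(np.abs(err)),
        "MSE": mse,
        "RMSE": np.sqrt(mse),
    }

# Hypothetical stand-in for the SPE dataset: two features (e.g. displacement
# and polarization), energy dissipation as a smooth target plus small noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + rng.normal(0, 0.01, 200)
X_tr, X_te, y_tr, y_te = X[:160], X[160:], y[:160], y[160:]

scores = regression_metrics(y_te, knn_predict(X_tr, y_tr, X_te, k=5))
```

The same `regression_metrics` call would score an RF or polynomial model, so the four metrics give a uniform basis for picking the best model per defect direction.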
Title: Predictive analysis of energy dissipation in Layered-T QCA circuits under cell displacement defects and polarization: A machine-learning approach
Integration, the VLSI Journal, Vol. 107, Article 102637.
Pub Date: 2026-03-01 | Epub Date: 2025-11-14 | DOI: 10.1016/j.vlsi.2025.102600
T. Delphine Sheeba, G. Athisha
Compressed sensing applications frequently employ sparse signal recovery techniques, including Orthogonal Matching Pursuit (OMP), to effectively reconstruct signals. However, traditional OMP suffers from limitations in atom selection due to its reliance on single-atom selection, which can lead to inaccurate reconstructions and increased computational complexity. A novel Adaptable Threshold and Projection-Aware Orthogonal Matching Pursuit (ATPAwOMP) algorithm is proposed in this study to overcome these issues. ATPAwOMP extends conventional OMP by combining an adaptive thresholding method with projection-based atom selection, iteratively refining reconstruction accuracy. Eliminating unnecessary atoms from the support set during the backtracking phase reduces redundant computation and strengthens the relevance of the chosen atoms. To further optimize the method for hardware deployment, a lightweight VLSI design with a parallel multiplication-and-accumulation (MAC) unit, a sorting unit, and a matrix inversion unit is presented. A Newton-Raphson-based reciprocal operator decreases resource requirements for matrix inversion, while a Reconfigurable Adder/Subtractor Module (RASM) and a low-complexity LUT-based multiplier minimize hardware overhead in the MAC unit. The proposed work is implemented on the Xilinx platform using the MIT-BIH arrhythmia database. FPGA resource measures and error metrics, including signal-to-noise ratio (SNR), root mean square error (RMSE), percentage root-mean-square difference (PRD), and normalized PRD (PRDN), are evaluated. With its adaptive thresholding and projection-aware approach, ATPAwOMP delivers notable gains in reconstruction accuracy and processing efficiency, making it well suited to real-time, resource-constrained applications such as wearable ECG monitoring devices.
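For orientation, here is a sketch of the classical OMP baseline that ATPAwOMP improves on: greedily select the atom most correlated with the residual, then re-fit by least squares on the chosen support. The paper's adaptive thresholding and backtracking steps are not reproduced; the dictionary and sparse signal below are synthetic.

```python
import numpy as np

def omp(A, y, sparsity, tol=1e-10):
    """Classical Orthogonal Matching Pursuit (single-atom selection per
    iteration, least-squares projection on the growing support)."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    coef = np.zeros(0)
    for _ in range(sparsity):
        # Atom selection: largest absolute correlation with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Projection: least-squares fit on the current support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x, support

# Recover a 3-sparse signal from 40 random measurements (60%-undersampled).
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 50))
A /= np.linalg.norm(A, axis=0)        # unit-norm atoms
x_true = np.zeros(50)
x_true[[5, 17, 42]] = [1.5, -2.0, 0.8]
y = A @ x_true
x_hat, support = omp(A, y, sparsity=5)  # a couple of spare iterations
```

ATPAwOMP replaces the single `argmax` pick with threshold-based multi-atom selection and prunes weak atoms from `support` during backtracking, which is what reduces the redundant `lstsq` work on hardware.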
Title: VLSI implementation of adaptable threshold and projection aware OMP with reconfigurable LUT-based MAC unit for ECG signal reconstruction
Integration, the VLSI Journal, Vol. 107, Article 102600.
The deposition of dielectric thin films in semiconductor fabrication is significantly influenced by process parameter configuration. Traditional optimization via experiments or multi-physics simulations is costly, time-consuming, and inflexible. Data-driven methods that leverage production-line sensor data provide a promising alternative. This work proposes a machine learning modeling framework for studying the nonlinear correlation between dielectric deposition parameters and film thickness distribution. The approach is validated on historical High-Density Plasma Chemical Vapor Deposition (HDPCVD) process data collected from production runs and generalizes across multiple technology nodes. The framework predicts thin-film thickness accurately (R2 = 0.92) and enables practical assessment of specification compliance, achieving 79.5% accuracy in determining whether predicted thicknesses lie within the node-specific tolerances at the 14 nm node. The results suggest that data-driven modeling offers a practical, scalable, and efficient solution for process monitoring and optimization in advanced semiconductor fabrication.
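The two reported figures correspond to two different evaluations: a regression score (R2) on predicted thickness, and a classification-style accuracy on in-spec/out-of-spec decisions against a tolerance band. A minimal sketch of both, on synthetic data with illustrative numbers (the nominal 100 nm target, 2 nm tolerance, and noise levels are all hypothetical):

```python
import numpy as np

# Hypothetical stand-in data: "measured" film thicknesses around a nominal
# target, and model "predictions" with a smaller error than process spread.
rng = np.random.default_rng(2)
target, tol = 100.0, 2.0                        # nominal thickness (nm), tolerance
measured = target + rng.normal(0, 1.5, 500)     # ground-truth thicknesses
predicted = measured + rng.normal(0, 0.5, 500)  # model predictions

# Regression quality: coefficient of determination R2.
r2 = 1.0 - np.sum((measured - predicted) ** 2) / np.sum((measured - measured.mean()) ** 2)

# Specification compliance: did the model classify each run correctly as
# in-spec (within +/- tol of target) or out-of-spec?
in_spec_true = np.abs(measured - target) <= tol
in_spec_pred = np.abs(predicted - target) <= tol
compliance_accuracy = np.mean(in_spec_true == in_spec_pred)
```

Compliance accuracy is driven by runs near the tolerance boundary, which is why it can sit well below R2 (as with the paper's 79.5% at the 14 nm node) even when the regression fit is strong.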
Title: Machine-learning-driven prediction of thin film parameters for optimizing the dielectric deposition in semiconductor fabrication
Authors: Hao Wen, Enda Zhao, Qiyue Zhang, Ruofei Xiang, Wenjian Yu
DOI: 10.1016/j.vlsi.2025.102617
Integration, the VLSI Journal, Vol. 107, Article 102617.
This work presents challenges and solutions for global interconnect in Network-on-Chip (NoC) based System-on-Chips (SoCs) targeting congestion-free communication between quantum accelerators in quantum computing systems. To address these problems, we propose two novel topologies in two dimensions (2-D) and four in three dimensions (3-D), based on two architectural connection methods: the hybrid ring-of-mesh connection with partial diagonal links (HMRPD) in 2-D and 3-D, and the hybrid ring-of-torus connection with partial diagonal links (HTRPD) in 2-D and 3-D. Parametric analysis of both 2-D topologies shows that the interconnect has a smaller diameter and average distance, which reduces latency; requires a small node degree, making the network easier to design; and offers high bisection bandwidth, which helps achieve low communication cost and high throughput. Its scalability is also higher than that of existing interconnects. We further examine throughput, packet latency, and energy consumption under synthetic traffic patterns to compare topologies, finding that the proposed technique improves performance while optimizing communication cost and energy consumption. Next, the 2-D HMRPD and 2-D HTRPD are extended to 3-D symmetric network architectures by appending two additional ports (up and down) to the 2-D router architecture, connecting these ports with Through-Silicon Vias (TSVs), and routing packets with a quasi-minimal routing technique. Results show that the 3-D HMRPD and HTRPD outperform the 2-D HMRPD, 2-D HTRPD, and existing topologies, but at the cost of extra energy consumption. To solve this issue, a heterogeneous layout of 2-D and 3-D router integration is applied in the 3-D topologies to reduce the number of TSVs. We present two TSV-optimized 3-D HTRPD topologies and compare them with a fully TSV-connected 3-D HTRPD, finding that the 1P-3DR-HTRPD topology has the lowest gate count, area, and dynamic and static power consumption. The work is designed by modifying a network system simulator and is implemented on the Xc7z020clg484-1 ZYNQ FPGA device for validation. The 2-D topologies are also more area-efficient, requiring a maximum crossbar size of 6x6 and achieving high frequencies of 2.29 GHz (2-D HMRPD) and 2.22 GHz (2-D HTRPD) compared with other diagonal-link topologies.
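The parametric figures of merit cited above (diameter, average hop distance, node degree) come straight from the topology's graph. As a sketch of how they are computed, the code below builds a plain 4x4 2-D mesh (the HMRPD/HTRPD adjacency with ring and partial diagonal links is not reproduced here) and derives its diameter and average distance with breadth-first search:

```python
from collections import deque

def mesh_edges(rows, cols):
    """Adjacency list of a rows x cols 2-D mesh NoC (node id = r*cols + c)."""
    adj = {r * cols + c: [] for r in range(rows) for c in range(cols)}
    for r in range(rows):
        for c in range(cols):
            u = r * cols + c
            if c + 1 < cols:            # east link
                adj[u].append(u + 1); adj[u + 1].append(u)
            if r + 1 < rows:            # south link
                adj[u].append(u + cols); adj[u + cols].append(u)
    return adj

def hop_counts(adj, src):
    """BFS hop distance from src to every reachable node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def diameter_and_average(adj):
    n = len(adj)
    all_d = [d for s in adj for d in hop_counts(adj, s).values()]
    diameter = max(all_d)
    avg = sum(all_d) / (n * (n - 1))   # self-distances are 0, so they drop out
    return diameter, avg

diam, avg = diameter_and_average(mesh_edges(4, 4))   # plain mesh baseline
```

Adding ring wraparound or diagonal links to `mesh_edges` is exactly what shrinks `diam` and `avg` relative to this baseline, at the cost of a higher node degree (more router ports).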
Title: Adaptive congestion-aware high performance scalable 2-D and 3-D topologies for network-on-chip based interconnect for quantum computing
Authors: Jayshree, Gopalakrishnan Seetharaman, Jitendra Kumar
DOI: 10.1016/j.vlsi.2025.102597
Integration, the VLSI Journal, Vol. 107, Article 102597.
Pub Date: 2026-03-01 | Epub Date: 2025-12-17 | DOI: 10.1016/j.vlsi.2025.102632
Manhong Fan, Qingsong Liu, Shiqi Xu, Yonglong Bai
While multi-stability is a well-established phenomenon in traditional chaotic systems, it remains a largely unexplored area within the realm of neural networks. This paper proposes a method for generating the stable coexistence of multiple scroll attractors in a dual memristor synaptic Hopfield neural network (DMSHNN) under multi-level logic pulse currents. A systematic study of its dynamic behavior is conducted through methods such as bifurcation diagrams, Lyapunov exponent spectra, and phase diagrams. The research findings indicate that, under specific initial conditions, the DMSHNN system exhibits distinctive dynamic behaviors: 1. Periodic attractors and chaotic attractors not only undergo state transitions but also exhibit a phenomenon of biased coexistence; 2. Not only can transient chaos be observed in the DMSHNN system, but the application of multi-level logic pulse currents also facilitates a more stable coexistence of multiple scroll attractors when the memristor's initial conditions are altered. Subsequently, the physical feasibility of the theoretical model was validated through an STM32 digital circuit platform, and the experimental results are presented. Finally, based on the chaotic sequences generated by the DMSHNN model, a remote sensing image encryption algorithm was designed and implemented. This study not only expands the engineering applicability of the DMSHNN model through this algorithm but also provides empirical evidence for the model's chaotic dynamics and the practicality, feasibility, and security of the resultant image encryption algorithm.
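The final application step, encrypting an image with a chaotic sequence, can be sketched as follows. The DMSHNN model itself is not reproduced; a logistic map stands in as the chaos source, and the XOR keystream scheme is a generic illustration rather than the paper's specific algorithm.

```python
import numpy as np

def logistic_keystream(n, x0=0.7, r=3.99):
    """Generate n pseudo-random bytes from the logistic map x <- r*x*(1-x).
    (Stand-in for the chaotic sequences produced by the DMSHNN model.)"""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_encrypt(image, key=0.7):
    """Byte-wise XOR of the flattened image with the chaotic keystream.
    XOR is an involution, so the same call with the same key decrypts."""
    flat = image.astype(np.uint8).ravel()
    ks = logistic_keystream(flat.size, x0=key)
    return (flat ^ ks).reshape(image.shape)

# Hypothetical 16x16 8-bit "remote sensing" tile.
rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
cipher = xor_encrypt(img)
restored = xor_encrypt(cipher)   # decryption = re-encryption with same key
```

Security of such schemes rests on the sensitivity of the chaos source to its initial condition (the key): a receiver with a slightly different `x0` obtains a diverging keystream and cannot recover the image.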
Title: Dynamics analysis and application of multi-stable Hopfield neural networks under pulsed current stimulation
Integration, the VLSI Journal, Vol. 107, Article 102632.
Hardware Trojans are emerging malicious integrated circuit (IC) modifications that pose a significant threat to the integrity of electronics. While existing methods, such as functional testing and reverse engineering, are proposed to identify Trojan anomalies in electronics, their applicability to industrial pipelines is limited. This paper proposes a new image processing technique for efficient clustering and identification of Hardware Trojan insertion in integrated circuits. The uniqueness of the proposed AI-assisted image processing method lies in using real hardware to generate images via side-channel analysis (SCA), then applying unsupervised image classification to identify the impact of hardware Trojans without the need for costly golden references. Leveraging machine learning on side-channel data collected from ring-oscillator networks, image and digital signal processing are employed to extract features for detection. This research contributes a novel use of side-channel data as images, eliminating the reliance on golden references, and achieves a remarkable accuracy of 95% in Hardware Trojan detection. Beyond significantly advancing the field, the approach addresses crucial challenges in the semiconductor supply chain and marks a significant step toward securing it.
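The golden-reference-free idea can be illustrated with a toy version of the pipeline: treat each chip's ring-oscillator count map as a feature vector ("image") and cluster the population unsupervised, so infected chips separate from clean ones without any known-good baseline. The data below is simulated and the tiny 2-means routine is a stand-in for the paper's classifier; all numbers are illustrative.

```python
import numpy as np

def two_means(X, iters=10):
    """Minimal unsupervised 2-means clustering.  Initialized from the first
    and last samples for determinism in this sketch; returns a 0/1 label per row."""
    centers = np.stack([X[0], X[-1]]).astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each sample to its nearest center, then recompute centers.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels

# Simulated side-channel "images": 64 ring-oscillator counts per chip.  A
# Trojan's extra load depresses counts in one localized region of the die.
rng = np.random.default_rng(4)
clean = rng.normal(1000, 5, size=(20, 64))   # 20 Trojan-free chips
infected = clean[:10].copy()
infected[:, 40:48] -= 60                     # localized frequency drop
X = np.vstack([clean[10:], infected])        # 10 clean + 10 infected chips
labels = two_means(X)
```

Because clustering only needs the two populations to differ, no golden chip has to be characterized up front; the cluster with the depressed RO region is flagged for inspection afterwards.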
Title: AI-enabled image processing approach for efficient clustering and identification of hardware Trojans
Authors: Ashutosh Ghimire, Mohammed Alkurdi, Saraju Mohanty, Fathi Amsaad
DOI: 10.1016/j.vlsi.2025.102628
Integration, the VLSI Journal, Vol. 107, Article 102628.
Pub Date: 2026-03-01 | Epub Date: 2025-11-29 | DOI: 10.1016/j.vlsi.2025.102616
Juntao Jian, Yan Xing, Shuting Cai, Weijun Li, Xiaoming Xiong
Detailed-routability optimization methods for three-dimensional global routing typically employ a two-stage process involving initial routing and multi-level maze routing (iterative rip-up and reroute, or RRR iterations). Within the coarse-grained maze route planning of RRR iterations, the resource model and cost scheme are paramount for optimization quality. However, current advancements in these areas often overlook the dynamic nature of routing resources throughout RRR iterations and fail to consider routability features beyond congestion. To mitigate these limitations, this paper introduces a novel detailed-routability optimization approach that integrates a dynamic resource model and a routability-aware cost scheme. The proposed dynamic resource model accounts for routing resources’ sensitivity to both spatial information and the progression of RRR iterations. Moreover, the routability-aware cost scheme, derived from coarse-grained routability features, is designed to optimize fine-grained routability. Experimental results validate that our approach surpasses baseline detailed-routability-driven global routers, exhibiting superior optimization performance by concurrently enhancing routability and overall quality scores (a weighted summation of wirelength and routability metrics), alongside achieving significant runtime reduction.
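The coarse-grained maze step inside an RRR iteration reduces to a shortest-path search over a grid whose cell costs encode congestion. A minimal Dijkstra-style sketch (the paper's dynamic resource model would additionally update the cost grid between iterations; the grid values here are illustrative):

```python
import heapq

def maze_route(grid_cost, src, dst):
    """Dijkstra-style maze routing on a 2-D grid.  Each cell carries a
    congestion cost; returns the minimum total cost of a path src -> dst
    (both endpoint cells included)."""
    rows, cols = len(grid_cost), len(grid_cost[0])
    dist = {src: grid_cost[src[0]][src[1]]}
    heap = [(dist[src], src)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == dst:
            return d                      # first pop of dst is optimal
        if d > dist.get((r, c), float("inf")):
            continue                      # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid_cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# A congested hot-spot (cost 10) in the middle forces the router to detour
# around the border, exactly the behavior a cost scheme is meant to induce.
grid = [[1, 1,  1,  1],
        [1, 10, 10, 1],
        [1, 10, 10, 1],
        [1, 1,  1,  1]]
cost = maze_route(grid, (0, 0), (3, 3))
```

A routability-aware cost scheme replaces the static `grid_cost` values with functions of demand, capacity, and other routability features, and the rip-up-and-reroute loop re-runs this search as those costs evolve.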
Title: Optimizing detailed-routability for 3D global routing through dynamic resource model and routability-aware cost scheme
Integration, the VLSI Journal, Vol. 107, Article 102616.
Pub Date: 2026-03-01 | Epub Date: 2025-11-20 | DOI: 10.1016/j.vlsi.2025.102602
Katherine Shu-Min Li , Fang-Chi Wu , Ching-Han Lai , Sying-Jyan Wang
With the rapid advancement of artificial intelligence (AI) technologies and the increasing proliferation of electronic devices, the demand for high-performance and secure printed circuit boards (PCBs) has grown substantially. In particular, the requirements for high-frequency operation, high-speed signal integrity, and enhanced security have become increasingly critical in modern PCB design. This study presents an integrated framework that incorporates test point insertion directly into the PCB routing process, simultaneously addressing testability and security concerns at the design stage. For the routing task, we propose a method that prioritizes nets by assigning routing sequences prior to trace generation. The A∗ search algorithm is then employed to perform multilayer routing, utilizing a customized heuristic function to minimize overall trace length while considering the known number of board layers. To determine optimal test point placement, we adopt a reinforcement learning approach, wherein an agent learns to select appropriate insertion actions guided by a carefully designed reward function. Experimental results demonstrate that the proposed approach achieves 100 % routing success and full test point coverage across all evaluated PCB designs. The resulting design allows for improved accessibility for electrical testing and lays the groundwork for subsequent security assessment.
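The core of the routing stage is A∗ search over the board grid with an admissible heuristic. The sketch below uses a plain Manhattan-distance heuristic on a single-layer grid with blocked cells; the paper's customized heuristic, multilayer vias, net ordering, and RL-driven test-point insertion are not reproduced, and the obstacle layout is hypothetical.

```python
import heapq

def a_star(blocked, src, dst, size):
    """A* routing on a size x size grid with a Manhattan-distance heuristic.
    Manhattan distance never overestimates on a 4-connected grid, so the
    search returns the length (in grid moves) of a shortest unblocked path."""
    def h(p):
        return abs(p[0] - dst[0]) + abs(p[1] - dst[1])
    g = {src: 0}                       # best known cost-so-far per cell
    heap = [(h(src), src)]             # priority = g + h
    while heap:
        f, (r, c) = heapq.heappop(heap)
        if (r, c) == dst:
            return g[(r, c)]
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in blocked:
                ng = g[(r, c)] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    heapq.heappush(heap, (ng + h(nxt), nxt))
    return None                        # net is unroutable on this grid

# Row 2 is blocked except at column 0, so the trace must detour left.
wall = {(2, 1), (2, 2), (2, 3), (2, 4)}
length = a_star(wall, (0, 4), (4, 4), size=5)
```

Customizing `h` (e.g. weighting it toward shorter total trace length across a known number of layers, as the paper does) changes how aggressively the search commits toward the target without sacrificing optimality as long as the heuristic stays admissible.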
{"title":"Security-oriented printed-circuit-board routing with deep reinforcement learning","authors":"Katherine Shu-Min Li , Fang-Chi Wu , Ching-Han Lai , Sying-Jyan Wang","doi":"10.1016/j.vlsi.2025.102602","DOIUrl":"10.1016/j.vlsi.2025.102602","url":null,"abstract":"<div><div>With the rapid advancement of artificial intelligence (AI) technologies and the increasing proliferation of electronic devices, the demand for high-performance and secure printed circuit boards (PCBs) has grown substantially. In particular, the requirements for high-frequency operation, high-speed signal integrity, and enhanced security have become increasingly critical in modern PCB design. This study presents an integrated framework that incorporates test point insertion directly into the PCB routing process, simultaneously addressing testability and security concerns at the design stage. For the routing task, we propose a method that prioritizes nets by assigning routing sequences prior to trace generation. The A∗ search algorithm is then employed to perform multilayer routing, utilizing a customized heuristic function to minimize overall trace length while considering the known number of board layers. To determine optimal test point placement, we adopt a reinforcement learning approach, wherein an agent learns to select appropriate insertion actions guided by a carefully designed reward function. Experimental results demonstrate that the proposed approach achieves 100 % routing success and full test point coverage across all evaluated PCB designs. 
The resulting design allows for improved accessibility for electrical testing and lays the groundwork for subsequent security assessment.</div></div>","PeriodicalId":54973,"journal":{"name":"Integration-The Vlsi Journal","volume":"107 ","pages":"Article 102602"},"PeriodicalIF":2.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145618433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-01. Epub Date: 2025-12-17. DOI: 10.1016/j.vlsi.2025.102635
Wangyong Chen , Ling Xiong, Songxuan He, Linlin Cai
Temperature variation, both within a chip and from the environment, is a critical concern for modern integrated circuits, posing a significant threat to system robustness. Current temperature compensation methods, however, face the challenge of additional design costs in terms of area and power consumption. This paper introduces a novel temperature immunity-driven design methodology that leverages the zero-temperature-coefficient (ZTC) feature to suppress the temperature sensitivity of critical paths in digital circuits. We propose an analytical compact model to determine the ZTC point by bridging device characteristics to standard cell behavior. This enables an efficient temperature immunity-driven design technology co-optimization (DTCO) paradigm. The impacts of operating conditions, process variations, and aging on device characteristics, and consequently on the digital ZTC point, are thoroughly investigated. These findings are seamlessly integrated into the existing design flow. The proposed framework, featuring ZTC-aware co-optimization in the presence of process variations and time-dependent aging effects, is demonstrated effectively on benchmark circuits. This work significantly contributes to the advancement of temperature-immune digital circuit design and optimization.
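How a ZTC bias point falls out of a device model can be sketched with a textbook square-law MOSFET whose mobility follows a power law in temperature and whose threshold voltage shifts linearly; setting dI/dT = 0 then gives the overdrive Vov = 2κT/m. The parameter values below are illustrative assumptions, not the paper's compact model:

```python
def drain_current(vgs, T, beta0=4e-4, T0=300.0, m=1.5, vth0=0.45, kappa=8e-4):
    """Square-law saturation current with mu ~ (T/T0)^-m and
    Vth(T) = vth0 - kappa*(T - T0). All values are illustrative assumptions."""
    vth = vth0 - kappa * (T - T0)
    vov = vgs - vth
    return beta0 * (T / T0) ** (-m) * vov * vov if vov > 0 else 0.0

def ztc_gate_voltage(T, T0=300.0, m=1.5, vth0=0.45, kappa=8e-4):
    """Solving dI/dT = 0 for the model above:
    -(m/T)*Vov^2 + 2*kappa*Vov = 0  =>  Vov = 2*kappa*T/m,
    i.e. Vgs_ZTC = Vth(T) + 2*kappa*T/m."""
    vth = vth0 - kappa * (T - T0)
    return vth + 2.0 * kappa * T / m
```

Biasing at `ztc_gate_voltage` makes the mobility degradation and the threshold-voltage drop cancel to first order, which is the cancellation the paper's DTCO flow exploits for critical paths.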
{"title":"Zero-temperature-coefficient-powered design technology co-optimization for temperature-immune digital circuits","authors":"Wangyong Chen , Ling Xiong, Songxuan He, Linlin Cai","doi":"10.1016/j.vlsi.2025.102635","DOIUrl":"10.1016/j.vlsi.2025.102635","url":null,"abstract":"<div><div>Temperature variation both within a chip and from the environment is a critical concern for modern integrated circuits, posing a significant threat to system robustness. Current temperature compensation methods, however, face the challenge of additional design costs in terms of area and power consumption. This paper introduces a novel temperature immunity-driven design methodology that leverages the zero-temperature-coefficient ZTC feature to suppress the temperature sensitivity of critical paths in digital circuits. We propose an analytical compact model to determine the ZTC point by bridging device characteristics to standard cell behavior. This enables an efficient temperature immunity-driven design technology co-optimization (DTCO) paradigm. The impacts of operating conditions, process variations, and the aging effect on device characteristics, and consequently, on the digital ZTC are thoroughly investigated. These findings are seamlessly integrated into the existing design flow. The proposed framework, featuring ZTC-aware co-optimization in the presence of process variations and time-dependent aging effects, is demonstrated effectively on benchmark circuits. 
This work significantly contributes to the advancement of temperature-immune digital circuit design and optimization.</div></div>","PeriodicalId":54973,"journal":{"name":"Integration-The Vlsi Journal","volume":"107 ","pages":"Article 102635"},"PeriodicalIF":2.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145839820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-01. Epub Date: 2025-11-07. DOI: 10.1016/j.vlsi.2025.102595
Wei Zhou, Guo-Xing Wang, Yongfu Li
The rapid development of the integrated circuit (IC) industry has continuously increased the complexity of IC manufacturing processes. Massive data analysis, exemplified by wafer map analysis, poses growing challenges for engineers and technicians in the field. With the ongoing advancement and maturation of machine learning in artificial intelligence, the application of machine learning algorithms for automated recognition and classification of wafer map patterns, known as wafer map pattern recognition and classification, has emerged as a prominent research focus within the industry over the past decade. This paper conducts a systematic and comprehensive study, analyzing various machine learning algorithms applied to the problem of wafer map pattern recognition and classification. Spanning approaches from traditional machine learning techniques to neural networks and deep learning, the study identifies convolutional neural networks (CNNs) as among the most effective current approaches to this problem. The research also highlights the continuous optimization of deep learning algorithms, focusing on improvements in architecture, depth, feature fusion, and the introduction of attention mechanisms to enhance the extraction of fine local features. Furthermore, the paper addresses issues related to data dependency, emphasizing innovations such as data augmentation, data generation, and semi-supervised learning models to mitigate the adverse effects of data scarcity and imbalance on deep learning training. These advancements aim to facilitate superior results for deep learning algorithms in solving the problem of wafer map pattern recognition and classification, thereby contributing to the field's ongoing progress.
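The data-augmentation idea mentioned above is a natural fit for wafer maps: rotations and mirrors of a (roughly circular) map preserve the defect-pattern label, so each training sample yields up to eight dihedral variants. The 2-D list-of-lists encoding below (e.g. 0 = off-wafer, 1 = pass die, 2 = fail die) is an assumption for illustration:

```python
def rot90(wafer_map):
    """Rotate a 2-D wafer map (list of lists) 90 degrees clockwise."""
    return [list(row) for row in zip(*wafer_map[::-1])]

def mirror(wafer_map):
    """Mirror a 2-D wafer map left-to-right."""
    return [row[::-1] for row in wafer_map]

def augment(wafer_map):
    """Return all 8 dihedral variants: 4 rotations, each with and without a mirror.

    For a label-preserving pattern class (edge ring, scratch, center cluster, ...)
    every variant keeps the original training label.
    """
    variants, cur = [], wafer_map
    for _ in range(4):
        variants.append(cur)
        variants.append(mirror(cur))
        cur = rot90(cur)
    return variants
```

For square maps this multiplies the labeled set eightfold at no annotation cost, which is one of the simplest levers against the class imbalance the survey discusses.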
{"title":"Review: Application and development of machine learning in semiconductor manufacturing for automated wafer map pattern recognition and classification","authors":"Wei Zhou, Guo-Xing Wang, Yongfu Li","doi":"10.1016/j.vlsi.2025.102595","DOIUrl":"10.1016/j.vlsi.2025.102595","url":null,"abstract":"<div><div>The rapid development of the integrated circuit (IC) industry has continuously increased the complexity of IC manufacturing processes. Massive data analysis, exemplified by Wafer Map analysis, poses growing challenges for engineers and technicians in the field. With the ongoing advancement and maturation of machine learning in artificial intelligence, the application of machine learning algorithms for automated recognition and classification of wafer map patterns, known as Wafer Map Pattern Recognition and Classification, has emerged as a prominent research focus within the industry over the past decade. This paper conducts a systematic and comprehensive study, analyzing various machine learning algorithms applied to the problem of wafer map pattern recognition and classification. Starting from traditional machine learning techniques to neural networks and deep learning, the study identifies convolutional neural networks (CNNs) as one of the most effective approaches for addressing this problem currently. The research also highlights the continuous optimization of deep learning algorithms, focusing on improvements in architecture, depth, feature fusion, and the introduction of attention mechanisms to enhance the extraction of fine local features. Furthermore, the paper addresses issues related to data dependency, emphasizing innovations such as data augmentation, data generation, and semi-supervised learning models to mitigate the adverse effects of data scarcity and imbalance on deep learning training. 
These advancements aim to facilitate superior results for deep learning algorithms in solving the problem of wafer map pattern recognition and classification, thereby contributing to the field's ongoing progress.</div></div>","PeriodicalId":54973,"journal":{"name":"Integration-The Vlsi Journal","volume":"107 ","pages":"Article 102595"},"PeriodicalIF":2.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145618430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}