Pub Date: 2025-09-26. DOI: 10.1109/TVLSI.2025.3611632
Linlin Huang;Yongjia Li;Jianhui Wu
This brief presents a noise-shaping (NS) successive approximation register (SAR) analog-to-digital converter (ADC) for biomedical Internet-of-Things (IoT) applications. The proposed capacitor-mismatch-error-free (CMEF) switching scheme intrinsically eliminates MSB digital-to-analog converter (DAC) mismatch errors through an identical shift in the bottom-plate reference voltage, thereby realizing 0.9- and 2.9-dB improvements in the signal-to-noise-and-distortion ratio (SNDR) and spurious-free dynamic range (SFDR), respectively, over the tri-level switching method. Fabricated in a 40-nm CMOS technology, the prototype NS-SAR ADC occupies a core area of 0.053 mm² and consumes 87.7 µW from a 1.1-V supply. With an oversampling ratio (OSR) of 12 and a 50-kHz bandwidth (BW), it achieves 84.6-dB SNDR and 95.4-dB SFDR, yielding a Schreier figure-of-merit (FoM) of 172.2 dB and a Walden FoM of 63.2 fJ/step.
Title: "A 50-kHz BW, 84.6-dB SNDR Noise-Shaping SAR ADC With Capacitor-Mismatch-Error-Free Switching Scheme." IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 33, no. 12, pp. 3540–3544.
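As a quick sanity check (ours, not from the brief), both reported figures of merit follow directly from the 84.6-dB SNDR, 50-kHz bandwidth, and 87.7-µW power using the standard FoM definitions:

```python
import math

def schreier_fom(sndr_db: float, bw_hz: float, power_w: float) -> float:
    """Schreier FoM = SNDR + 10*log10(BW / P), in dB."""
    return sndr_db + 10 * math.log10(bw_hz / power_w)

def walden_fom(sndr_db: float, bw_hz: float, power_w: float) -> float:
    """Walden FoM = P / (2**ENOB * 2*BW), in J/conversion-step."""
    enob = (sndr_db - 1.76) / 6.02   # effective number of bits from SNDR
    return power_w / (2 ** enob * 2 * bw_hz)

fom_s = schreier_fom(84.6, 50e3, 87.7e-6)  # ≈ 172.2 dB
fom_w = walden_fom(84.6, 50e3, 87.7e-6)    # ≈ 63 fJ/conversion-step
```

Both values match the 172.2-dB and 63.2-fJ/step numbers quoted in the abstract.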
Pub Date: 2025-09-18. DOI: 10.1109/TVLSI.2025.3608498
Zilin Wang;Zehong Ou;Yi Zhong;Yuan Wang
Spiking neural networks (SNNs) are a promising alternative to traditional artificial neural networks (ANNs) due to their biologically inspired, event-driven characteristics. As in ANNs, the weights in SNNs exhibit significant sparsity, and fully exploiting this sparsity while coordinating the hardware design to optimize resource utilization remains a challenge. This brief proposes Cactus, a multicore SNN accelerator based on a fine-grained, programmable structured pruning strategy. Its balanced block pruning achieves high accuracy on image and speech classification tasks while ensuring high processing element (PE) utilization. For flexibility, the block size in Cactus can be configured as 8×8, 16×16, 32×32, or 64×64. Implemented on a Xilinx Kintex UltraScale XCKU115 FPGA board, Cactus operates at 200 MHz, achieving 198.59-GSOP/s peak performance and 56.47-GSOP/W energy efficiency at 75% weight sparsity and 0% spike sparsity.
Title: "Cactus: A Multicore Spiking Neural Network Accelerator With Fine-Grained Structured Weight Sparsity." IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 33, no. 12, pp. 3535–3539.
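The balanced block pruning idea can be sketched as follows (an illustrative reimplementation, not the paper's code; `balanced_block_prune` and its parameters are our naming). Every weight tile keeps the same number of nonzeros, so all PEs see equal work:

```python
import numpy as np

def balanced_block_prune(w: np.ndarray, block: int, keep_ratio: float) -> np.ndarray:
    """Zero all but the largest-magnitude `keep_ratio` fraction of weights
    inside every `block` x `block` tile, so each tile keeps the same count."""
    out = np.zeros_like(w)
    keep = max(1, int(round(block * block * keep_ratio)))
    for i in range(0, w.shape[0], block):
        for j in range(0, w.shape[1], block):
            tile = w[i:i + block, j:j + block]
            flat = np.abs(tile).ravel()
            idx = np.argpartition(flat, -keep)[-keep:]   # top-k by magnitude
            mask = np.zeros(flat.size, dtype=bool)
            mask[idx] = True
            out[i:i + block, j:j + block] = tile * mask.reshape(tile.shape)
    return out

w = np.random.default_rng(0).normal(size=(16, 16))
pruned = balanced_block_prune(w, block=8, keep_ratio=0.25)
```

With `keep_ratio=0.25`, every 8×8 tile retains exactly 16 weights, giving the kind of uniform 75% structured sparsity at which the reported numbers are quoted.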
Pub Date: 2025-09-16. DOI: 10.1109/TVLSI.2025.3607319
Yishuo Meng;Qiang Fu;Jianfei Wang;Siwei Xiang;Jia Hou;Ge Li;Zhijie Lin;Chen Yang
Convolutional neural network (CNN) pruning is an effective way to reduce the computation requirement and improve the inference performance of standard convolutional layers. However, for the low-computation-density layers in lightweight CNNs, pruning not only fails to improve processing efficiency but also exacerbates underutilization when these layers are deployed on the convolutional engine. To execute these pruning-ineffective layers efficiently and further accelerate lightweight CNNs, a sparsity-adjustable CNN pruning method, which allows the pruning ratio to be adjusted, is proposed to prune the nonpruning-ineffective layers while shielding the pruning-ineffective layers. As a result, it achieves an additional 40% pruning ratio for nonpruning-ineffective layers with only 0.09% accuracy loss. Furthermore, a dense/sparse mixed-mode convolution computation scheme is designed to efficiently process the pruning- and nonpruning-ineffective layers using multiple acceleration techniques. Finally, a lightweight CNN accelerator is implemented on the Xilinx VCU118 FPGA platform. Comparisons with current studies show that this work achieves a performance of 1004.2 and a DSP efficiency of 0.98 when deploying MobileNetV2, a 1.26×–6.13× enhancement in DSP efficiency.
Title: "A Mixed-Mode Acceleration via Sparsity-Adjustable Pruning for Balancing Computation Density in Lightweight CNNs." IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 33, no. 12, pp. 3525–3529.
Pub Date: 2025-09-16. DOI: 10.1109/TVLSI.2025.3608198
Ki-Soo Lee;Joo-Hyung Chae
This brief presents a single-ended receiver (RX) with a decision feedback equalizer (DFE)-embedded and stack-reduced slicer using a DFE weight selection multiplexer (MUX). The RX employs a quarter-rate clocking architecture to reduce the on-chip clock (CK) frequency and ensure reliable operation under stringent DFE timing constraints. The slicer output is fed back to the DFE weight selection MUX integrated into the second-stage slicer, achieving a short feedback loop latency. In the proposed architecture, the number of stacked transistors in the slicer is reduced to three, thereby reducing the CK-to-Q delay and overall DFE feedback loop latency. This optimized design increases the feedback speed and alleviates DFE timing constraints, ensuring stable operation even at low supply voltages. A prototype RX was fabricated using a 65-nm CMOS process and had an area of 0.004 mm². The proposed RX achieved a measured bit error rate (BER) below 10⁻¹² at a data rate of 14 Gb/s with an insertion loss of −12 dB and achieved a power efficiency of 0.097 pJ/bit with a supply voltage of 0.75 V.
Title: "Sub 0.1-pJ/bit 14-Gb/s Receiver With Stack-Reduced Slicer Embedding One-Tap DFE for Low-Power Memory Interfaces." IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 33, no. 12, pp. 3530–3534.
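The one-tap DFE loop that the brief optimizes can be modeled behaviorally (a toy bit-level sketch under our own channel assumptions, not the RX circuit): the previous decision, scaled by the first post-cursor weight h1, is subtracted from the incoming sample before slicing.

```python
def one_tap_dfe(samples, h1, threshold=0.0):
    """Behavioral one-tap DFE: cancel the first post-cursor ISI using the
    previous bit decision, then slice against the threshold."""
    decisions = []
    prev = -1  # assume the bit before the burst was a 0 (-1 level)
    for x in samples:
        eq = x - h1 * prev                    # subtract estimated ISI
        d = 1 if eq > threshold else -1       # slicer decision
        decisions.append(d)
        prev = d                              # feed decision back
    return decisions

# toy channel with main cursor 1.0 and post-cursor 0.4:
# x[n] = b[n] + 0.4 * b[n-1]
bits = [1, -1, -1, 1, 1, -1, 1]
rx = [bits[n] + 0.4 * (bits[n - 1] if n else -1) for n in range(len(bits))]
recovered = one_tap_dfe(rx, h1=0.4)
```

The loop-latency constraint discussed in the brief corresponds to `prev` having to be ready before the next sample is sliced, which is why reducing the slicer's CK-to-Q delay matters.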
Pub Date: 2025-09-05. DOI: 10.1109/TVLSI.2025.3605286
Wenlun Zhang;Shimpei Ando;Yung-Chin Chen;Kentaro Yoshioka
Static random-access memory (SRAM)-based analog compute-in-memory (ACiM) demonstrates promising energy efficiency for deep neural network (DNN) processing. Nevertheless, efforts to optimize efficiency frequently compromise accuracy, and this trade-off remains insufficiently studied due to the difficulty of performing full-system validation. Specifically, existing simulation tools rarely target SRAM-based ACiM and exhibit inconsistent accuracy predictions, highlighting the need for a standardized, SRAM compute-in-memory (CiM) circuit-aware evaluation methodology. This article presents ASiM, a simulation framework for evaluating inference accuracy in SRAM-based ACiM systems. ASiM captures critical effects in these systems, such as analog-to-digital converter (ADC) quantization, bit-parallel encoding, and analog noise, which must be modeled with high fidelity because they behave differently in charge-domain architectures than in other memory technologies. ASiM supports a wide range of modern DNN workloads, including CNNs and Transformer-based models such as ViT, and scales to large tasks such as ImageNet classification. Our results indicate that bit-parallel encoding can improve energy efficiency with only modest accuracy degradation; however, even 1 LSB of analog noise can significantly impair inference performance, particularly on complex tasks such as ImageNet. To address this, we explore hybrid analog-digital execution and majority-voting schemes, both of which enhance robustness without negating energy savings. ASiM bridges the gap between hardware design and inference performance, offering actionable insights for energy-efficient, high-accuracy ACiM deployment. The code is available at https://github.com/Keio-CSG/ASiM
Title: "ASiM: Modeling and Analyzing Inference Accuracy of SRAM-Based Analog CiM Circuits." IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 33, no. 10, pp. 2838–2851.
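A minimal model of the ADC quantization and analog-noise effects that ASiM captures might look like the following (our sketch, not ASiM's API; the function name, the 5-bit ADC, and the full-scale convention are all assumptions):

```python
import random

def acim_mac(acts, weights, adc_bits=5, noise_lsb=0.0, seed=0):
    """Toy charge-domain MAC: ideal bit-line sum, then ADC quantization
    plus Gaussian noise whose sigma is expressed in ADC LSBs."""
    rng = random.Random(seed)
    ideal = sum(a * w for a, w in zip(acts, weights))  # analog partial sum
    full_scale = len(acts)                  # |sum| bound for 0/1 x ±1 inputs
    lsb = 2 * full_scale / (2 ** adc_bits)  # ADC spans [-FS, +FS]
    noisy = ideal + rng.gauss(0.0, noise_lsb * lsb)
    code = round(noisy / lsb)
    code = max(-(2 ** (adc_bits - 1)), min(2 ** (adc_bits - 1) - 1, code))
    return code * lsb                       # dequantized partial sum

acts = [1, 0, 1, 1, 0, 1, 1, 0]            # binary activations
wts  = [1, -1, 1, 1, -1, -1, 1, 1]         # signed binary weights
clean = acim_mac(acts, wts, noise_lsb=0.0)  # quantized but noise-free
```

Sweeping `noise_lsb` over a whole network is essentially the experiment behind the abstract's observation that even 1 LSB of analog noise can noticeably degrade ImageNet accuracy.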
Pub Date: 2025-09-03. DOI: 10.1109/TVLSI.2025.3603557
Qun Zhou;Kang Zeng;Weiwei Yue;Qing Hua
This brief presents an ultralow-energy, low-propagation-delay balanced voltage level shifter (VLS) with a wide voltage conversion range. The proposed VLS achieves low propagation delay while operating at subthreshold voltage by utilizing a newly introduced dynamic biasing scheme (DBS). This scheme improves both turn-on and turn-off speeds by cyclically reducing the threshold voltage of the input device. The biasing-related power consumption is largely reduced by a carefully designed structure. The proposed VLS has been validated in a standard 130-nm CMOS technology, taking process, voltage, and temperature (PVT) variations into account. With the DBS, the delay is reduced to 6.5 ns, the dynamic energy is lowered to 21.7 fJ, the minimum supply voltage is 0.18 V, and the average static power consumption is 2.8 nW for a conversion from 0.3 to 1.2 V.
Title: "An Ultralow-Energy Voltage Level Shifter With an Output-Cycle-Based Dynamic Biasing Scheme in a 130-nm CMOS Technology." IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 33, no. 11, pp. 3201–3205.
Pub Date: 2025-09-01. DOI: 10.1109/TVLSI.2025.3601192
Min-Woo Kim;Jae-Mun Oh;Byung-Do Yang
This brief proposes two types of camouflaged logic gates using threshold-voltage-defined memory cells (TVD-MCs). The proposed multiplexer-select TVD-MC (MS-TVDMC) gate consists of a target logic gate, several camouflage logic gates, a multiplexer (MUX), and TVD-MCs. All logic gates and the MUX are implemented with standard threshold-voltage transistors. The TVD-MC is composed of two cross-coupled inverters with low- or high-threshold-voltage transistors. When its supply voltage increases from ground to VDD, its data become "0" or "1" according to the threshold voltages of the transistors in the two inverters. The target logic gate is selected with the MUX by the data stored in the TVD-MCs. Because the data are defined by transistor threshold voltages, it is difficult to distinguish the target logic gate from the other camouflage logic gates. The proposed logic-merged TVD-MC (LM-TVDMC) gate merges all logic gates and the MUX of the MS-TVDMC gate at the transistor level. The proposed camouflaged gates significantly reduce delay, power consumption, and leakage current compared with the conventional dynamic enhanced-TVD (DE-TVD) camouflaged gate, which incurs dynamic power and delay overheads, and the conventional threshold-voltage-defined (TVD) switch camouflaged gate, which suffers from large on-resistances in its switch transistors.
Title: "Camouflaged Logic Gates Using Threshold-Voltage-Defined Memory Cells." IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 33, no. 12, pp. 3505–3509.
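Functionally, the MS-TVDMC idea reduces to a key-selected MUX where the key is stored in threshold-voltage-defined cells rather than visible wiring (a behavioral toy, not the transistor-level design; the four-gate candidate list is purely illustrative):

```python
def ms_tvdmc_gate(a: int, b: int, key: int) -> int:
    """Behavioral MS-TVDMC model: `key` stands in for the TVD-MC contents,
    which are fixed by transistor threshold voltages and therefore
    invisible to layout-level reverse engineering; it selects the true
    gate among identical-looking candidates."""
    candidates = [
        lambda a, b: a & b,        # AND
        lambda a, b: a | b,        # OR
        lambda a, b: 1 - (a & b),  # NAND
        lambda a, b: 1 - (a | b),  # NOR
    ]
    return candidates[key](a, b)
```

An attacker who images the layout sees all candidates and the MUX but not `key`, which is the camouflaging property both proposed gates rely on.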
Pub Date: 2025-08-26. DOI: 10.1109/TVLSI.2025.3593728
Himanshu Thapliyal;Jürgen Becker;Garrett S. Rose;Tosiron Adegbija;Selçuk Köse
Title: "Guest Editorial: Selected Papers From IEEE Computer Society Annual Symposium on VLSI (ISVLSI) 2024." IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 33, no. 9, pp. 2354–2356.
Pub Date: 2025-08-26. DOI: 10.1109/TVLSI.2025.3598542
Title: "IEEE Transactions on Very Large Scale Integration (VLSI) Systems Publication Information." IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 33, no. 9, pp. C2–C2.
Pub Date: 2025-08-26. DOI: 10.1109/TVLSI.2025.3600811
Guanci Wang;Xiaguang Li;Zhiyuan Chen
This brief presents a high-sensitivity battery-free radio frequency (RF) energy harvesting system with ultralow-power auxiliary modules. The proposed design implements two-stage energy conversion based on a burst charging mode, achieving ultrahigh sensitivity with an intermittent charging method that eliminates the charge pump's loading effect on the RF rectifier. An all-nMOS RF-to-dc rectifier with an internal VTH cancellation (IVC) technique achieves high power conversion efficiency (PCE) over an ultrawide input power range for RF energy harvesting. Furthermore, a VTH-based voltage reference is introduced, enabling subthreshold operation of transistors with picowatt-level power consumption, thereby improving both PCE and sensitivity. The proposed RF energy harvesting system is implemented in a 0.18-µm standard CMOS technology. The results show that the system achieves a 55% PCE, a −33-dBm sensitivity, and a 17-dB input power range at 2.4 GHz.
Title: "A 2.4-GHz −33-dBm Sensitivity Battery-Free RF Energy Harvesting System With 17-dB Input Power Range." IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 33, no. 12, pp. 3520–3524.