
Latest Publications: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems publication information
IF 2.7 | CAS Zone 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-07-18 | DOI: 10.1109/TCAD.2025.3584438
{"title":"IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems publication information","authors":"","doi":"10.1109/TCAD.2025.3584438","DOIUrl":"https://doi.org/10.1109/TCAD.2025.3584438","url":null,"abstract":"","PeriodicalId":13251,"journal":{"name":"IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems","volume":"44 8","pages":"C3-C3"},"PeriodicalIF":2.7,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11085019","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144663720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems society information
IF 2.7 | CAS Zone 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-07-18 | DOI: 10.1109/TCAD.2025.3584436
{"title":"IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems society information","authors":"","doi":"10.1109/TCAD.2025.3584436","DOIUrl":"https://doi.org/10.1109/TCAD.2025.3584436","url":null,"abstract":"","PeriodicalId":13251,"journal":{"name":"IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems","volume":"44 8","pages":"C2-C2"},"PeriodicalIF":2.7,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11085014","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144657470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HeteroQNN: Enabling Distributed QNN Under Heterogeneous Quantum Devices
IF 2.9 | CAS Zone 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-07-11 | DOI: 10.1109/TCAD.2025.3588457
Liqiang Lu;Tianyao Chu;Siwei Tan;Jingwen Leng;Fangxin Liu;Congliang Lang;Yifan Guo;Jianwei Yin
In the current NISQ era, the performance of quantum neural network (QNN) models is severely limited by the small number of available qubits and by unavoidable noise. A natural way to improve the robustness of QNNs is to implement them as a distributed system. Nevertheless, due to the heterogeneity and instability of quantum chips (e.g., noise and frequent online/offline transitions), training and inference on distributed quantum devices can even degrade accuracy. In this article, we propose HeteroQNN, a comprehensive QNN framework designed for efficient and high-accuracy distributed training and inference. The main innovation of HeteroQNN is that it decouples the QNN circuit into two uniform representations: a model vector and a behavioral vector. The model vector specifies the gate parameters of the QNN model, while the behavioral vector captures the hardware features observed when implementing the QNN circuit. To handle architectural heterogeneity, we introduce personalized QNN models on each quantum processing unit (QPU) and share gradients only among QPUs with homogeneous behavioral vectors. We propose shot-oriented distributed inference, a much finer-grained scheduling scheme that improves accuracy and balances the workload. Finally, by leveraging the hidden homogeneity in the model vector, we present a maintenance mechanism for QPU variability. Experiments show that HeteroQNN accelerates the training process by 4.03× with a 7.87% loss reduction compared with the previous distributed QNN framework.
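To make the gradient-sharing rule in this abstract concrete, the following is a minimal NumPy sketch of averaging gradients only among QPUs whose behavioral vectors are nearly identical. The distance threshold, vector shapes, and function names are illustrative assumptions, not HeteroQNN's actual implementation.

```python
import numpy as np

def group_by_behavior(behavior_vecs, tol=0.1):
    """Cluster QPUs whose behavioral vectors agree within `tol` (toy criterion)."""
    groups = []  # list of lists of QPU indices
    for i, bv in enumerate(behavior_vecs):
        for g in groups:
            if np.linalg.norm(behavior_vecs[g[0]] - bv) < tol:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

def share_gradients(gradients, behavior_vecs):
    """Average gradients only among QPUs with homogeneous behavioral vectors."""
    shared = [None] * len(gradients)
    for g in group_by_behavior(behavior_vecs):
        mean_grad = np.mean([gradients[i] for i in g], axis=0)
        for i in g:
            shared[i] = mean_grad
    return shared

# Toy usage: 4 QPUs with 6 gate parameters each; QPUs 0/1 and 2/3 behave alike.
rng = np.random.default_rng(0)
behavior = np.array([[0.0, 0.0], [0.02, 0.01], [1.0, 1.0], [1.01, 0.99]])
grads = [rng.normal(size=6) for _ in range(4)]
print(share_gradients(grads, behavior))
```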
Citations: 0
Lightweight Failure Prediction Algorithms Based on Internal Characteristics of 3-D NAND Flash Memory
IF 2.9 | CAS Zone 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-07-08 | DOI: 10.1109/TCAD.2025.3586890
Zehao Chen;Yang Zhang;Ying Zeng;Wenhua Wu;Guojun Han
3-D NAND flash memory has attracted widespread attention due to its high speed, high endurance, and strong reliability. However, its reliability degrades as the number of program and erase cycles increases. To tackle this problem, most current research employs machine learning models to predict flash memory failures, but it does not consider exploiting the interlayer and page-type differences inside flash memory chips to aid failure prediction. Based on these internal characteristics of interlayer difference and page-type difference, two failure prediction algorithms are proposed in this article, corresponding to Standard1 and Standard2. For Standard1, an attention-focused failure prediction (AFFP) algorithm is proposed. To predict failure of an entire block, AFFP focuses only on the layer most prone to failure and then predicts the eight pages within that layer that are most likely to fail. For Standard2, a low predict-frequency failure prediction (LPFFP) algorithm is proposed, which significantly reduces how often failure prediction is run and thereby minimizes the prediction overhead. The experimental results show that, for Standard1, the AFFP algorithm predicts the failing block accurately, its data extraction and prediction overheads are reduced by 99.8% compared to the original algorithm, and its F1-score exceeds 0.96. For Standard2, the LPFFP algorithm predicts the failing page within a flash block accurately, and its F1-score exceeds 0.91 with a significant reduction in prediction overhead.
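As a rough illustration of the AFFP selection step described above (pick the most failure-prone layer, then the eight riskiest pages inside it), here is a toy NumPy sketch. The per-page raw bit error rate feature, the block geometry, and the scoring rule are hypothetical placeholders, not the paper's trained model.

```python
import numpy as np

def affp_predict(rber, pages_per_layer, top_k=8):
    """Toy AFFP-style selection: pick the layer with the highest mean RBER,
    then return the top-k pages inside it ranked by RBER.

    rber: 1-D array of per-page raw bit error rates for one block, laid out
          layer by layer (a hypothetical feature standing in for richer
          per-page statistics).
    """
    per_layer = rber.reshape(-1, pages_per_layer)          # layers x pages
    worst_layer = int(np.argmax(per_layer.mean(axis=1)))   # most failure-prone layer
    layer_rber = per_layer[worst_layer]
    top_pages = np.argsort(layer_rber)[::-1][:top_k]       # k riskiest pages in that layer
    return worst_layer, worst_layer * pages_per_layer + top_pages

rng = np.random.default_rng(1)
rber = rng.exponential(1e-3, size=96 * 12)   # 96 layers x 12 pages (made-up geometry)
layer, pages = affp_predict(rber, pages_per_layer=12)
print("risky layer:", layer, "risky pages:", pages)
```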
Citations: 0
Shiro: Efficient and Accurate In-Storage Data Lifetime Separation for NAND Flash SSDs
IF 2.9 | CAS Zone 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-07-08 | DOI: 10.1109/TCAD.2025.3586891
Penghao Sun;Shengan Zheng;Litong You;Wanru Zhang;Ruoyan Ma;Jie Yang;Feng Zhu;Shu Li;Linpeng Huang
The log-structured nature of NAND flash storage necessitates garbage collection (GC) in solid state drives (SSDs). GC is a major source of runtime write amplification (WA), leading to faster device wear-out and interference with host I/Os. The key to mitigating this problem is separating data by lifetime so that data in the same flash block are invalidated within temporal proximity. For higher lifetime prediction accuracy and adaptability, prior works proposed using machine learning (ML) algorithms for data separation. However, existing learning-based solutions perform data lifetime prediction at the host side, which has several drawbacks. First, host-side prediction has no knowledge of the internal data movement inside the SSD during GC and thus misses the opportunity to further separate GC writes, resulting in suboptimal WA reduction in the long term. Second, performing prediction at the host significantly prolongs the I/O critical path and consumes host resources that could otherwise serve user applications. We present Shiro, a holistic flash translation layer (FTL) design that performs in-storage data separation for both user writes and GC writes to maximize long-term WA reduction. For user writes, Shiro uses a sequence model to accurately predict data lifetime by learning lifetime distributions from long historical access patterns. For GC writes, Shiro incorporates a reinforcement learning-assisted page migration strategy that takes direct feedback from long-term WA to further improve data separation efficacy. To address the challenges of making fine-grained, real-time ML decisions inside the resource-constrained SSD, we propose a suite of enabling techniques that keep computation and storage overhead low. Extensive evaluation of Shiro on real-world traces shows that it delivers 29%–68% lower WA compared with conventional FTLs and state-of-the-art in-storage data separation schemes. Furthermore, thanks to the lower data migration overhead during GC, Shiro achieves significantly higher steady-state I/O performance.
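The lifetime-separation principle in this abstract, routing writes with similar predicted lifetimes into the same stream so they are invalidated together, can be sketched as follows. The bin edges and the way predictions are obtained are placeholders; Shiro itself uses a learned sequence model and RL-assisted GC migration rather than fixed bins.

```python
import numpy as np

# Hypothetical lifetime bins (host writes until invalidation) giving 4 write streams.
LIFETIME_BINS = np.array([1_000, 10_000, 100_000])

def assign_stream(predicted_lifetime):
    """Map a predicted data lifetime to one of the write streams (flash blocks)."""
    return int(np.searchsorted(LIFETIME_BINS, predicted_lifetime))

def separate_writes(lbas, predicted_lifetimes):
    """Group logical block addresses by lifetime stream."""
    streams = {}
    for lba, life in zip(lbas, predicted_lifetimes):
        streams.setdefault(assign_stream(life), []).append(int(lba))
    return streams

rng = np.random.default_rng(2)
lbas = np.arange(10)
lifetimes = rng.lognormal(mean=9, sigma=2, size=10)   # toy predictions, not a trained model
print(separate_writes(lbas, lifetimes))
```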
Citations: 0
TRIM: Thermal Auto-Compensation for Resistive In-Memory Computing
IF 2.9 | CAS Zone 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-07-08 | DOI: 10.1109/TCAD.2025.3586889
Dipesh C. Monga;Gaurav Singh;Omar Numan;Kazybek Adam;Martin Andraud;Kari A. I. Halonen
In-memory computing (IMC) has emerged as one of the most promising architectures for efficiently computing artificial intelligence tasks, particularly deep neural networks (DNNs), on hardware. IMC can combine analog computation principles with emerging nonvolatile memory (eNVM) technologies, potentially offering several orders of magnitude higher energy efficiency than generic processing units. Yet, the use of analog circuitry, potentially integrated with emerging technologies post-processed on top of silicon wafers, makes the hardware susceptible to a wide spectrum of variations, for instance in manufacturing, noise, or temperature. This susceptibility can hamper the large-scale deployment of IMC circuits in the market. To address the reliability of analog resistive IMC circuits under temperature variations, this article presents TRIM, an on-chip thermal auto-compensation method aimed at fully calibrating out first-order temperature effects. TRIM is designed to maintain the computational accuracy of IMC cores in DNN applications over a wide temperature range while remaining highly scalable and adaptable. In essence, the temperature compensation is realized through a complementary-to-absolute-temperature (CTAT) voltage reference integrated inside a voltage regulator and applied at the zero-reference node of a multiplying digital-to-analog converter (MDAC), eliminating the need for external circuits or look-up tables. The proposed methodology is demonstrated on a proof-of-concept 65 nm CMOS resistive IMC column. Measurement results show that the proof-of-concept auto-compensation system significantly enhances the inference and multiply-and-accumulate (MAC) operation accuracy of any first-order resistive crossbar column, achieving 100% inference accuracy recovery over a temperature range of –20 °C to 60 °C and a 91.3% improvement in MAC operation accuracy, with an area overhead of 2% and a power overhead of under 0.02%.
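The first-order compensation idea can be illustrated numerically: if a column's conductances drift linearly with temperature, shifting the zero-reference node by a temperature-dependent amount can cancel the drift of the sum-of-products current. All coefficients below are invented for illustration, and the closed-form reference is an idealized stand-in for the analog CTAT circuit described in the article.

```python
import numpy as np

# Toy first-order model of one resistive IMC column; every number is assumed.
ALPHA = 2e-3                                   # fractional conductance drift per degree C
T0 = 27.0                                      # calibration temperature (C)
G0 = 1e-5 * np.array([1.0, 2.0, 3.0, 4.0])     # nominal cell conductances (S)
V_IN = np.array([0.3, 0.1, 0.2, 0.4])          # input voltages (V)

def column_current(temp, v_zero):
    """Sum-of-products current of the column at a given temperature."""
    g = G0 * (1.0 + ALPHA * (temp - T0))       # first-order thermal drift
    return float(np.sum(g * (V_IN - v_zero)))

def compensating_reference(temp):
    """Idealized zero-node shift that cancels the first-order drift in this toy:
    computed in closed form from the known conductance weights, whereas the real
    CTAT circuit provides an approximate, global analog version."""
    w_mean = np.sum(G0 * V_IN) / np.sum(G0)
    dt = ALPHA * (temp - T0)
    return w_mean * dt / (1.0 + dt)

for temp in (-20.0, 27.0, 60.0):
    raw = column_current(temp, 0.0)
    comp = column_current(temp, compensating_reference(temp))
    print(f"{temp:6.1f} C  uncompensated={raw:.3e} A  compensated={comp:.3e} A")
```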
Citations: 0
Systematic Methodology of Modeling and Design Space Exploration for CMOS Image Sensors
IF 2.9 | CAS Zone 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-07-03 | DOI: 10.1109/TCAD.2025.3585753
Tianrui Ma;Zhe Gao;Zhe Chen;Ramakrishna Kakarala;Charles Shan;Weidong Cao;Xuan Zhang
CMOS image sensors (CIS) are integral to both human and computer vision tasks, necessitating continuous improvements in key performance metrics such as latency, power, and noise. While experienced designers can make informed design decisions, novice designers and system architects struggle with the complex and expansive design space of CIS. This article introduces a systematic methodology that elucidates the tradeoffs among CIS performance metrics and enables efficient design space exploration (DSE). Specifically, we propose a first-principles-based CIS modeling method. By exposing low-level circuit parameters, our modeling method explicitly reveals the impact of design changes on high-level metrics. Based on this modeling method, we propose a DSE process that swiftly evaluates and identifies the optimal CIS design, capable of exploring over 10^9 designs in under a minute without the need for time-consuming SPICE simulations. Our approach is validated through a case study and comparisons with real-world designs, demonstrating its practical utility in guiding early-stage CIS design.
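The DSE flow described here hinges on analytical metric models cheap enough to sweep exhaustively. The toy sketch below shows that pattern: a brute-force sweep of a small parameter grid against made-up first-order metric formulas under simple constraints. None of the parameters or equations are taken from the article.

```python
import itertools
import numpy as np

# Hypothetical low-level knobs of a CIS readout chain (values are illustrative).
pixel_pitch_um = np.array([1.0, 1.4, 2.0])
adc_bits       = np.array([8, 10, 12])
column_bias_uA = np.array([1.0, 2.0, 4.0, 8.0])

def evaluate(pitch, bits, bias):
    """Made-up first-order models: larger pixels and bias lower noise,
    while more ADC bits and bias raise power and conversion time."""
    noise_e    = 3.0 / (pitch * np.sqrt(bias))        # input-referred noise (e-)
    power_mw   = 0.05 * bias * bits                   # readout power (mW)
    latency_us = 0.2 * (2 ** int(bits)) / (bias * 10.0)  # per-row conversion time (us)
    return noise_e, power_mw, latency_us

best = None
for pitch, bits, bias in itertools.product(pixel_pitch_um, adc_bits, column_bias_uA):
    noise, power, latency = evaluate(pitch, bits, bias)
    if noise > 2.0 or latency > 20.0:         # example constraints
        continue
    if best is None or power < best[0]:       # minimize power under the constraints
        best = (power, pitch, bits, bias, noise, latency)

print("best design (power, pitch, bits, bias, noise, latency):", best)
```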
Citations: 0
Exploiting ARMeD Channels By Reverse Engineering ARM Memory Disambiguation Unit
IF 2.9 | CAS Zone 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-07-02 | DOI: 10.1109/TCAD.2025.3585078
Chang Liu;Zhouyang Li;Haixia Wang;Pengfei Qiu;Gang Qu;Dongsheng Wang
ARM CPUs are widely used in both embedded systems and personal computers, where security considerations are becoming increasingly important. Vulnerabilities in hardware components such as the cache and the translation lookaside buffer are well documented, but there are far fewer studies on other components, especially those in the CPU backend, largely because their design and implementation details are unavailable. To address this gap, we present the first in-depth reverse engineering analysis of the memory disambiguation unit (MDU) in the backend of ARM CPUs. Across four microarchitectures from ARM and Apple CPUs, we identify two different MDU designs, switch-based and counter-based. We then analyze the state machine, selection mechanism, and organization of these MDU designs. We further propose new side channels and covert channels, which we call ARMeD channels, that exploit the ARM MDU to leak information. We demonstrate three attacks using ARMeD channels: 1) a cross-process covert channel; 2) website fingerprinting; and 3) a new implementation of the Spectre attack. Finally, we present a defense strategy against ARMeD channels with less than 3% degradation in the MDU's prediction accuracy.
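At its core, a covert channel like the ones described above reduces to distinguishing two latency distributions at the receiver. The snippet below only simulates that decoding step with synthetic latencies and a fixed threshold; it contains none of the MDU-specific training, eviction, or timing code from the article, and the latency numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed latency model (cycles): a '1' bit puts the receiver's probe on a slow
# path (e.g., mis-speculation recovery), a '0' bit leaves it on the fast path.
FAST = (100.0, 5.0)    # mean, std when the shared state encodes 0
SLOW = (160.0, 8.0)    # mean, std when the shared state encodes 1

def transmit(bits, samples_per_bit=32):
    """Simulate the latency samples the receiver would observe for each bit."""
    return [rng.normal(*(SLOW if b else FAST), samples_per_bit) for b in bits]

def decode(latency_slots, threshold=130.0):
    """Threshold the median latency of each bit slot to recover the message."""
    return [int(np.median(samples) > threshold) for samples in latency_slots]

message = [1, 0, 1, 1, 0, 0, 1, 0]
recovered = decode(transmit(message))
print("sent:", message, "recovered:", recovered)
```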
Citations: 0
AnaCraft: Duel-Play Probabilistic-Model-Based Reinforcement Learning for Sample-Efficient PVT-Robust Analog Circuit Sizing Optimization
IF 2.9 | CAS Zone 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-07-02 | DOI: 10.1109/TCAD.2025.3582175
Mohsen Ahmadzadeh;Jan Lappas;Norbert Wehn;Georges Gielen
Recent advancements in machine learning offer the potential for faster and more robust optimization approaches for analog circuit design automation. However, fully automated yet fast and process, voltage, and temperature (PVT)-robust sizing algorithms are still lacking, as even the most recent methods continue to require extensive simulations or domain-specific circuit expertise. In this article, we present a PVT-robust analog circuit sizing method, called AnaCraft, that is the first to introduce an adversarial training scheme of multiagent reinforcement learning (RL) for robust circuit design automation. We adopt the soft actor–critic (SAC) agent for circuit sizing, which outperforms other actor–critic agents in stability and robustness. We then introduce a duel-play scheme to address PVT robustness, in which sizing agents cooperate to find optimal circuit parameters while competing with an adversarial PVT agent. We combine this approach with a model-based policy optimization method: an ensemble of probabilistic models is trained and used to extract many short rollouts of generated data for updating the sizing agents. We test our algorithm on the sizing of operational amplifiers in a 45-nm CMOS technology, as well as on a complex data receiver circuit in a predictive 7-nm FinFET technology. This demonstrates our approach's ability to find PVT-robust, power-area-optimal sizes for advanced technologies and circuits. Our proposed method achieves a higher figure of merit with up to ~3× fewer circuit simulations and ~2× less runtime compared to existing state-of-the-art methods.
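The duel-play interaction can be sketched without any RL machinery: a sizing player proposes device parameters, an adversarial player answers with the PVT corner that hurts most, and the sizing player is scored on that worst case. In the sketch below, random search stands in for the SAC agents and the figure of merit is invented, so it illustrates only the game structure, not AnaCraft itself.

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)

PVT_CORNERS = list(itertools.product(["ss", "tt", "ff"], [0.9, 1.0, 1.1], [-40, 27, 125]))

def figure_of_merit(widths, corner):
    """Invented stand-in for a circuit simulation: a score to maximize."""
    process, vdd, temp = corner
    speed = {"ss": 0.8, "tt": 1.0, "ff": 1.2}[process] * vdd
    gain = np.log1p(widths).sum() * speed
    power = widths.sum() * vdd * (1 + 0.002 * (temp - 27))
    return gain - 0.5 * power

def adversary_worst_corner(widths):
    """PVT player: pick the corner that minimizes the sizing player's score."""
    return min(PVT_CORNERS, key=lambda c: figure_of_merit(widths, c))

best_w, best_score = None, -np.inf
for _ in range(500):                           # random search in place of SAC
    widths = rng.uniform(1.0, 10.0, size=6)    # candidate transistor widths (a.u.)
    corner = adversary_worst_corner(widths)    # adversary responds
    score = figure_of_merit(widths, corner)    # worst-case reward
    if score > best_score:
        best_w, best_score = widths, score

print("best worst-case FoM:", round(best_score, 3), "widths:", np.round(best_w, 2))
```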
Citations: 0
Oiso: Outlier-Isolated Data Format for Low-Bit Large Language Model Quantization
IF 2.9 | CAS Zone 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-07-01 | DOI: 10.1109/TCAD.2025.3585023
Lancheng Zou;Shuo Yin;Mingjun Li;Mingzi Wang;Chen Bai;Wenqian Zhao;Bei Yu
The scale of large language models (LLMs) has steadily increased over time, leading to enhanced performance in multimodal understanding and complex reasoning, but at the cost of significant execution overhead on hardware. Quantization is a promising approach to reduce the computation and memory overhead of LLM deployment. However, maintaining accuracy and efficiency simultaneously is challenging due to the presence of outliers. Moreover, low-bit quantization tends to deteriorate accuracy due to its limited precision. Existing outlier-aware quantization/hardware co-design methods split the sparse outliers from the normal values with dedicated encoding schemes. However, such separation produces a nonuniform data format for normal values and outliers, leading to additional hardware design and inefficient memory access. This article presents Oiso, an outlier-isolated data format for low-bit LLM quantization. Oiso is a unified representation for both outliers and normal values. It isolates the normal values from the outliers, which reduces the impact of outliers on the normal values during the quantization process. Taking advantage of the uniform format, Oiso arithmetic can be performed with a homogeneous computational unit, and Oiso values can be stored in a standardized format. Hierarchical block encoding with a subblock alignment scheme is introduced to reduce the encoding cost and the hardware overhead. We introduce the Oiso architecture, equipped with Oiso processing elements and encoders tailored for Oiso arithmetic, realizing efficient low-bit LLM inference. Oiso quantization pushes the limits of low-bit LLM quantization, and the Oiso accelerator outperforms the state-of-the-art outlier-aware accelerator design with a 1.26× performance improvement and a 25% energy reduction.
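As a generic illustration of the outlier-isolation idea behind such formats, the sketch below quantizes a tensor block so that a few large-magnitude entries are kept at higher precision and no longer stretch the quantization range of the remaining values. The threshold rule, bit widths, and storage layout are generic choices for illustration, not the Oiso encoding.

```python
import numpy as np

def quantize_block(x, bits=4, outlier_sigma=3.0):
    """Quantize a block with outliers kept aside at higher precision.

    Normal values use a symmetric `bits`-bit integer grid whose scale is set
    WITHOUT the outliers, so the bulk of the block keeps fine resolution.
    """
    thresh = outlier_sigma * np.std(x)
    outlier_mask = np.abs(x) > thresh
    normal = np.where(outlier_mask, 0.0, x)

    qmax = 2 ** (bits - 1) - 1
    scale = max(float(np.max(np.abs(normal))) / qmax, 1e-12)
    q_normal = np.clip(np.round(normal / scale), -qmax - 1, qmax).astype(np.int8)

    # Outliers stored separately (here: index + fp16 value).
    outliers = [(int(i), np.float16(x[i])) for i in np.flatnonzero(outlier_mask)]
    return q_normal, scale, outliers

def dequantize_block(q_normal, scale, outliers):
    y = q_normal.astype(np.float32) * scale
    for i, v in outliers:
        y[i] = np.float32(v)
    return y

rng = np.random.default_rng(5)
block = rng.normal(0, 0.1, size=64)
block[7] = 4.2                                  # inject an activation-style outlier
q, s, o = quantize_block(block)
err = np.abs(dequantize_block(q, s, o) - block).mean()
print("outliers kept:", o, "mean abs error:", err)
```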
Citations: 0