
Latest articles in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

Hardware Accelerator for Short-Read DNA Sequence Alignment Using Burrows-Wheeler Transformation
IF 2.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-06-12 · DOI: 10.1109/TCAD.2025.3579326
Sriparna Mandal;Surajeet Ghosh
Next-generation sequencing deals with exponential growth in sequence databases; the primary challenge is aligning short-read sequences in a time-efficient manner. Despite numerous efforts in contemporary research, existing approaches face tradeoffs among time, power consumption, and resource constraints. A hardware accelerator is presented that uses the Burrows-Wheeler Transformation without any sequence terminator to perform short-read alignment at hardware speed, avoiding the additional storage, operations, and power consumption a terminator incurs. Further, a hardware-based binary search scheme is introduced to reduce the power consumption of the accelerator. As an alternative, a parallel searching mechanism is introduced to accomplish the search in a single clock cycle. The accelerator is evaluated for 64-to-256-nucleotide reference sequences and 32-to-56-nucleotide query sequences. The parallel search scheme consumes ≈11% less time than the binary search-based scheme, at the cost of ≈1.6%–3.7% more resources and ≈4.5%–23% more power. Compared with the with-terminator method, the accelerator achieves a ≈31.01%–33.13% gain in processing time, ≈31.28%–34.47% saving in hardware resources, ≈33.08%–33.29% saving in storage, and a ≈14.03%–50.79% reduction in power consumption. Finally, this accelerator exhibits a ≈52× gain in throughput, without involving any terminator or external memory, compared to state-of-the-art architectures.
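As a concrete software illustration of the search the abstract describes, the sketch below implements textbook BWT construction and FM-index-style backward search in Python, using the conventional '$' terminator. The paper's hardware contribution is precisely a terminator-free variant, which is not reproduced here; all function names are ours.

```python
def bwt(text):
    """Burrows-Wheeler transform of text (must end with the '$' terminator)."""
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def backward_search(bwt_str, pattern):
    """Count exact occurrences of pattern via LF-mapping over the BWT."""
    first_col = sorted(bwt_str)
    # C[c]: number of characters in the text strictly smaller than c
    C = {c: first_col.index(c) for c in set(bwt_str)}

    def occ(c, k):  # occurrences of c in bwt_str[:k]
        return bwt_str[:k].count(c)

    lo, hi = 0, len(bwt_str)
    for c in reversed(pattern):       # extend the match one symbol at a time
        if c not in C:
            return 0
        lo = C[c] + occ(c, lo)
        hi = C[c] + occ(c, hi)
        if lo >= hi:                  # suffix-array interval became empty
            return 0
    return hi - lo

reference = "ACGTACGT$"
print(backward_search(bwt(reference), "ACG"))  # → 2
```

Each query symbol narrows a suffix-array interval, so a read of length m costs O(m) interval updates — the step the paper's parallel scheme collapses into a single clock cycle in hardware.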
Published in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 45, no. 1, pp. 547–551.
Citations: 0
Optimization Heuristics for Grid-Based Integer Linear Programming Package Substrate Router
IF 2.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-06-10 · DOI: 10.1109/TCAD.2025.3578328
Chen-Yu Hsieh;Yu-En Lin;Yi-Yu Liu
With the increasing number of I/O pins in highly integrated semiconductor products, semiconductor packaging has become an essential yet complex part of integrated circuit (IC) design. The substrate plays an important role in advanced semiconductor packaging and provides the chip with electrical connections and heat dissipation. While numerous studies have addressed the substrate routing problem, only one state-of-the-art work provides a customized routing flow specifically designed for packages with wire-bonding style and fine-pitch ball grid arrays (FBGA), which are more widely used than advanced packaging due to their maturity and lower cost. However, the existing router suffers from unsatisfactory routability due to its simplistic implementation and lack of necessary consideration for finger connections. Therefore, this article proposes several optimization heuristics, such as finger accessibility enhancement, progressive rerouting, and half-grid rerouting techniques, to further improve the overall routing completion rate. Experimental results show that the proposed heuristics are capable of avoiding routing resource wastage, achieving better routing quality, and eliminating design-rule violations.
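The proposed router is ILP-based; as a runnable stdlib sketch of what "grid-based routing" means, the classic Lee (BFS maze-routing) algorithm below finds a shortest legal path on a routing grid. It is a generic baseline for illustration only, not the authors' flow or their heuristics.

```python
from collections import deque

def lee_route(grid, src, dst):
    """Shortest path on a routing grid (0 = free cell, 1 = blocked) via BFS."""
    rows, cols = len(grid), len(grid[0])
    prev = {src: None}                     # visited set doubling as back-pointers
    q = deque([src])
    while q:
        r, c = q.popleft()
        if (r, c) == dst:                  # retrace the path from dst to src
            path, cur = [], dst
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None                            # unroutable with current blockages

# a blockage row forces the net to detour around the right side
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = lee_route(grid, (0, 0), (2, 0))
print(len(path) - 1)  # → 6 (wire length of the detour)
```

BFS guarantees the shortest detour; the article's heuristics (finger accessibility enhancement, progressive and half-grid rerouting) address what such a greedy pass leaves unrouted.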
Published in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 45, no. 1, pp. 552–556.
Citations: 0
Multilayer Package Power/Ground Planes Synthesis With Balanced DC IR Drops: A Game-Theoretic Optimization Approach
IF 2.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-06-09 · DOI: 10.1109/TCAD.2025.3577971
Siyuan Liang;Zhen Zhuang;Kai-Yuan Chao;Bei Yu;Tsung-Yi Ho
Recently, the challenge of integrating an increasing number of transistors on a single die to adhere to Moore’s Law has spurred the need for innovative packaging solutions. Power/ground planes are integral to packages, and designers typically strive to maximize their size. This provides shielding and maintains constant impedance for adjacent high-speed signal wires, benefiting signal integrity. Additionally, large power/ground planes help reduce DC IR drops, enhancing power integrity. However, the necessity for multiple power/ground nets, each requiring independent power/ground planes within a package, makes the optimal allocation of limited free space a complex task. This article introduces a game-theoretic optimization method aimed at evenly mitigating DC IR drops across the multilayer package power/ground planes. In the formulated game of achieving the ideal power/ground plane design, we can enhance the use of package space and realize a design with evenly distributed DC IR drops across all power/ground planes. This is accomplished by adjusting strategies and reaching a state of Nash equilibrium in the allocation of free space. Additionally, we propose a rapid multilayer power/ground plane DC IR drop evaluation and a power/ground plane legalization method to bolster our optimization method.
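A toy version of the game can make the equilibrium idea concrete. Assume each power/ground net i owns area a_i of the free space and its DC IR drop behaves like k_i / a_i — a simplification of ours, not the paper's electrical model. Repeated best-response-style reallocation under a fixed area budget then drives all drops to a common value, the balanced state the abstract targets:

```python
def balance_ir_drops(k, total_area, iters=200):
    """Toy best-response loop: nets repeatedly shift free-space area toward
    the nets with the worst modeled IR drop (k_i / a_i) until all drops
    equalize -- the Nash point of this simplified allocation game."""
    n = len(k)
    a = [total_area / n] * n                      # start from an even split
    for _ in range(iters):
        drops = [k[i] / a[i] for i in range(n)]
        mean = sum(drops) / n
        # grow the areas of above-average-drop nets, shrink the others
        a = [a[i] * (drops[i] / mean) ** 0.5 for i in range(n)]
        scale = total_area / sum(a)               # keep the total budget fixed
        a = [x * scale for x in a]
    return a

areas = balance_ir_drops([1.0, 4.0], total_area=10.0)
print(round(areas[0], 3), round(areas[1], 3))  # → 2.0 8.0
```

At the fixed point every net sees the same drop (here 0.5 in both), so no net can improve unilaterally — the defining property of the Nash equilibrium the article computes for real multilayer plane geometries.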
Published in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 45, no. 1, pp. 453–465. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11028916
Citations: 0
Benchmark Suite for Resilience Assessment of Deep Learning Models
IF 2.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-06-09 · DOI: 10.1109/TCAD.2025.3578297
Cristiana Bolchini;Alberto Bosio;Luca Cassano;Antonio Miele;Salvatore Pappalardo;Dario Passarello;Annachiara Ruospo;Ernesto Sanchez;Matteo Sonza Reorda;Vittorio Turco
The reliability assessment of systems powered by artificial intelligence (AI) is becoming a crucial step prior to their deployment in safety and mission-critical systems. Recently, many efforts have been made to develop sophisticated techniques to evaluate and improve the resilience of AI models against the occurrence of random hardware faults. However, due to the intrinsic nature of such models, the comparison of the results obtained in state-of-the-art works is crucial, as reference models are missing. Moreover, their resilience is strongly influenced by the training process, the adopted framework and data representation, and so on. To enable a common ground for future research targeting convolutional neural networks (CNNs) resilience analysis/hardening, this work proposes a first benchmark suite of deep learning (DL) models commonly adopted in this context, providing the models, the training/test data, and the resilience-related information (fault list, coverage, etc.) that can be used as a baseline for fair comparison. To this end, this research identifies a set of axes that have an impact on the resilience and classifies some popular CNN models, in both PyTorch and TensorFlow. Some final considerations are drawn, showing the relevance of a benchmark suite tailored for the resilience context.
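A minimal sketch of the random-hardware-fault model such resilience studies rely on: single-bit upsets injected into float32 weights. The function names and the flat-list weight representation are ours for illustration — this is not the suite's API.

```python
import random
import struct

def flip_bit(value, bit):
    """Flip one bit of a value's float32 encoding, mimicking a single-event upset."""
    word = struct.unpack("<I", struct.pack("<f", value))[0]
    return struct.unpack("<f", struct.pack("<I", word ^ (1 << bit)))[0]

def inject_faults(weights, n_faults, rng):
    """Return a copy of the weight list with n_faults random single-bit flips."""
    faulty = list(weights)
    for _ in range(n_faults):
        idx = rng.randrange(len(faulty))          # random weight ...
        faulty[idx] = flip_bit(faulty[idx], rng.randrange(32))  # ... random bit
    return faulty

rng = random.Random(0)                            # fixed seed: reproducible fault list
print(inject_faults([0.5, -1.25, 3.0], n_faults=1, rng=rng))
```

Which bit is hit matters enormously (a sign- or exponent-bit flip can change a weight by orders of magnitude), which is exactly why a shared fault list and baseline models are needed for fair cross-paper comparison.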
Published in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 45, no. 1, pp. 418–427. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11029030
Citations: 0
DAGSIS: A DAG-Aware MAGIC-Based Synthesis Framework for In-Memory Computing
IF 2.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-06-06 · DOI: 10.1109/TCAD.2025.3577539
Lian Yao;Jigang Wu;Peng Liu;Siew-Kei Lam
This article presents a comprehensive synthesis framework, named DAGSIS, for memristor-aided logic (MAGIC)-based in-memory computing systems. DAGSIS addresses the limitations of prior works, such as overlooking the benefits of MAGIC’s high fan-in capability and the impact of global properties of netlists on the scheduling of the computation sequence (CS). DAGSIS achieves the optimization in two synthesis stages. In the technology-independent optimization stage, DAGSIS encourages the merging of nodes in the network to reduce circuit size, by utilizing equivalent multiplexer (MUX) transformations. In the CS scheduling stage, DAGSIS introduces two schemes for optimizing area overhead and latency, respectively. For area optimization, DAGSIS maximizes the utilization of memristive cells by erasing expired data as early as possible. For latency optimization, DAGSIS aims to minimize erasing operations, by maximizing the number of erased cells in each epoch of filling the memory. To achieve better CS scheduling, DAGSIS introduces two design rules that fully consider the global attributes of circuit design, such as the critical path and high fan-out nodes. Experimental results show that DAGSIS reduces the circuit size by 6.69% on ISCAS’85 benchmarks compared to the ABC tool, an open-source logic synthesis framework. Compared to the state-of-the-art works, DAGSIS achieves a reduction of 40.68% and 12.67% in area overhead and erasing operations, respectively, on ISCAS’85 and EPFL benchmarks. The improvements are further translated into a reduction in energy consumption of up to 13.7%.
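The early-erase idea for area can be sketched with a toy scheduler: execute a netlist DAG in topological order and erase each intermediate cell as soon as its last consumer has run, tracking how many cells stay live between operations. This is our illustration of the principle, not DAGSIS itself.

```python
from collections import defaultdict

def schedule_with_early_erase(deps):
    """Topologically schedule a DAG of operations (deps maps node -> input list)
    and erase each result once its last consumer has run. Returns the schedule
    and the peak number of cells live between operations."""
    consumers = defaultdict(int)
    for node, inputs in deps.items():
        for i in inputs:
            consumers[i] += 1
    indeg = {n: len(deps[n]) for n in deps}       # Kahn's algorithm
    ready = [n for n in deps if indeg[n] == 0]
    order, live, peak = [], set(), 0
    while ready:
        n = ready.pop()
        order.append(n)
        live.add(n)
        for i in deps[n]:                         # input consumed once more
            consumers[i] -= 1
            if consumers[i] == 0 and i in live:
                live.remove(i)                    # expired data: erase for reuse
        peak = max(peak, len(live))
        for m in deps:                            # release newly-ready successors
            if n in deps[m]:
                indeg[m] -= 1
                if indeg[m] == 0:
                    ready.append(m)
    return order, peak

# c and d both consume a and b; e consumes c and d
deps = {"a": [], "b": [], "c": ["a", "b"], "d": ["a", "b"], "e": ["c", "d"]}
order, peak = schedule_with_early_erase(deps)
print(peak)  # → 3 (versus 5 cells if nothing were ever erased)
```

The latency-oriented scheme in the article makes the opposite trade: it batches erasures so that each erase epoch clears as many cells as possible.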
Published in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 45, no. 1, pp. 373–386.
Citations: 0
Parallel Non-Monte Carlo Transient Noise Simulation With Flicker Noise
IF 2.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-06-05 · DOI: 10.1109/TCAD.2025.3577018
Alex Goulet;Roni Khazaka
A parallel non-Monte Carlo transient noise analysis method for efficient general nonlinear analysis is presented. The proposed method extends a previous method to include flicker noise. The implementation of the proposed method in a SPICE-like circuit simulator is described. Additional practical considerations are discussed. Higher parallel efficiency is achieved by balancing the parallel loads. The optimal number of processors is automatically selected as part of load balancing. A new time domain flicker noise circuit representation that increases the computational efficiency of the proposed method and the underlying serial method is presented. Three examples of transient noise analysis are provided: a low-noise amplifier circuit, a mixer circuit, and a distributed amplifier circuit.
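Flicker (1/f) noise itself can be approximated in a few lines with the classic Voss-McCartney construction — a generic signal-generation trick, unrelated to the paper's time-domain circuit representation: several white-noise sources are summed, each updated half as often as the previous one, so lower-frequency components carry proportionally more energy.

```python
import random

def pink_noise(n, rows=16, rng=None):
    """Approximate 1/f (flicker) noise via the Voss-McCartney algorithm."""
    rng = rng or random.Random()
    sources = [rng.uniform(-1.0, 1.0) for _ in range(rows)]
    out = []
    for k in range(n):
        # update source j, where j is the index of the lowest set bit of k;
        # source j therefore changes once every 2**j samples
        j = (k & -k).bit_length() - 1 if k else 0
        if j < rows:
            sources[j] = rng.uniform(-1.0, 1.0)
        out.append(sum(sources) / rows)           # normalized superposition
    return out

samples = pink_noise(1024, rng=random.Random(42))
print(max(abs(s) for s in samples) <= 1.0)  # → True (amplitude stays bounded)
```

A Monte Carlo transient-noise flow would rerun the simulator over many such sampled noise traces; the article's non-Monte Carlo method avoids that ensemble entirely.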
Published in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 45, no. 1, pp. 323–334.
Citations: 0
CLAPS: A Graph Clustering-Based Approach for Partial Scan Design
IF 2.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-06-04 · DOI: 10.1109/TCAD.2025.3576314
Jaeyoung Joung;Sangjun Lee;Jongho Park;Jaehyun Kim;Laesang Jung;Sungho Kang
Scan is a representative design-for-testability (DFT) technique for testing sequential circuits. However, the additional hardware overhead and performance degradation caused by scan insertion can be unacceptable in specific designs. Partial scan has been applied as an alternative to full scan to balance these issues. However, previous cell selection algorithms entail high computational complexity that grows with the number of circuit components, including flip-flops, and do not sufficiently consider the analysis of large-scale circuits. In this article, a graph theory-based partial scan approach is proposed to effectively address the issues caused by scan insertion and reduce the load of structural analysis. The proposed algorithm partitions the circuit into multiple portions using graph clustering. Scan cells are selected from each subgraph to reduce sequential test generation complexity and improve testability. By partially analyzing the circuit, the proposed approach not only addresses the complexity problem of structural analysis in large-scale circuits but also can be generally applied regardless of circuit size or the number of components. The experimental results show that the proposed algorithm reduces processing time to the order of seconds and reduces scan cells by approximately 11.47% with only 0.21% test coverage loss on average compared to a full scan design.
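Partial-scan cell selection is classically framed as breaking all cycles in the flip-flop dependency graph (the S-graph), since an acyclic remainder is easy for sequential test generation. The greedy sketch below is our illustration of that classical framing, not the CLAPS algorithm: it repeatedly scans the flip-flop with the largest in-degree × out-degree product until the graph is acyclic.

```python
def has_cycle(g):
    """Detect a cycle with recursive DFS three-coloring."""
    color = {u: 0 for u in g}          # 0 = unvisited, 1 = on stack, 2 = done

    def dfs(u):
        color[u] = 1
        for v in g.get(u, ()):
            if v in color and (color[v] == 1 or (color[v] == 0 and dfs(v))):
                return True
        color[u] = 2
        return False

    return any(color[u] == 0 and dfs(u) for u in g)

def break_cycles(graph):
    """Greedy partial-scan selection: scan flip-flops until the S-graph is acyclic."""
    g = {u: set(vs) for u, vs in graph.items()}
    scanned = set()
    while has_cycle(g):
        indeg = {u: 0 for u in g}
        for u in g:
            for v in g[u]:
                indeg[v] += 1
        pick = max(g, key=lambda u: indeg[u] * len(g[u]))  # likely on many cycles
        scanned.add(pick)
        g.pop(pick)                    # a scanned cell cuts every path through it
        for u in g:
            g[u].discard(pick)
    return scanned

# two coupled loops: a -> b -> a and b -> c -> b
sgraph = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(sorted(break_cycles(sgraph)))  # → ['b'] (scanning b alone breaks both loops)
```

CLAPS differs by first clustering the graph and selecting cells per subgraph, which is what keeps its analysis cost bounded on large circuits.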
Published in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 45, no. 1, pp. 396–406.
Citations: 0
Real-Time Compensation Framework for Large-Scale ReRAM-Based Sparse LU Factorization
IF 2.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-06-04 · DOI: 10.1109/TCAD.2025.3576332
Weiran Chen;Zaitian Chen;Bei Yu;Song Chen;Yi Kang;Qi Xu
Recently, resistive switching random access memory (ReRAM)-based hardware accelerators have demonstrated unprecedented performance compared to digital accelerators. However, due to limitations in the manufacturing process and large-scale integration, several significant nonideal effects, including IR-drop, stuck-at faults, and device noise, are typically incurred in real ReRAM-based crossbar arrays. These nonideal effects degrade signal integrity and performance, particularly in crossbar structures used for building high-density ReRAMs. Therefore, a fast and efficient software solution that can predict the effects of IR-drop without involving expensive hardware is highly desirable. In this work, addressing the main limitations of existing simulation methods, such as slow speed and high resource costs, we propose an efficient analysis of large-scale ReRAM crossbar arrays and the corresponding nonideal factors based on sparse matrix modeling. We classify nonideal factors into linear (e.g., IR-drop) and nonlinear (e.g., shot noise) categories. For linear factors, super-nodal sparse LU factorizations are used as the solver. The array-level results show that, compared to SPICE simulation, our method achieves a numerical solution accuracy of 10⁻¹⁵ while running ≈506.8–1253.3× faster with ≈17.46–42934.3× lower memory usage. For nonlinear factors, we propose two solutions based on different requirements. In one method, we obtain an approximate initial solution by solving a linear system while disregarding the nonlinear contributions and subsequently apply an extended Anderson acceleration method to solve the nonlinear equation, which is suitable for high-precision solutions. Another method simplifies the nonlinear equation into an equivalent linear form. Theoretical validation confirms the effectiveness of this method, significantly enhancing simulation speed while maintaining accuracy.
Moreover, we build a high-precision ReRAM accelerator architecture with real-time compensation. Experimental results demonstrate that the proposed architecture effectively mitigates accuracy loss caused by nonideal factors.
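The linear (IR-drop) part of such an analysis reduces to solving a nodal conductance system $Gv = i$ with a sparse direct factorization. A minimal sketch, assuming a toy 1-D resistive line as a stand-in for one crossbar wire; the conductance values and the SciPy-based solver are illustrative, not the paper's super-nodal implementation:

```python
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import splu

# Toy nodal-analysis system G v = i for a 1-D resistive line
# (a stand-in for one crossbar wire); G is the conductance matrix.
n = 200
g_wire = 1.0 / 2.5         # assumed interconnect conductance (S)
g_cell = 1.0 / 1e4         # assumed ReRAM cell conductance (S)
main = np.full(n, 2 * g_wire + g_cell)
off = np.full(n - 1, -g_wire)
G = csc_matrix(diags([off, main, off], [-1, 0, 1]))
i_src = np.zeros(n)
i_src[0] = g_wire * 1.0    # 1 V drive at the near end of the wire

lu = splu(G)               # sparse LU factorization of G
v = lu.solve(i_src)        # node voltages along the line

# The direct solve is accurate to round-off, and the voltage sags
# along the wire, which is exactly the IR-drop effect.
residual = np.max(np.abs(G @ v - i_src))
```

A direct factorization like this gives round-off-level residuals, which is why the sparse-LU route can match SPICE accuracy while being far cheaper on large arrays.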
"Real-Time Compensation Framework for Large-Scale ReRAM-Based Sparse LU Factorization," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 45, no. 1, pp. 309-322, 2025.
Citations: 0
SPLAT: Revisiting Latency Attack on Dynamic Neural Networks
IF 2.9 3区 计算机科学 Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-06-04 DOI: 10.1109/TCAD.2025.3576320
Yu Li;Biao Huang;Jinyin Hu;Cheng Zhuo
Dynamic deep neural networks, particularly multiexit networks, are increasingly recognized for their efficiency in edge-cloud scenarios. However, they are vulnerable to latency attacks that can degrade performance by increasing computation time. Current attack strategies often require white-box access to the model or lead to significant drops in inference accuracy, making them easily detectable. This article introduces SPLAT, a novel approach for executing stealthy and practical latency attacks on dynamic multiexit models under black-box conditions. SPLAT employs a two-stage mechanism: the first stage generates coarse-grained attack inputs using a functional surrogate model, while the second stage refines these perturbations through an efficient query strategy to enhance stealthiness and effectiveness. Extensive experiments validate that SPLAT significantly outperforms existing methods across various models and datasets.
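The two-stage mechanism can be illustrated with a toy sketch: a coarse surrogate-guided perturbation, followed by query-based refinement that shrinks the perturbation without losing the latency gain. Everything here (the toy latency oracle standing in for a multiexit model, the fixed step size) is a hypothetical stand-in, not SPLAT's actual attack:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black-box "latency oracle": pretend later exits fire when the
# input activates many features, so latency grows with positive mass.
def latency(x):
    return float(np.sum(np.maximum(x, 0.0)))

x0 = rng.normal(size=64)             # clean input
# Stage 1: coarse perturbation suggested by a surrogate
# (here the "surrogate" simply says: increase every feature).
x_adv = x0 + 0.5 * np.ones_like(x0)

# Stage 2: query-based refinement; revert each coordinate of the
# perturbation whenever doing so does not reduce observed latency,
# making the attack input closer to the clean one (stealthier).
for i in range(x_adv.size):
    trial = x_adv.copy()
    trial[i] = x0[i]
    if latency(trial) >= latency(x_adv):
        x_adv = trial
```

The refinement loop only ever keeps changes that preserve the latency increase, so the final input is at least as slow as the clean one while using a smaller perturbation than the coarse stage produced.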
"SPLAT: Revisiting Latency Attack on Dynamic Neural Networks," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 45, no. 1, pp. 506-518, 2025.
Citations: 0
DIF-LUT Pro: An Automated Tool for Simple yet Scalable Approximation of Nonlinear Activation on FPGA
IF 2.9 3区 计算机科学 Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-06-04 DOI: 10.1109/TCAD.2025.3576333
Yang Liu;Shuyang Li;Yu Li;Ruiqi Chen;Shun Li;Jun Yu;Kun Wang
Nonlinear activation plays an essential role in the generalization ability of neural networks (NNs). However, implementing intricate mathematical operations on hardware platforms, including field-programmable gate arrays (FPGAs), presents significant challenges. Prior works based on piecewise functions or look-up tables (LUTs) have had difficulty balancing precision requirements against reasonable hardware overhead, often necessitating complex manual intervention. To address these issues, this article proposes DIF-LUT Pro, an automated tool for simple yet scalable approximation of various nonlinear activations on FPGA. Specifically, the proposed algorithm achieves self-adaptive hardware design oriented toward a target precision, using piecewise-linear matching to roughly fit the function's derivative and a range-addressable LUT to offset the residual difference. Moreover, DIF-LUT Pro integrates the algorithm into an automated tool, allowing users to configure a customized interface and generate the corresponding hardware description language (HDL) code with a single click. Experimental results show that 1) DIF-LUT Pro features robust automation and broad generality, capable of generating sound hardware designs under various user configurations across different FPGA platforms and 2) DIF-LUT Pro produces approximations that are simple yet effective, achieving competitive performance compared to previous expert-crafted designs. Furthermore, two detailed case studies demonstrate the efficient application of DIF-LUT Pro to NeRF and SEResnet, proving its practical value. Our source code is open-source and available at https://github.com/AdrianLiu00/DIF-LUT-Tool.
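The core idea, a coarse piecewise-linear fit of the activation plus a range-addressable LUT that stores residual offsets, can be sketched in NumPy. The sigmoid target, the 16 segments, and the 1025-entry LUT below are assumed parameters chosen for illustration, not DIF-LUT Pro's generated hardware:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Coarse piecewise-linear fit over a few breakpoints, plus a LUT of
# residuals sampled on a finer grid (the "range-addressable LUT").
lo, hi = -8.0, 8.0
bp = np.linspace(lo, hi, 17)             # 16 linear segments (assumed)
lut_grid = np.linspace(lo, hi, 1025)     # fine grid for the offset LUT
pwl_on_grid = np.interp(lut_grid, bp, sigmoid(bp))
offset_lut = sigmoid(lut_grid) - pwl_on_grid   # stored correction terms

def approx_sigmoid(x):
    x = np.clip(x, lo, hi)
    pwl = np.interp(x, bp, sigmoid(bp))                       # PWL part
    idx = np.round((x - lo) / (hi - lo) * 1024).astype(int)   # LUT address
    return pwl + offset_lut[idx]

xs = np.linspace(-8, 8, 10001)
err_pwl = np.max(np.abs(np.interp(xs, bp, sigmoid(bp)) - sigmoid(xs)))
err_lut = np.max(np.abs(approx_sigmoid(xs) - sigmoid(xs)))
```

Comparing `err_pwl` against `err_lut` shows why the combination works: the LUT only needs to store the slowly varying residual of the linear fit, so a small table tightens the approximation well beyond what either piece achieves alone.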
"DIF-LUT Pro: An Automated Tool for Simple yet Scalable Approximation of Nonlinear Activation on FPGA," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 45, no. 1, pp. 295-308, 2025.
Citations: 0