
ACM Journal on Emerging Technologies in Computing Systems: Latest Publications

PUF-Based Digital Money with Propagation-of-Provenance and Offline Transfers Between Two Parties
IF 2.2 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-05-24 DOI: 10.1145/3663676
Benjamin Bean, Cyrus Minwalla, Eirini Eleni Tsiropoulou, Jim Plusquellic

Building on prior concepts of electronic money (eCash), we introduce a digital currency where a physical unclonable function (PUF) engenders devices with the twin properties of being verifiably enrolled as a member of a legitimate set of eCash devices and of possessing a hardware-based root-of-trust. A hardware-obfuscated secure enclave (HOSE) is proposed as a means of enabling a PUF-based propagation-of-provenance (POP) mechanism, which allows eCash tokens (eCt) to be securely signed and validated by recipients without incurring any third-party dependencies at transfer time. The POP scheme establishes a chain of custody starting with token creation, extending through multiple bilateral in-field transactions, and culminating in redemption at the token-issuing authority. A lightweight mutual-zero-trust (MZT) authentication protocol establishes a secure channel between any two fielded devices. The POP and MZT protocols, in combination with the HOSE, enable transitivity and anonymity of eCt transfers between online and offline devices.
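The chain-of-custody idea behind POP can be illustrated with a toy signature chain. This is a hedged sketch, not the paper's protocol: a software secret stands in for a PUF-derived signing key, and the `transfer`/`verify` helpers are invented for illustration. Each transfer appends a record whose tag chains over the previous record, so provenance propagates from token creation onward.

```python
import hashlib
import hmac

def transfer(token_chain: list, sender_key: bytes, recipient_id: str) -> list:
    # tag chains over the previous record's tag, binding this hop to history
    prev_tag = token_chain[-1]["tag"] if token_chain else b"genesis"
    tag = hmac.new(sender_key, prev_tag + recipient_id.encode(), hashlib.sha256).digest()
    return token_chain + [{"to": recipient_id, "tag": tag}]

def verify(token_chain: list, keys: list) -> bool:
    # replay the chain with each sender's key; any tampering breaks a tag
    prev_tag = b"genesis"
    for record, key in zip(token_chain, keys):
        expected = hmac.new(key, prev_tag + record["to"].encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(expected, record["tag"]):
            return False
        prev_tag = record["tag"]
    return True
```

At redemption time, the issuing authority can replay the full chain with the enrolled devices' keys, which mirrors the "creation through bilateral transfers to redemption" custody path described in the abstract.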

{"title":"PUF-Based Digital Money with Propagation-of-Provenance and Offline Transfers Between Two Parties","authors":"Benjamin Bean, Cyrus Minwalla, Eirini Eleni Tsiropoulou, Jim Plusquellic","doi":"10.1145/3663676","DOIUrl":"https://doi.org/10.1145/3663676","url":null,"abstract":"<p>Building on prior concepts of electronic money (eCash), we introduce a digital currency where a physical unclonable function (PUF) engenders devices with the twin properties of being verifiably enrolled as a member of a legitimate set of eCash devices and of possessing a hardware-based root-of-trust. A hardware-obfuscated secure enclave (HOSE) is proposed as a means of enabling a PUF-based propagation-of-provenance (POP) mechanism, which allows eCash tokens (<b>eCt</b>) to be securely signed and validated by recipients without incurring any third party dependencies at transfer time. The POP scheme establishes a chain of custody starting with token creation, extending through multiple bilateral in-field transactions, and culminating in redemption at the token-issuing authority. A lightweight mutual-zero-trust (MZT) authentication protocol establishes a secure channel between any two fielded devices. The POP and MZT protocols, in combination with the HOSE, enables transitivity and anonymity of <b>eCt</b> transfers between online and offline devices.</p>","PeriodicalId":50924,"journal":{"name":"ACM Journal on Emerging Technologies in Computing Systems","volume":"48 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2024-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141153157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SAT-based Exact Modulo Scheduling Mapping for Resource-Constrained CGRAs
IF 2.2 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-05-22 DOI: 10.1145/3663675
Cristian Tirelli, Juan Sapriza, Rubén Rodríguez Álvarez, Lorenzo Ferretti, Benoît Denkinger, Giovanni Ansaloni, José Miranda Calero, David Atienza, Laura Pozzi

Coarse-Grain Reconfigurable Arrays (CGRAs) represent emerging low-power architectures designed to accelerate Compute-Intensive Loops (CILs). The effectiveness of CGRAs in providing acceleration relies on the quality of mapping: how efficiently the CIL is compiled onto the platform. State-of-the-Art (SoA) compilation techniques utilize modulo scheduling to minimize the Iteration Interval (II) and use graph algorithms like Max-Clique Enumeration to address mapping challenges. Our work approaches the mapping problem through a satisfiability (SAT) formulation. We introduce the Kernel Mobility Schedule (KMS), an ad-hoc schedule used with the Data Flow Graph and CGRA architectural information to generate Boolean statements that, when satisfied, yield a valid mapping. Experimental results demonstrate that our SAT-based mapper, SAT-MapIt, outperforms SoA alternatives in almost 50% of explored benchmarks. Additionally, we evaluated the mapping results in a synthesizable CGRA design and emphasized the run-time metrics trends, i.e., energy efficiency and latency, across different CILs and CGRA sizes. We show that a hardware-agnostic analysis performed on compiler-level metrics can optimally prune the architectural design space, while still retaining Pareto-optimal configurations. Moreover, by exploring how implementation details impact cost and performance on real hardware, we highlight the importance of holistic software-to-hardware mapping flows, such as the one presented herein.
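The decision the SAT formulation answers ("does a valid mapping exist at this II?") can be mimicked with a minimal exact search. This is an illustrative sketch, not SAT-MapIt itself: the function name, the tiny DFG/CGRA model, and the conflict rule (two ops may not share a PE in the same time slot modulo II) are simplifying assumptions.

```python
import itertools

def modulo_schedule(dfg_edges, n_ops, n_pes, max_time=8):
    # Exhaustive exact search: for increasing II, try every assignment of
    # ops to (PE, start time); accept the first conflict-free schedule.
    for ii in range(1, max_time + 1):                    # smallest II first
        for pes in itertools.product(range(n_pes), repeat=n_ops):
            for times in itertools.product(range(max_time), repeat=n_ops):
                # dependency constraint: consumer fires after its producer
                if not all(times[c] > times[p] for p, c in dfg_edges):
                    continue
                # resource constraint: unique (PE, slot mod II) per op
                slots = {(pes[i], times[i] % ii) for i in range(n_ops)}
                if len(slots) == n_ops:
                    return ii, pes, times
    return None
```

For a 3-op dependency chain on 2 PEs, II = 1 is infeasible (three ops cannot occupy two PEs in one slot), and the search returns II = 2, matching the resource lower bound ceil(3/2). A real SAT encoding expresses exactly these constraints as Boolean clauses and lets a solver decide them.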

{"title":"SAT-based Exact Modulo Scheduling Mapping for Resource-Constrained CGRAs","authors":"Cristian Tirelli, Juan Sapriza, Rubén Rodríguez Álvarez, Lorenzo Ferretti, Benoît Denkinger, Giovanni Ansaloni, José Miranda Calero, David Atienza, Laura Pozzi","doi":"10.1145/3663675","DOIUrl":"https://doi.org/10.1145/3663675","url":null,"abstract":"<p>Coarse-Grain Reconfigurable Arrays (CGRAs) represent emerging low-power architectures designed to accelerate Compute-Intensive Loops (CILs). The effectiveness of CGRAs in providing acceleration relies on the quality of mapping: how efficiently the CIL is compiled onto the platform. State of the Art (SoA) compilation techniques utilize modulo scheduling to minimize the Iteration Interval (II) and use graph algorithms like Max-Clique Enumeration to address mapping challenges. Our work approaches the mapping problem through a satisfiability (SAT) formulation. We introduce the Kernel Mobility Schedule (KMS), an ad-hoc schedule used with the Data Flow Graph and CGRA architectural information to generate Boolean statements that, when satisfied, yield a valid mapping. Experimental results demonstrate SAT-MapIt outperforming SoA alternatives in almost 50% of explored benchmarks. Additionally, we evaluated the mapping results in a synthesizable CGRA design and emphasized the run-time metrics trends, i.e. energy efficiency and latency, across different CILs and CGRA sizes. We show that a hardware-agnostic analysis performed on compiler-level metrics can optimally prune the architectural design space, while still retaining Pareto-optimal configurations. 
Moreover, by exploring how implementation details impact cost and performance on real hardware, we highlight the importance of holistic software-to-hardware mapping flows, as the one presented herein.</p>","PeriodicalId":50924,"journal":{"name":"ACM Journal on Emerging Technologies in Computing Systems","volume":"55 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141153005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards practical superconducting accelerators for machine learning using U-SFQ
IF 2.2 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-04-09 DOI: 10.1145/3653073
Patricia Gonzalez-Guerrero, Kylie Huch, Nirmalendu Patra, Thom Popovici, George Michelogiannakis

Most popular superconducting circuits operate on information carried by ps-wide, μV-tall, single flux quantum (SFQ) pulses. These circuits can operate at frequencies of hundreds of GHz with orders of magnitude lower switching energy than complementary metal-oxide-semiconductor (CMOS) circuits. However, under the stringent area constraints of modern superconductor technologies, fully-fledged, CMOS-inspired superconducting architectures cannot be fabricated at large scales. Unary SFQ (U-SFQ) is an alternative computing paradigm that can address these area constraints. In U-SFQ, information is mapped to a combination of streams of SFQ pulses and the temporal domain. In this work, we extend U-SFQ to introduce novel building blocks such as a multiplier and an accumulator. These blocks reduce area and power consumption by 2× and 4×, respectively, compared with previously-proposed U-SFQ building blocks, and yield at least 97% area savings compared with binary approaches. Using this multiplier and accumulator, we propose a U-SFQ Convolutional Neural Network (CNN) hardware accelerator capable of peak performance comparable to that of the state-of-the-art superconducting binary approach (B-SFQ) in 32× less area. CNNs can operate with 5-8 bits of resolution with no significant degradation in classification accuracy. For 5 bits of resolution, our proposed accelerator yields 5×-63× better performance than CMOS and 15×-173× better area efficiency than B-SFQ.
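The pulse-stream representation can be mimicked in software with rate-coded bit-streams. This is a toy stochastic-computing sketch in the spirit of U-SFQ, not the paper's circuit; the stream length, seeds, and function names are invented. A value in [0, 1] becomes a stream whose density of 1s ("pulses") equals the value, and ANDing two uncorrelated streams multiplies the encoded values.

```python
import random

def to_stream(value, length, rng):
    # one "pulse" (1) per slot with probability equal to the encoded value
    return [1 if rng.random() < value else 0 for _ in range(length)]

def stream_value(stream):
    # decode: fraction of slots that carry a pulse
    return sum(stream) / len(stream)

def unary_multiply(a, b, length=10_000, seed=0):
    # independent generators keep the two streams uncorrelated,
    # so P(both pulse in a slot) = a * b
    rng_a, rng_b = random.Random(seed), random.Random(seed + 1)
    sa, sb = to_stream(a, length, rng_a), to_stream(b, length, rng_b)
    return stream_value([x & y for x, y in zip(sa, sb)])
```

The AND gate replacing a full binary multiplier array is what makes unary arithmetic so area-frugal; the price is that precision grows only with stream length, which fits the 5-8 bit resolutions quoted above.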

{"title":"Towards practical superconducting accelerators for machine learning using U-SFQ","authors":"Patricia Gonzalez-Guerrero, Kylie Huch, Nirmalendu Patra, Thom Popovici, George Michelogiannakis","doi":"10.1145/3653073","DOIUrl":"https://doi.org/10.1145/3653073","url":null,"abstract":"<p>Most popular superconducting circuits operate on information carried by ps-wide, (boldsymbol{mu})V-tall, single flux quantum (SFQ) pulses. These circuits can operate at frequencies of hundreds of GHz with orders of magnitude lower switching energy than complementary-metal-oxide-semiconductors (CMOS). However, under the stringent area constraints of modern superconductor technologies, fully-fledged, CMOS-inspired superconducting architectures cannot be fabricated at large scales. Unary SFQ (U-SFQ) is an alternative computing paradigm that can address these area constraints. In U-SFQ, information is mapped to a combination of streams of SFQ pulses and in the temporal domain. In this work, we extend U-SFQ to introduce novel building blocks such as a multiplier and an accumulator. These blocks reduce area and power consumption by 2(times) and 4(times) compared with previously-proposed U-SFQ building blocks, and yield at least 97% area savings compared with binary approaches. Using these multiplier and adder, we propose a U-SFQ Convolutional Neural Network (CNN) hardware accelerator capable of comparable peak performance with state-of-the-art superconducting binary approach (B-SFQ) in 32(times) less area. CNNs can operate with 5-8 bits of resolution with no significant degradation in classification accuracy. 
For 5 bits of resolution, our proposed accelerator yields 5(times)-63(times) better performance than CMOS and 15(times)-173(times) better area efficiency than B-SFQ.</p>","PeriodicalId":50924,"journal":{"name":"ACM Journal on Emerging Technologies in Computing Systems","volume":"106 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140580656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards Energy-Efficient Spiking Neural Networks: A Robust Hybrid CMOS-Memristive Accelerator
IF 2.2 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2023-12-05 DOI: 10.1145/3635165
Fabiha Nowshin, Hongyu An, Yang Yi

Spiking Neural Networks (SNNs) are energy-efficient artificial neural network models that can carry out data-intensive applications. Energy consumption, latency, and memory bottlenecks are some of the major issues that arise in machine learning applications due to their data-demanding nature. Memristor-enabled Computing-In-Memory (CIM) architectures have been able to tackle the memory-wall issue, eliminating energy-hungry and time-consuming data movement. In this work, we develop a scalable CIM-based SNN architecture with our fabricated two-layer memristor crossbar array. In addition to having an enhanced heat dissipation capability, our memristor exhibits substantial improvements of 10% to 66% in design area, power, and latency compared to state-of-the-art memristors. This design incorporates an inter-spike interval (ISI) encoding scheme, chosen for its high information density, to convert the incoming input signals into spikes. Furthermore, we include a time-to-first-spike (TTFS) based output processing stage, chosen for its energy efficiency, to carry out the final classification. With the combination of ISI, CIM, and TTFS, this network has a competitive inference speed of 2 μs/image and can successfully classify handwritten digits with 2.9 mW of power and 2.51 pJ of energy per spike. The proposed architecture with the ISI encoding scheme can achieve ∼10% higher accuracy than other encoding schemes on the MNIST dataset.
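The two coding schemes combined here can be sketched in a few lines. This is a hedged illustration with invented parameters (the time bounds and spike count are not from the paper): ISI encoding maps a higher input intensity to a shorter gap between successive spikes, and TTFS readout picks the output neuron that fires first.

```python
def isi_encode(intensity, t_min=1.0, t_max=10.0, n_spikes=4):
    # higher intensity -> shorter inter-spike interval (intensity in [0, 1])
    interval = t_max - intensity * (t_max - t_min)
    return [i * interval for i in range(n_spikes)]   # spike timestamps

def ttfs_classify(first_spike_times):
    # time-to-first-spike readout: the earliest-firing output neuron wins,
    # so classification can stop as soon as one neuron spikes
    return min(range(len(first_spike_times)), key=first_spike_times.__getitem__)
```

The energy argument for TTFS is visible in the decode rule: computation can halt at the first output spike rather than integrating activity over a full time window.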

{"title":"Towards Energy-Efficient Spiking Neural Networks: A Robust Hybrid CMOS-Memristive Accelerator","authors":"Fabiha Nowshin, Hongyu An, Yang Yi","doi":"10.1145/3635165","DOIUrl":"https://doi.org/10.1145/3635165","url":null,"abstract":"<p>Spiking Neural Networks (SNNs) are energy-efficient artificial neural network models that can carry out data-intensive applications. Energy consumption, latency, and memory bottleneck are some of the major issues that arise in machine learning applications due to their data-demanding nature. Memristor-enabled Computing-In-Memory (CIM) architectures have been able to tackle the memory wall issue, eliminating the energy and time-consuming movement of data. In this work we develop a scalable CIM-based SNN architecture with our fabricated two-layer memristor crossbar array. In addition to having an enhanced heat dissipation capability, our memristor exhibits substantial enhancement of 10% to 66% in design area, power and latency compared to state-of-the-art memristors. This design incorporates an inter-spike interval (ISI) encoding scheme due to its high information density to convert the incoming input signals into spikes. Furthermore, we include a time-to-first-spike (TTFS) based output processing stage for its energy-efficiency to carry out the final classification. With the combination of ISI, CIM and TTFS, this network has a competitive inference speed of 2μs/image and can successfully classify handwritten digits with 2.9mW of power and 2.51pJ energy per spike. 
The proposed architecture with the ISI encoding scheme can achieve ∼10% higher accuracy than those of other encoding schemes in the MNIST dataset.</p>","PeriodicalId":50924,"journal":{"name":"ACM Journal on Emerging Technologies in Computing Systems","volume":"48 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138537934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Analysis of Various Design Pathways Towards Multi-Terabit Photonic On-Interposer Interconnects
IF 2.2 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2023-12-01 DOI: 10.1145/3635031
Venkata Sai Praneeth Karempudi, Janibul Bashir, Ishan G Thakkar

In the wake of dwindling Moore's Law, to address the rapidly increasing complexity and cost of fabricating large-scale, monolithic systems-on-chip (SoCs), the industry has adopted disaggregation as a solution, wherein a large monolithic SoC is partitioned into multiple smaller chiplets that are then assembled into a large system-in-package (SiP) using advanced packaging substrates such as the silicon interposer. For such interposer-based SiPs, there is a push to realize on-interposer inter-chiplet communication bandwidth of multiple Tb/s and end-to-end communication latency of no more than 10 ns. This push is the natural progression from recent prior works on SiP design and is driven by the proliferating bandwidth demands of modern data-intensive workloads. To meet this bandwidth and latency goal, prior works have focused on a potential solution: using the silicon photonic interposer (SiPhI) to integrate and interconnect a large number of chiplets into an SiP. Despite the early promise, the existing designs of on-SiPhI interconnects still have to evolve by leaps and bounds to meet the goal of multi-Tb/s bandwidth. However, the possible design pathways along which such an evolution can be achieved have not yet been explored in prior works. In this paper, we identify several design pathways that can help on-SiPhI interconnects evolve to achieve multi-Tb/s aggregate bandwidth. We perform an extensive link-level and system-level analysis in which we explore these design pathways in isolation and in combination. From our link-level analysis, we observe that the design pathways that simultaneously enhance the spectral range and the optical power budget available for wavelength multiplexing can render aggregate bandwidth of up to 4 Tb/s per on-SiPhI link. We also show that such high-bandwidth on-SiPhI links can substantially improve the performance and energy efficiency of SiPs based on state-of-the-art CPU and GPU chiplets.
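Why spectral range and power budget must improve *together* can be seen in a back-of-the-envelope WDM link model. All numbers and names below are illustrative assumptions, not the paper's link parameters: the usable wavelength count is capped both by spectral range divided by channel spacing and by the optical power budget divided by per-channel loss, and aggregate bandwidth is the smaller cap times the per-channel data rate.

```python
def aggregate_bandwidth_gbps(spectral_range_nm, channel_spacing_nm,
                             power_budget_db, loss_per_channel_db,
                             rate_per_channel_gbps):
    # wavelength count capped by spectrum...
    n_by_spectrum = int(spectral_range_nm // channel_spacing_nm)
    # ...and independently by the optical power budget
    n_by_power = int(power_budget_db // loss_per_channel_db)
    # the binding constraint decides the channel count
    n_channels = min(n_by_spectrum, n_by_power)
    return n_channels * rate_per_channel_gbps
```

With an (assumed) 80 nm spectral range at 0.5 nm spacing but only a 20 dB budget at 0.25 dB per channel, the power budget binds at 80 channels: widening the spectrum alone buys nothing, which is the intuition behind pursuing both pathways simultaneously.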

{"title":"An Analysis of Various Design Pathways Towards Multi-Terabit Photonic On-Interposer Interconnects","authors":"Venkata Sai Praneeth Karempudi, Janibul Bashir, Ishan G Thakkar","doi":"10.1145/3635031","DOIUrl":"https://doi.org/10.1145/3635031","url":null,"abstract":"<p>In the wake of dwindling Moore’s Law, to address the rapidly increasing complexity and cost of fabricating large-scale, monolithic systems-on-chip (SoCs), the industry has adopted dis-aggregation as a solution, wherein a large monolithic SoC is partitioned into multiple smaller chiplets that are then assembled into a large system-in-package (SiP) using advanced packaging substrates such as silicon interposer. For such interposer-based SiPs, there is a push to realize on-interposer inter-chiplet communication bandwidth of multi-Tb/s and end-to-end communication latency of no more than 10 ns. This push comes as the natural progression from some recent prior works on SiP design, and is driven by the proliferating bandwidth demand of modern data-intensive workloads. To meet this bandwidth and latency goal, prior works have focused on a potential solution of using the silicon photonic interposer (SiPhI) for integrating and interconnecting a large number of chiplets into an SiP. Despite the early promise, the existing designs of on-SiPhI interconnects still have to evolve by leaps and bounds to meet the goal of multi-Tb/s bandwidth. However, the possible design pathways, upon which such an evolution can be achieved, have not been explored in any prior works yet. In this paper, we have identified several design pathways that can help evolve on-SiPhI interconnects to achieve multi-Tb/s aggregate bandwidth. We perform an extensive link-level and system-level analysis in which we explore these design pathways in isolation and in different combinations of each other. 
From our link-level analysis, we have observed that the design pathways that simultaneously enhance the spectral range and optical power budget available for wavelength multiplexing can render aggregate bandwidth of up to 4 Tb/s per on-SiPhI link. We also show that such high-bandwidth on-SiPhI links can substantially improve the performance and energy-efficiency of the state-of-the-art CPU and GPU chiplets based SiPs.</p>","PeriodicalId":50924,"journal":{"name":"ACM Journal on Emerging Technologies in Computing Systems","volume":"42 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138537966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Introduction to the Special Issue on Next-Generation On-Chip and Off-Chip Communication Architectures for Edge, Cloud and HPC
IF 2.2 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2023-10-31 DOI: 10.1145/3631144
John Kim, Tushar Krishna
Proposes a novel tree-based topology with additional microarchitectural features to enable reductions of arbitrary-sized tensors across both space and time, enhancing the overall performance of DNN accelerators.
{"title":"Introduction to the Special Issue on Next-Generation On-Chip and Off-Chip Communication Architectures for Edge, Cloud and HPC","authors":"John Kim, Tushar Krishna","doi":"10.1145/3631144","DOIUrl":"https://doi.org/10.1145/3631144","url":null,"abstract":"proposes a novel tree-based topology with additional microarchitectural features to enable reductions of arbitrary sized tensors across both space and time, enhancing the overall performance of DNN accelerators","PeriodicalId":50924,"journal":{"name":"ACM Journal on Emerging Technologies in Computing Systems","volume":"1 1","pages":"1 - 1"},"PeriodicalIF":2.2,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139307179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Design-Time Reference Current Generation for Robust Spintronic-Based Neuromorphic Architecture
CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2023-09-27 DOI: 10.1145/3625556
Soyed Tuhin Ahmed, Mahta Mayahinia, Michael Hefenbrock, Christopher Münch, Mehdi B. Tahoori
Neural Networks (NNs) can be efficiently accelerated in a neuromorphic fabric based on emerging resistive non-volatile memories (NVMs), such as Spin Transfer Torque Magnetic RAM (STT-MRAM). Compared to other NVM technologies, STT-MRAM offers many benefits, such as fast switching, high endurance, and CMOS process compatibility. However, due to its low ON/OFF ratio, process variations and runtime temperature fluctuations can lead to mis-quantizing the sensed current and, in turn, degradation of inference accuracy. In this paper, we analyze the impact of variation in the sensed accumulated current on the inference accuracy of Binary NNs and propose a design-time reference current generation method to improve the robustness of the implemented NN under different temperature and process variation scenarios (up to 125 °C). Our proposed method is robust to both process and temperature variations. The proposed method improves the accuracy of NN inference by up to 20.51% on the MNIST, Fashion-MNIST, and CIFAR-10 benchmark datasets in the presence of process and temperature variations, without additional runtime hardware overhead compared to existing solutions.
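The sensing problem can be illustrated with a toy model of a binary-synapse column (all currents and names below are made up for illustration, not the paper's device values): each cell contributes an ON or OFF current, the summed column current encodes a popcount, and design-time reference currents placed midway between adjacent current levels recover that popcount. A low ON/OFF ratio narrows the gap between levels, which is why variation can flip the quantization.

```python
def column_current(weights, i_on=10.0, i_off=8.0):
    # accumulated current of one column of binary synapses
    return sum(i_on if w else i_off for w in weights)

def make_references(n_cells, i_on=10.0, i_off=8.0):
    # level k = column current when exactly k cells are ON;
    # place each reference at the midpoint of neighboring levels
    levels = [k * i_on + (n_cells - k) * i_off for k in range(n_cells + 1)]
    return [(a + b) / 2 for a, b in zip(levels, levels[1:])]

def quantize(current, refs):
    # number of references the sensed current exceeds = recovered popcount
    return sum(current > r for r in refs)
```

With a 10/8 ON/OFF ratio the levels sit only 2 units apart, so a drift of just 1 unit from process or temperature variation crosses a reference; a design-time method must pick references that stay midway under the worst-case corners.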
{"title":"Design-Time Reference Current Generation for Robust Spintronic-Based Neuromorphic Architecture","authors":"Soyed Tuhin Ahmed, Mahta Mayahinia, Michael Hefenbrock, Christopher Münch, Mehdi B. Tahoori","doi":"10.1145/3625556","DOIUrl":"https://doi.org/10.1145/3625556","url":null,"abstract":"Neural Networks (NN) can be efficiently accelerated in a neuromorphic fabric based on emerging resistive non-volatile memories (NVM), such as Spin Transfer Torque Magnetic RAM (STT-MRAM). Compared to other NVM technologies, STT-MRAM offers many benefits, such as fast switching, high endurance, and CMOS process compatibility. However, due to its low ON/OFF-ratio, process variations and runtime temperature fluctuations can lead to miss-quantizing the sensed current and in turn, degradation of inference accuracy. In this paper, we analyze the impact of the sensed accumulated current variation on the inference accuracy in Binary NNs and propose a design-time reference current generation method to improve the robustness of the implemented NN under different temperature and process variation scenarios (up to 125 °C). Our proposed method is robust to both process and temperature variations. 
The proposed method improves the accuracy of NN inference by up to (20.51% ) on the MNIST, Fashion-MNIST, and CIFAR-10 benchmark datasets in the presence of process and temperature variations without additional runtime hardware overhead compared to existing solutions.","PeriodicalId":50924,"journal":{"name":"ACM Journal on Emerging Technologies in Computing Systems","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135537913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Secure and Lightweight Authentication Protocol Using PUF for the IoT-based Wireless Sensor Network
CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2023-09-18 DOI: 10.1145/3624477
Sourav Roy, Dipnarayan Das, Bibhash Sen
The wireless sensor network (WSN) has been gaining popularity for automation and performance improvement in different IoT-based applications. The resource-constrained nature and operating environment of IoT make the devices highly vulnerable to different attacks. On the other hand, the Physically Unclonable Function (PUF) helps to implement secure and lightweight authentication protocols for IoT. In this context, a few computation-intensive authentication protocols in the literature have addressed secure IoT communication in WSNs. However, these protocols depend on the local storage of PUF challenge-response pairs (CRPs), which is susceptible to security attacks. This work proposes a lightweight and secure authentication protocol for IoT devices in WSNs. A PUF and its machine learning (ML)-based soft model are integrated to ensure secure authentication and lightweight computation in the WSN. The PUF prevents physical attacks while carrying a very small hardware fingerprint, and the ML-based PUF provides the desired resiliency against PUF identity-based attacks by eliminating the requirement of CRP-based storage. The proposed mechanism delivers two-way authentication while nullifying attacks on IoT. The proposed protocol is implemented on a Xilinx Artix-7 FPGA and a Raspberry Pi for testability and performance evaluation. Experimental results and analysis signify its low-cost computation and the lightweight features desired for IoT.
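The CRP-storage-free idea can be sketched abstractly. This is a minimal illustration, not the paper's protocol: a keyed hash simulates the device's PUF, and the server's ML "soft model" is stood in for by any callable that predicts the same responses. Because the verifier predicts responses on demand from a fresh random challenge, no CRP table exists to steal and replay.

```python
import hashlib
import secrets

def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    # stand-in for the physical PUF: deterministic per device, unpredictable
    # to anyone without the device (here, without the secret)
    return hashlib.sha256(device_secret + challenge).digest()

def authenticate(device_secret, soft_model, rng=secrets) -> bool:
    # fresh challenge per session prevents replay; the server checks the
    # device's response against its soft model's prediction, so no CRP
    # database needs to be stored
    challenge = rng.token_bytes(16)
    response = puf_response(device_secret, challenge)  # computed on-device
    return soft_model(challenge) == response           # predicted at server
```

A cloned or counterfeit device without the right "PUF" fails the check, mirroring the resiliency against PUF identity-based attacks claimed above; the real protocol additionally authenticates the server back to the device for two-way authentication.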
{"title":"Secure and Lightweight Authentication Protocol Using PUF for the IoT-based Wireless Sensor Network","authors":"Sourav Roy, Dipnarayan Das, Bibhash Sen","doi":"10.1145/3624477","DOIUrl":"https://doi.org/10.1145/3624477","abstract":"The wireless sensor network (WSN) has been gaining popularity for automation and performance improvement in different IoT-based applications. The resource-constrained nature and operating environment of IoT make the devices highly vulnerable to different attacks. On the other hand, the Physically Unclonable Function (PUF) helps to implement secure and lightweight authentication protocols for IoT. In this context, few computation-intensive authentication protocols are found in the literature that have addressed secure IoT communication in WSN. Besides, these protocols depend on the local storage of PUF-CRP, which is susceptible to security attacks. This work proposes a lightweight and secure authentication protocol for the IoT devices in WSN. A PUF and its machine learning (ML)–based soft model is integrated to ensure secure authentication and lightweight computation in WSN. PUF prevents physical attacks while carrying very less hardware fingerprints, and the ML-based PUF provides the desired resiliency against PUF identity-based attacks by eliminating the requirement of CRP-based storage. The proposed mechanism delivers two-way authentication while nullifying the attacks on IoT. The proposed protocol is implemented on Xilinx Artix-7 FPGA and Raspberry Pi for testability and performance evaluation. Experiment results and analysis signify its low-cost computations and lightweight features desired for IoT.","PeriodicalId":50924,"journal":{"name":"ACM Journal on Emerging Technologies in Computing Systems","volume":"17 8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135154324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
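The CRP-free verification idea in the abstract above can be sketched as follows. This is a toy illustration, not the paper's protocol: both the device's PUF and the verifier's ML soft model are emulated here with a keyed hash (standing in for a real arbiter/ring-oscillator PUF and a model trained on enrollment CRPs), so that the "predict instead of store" flow is runnable.

```python
import hashlib

# Hypothetical secret silicon fingerprint; in a real device this variation
# is physical and cannot be read out or cloned.
DEVICE_FINGERPRINT = b"example-silicon-variation"

def puf_response(challenge: bytes) -> bytes:
    # Device side: the PUF maps a challenge to a device-unique response.
    return hashlib.sha256(DEVICE_FINGERPRINT + challenge).digest()

def soft_model_predict(challenge: bytes) -> bytes:
    # Verifier side: the ML soft model predicts the expected response.
    # Emulated here as a perfectly trained model (same keyed hash); a real
    # soft model would be a classifier/regressor trained during enrollment.
    return hashlib.sha256(DEVICE_FINGERPRINT + challenge).digest()

def authenticate(challenge: bytes, response: bytes) -> bool:
    # No CRP table is stored: the verifier predicts rather than looks up,
    # which is the storage-elimination property the abstract describes.
    return soft_model_predict(challenge) == response

challenge = b"nonce-0001"
assert authenticate(challenge, puf_response(challenge))
assert not authenticate(challenge, b"\x00" * 32)
```

In practice the soft model's predictions are noisy, so verification would compare responses under a Hamming-distance threshold rather than for exact equality.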
SkyBridge 2.0: A Fine-grained Vertical 3D-IC Technology for Future ICs
IF 2.2 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2023-08-31 DOI: 10.1145/3617501
Sachin Bhat, Mingyu Li, S. Kulkarni, C. A. Moritz
Gate-all-around FETs are set to replace FinFETs to enable continued miniaturization of ICs in the deep-nanometer regime. The IMEC and IRDS roadmaps project that 3D integration of gate-all-around FETs is a key path for the IC industry beyond 2024. In this paper, we present SkyBridge 2.0, an IC technology featuring high-density, fine-grained 3D integration of vertical gate-all-around nanowire FETs, contacts, and interconnect, while also addressing 3D routability. We utilize industry-standard EDA tools to develop a customized design and technology co-optimization (DTCO) flow to design and evaluate SkyBridge 2.0. This DTCO flow covers process emulation of standard cells and SRAM to enable a scalable manufacturing pathway, TCAD characterization of vertical nanowire FETs to obtain I-V and C-V characteristics, compact modeling that accurately captures device behavior, RC parasitic extraction of 3D interconnects, and performance, power, and area assessment using ring oscillators. The technology assessment using ring oscillators shows that SkyBridge 2.0 at the chosen design point, using 10nm nanowires, achieves ∼18% performance and 31% energy-efficiency benefits compared to 7nm FinFET technology. Area analysis of logic cells shows up to 6x density benefits versus aggressively scaled 2D-CMOS cells. In addition to logic, we architect a 3D SRAM to support low-power memory designs. SkyBridge 2.0 SRAM shows a ∼20% improvement in read and write static noise margin, up to 3x lower leakage current, and up to 4x density benefits compared to 7nm FinFET technology.
{"title":"SkyBridge 2.0: A Fine-grained Vertical 3D-IC Technology for Future ICs","authors":"Sachin Bhat, Mingyu Li, S. Kulkarni, C. A. Moritz","doi":"10.1145/3617501","DOIUrl":"https://doi.org/10.1145/3617501","abstract":"Gate-all-around FETs are set to replace FinFETs to enable continued miniaturization of ICs in the deep nanometer regime. IMEC and IRDS roadmaps project that 3D integration of gate-all-around FETs is a key path for the IC industry beyond 2024. In this paper, we present SkyBridge 2.0, an IC technology featuring high density fine-grained 3D integration of vertical gate-all-around nanowire FETs, contacts, and interconnect while also solving 3D routability. We utilize industry-standard EDA tools to develop a customized design and technology co-optimization (DTCO) flow to design and evaluate SkyBridge 2.0. This DTCO flow covers process emulation of standard cells and SRAM to enable scalable manufacturing pathway, TCAD characterization of vertical nanowire FETs to obtain IV and CV characteristics, compact modeling accurately the device behavior, RC parasitic extraction of 3D interconnects and performance, power and area assessment using ring oscillators. The technology assessment using ring oscillators shows that SkyBridge 2.0 at the chosen design point, using 10nm nanowires, achieves ∼ 18% performance and 31% energy efficiency benefits compared to 7nm FinFET technology. Area analysis of logic cells shows up to 6x density benefits versus aggressively scaled 2D-CMOS cells. In addition to logic, we architect 3D SRAM to support low-power memory designs. SkyBridge 2.0 SRAM shows ∼ 20% improvement in read and write static noise margin, up to 3x lower leakage current and up to 4x density benefits compared to 7nm FinFET technology.","PeriodicalId":50924,"journal":{"name":"ACM Journal on Emerging Technologies in Computing Systems","volume":" ","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46656792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
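As a rough illustration of how the ring-oscillator measurements above turn into performance and power figures of merit, the first-order relations f = 1/(2·N·t_stage) and E_cycle = P/f can be computed directly. The stage delay and power values below are placeholders for illustration only, not numbers from the paper.

```python
# First-order ring-oscillator figures of merit.

def ro_frequency_hz(n_stages: int, stage_delay_s: float) -> float:
    # A ring oscillator with N inverting stages oscillates at
    # f = 1 / (2 * N * t_stage): a transition must traverse the
    # ring twice per full output period.
    return 1.0 / (2.0 * n_stages * stage_delay_s)

def energy_per_cycle_j(power_w: float, freq_hz: float) -> float:
    # Average energy consumed per oscillation period.
    return power_w / freq_hz

# Assumed values: 11 stages, 5 ps per stage, 20 uW average power.
f = ro_frequency_hz(n_stages=11, stage_delay_s=5e-12)
e = energy_per_cycle_j(power_w=20e-6, freq_hz=f)
print(f"{f / 1e9:.2f} GHz, {e * 1e15:.2f} fJ/cycle")  # → 9.09 GHz, 2.20 fJ/cycle
```

Comparing two technologies at the same design point then reduces to comparing f (performance) and E_cycle (energy efficiency) of identical oscillators, which is how the ∼18% / 31% figures in the abstract are framed.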
Repercussions of Using DNN Compilers on Edge GPUs for Real Time and Safety Critical Systems: A Quantitative Audit
IF 2.2 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2023-08-03 DOI: 10.1145/3611016
Omais Shafi, Mohammad Khalid Pandit, Amarjeet Saini, Gayathri Ananthanarayanan, Rijurekha Sen
Rapid advancements in edge devices have led to large-scale deployment of deep neural network (DNN) based workloads. To utilize the resources at the edge effectively, many DNN compilers have been proposed that efficiently map high-level DNN models developed in frameworks such as PyTorch, TensorFlow, and Caffe into minimal, deployable, lightweight execution engines. For real-time applications like ADAS, these compiler-optimized engines should give precise, reproducible, and predictable inferences, in terms of both runtime and output consistency. This paper is the first effort to empirically audit state-of-the-art DNN compilers, namely TensorRT, AutoTVM, and AutoScheduler. We characterize the NN compilers based on their performance predictability with respect to inference latency, output reproducibility, hardware utilization, etc., and based on that provide various recommendations. Our methodology and findings can potentially help application developers make an informed decision about the choice of DNN compiler in a real-time, safety-critical setting.
{"title":"Repercussions of Using DNN Compilers on Edge GPUs for Real Time and Safety Critical Systems: A Quantitative Audit","authors":"Omais Shafi, Mohammad Khalid Pandit, Amarjeet Saini, Gayathri Ananthanarayanan, Rijurekha Sen","doi":"10.1145/3611016","DOIUrl":"https://doi.org/10.1145/3611016","url":null,"abstract":"Rapid advancements in edge devices has led to large deployment of deep neural network (DNN) based workloads. To utilize the resources at the edge effectively, many DNN compilers are proposed that efficiently map the high level DNN models developed in frameworks like PyTorch, Tensorflow, Caffe etc into minimum deployable lightweight execution engines. For real time applications like ADAS, these compiler optimized engines should give precise, reproducible and predictable inferences, both in-terms of runtime and output consistency. This paper is the first effort in empirically auditing state of the art DNN compilers viz TensorRT, AutoTVM and AutoScheduler. We characterize the NN compilers based on their performance predictability w.r.t inference latency, output reproducibility, hardware utilization. etc and based on that provide various recommendations. Our methodology and findings can potentially help the application developers, in making informed decision about the choice of DNN compiler, in a real time safety critical setting.","PeriodicalId":50924,"journal":{"name":"ACM Journal on Emerging Technologies in Computing Systems","volume":" ","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49040941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
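The latency-predictability measurement this audit describes can be sketched with a plain timing loop. `run_inference` below is a placeholder for a compiled engine's execute call (TensorRT and TVM runtimes each expose their own APIs); the reported statistics — mean, standard deviation, and tail latency — are what the abstract refers to as runtime predictability.

```python
import statistics
import time

def run_inference() -> None:
    # Placeholder workload standing in for a compiled engine's
    # inference call; a real audit would invoke the engine here.
    time.sleep(0.001)

def audit_latency(n_runs: int = 50) -> dict:
    # Time repeated inferences and summarize the distribution:
    # a real-time system cares about the spread and tail, not just the mean.
    samples = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - t0) * 1e3)  # milliseconds
    samples.sort()
    return {
        "mean_ms": statistics.fmean(samples),
        "stdev_ms": statistics.stdev(samples),
        "p99_ms": samples[int(0.99 * (len(samples) - 1))],
    }

stats = audit_latency()
print(stats)
```

Output reproducibility can be audited the same way: run the engine repeatedly on a fixed input and record the maximum elementwise deviation between output tensors across runs and across compiler backends.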