
Latest Publications in IEEE Journal on Exploratory Solid-State Computational Devices and Circuits

INFORMATION FOR AUTHORS
IF 2.4 Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2022-12-01 DOI: 10.1109/JXCDC.2023.3263712
Vol. 8, no. 2, pp. C3–C3.
Citations: 0
Binarized Neural Network Accelerator Macro Using Ultralow-Voltage Retention SRAM for Energy Minimum-Point Operation
IF 2.4 Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2022-11-30 DOI: 10.1109/JXCDC.2022.3225744
Yusaku Shiotsu;Satoshi Sugahara
A binarized neural network (BNN) accelerator based on a processing-in-memory (PIM)/computing-in-memory (CIM) architecture using ultralow-voltage retention static random access memory (ULVR-SRAM) is proposed for energy minimum-point (EMP) operation. The BNN accelerator (BNA) macro is designed to perform stable inference operations at the EMP and substantive power gating (PG) using ULVR at an ultralow voltage (below the EMP), and it can be applied to fully connected layers (FCLs) of arbitrary shape and size. EMP operation of the BNA macro, enabled by applying the ULVR-SRAM to the macro, can dramatically improve the energy efficiency (TOPS/W) and significantly enlarge the number of parallelized multiply–accumulate (MAC) operations. In addition, the ULVR mode of the BNA macro, which also benefits from the ULVR-SRAM, is effective at reducing standby power. The proposed BNA macro achieves a high energy efficiency of 65 TOPS/W for FCLs. This BNA macro concept using the ULVR-SRAM can be extended to convolution layers, where EMP operation is also expected to enhance energy efficiency.
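The binary MAC that such a macro parallelizes reduces to an XNOR-popcount. A minimal software sketch of that arithmetic (the 0/1 encoding of ±1 values below is an assumption for illustration, not the paper's circuit):

```python
def bnn_fc(x_bits, w_cols):
    """Binarized fully connected layer via XNOR-popcount.

    x_bits: list of 0/1 input activations encoding {-1, +1} (1 -> +1).
    w_cols: one 0/1 weight column per output neuron, same encoding.
    Returns the signed MAC result per output, each in [-n, n].
    """
    n = len(x_bits)
    outs = []
    for col in w_cols:
        # XNOR is 1 exactly when the two signed factors agree.
        popcount = sum(1 for x, w in zip(x_bits, col) if x == w)
        outs.append(2 * popcount - n)  # recover the signed dot product
    return outs
```

Each output stays in [-n, n], which is the range the macro's accumulation hardware must resolve.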
Vol. 8, no. 2, pp. 134–144.
Citations: 1
Valley-Spin Hall Effect-Based Nonvolatile Memory With Exchange-Coupling-Enabled Electrical Isolation of Read and Write Paths
IF 2.4 Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2022-11-29 DOI: 10.1109/JXCDC.2022.3224832
Karam Cho;Sumeet Kumar Gupta
The valley-spin Hall (VSH) effect in monolayer WSe2 has been shown to exhibit highly beneficial features for nonvolatile memory (NVM) design. Key advantages of VSH-based magnetic random access memory (VSH-MRAM) over spin-orbit torque (SOT)-MRAM include a compact, access-transistor-less bit-cell and low-power switching of perpendicular magnetic anisotropy (PMA) magnets. Nevertheless, a large device resistance in the read path ($R_{S}$), due to the low mobility of WSe2 and its Schottky contacts, deteriorates the sense margin (SM), offsetting the benefits of VSH-MRAM. To address this limitation, we propose another flavor of VSH-MRAM that (while inheriting most of the benefits of VSH-MRAM) achieves a lower $R_{S}$ in the read path by electrically isolating the read and write terminals. This is enabled by coupling the VSH device with electrically isolated but magnetically coupled PMA magnets via interlayer exchange coupling. Designing the proposed devices using object-oriented micromagnetic framework (OOMMF) simulations, we ensure the robustness of the exchange-coupled PMA system under process variations. To maintain a compact memory footprint, we share the read access transistor across multiple bit-cells. Compared with the existing VSH-MRAMs, our design achieves 39%–42% and 36%–46% reductions in read time and energy, respectively, along with a $1.1\times$–$1.3\times$ larger SM at a comparable area. This comes at the cost of $1.7\times$ and $2.0\times$ increases in write time and energy, respectively. Thus, the proposed design is suitable for applications in which reads are more dominant than writes.
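The sensitivity of the sense margin to the read-path resistance $R_{S}$ can be illustrated with a simple voltage-divider read model (the two-state storage element and all resistance values below are illustrative assumptions, not the paper's device parameters):

```python
def sense_margin(r_hi, r_lo, r_s, v_read=1.0):
    """Voltage-divider read model: the storage element (resistance r_hi or
    r_lo for its two states) in series with the parasitic read-path
    resistance r_s. The sense margin is the difference between the two
    read voltages dropped across the storage element."""
    v_hi = v_read * r_hi / (r_hi + r_s)
    v_lo = v_read * r_lo / (r_lo + r_s)
    return v_hi - v_lo
```

In this model the margin peaks when $R_{S}$ is near the geometric mean of the two state resistances and collapses as $R_{S}$ dominates, which is why lowering the WSe2/Schottky-limited read resistance recovers SM.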
Vol. 8, no. 2, pp. 157–165.
Citations: 0
Time-Based Compute-in-Memory for Cryogenic Neural Network With Successive Approximation Register Time-to-Digital Converter
IF 2.4 Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2022-11-29 DOI: 10.1109/JXCDC.2022.3225243
Dong Suk Kang;Shimeng Yu
This article explores a new application of the compute-in-memory (CIM) paradigm: cryogenic neural networks. Using a 28-nm cryogenic transistor model calibrated at 4 K, a time-based CIM macro is proposed, comprising: 1) an area-efficient unit delay cell designed for cryogenic operation and 2) an area- and power-efficient successive approximation register (SAR) time-to-digital converter (TDC) that achieves high resolution. Benchmark simulations first show that the proposed macro has better latency than its current-based CIM counterpart. The simulations further show that it scales better for larger decoder designs and process technology optimization.
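A SAR TDC resolves a time interval the way a SAR ADC resolves a voltage: by binary search against a reference that halves each cycle. A behavioral sketch (the normalized ranges and names are assumptions, not the paper's circuit):

```python
def sar_tdc(t_in, t_ref, n_bits):
    """Successive-approximation TDC: resolve a time interval t_in
    (0 <= t_in < t_ref) into an n_bits code by comparing the residual
    interval against a trial delay that halves every cycle."""
    code, residue, delay = 0, t_in, t_ref / 2.0
    for bit in range(n_bits - 1, -1, -1):
        if residue >= delay:   # interval longer than the trial delay:
            code |= 1 << bit   # keep the bit and subtract the delay
            residue -= delay
        delay /= 2.0
    return code
```

Each extra bit costs one more compare-and-subtract cycle rather than a doubling of delay cells, which is what makes the SAR approach area-efficient at high resolution.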
Vol. 8, no. 2, pp. 128–133.
Citations: 1
IMAGIN: Library of IMPLY and MAGIC NOR-Based Approximate Adders for In-Memory Computing
IF 2.4 Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2022-11-14 DOI: 10.1109/JXCDC.2022.3222015
Chandan Kumar Jha;Phrangboklang Lyngton Thangkhiew;Kamalika Datta;Rolf Drechsler
In-memory computing (IMC) has attracted significant interest in recent years as it aims to bridge the memory bottleneck of von Neumann architectures. IMC also improves the energy efficiency of these architectures. Another technique that has been explored to reduce energy consumption is the use of approximate circuits, targeted toward error-resilient applications. These applications have addition as one of their most frequently used operations. In the literature, CMOS-based approximate adder libraries have been implemented to help designers choose from a variety of designs depending on the output quality requirements. However, the same is not true for memristor-based approximate adders targeted at IMC architectures. Hence, in this work, we developed a framework to generate approximate adder designs with varying output errors for 8-, 12-, and 16-bit adders. We implemented a state-of-the-art scheduling algorithm to obtain the best mapping of these approximate adder designs for IMC. We performed an exhaustive design space exploration to obtain Pareto-optimal approximate adder designs for various design and error metrics. We then propose IMAGIN, a library of approximate adders compatible with memristor-based IMC architectures, based on the IMPLY and MAGIC design styles. We also performed mean filtering on the Kodak image dataset using the approximate adders from the IMAGIN library. IMAGIN can help designers select from a wide variety of approximate adders depending on the output quality requirements and serves as a benchmark for future research in this direction. All Pareto-optimal designs will be made available at https://github.com/agra-uni-bremen/JxCDC2022-imagin-add.
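To make the accuracy/cost tradeoff concrete, here is a generic software model of one well-known approximate adder, the lower-part OR adder, together with an exhaustive mean-error-distance metric. This illustrates the kind of design such a library catalogs; it is not the paper's IMPLY/MAGIC mappings:

```python
def loa_add(a, b, k):
    """Lower-part OR adder (LOA): approximate the k low bits with a
    carry-free bitwise OR and add the remaining high bits exactly.
    Any carry out of the low part is dropped."""
    mask = (1 << k) - 1
    low = (a | b) & mask
    high = ((a >> k) + (b >> k)) << k
    return high | low

def mean_error_distance(k, width=6):
    """Exhaustive mean |approx - exact| over all width-bit operand pairs,
    one of the standard error metrics for approximate adders."""
    n = 1 << width
    total = sum(abs(loa_add(a, b, k) - (a + b))
                for a in range(n) for b in range(n))
    return total / (n * n)
```

Sweeping k trades error against the length of the exact carry chain, which in an IMC mapping translates directly into fewer memristive logic cycles.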
Vol. 8, no. 2, pp. 68–76.
Citations: 7
Stateful Logic Using Phase Change Memory
IF 2.4 Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2022-11-04 DOI: 10.1109/JXCDC.2022.3219731
Barak Hoffer;Nicolás Wainstein;Christopher M. Neumann;Eric Pop;Eilam Yalon;Shahar Kvatinsky
Stateful logic is a digital processing-in-memory (PIM) technique that could address the von Neumann memory bottleneck while maintaining backward compatibility with standard von Neumann architectures. In stateful logic, memory cells are used to perform the logic operations without reading or moving any data outside the memory array. Stateful logic has previously been demonstrated using several resistive memory types, mostly resistive RAM (RRAM). Here, we present a new method to design stateful logic using a different resistive memory: phase-change memory (PCM). We propose and experimentally demonstrate four logic gate types (NOR, IMPLY, OR, NIMP) using commonly used PCM materials. Our stateful logic circuits differ from previously proposed circuits because the switching mechanisms and functionality of PCM differ from those of RRAM. Since the proposed stateful logic forms a functionally complete set, these gates enable the sequential execution of any logic function within the memory, paving the way to PCM-based digital PIM systems.
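At the behavioral level, stateful gates compute directly on the logic states of the cells. A truth-table-level sketch of NOR and IMPLY, plus a NOR-composed AND to show what functional completeness buys (device-level set/reset conditions are abstracted away; this is not a PCM circuit model):

```python
def stateful_nor(p, q):
    """Stateful NOR: the output cell is preset to logic '1' and is
    switched to '0' whenever either input cell conducts."""
    out = 1
    if p or q:
        out = 0
    return out

def stateful_imply(p, q):
    """IMPLY computes q' = (NOT p) OR q in place on the q cell."""
    return int((not p) or q)

def and_from_nor(p, q):
    """Functional completeness in action: AND from three NOR steps."""
    return stateful_nor(stateful_nor(p, p), stateful_nor(q, q))
```

Because NOR alone is functionally complete, any Boolean function can be scheduled as a sequence of such in-array steps, at the cost of intermediate cells and cycles.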
Vol. 8, no. 2, pp. 77–83.
Citations: 4
CRUS: A Hardware-Efficient Algorithm Mitigating Highly Nonlinear Weight Update in CIM Crossbar Arrays for Artificial Neural Networks
IF 2.4 Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2022-11-04 DOI: 10.1109/JXCDC.2022.3220032
Junmo Lee;Joon Hwang;Youngwoon Cho;Min-Kyu Park;Woo Young Choi;Sangbum Kim;Jong-Ho Lee
Mitigating the nonlinear weight update of synaptic devices is one of the main challenges in designing compute-in-memory (CIM) crossbar arrays for artificial neural networks (ANNs). While various nonlinearity mitigation schemes have been proposed so far, only a few of them handle highly nonlinear weight updates. This article presents a hardware-efficient on-chip weight update scheme named the conditional reverse update scheme (CRUS), which algorithmically mitigates highly nonlinear weight changes in synaptic devices. For hardware efficiency, CRUS is implemented on-chip using low-precision (1-bit) and infrequent circuit operations. To exploit algorithmic insights, the impact of the nonlinear weight update on training is investigated. We first introduce a metric called update noise (UN), which quantifies the deviation of the actual weight update in synaptic devices from the expected weight update calculated by the stochastic gradient descent (SGD) algorithm. Based on the UN analysis, we aim to reduce the average UN (AUN) over the entire training process. The key principle for reducing AUN is to conditionally skip long-term depression (LTD) pulses during training. The trends of AUN and accuracy under various LTD skip conditions are investigated to find the maximum-accuracy conditions. By properly tuning the LTD skip conditions, CRUS achieves >90% accuracy on the Modified National Institute of Standards and Technology (MNIST) dataset even under high weight-update nonlinearity. Furthermore, it shows better accuracy than previous nonlinearity mitigation techniques under similar hardware conditions. It also exhibits robustness to cycle-to-cycle variations (CCVs) in conductance updates. The results suggest that CRUS can be an effective solution for relieving the algorithm-hardware tradeoff in CIM crossbar array design.
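A small simulation conveys the core mechanism: under a nonlinear pulse-update model (the exponential device model below is a common behavioral assumption, not the paper's measured devices), CRUS potentiates normally but conditionally skips the depression pulse:

```python
import math

def pulse(g, direction, nl=4.0, n_steps=64):
    """One potentiation (+1) or depression (-1) pulse on a normalized
    conductance g in [0, 1] with exponential update nonlinearity nl:
    LTP steps shrink as g approaches 1, LTD steps as g approaches 0."""
    scale = (1 - math.exp(-nl / n_steps)) / (1 - math.exp(-nl))
    if direction > 0:
        step = scale * math.exp(-nl * g)
    else:
        step = -scale * math.exp(-nl * (1 - g))
    return min(1.0, max(0.0, g + step))

def crus_update(g, grad_sign, skip_ltd):
    """Conditional reverse update: potentiate on a negative gradient,
    but skip the LTD pulse whenever the skip condition holds."""
    if grad_sign < 0:
        return pulse(g, +1)
    return g if skip_ltd else pulse(g, -1)
```

Skipping LTD pulses avoids the large, poorly controlled depression steps that dominate the deviation from the ideal SGD update, at the cost of occasionally leaving a weight unchanged.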
Vol. 8, no. 2, pp. 145–154.
Citations: 0
Memristive Devices for Time Domain Compute-in-Memory
IF 2.4 Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2022-10-25 DOI: 10.1109/JXCDC.2022.3217098
Florian Freye;Jie Lou;Christopher Bengel;Stephan Menzel;Stefan Wiefels;Tobias Gemmeke
Analog compute schemes and compute-in-memory (CIM) have emerged in an effort to curb the growing power demand of convolutional neural networks (CNNs), which exceeds the constraints of edge devices. Memristive devices are a relatively new offering with interesting opportunities for unexplored circuit concepts. In this work, the use of memristive devices in cascaded time-domain CIM (TDCIM) is introduced, with the primary goal of reducing the size of fully unrolled architectures. The different effects influencing determinism in memristive devices are outlined together with reliability concerns. Architectures for binary as well as multibit multiply-and-accumulate (MAC) cells are presented and evaluated. As more involved circuits offer more accurate compute results, a tradeoff between design effort and accuracy comes into the picture. To further evaluate this tradeoff, the impact of variations on overall compute accuracy is discussed. The presented cells reach an energy per operation of 0.23 fJ at a size of $1.2~\mu\text{m}^{2}$ for binary operations and 6.04 fJ at $3.2~\mu\text{m}^{2}$ for $4\times 4$ bit MAC operations.
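The core of time-domain CIM is encoding a MAC result as a delay accumulated along a cascade of cells. A behavioral sketch (the unit delay and the gating scheme are assumptions, not a circuit model):

```python
def time_domain_mac(inputs, weights, t_unit=1e-9):
    """Time-domain MAC: a pulse edge propagates through cascaded delay
    cells; cell i adds weights[i] * t_unit of delay only when inputs[i]
    is asserted, so the edge's arrival time encodes sum(x_i * w_i)."""
    t = 0.0
    for x, w in zip(inputs, weights):
        if x:
            t += w * t_unit
    return t
```

A time-to-digital converter at the end of the chain then quantizes the arrival time back into a digital MAC result; the device variations discussed above show up as jitter on each cell's contribution.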
Vol. 8, no. 2, pp. 119–127.
Citations: 2
Leveraging Ferroelectric Stochasticity and In-Memory Computing for DNN IP Obfuscation
IF 2.4 Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2022-10-25 DOI: 10.1109/JXCDC.2022.3217043
Likhitha Mankali;Nikhil Rangarajan;Swetaki Chatterjee;Shubham Kumar;Yogesh Singh Chauhan;Ozgur Sinanoglu;Hussam Amrouch
With the emergence of the Internet of Things (IoT), deep neural networks (DNNs) are widely used in different domains, such as computer vision, healthcare, social media, and defense. The hardware-level architecture of a DNN can be built using an in-memory computing-based design, which is loaded with the weights of a well-trained DNN model. However, such hardware-based DNN systems are vulnerable to model stealing attacks where an attacker reverse-engineers (REs) and extracts the weights of the DNN model. In this work, we propose an energy-efficient defense technique that combines a ferroelectric field effect transistor (FeFET)-based reconfigurable physically unclonable function (PUF) with an in-memory FeFET XNOR to thwart model stealing attacks. We leverage the inherent stochasticity in the FE domains to build a PUF that helps to corrupt the neural network’s (NN) weights when an adversarial attack is detected. We showcase the efficacy of the proposed defense scheme by performing experiments on graph-NNs (GNNs), a particular type of DNN. The proposed defense scheme is a first of its kind that evaluates the security of GNNs. We investigate the effect of corrupting the weights on different layers of the GNN on the accuracy degradation of the graph classification application for two specific error models of corrupting the FeFET-based PUFs and five different bioinformatics datasets. We demonstrate that our approach successfully degrades the inference accuracy of the graph classification by corrupting any layer of the GNN after a small rewrite pulse.
{"title":"Leveraging Ferroelectric Stochasticity and In-Memory Computing for DNN IP Obfuscation","authors":"Likhitha Mankali;Nikhil Rangarajan;Swetaki Chatterjee;Shubham Kumar;Yogesh Singh Chauhan;Ozgur Sinanoglu;Hussam Amrouch","doi":"10.1109/JXCDC.2022.3217043","DOIUrl":"10.1109/JXCDC.2022.3217043","url":null,"abstract":"With the emergence of the Internet of Things (IoT), deep neural networks (DNNs) are widely used in different domains, such as computer vision, healthcare, social media, and defense. The hardware-level architecture of a DNN can be built using an in-memory computing-based design, which is loaded with the weights of a well-trained DNN model. However, such hardware-based DNN systems are vulnerable to model stealing attacks where an attacker reverse-engineers (REs) and extracts the weights of the DNN model. In this work, we propose an energy-efficient defense technique that combines a ferroelectric field effect transistor (FeFET)-based reconfigurable physically unclonable function (PUF) with an in-memory FeFET XNOR to thwart model stealing attacks. We leverage the inherent stochasticity in the FE domains to build a PUF that helps to corrupt the neural network’s (NN) weights when an adversarial attack is detected. We showcase the efficacy of the proposed defense scheme by performing experiments on graph-NNs (GNNs), a particular type of DNN. The proposed defense scheme is a first of its kind that evaluates the security of GNNs. We investigate the effect of corrupting the weights on different layers of the GNN on the accuracy degradation of the graph classification application for two specific error models of corrupting the FeFET-based PUFs and five different bioinformatics datasets. 
We demonstrate that our approach successfully degrades the inference accuracy of the graph classification by corrupting any layer of the GNN after a small rewrite pulse.","PeriodicalId":54149,"journal":{"name":"IEEE Journal on Exploratory Solid-State Computational Devices and Circuits","volume":"8 2","pages":"102-110"},"PeriodicalIF":2.4,"publicationDate":"2022-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/6570653/9969523/09930133.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43155261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
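The defense described above — corrupting stored weights with a device-specific PUF response once an attack is flagged — can be sketched at the behavioral level. The key generation, XOR-based corruption, and attack flag below are illustrative stand-ins for the FeFET PUF and in-memory XNOR hardware; none of the names or the seeding scheme come from the paper.

```python
# Toy sketch of PUF-based weight obfuscation for a binarized layer:
# a per-chip random key (standing in for FeFET domain stochasticity)
# is mixed into the stored weights when a model-stealing attempt is
# detected, so extracted weights are corrupted while normal reads
# return them intact.
import random

def make_puf_response(n_bits, seed):
    """Generate a device-specific key; `seed` stands in for per-device randomness."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n_bits)]

def read_weights(weights, puf, attack_detected):
    """Return stored binary weights, corrupted by the PUF bits under attack."""
    if not attack_detected:
        return list(weights)
    return [w ^ p for w, p in zip(weights, puf)]  # flip where PUF bit is 1

weights = [1, 0, 0, 1, 1, 0]
puf = make_puf_response(len(weights), seed=42)
print(read_weights(weights, puf, attack_detected=False))  # intact weights
print(read_weights(weights, puf, attack_detected=True))   # corrupted weights
```

In the paper the corruption is physical (a small rewrite pulse on the FeFET cells) rather than a logic-level XOR, but the effect on the extracted model — accuracy degradation from flipped weights in any layer — is the same in spirit.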
MR-PIPA: An Integrated Multilevel RRAM (HfOx)-Based Processing-In-Pixel Accelerator
IF 2.4 Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2022-09-28 DOI: 10.1109/JXCDC.2022.3210509
Minhaz Abedin;Arman Roohi;Maximilian Liehr;Nathaniel Cady;Shaahin Angizi
This work paves the way to realize a processing-in-pixel (PIP) accelerator based on a multilevel HfOx resistive random access memory (RRAM) as a flexible, energy-efficient, and high-performance solution for real-time and smart image processing at edge devices. The proposed design intrinsically implements and supports a coarse-grained convolution operation in low-bit-width neural networks (NNs) leveraging a novel compute-pixel with nonvolatile weight storage at the sensor side. Our evaluations show that such a design can remarkably reduce the power consumption of data conversion and transmission to an off-chip processor maintaining accuracy compared with the recent in-sensor computing designs. Our proposed design, namely an integrated multilevel RRAM (HfOx)-based processing-in-pixel accelerator (MR-PIPA), achieves a frame rate of 1000 and efficiency of ~1.89 TOp/s/W, while it substantially reduces data conversion and transmission energy by ~84% compared to a baseline at the cost of minor accuracy degradation.
{"title":"MR-PIPA: An Integrated Multilevel RRAM (HfOx)-Based Processing-In-Pixel Accelerator","authors":"Minhaz Abedin;Arman Roohi;Maximilian Liehr;Nathaniel Cady;Shaahin Angizi","doi":"10.1109/JXCDC.2022.3210509","DOIUrl":"10.1109/JXCDC.2022.3210509","url":null,"abstract":"This work paves the way to realize a processing-in-pixel (PIP) accelerator based on a multilevel HfOx resistive random access memory (RRAM) as a flexible, energy-efficient, and high-performance solution for real-time and smart image processing at edge devices. The proposed design intrinsically implements and supports a coarse-grained convolution operation in low-bit-width neural networks (NNs) leveraging a novel compute-pixel with nonvolatile weight storage at the sensor side. Our evaluations show that such a design can remarkably reduce the power consumption of data conversion and transmission to an off-chip processor maintaining accuracy compared with the recent in-sensor computing designs. Our proposed design, namely an integrated multilevel RRAM (HfOx)-based processing-in-pixel accelerator (MR-PIPA), achieves a frame rate of 1000 and efficiency of ~1.89 TOp/s/W, while it substantially reduces data conversion and transmission energy by ~84% compared to a baseline at the cost of minor accuracy degradation.","PeriodicalId":54149,"journal":{"name":"IEEE Journal on Exploratory Solid-State Computational Devices and Circuits","volume":"8 2","pages":"59-67"},"PeriodicalIF":2.4,"publicationDate":"2022-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/6570653/9969523/09905572.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47970835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
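The in-pixel MAC idea above — multiplying photodiode outputs by multilevel conductance states and summing the resulting currents on a shared line before any ADC readout — can be modeled in a few lines. The four conductance levels and normalized units below are assumptions for illustration, not MR-PIPA's actual HfOx device parameters.

```python
# Toy model of a processing-in-pixel column MAC with multilevel weights:
# each pixel stores a 2-bit conductance state (standing in for a multilevel
# HfOx RRAM cell), and the column current is the Kirchhoff sum of
# pixel_intensity * conductance -- a coarse-grained convolution term
# computed at the sensor before data conversion and transmission.

G_LEVELS = [0.0, 1.0, 2.0, 3.0]   # normalized conductance per 2-bit state (assumed)

def pixel_column_mac(pixels, weight_states):
    """Sum of pixel intensity times the stored conductance level."""
    assert len(pixels) == len(weight_states)
    return sum(p * G_LEVELS[s] for p, s in zip(pixels, weight_states))

pixels = [0.2, 0.8, 0.5]          # normalized photodiode outputs
states = [3, 1, 0]                # programmed RRAM levels per pixel
print(round(pixel_column_mac(pixels, states), 6))  # 1.4 = 0.2*3 + 0.8*1 + 0.5*0
```

Because only this accumulated value (not every raw pixel) needs conversion, the design cuts the data-conversion and transmission energy that the abstract reports as the dominant saving.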