Pub Date: 2024-09-16 | DOI: 10.1109/JETCAS.2024.3450049
Title: IEEE Circuits and Systems Society Information
IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 14, no. 3, pp. C3–C3.
Pub Date: 2024-09-16 | DOI: 10.1109/JETCAS.2024.3450055
Title: IEEE Journal on Emerging and Selected Topics in Circuits and Systems Publication Information
IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 14, no. 3, pp. C2–C2.
This Special Issue of IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS) is devoted to advancing the field of chip and package-scale communications across diverse computing domains, bridging academic research and industrial innovation. As we enter a new golden age of computer architecture, marked by both challenges and opportunities, the anticipated end of Moore’s law necessitates reimagining the future of computing systems as we approach the physical limits of transistors. Three leading approaches to address these challenges include the chiplet paradigm, domain-specific customization, and quantum computing. However, these architectural and technological innovations have shifted the primary bottleneck from computation to communication. Consequently, on-chip and on-package communication now play a critical role in determining the performance, efficiency, and scalability of general-purpose, domain-specific, and quantum computing systems. Their ever-growing importance has garnered significant attention from both academia and industry.
Title: Guest Editorial Chip and Package-Scale Communication-Aware Architectures for General-Purpose, Domain-Specific, and Quantum Computing Systems
Authors: Abhijit Das; Maurizio Palesi; John Kim; Partha Pratim Pande
IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 14, no. 3, pp. 349–353. DOI: 10.1109/JETCAS.2024.3445208
Pub Date: 2024-09-16 | DOI: 10.1109/JETCAS.2024.3450053
Title: IEEE Journal on Emerging and Selected Topics in Circuits and Systems Information for Authors
IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 14, no. 3, pp. 575–575.
Pub Date: 2024-08-27 | DOI: 10.1109/jetcas.2024.3450527
Yanqi Qiao, Dazhuang Liu, Rui Wang, Kaitai Liang
Title: Stealthy Backdoor Attack against Federated Learning through Frequency Domain by Backdoor Neuron Constraint and Model Camouflage
IEEE Journal on Emerging and Selected Topics in Circuits and Systems. DOI: 10.1109/jetcas.2024.3450527
The anticipated end of Moore’s law, coupled with the breakdown of Dennard scaling, has compelled the community to rethink forthcoming computing systems as transistors reach their limits. Three leading approaches to circumvent this situation are the chiplet paradigm, domain customisation, and quantum computing. However, architectural and technological innovations have shifted the fundamental bottleneck from computation to communication. Hence, on-chip and on-package communication plays a pivotal role in determining the performance, energy efficiency, and scalability of general-purpose, domain-specific, and quantum computing systems. This article reviews recent advances in chip and package-scale interconnects driven by changes in architecture, application, and technology. Its primary objective is to present the current status, key challenges, and impact-worthy opportunities in this research area from the perspective of hardware architectures. Its secondary objective is to serve as a tutorial providing an overview of academic and industrial explorations in chip and package-scale communication infrastructure design for general-purpose, domain-specific, and quantum computing systems.
Title: Chip and Package-Scale Interconnects for General-Purpose, Domain-Specific, and Quantum Computing Systems—Overview, Challenges, and Opportunities
Authors: Abhijit Das; Maurizio Palesi; John Kim; Partha Pratim Pande
Pub Date: 2024-08-19 | DOI: 10.1109/JETCAS.2024.3445829
IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 14, no. 3, pp. 354–370.
Pub Date: 2024-08-05 | DOI: 10.1109/JETCAS.2024.3438435
Haoyu Wang;Jianjie Ren;Basel Halak;Ahmad Atamli
In the rapidly evolving landscape of system design, Multi-Processor Systems-on-Chip (MPSoCs) have grown significantly in both scale and complexity, integrating an array of Intellectual Properties (IPs) through a Network-on-Chip (NoC) to execute complex parallel applications. However, this advancement has led to security attacks mounted by Malicious Third-Party IPs (M3PIPs), such as Denial-of-Service (DoS). Many current methods for detecting DoS attacks incur significant hardware overhead and are often inefficient at identifying anomalies early. Addressing this gap, we propose the Graph-based NoC Shield (GNS), a robust strategy crafted to detect, localize, and isolate malicious IPs at the earliest stage of a DoS attack. Central to our approach is a combined Graph Neural Network (GNN) and Long Short-Term Memory (LSTM) detection model, which capitalizes on network traffic data and routing-dependency graphs to efficiently trace the source of network congestion and pinpoint attackers. Our extensive experimental analysis validates the effectiveness of the GNS framework, demonstrating 98% detection accuracy and localization capability with a minimal hardware overhead of 1.8% per router on a 4×4 mesh NoC system. This detection performance exceeds that of other state-of-the-art works and of straightforward single machine-learning inference models in the same context, and the hardware overhead is notably lower than that of other security schemes. Another key feature of our system is a credit-interposing mechanism, specifically designed to isolate M3PIPs engaging in flooding-based DoS and to mitigate the spread of malicious traffic. This approach significantly enhances the security of NoC-based MPSoCs, offering early-stage detection with superior accuracy compared to other models. Crucially, GNS achieves this with up to 75% less hardware overhead than state-of-the-art solutions, striking a balance between efficiency and effectiveness in security implementation.
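The GNN-plus-LSTM pipeline above can be pictured as "aggregate over the routing-dependency graph, then score over time." The following is a minimal pure-Python sketch of that general idea only, not the authors' model: a hand-rolled neighbour aggregation stands in for the GNN layer, and an exponential moving average stands in for the LSTM. The 2×2 mesh, traffic features, and numbers are invented for illustration.

```python
# Illustrative sketch (not the paper's GNS implementation).

def neighbour_aggregate(graph, feats):
    """GCN-style step: blend each router's feature with its neighbours' mean."""
    return {
        node: 0.5 * feats[node] + 0.5 * sum(feats[n] for n in nbrs) / len(nbrs)
        for node, nbrs in graph.items()
    }

def temporal_score(series, alpha=0.5):
    """Exponential moving average over time (crude stand-in for the LSTM)."""
    score = 0.0
    for v in series:
        score = alpha * v + (1 - alpha) * score
    return score

# 2x2 mesh of routers; feature = per-cycle injection rate (assumed values).
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
trace = [
    {0: 0.1, 1: 0.1, 2: 0.90, 3: 0.1},  # router 2 starts flooding
    {0: 0.2, 1: 0.1, 2: 0.95, 3: 0.2},
    {0: 0.2, 1: 0.2, 2: 0.90, 3: 0.2},
]
agg = [neighbour_aggregate(graph, t) for t in trace]
scores = {n: temporal_score([a[n] for a in agg]) for n in graph}
flagged = max(scores, key=scores.get)
print(flagged)  # -> 2: sustained high injection plus neighbour congestion
```

The aggregation step is what lets the detector distinguish the flooding source from its merely congested neighbours, which is the intuition behind using routing-dependency graphs rather than per-router counters alone.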
Title: GNS: Graph-Based Network-on-Chip Shield for Early Defense Against Malicious Nodes in MPSoC
IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 14, no. 3, pp. 483–494. DOI: 10.1109/JETCAS.2024.3438435
Pub Date: 2024-08-05 | DOI: 10.1109/JETCAS.2024.3439193
Vahid Geraeinejad;Qiran Qian;Masoumeh Ebrahimi
GPUs are extensively employed as the primary devices for running a broad spectrum of applications, from general-purpose workloads to Artificial Intelligence (AI) workloads. The register file, as the largest SRAM on the GPU die, accounts for over 20% of total GPU energy consumption. A register cache has been introduced to reduce traffic to the register file and thus decrease total energy consumption when CUDA cores are utilized. However, register-cache utilization has not been thoroughly investigated for Tensor Cores, which are integrated into recent GPU architectures to meet AI workload demands. In this paper, we study the use of a register cache in both CUDA and Tensor Cores and conduct a thorough examination of its pros and cons. We have developed an open-source analytical simulator, called RFC-sim, to model and measure the energy consumption of both the register file and the register cache. Our results show that while the register cache can reduce energy consumption by up to 40% in CUDA cores, it increases energy consumption by up to 23% in Tensor Cores. The main reason lies in the limited capacity of the register cache, which is insufficient for Tensor Cores to capture locality.
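The help-or-hurt behaviour described above falls out of simple energy accounting: a cache hit saves a register-file access, while a miss adds a wasted cache probe on top of it. The sketch below is a back-of-the-envelope model in the spirit of an analytical simulator like RFC-sim, not its actual equations; the per-access energies and hit rates are invented for illustration.

```python
# Hypothetical analytical model of register-file vs. register-cache read energy.

def energy(accesses, hit_rate, e_rf, e_cache):
    """Total read energy with a register cache in front of the register file.

    Hits cost one cache access; misses cost a cache probe plus a
    register-file access (cache-fill cost folded into e_cache here).
    """
    hits = accesses * hit_rate
    misses = accesses - hits
    return hits * e_cache + misses * (e_cache + e_rf)

E_RF, E_RC = 5.0, 1.0          # assumed pJ/access; the RF is the larger SRAM
baseline = 1_000_000 * E_RF    # no register cache: every read hits the RF

# High-locality CUDA-style reuse vs. low-locality Tensor-Core-style reuse.
cuda_like = energy(1_000_000, 0.80, E_RF, E_RC)
tensor_like = energy(1_000_000, 0.10, E_RF, E_RC)

print(cuda_like < baseline)    # True: the cache wins when the hit rate is high
print(tensor_like < baseline)  # False: with poor locality the probes can lose
```

The crossover hit rate in this toy model is e_cache / e_rf; below it, the cache consumes more energy than it saves, matching the qualitative Tensor Core result.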
Title: Investigating Register Cache Behavior: Implications for CUDA and Tensor Core Workloads on GPUs
IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 14, no. 3, pp. 469–482. DOI: 10.1109/JETCAS.2024.3439193
Pub Date: 2024-08-05 | DOI: 10.1109/JETCAS.2024.3438250
Kasem Khalil;Ashok Kumar;Magdy Bayoumi
Network-on-Chip (NoC) architecture provides fast, scalable communication in complex integrated circuits. Attaining fault tolerance in NoC architectures is an ongoing research problem aimed at enhancing reliability and performance by mitigating the impact of router failures and improving overall system robustness. Fault tolerance is achieved by adding hardware, and the research challenge is to attain high reliability, high Mean Time To Failure (MTTF), and low Energy-Delay Product (EDP) while incurring an acceptable area overhead; this is particularly vital for applications with uninterrupted data flow. This paper proposes a fault-tolerance approach for NoC systems, focusing on NoC routers, that yields increased reliability and MTTF with an acceptable area overhead and low EDP. The proposed method introduces a dynamic reconfiguration mechanism using dynamic allocation of virtual channels and a bypass crossbar, ensuring uninterrupted data flow within the NoC. The method is evaluated on different mesh sizes using VHDL on an Altera 10GX FPGA, demonstrating superior reliability, reduced latency, and enhanced throughput. The results show that the proposed method has an acceptable area overhead of 25.3%, and its MTTF is 3.7 to 18 times higher than that of traditional methods across network sizes, showing remarkable robustness against faults. The proposed method attains the best-reported reliability with the least EDP. Additionally, a layout of the circuit is created and studied.
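To see why even limited fault tolerance moves MTTF substantially, consider textbook reliability arithmetic: with independent exponential router failures, an unprotected mesh dies at the first failure, while a design that can bypass one failed router survives until the second. This is a generic illustration, not the paper's reliability model; the mesh size and failure rate are assumptions.

```python
# Illustrative series-system MTTF arithmetic (not the paper's model).

def series_mttf(n_routers, lam):
    """No fault tolerance: any router failure kills the NoC. MTTF = 1/(n*lam)."""
    return 1.0 / (n_routers * lam)

def one_spare_mttf(n_routers, lam):
    """Tolerating one failure (e.g. via a bypass path): add the expected time
    from the first failure to the second among the remaining routers."""
    return 1.0 / (n_routers * lam) + 1.0 / ((n_routers - 1) * lam)

LAM = 1e-6                           # assumed per-router failures per hour
base = series_mttf(16, LAM)          # 4x4 mesh, no fault tolerance
tol = one_spare_mttf(16, LAM)
print(round(tol / base, 2))          # ~2.07x from tolerating a single fault
```

Tolerating further failures adds further 1/((n-k)·lambda) terms, which is why reconfiguration schemes can report multi-fold MTTF gains over unprotected baselines.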
Title: Dynamic Fault Tolerance Approach for Network-on-Chip Architecture
IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 14, no. 3, pp. 384–394. DOI: 10.1109/JETCAS.2024.3438250
Pub Date: 2024-08-02 | DOI: 10.1109/JETCAS.2024.3437408
Huidong Ji;Chen Ding;Boming Huang;Yuxiang Huan;Li-Rong Zheng;Zhuo Zou
The rapid development of convolutional neural networks (CNNs) benefits greatly from hardware-based acceleration to maintain low latency and high resource utilization. To enhance the processing efficiency of CNN algorithms, Field-Programmable Gate Array (FPGA)-based accelerators are designed with increased hardware resources to achieve high parallelism and throughput. However, bottlenecks arise when more processing elements (PEs), organized as PE clusters, are introduced, including 1) under-utilization of the FPGA’s fixed hardware resources, which leads to a mismatch between effective and peak performance; and 2) limited clock frequency caused by sophisticated routing and complex placement. In this paper, a two-level hierarchical Network-on-Chip (NoC)-based CNN accelerator is proposed. At the upper level, a mesh-based NoC interconnects multiple PE clusters. This design not only provides the flexibility to balance different data-communication models for better PE utilization and energy efficiency, but also enables a globally asynchronous, locally synchronous (GALS) architecture for better timing closure. At the lower level, local PEs are organized into a 3D-tiled PE cluster to maximize data reuse by exploiting the inherent dataflow of convolutional networks. Implementation and experiments on a Xilinx ZU9EG FPGA for four benchmark CNN models (ResNet50, ResNet34, VGG16, and Darknet19) show that our design operates at 300 MHz and delivers effective throughputs of 0.998 TOPS, 1.022 TOPS, 1.024 TOPS, and 1.026 TOPS, corresponding to 92.85%, 95.1%, 95.25%, and 95.46% PE utilization. Compared with related FPGA-based designs, our work improves DSP resource efficiency by 5.36×