{"title":"An Efficient Design Framework for 2×2 CNN Accelerator Chiplet Cluster with SerDes Interconnects","authors":"Yajie Wu, Tianze Li, Zhuang Shao, Li Du, Yuan Du","doi":"10.1109/AICAS57966.2023.10168573","DOIUrl":null,"url":null,"abstract":"Multi-Chiplet integrated systems for high-performance computing with dedicated CNN accelerators are highly demanded due to ever-increasing AI-related training and inferencing tasks; however, many design challenges hinder their large-scale applications, such as complicated multi-task scheduling, high-speed die-to-die SerDes (Serializer/Deserializer) link modeling, and detailed communication and computation hardware co-simulation. In this paper, an optimized 2×2 CNN accelerator chiplet framework with a SerDes link model is presented, which addresses the above challenges. A methodology for designing a 2×2 CNN accelerator chiplet framework is also proposed, and several experiments are conducted. The system performances of different designs are compared and analyzed with different design parameters of computation hardware, SerDes links, and improved scheduling algorithms. The results show that with the same interconnection structure and bandwidth, every 1TFLOPS increase in one chiplet’s computing power can bring an average 3.7% execution time reduction.","PeriodicalId":296649,"journal":{"name":"2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AICAS57966.2023.10168573","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Multi-chiplet integrated systems with dedicated CNN accelerators are in high demand for high-performance computing due to ever-increasing AI training and inference workloads; however, several design challenges hinder their large-scale application, including complex multi-task scheduling, modeling of high-speed die-to-die SerDes (Serializer/Deserializer) links, and detailed co-simulation of communication and computation hardware. This paper presents an optimized 2×2 CNN accelerator chiplet framework with a SerDes link model that addresses these challenges. A methodology for designing the 2×2 CNN accelerator chiplet framework is also proposed, and several experiments are conducted. System performance is compared and analyzed across designs with different computation-hardware parameters, SerDes link parameters, and improved scheduling algorithms. The results show that, with the same interconnect structure and bandwidth, every 1 TFLOPS increase in a single chiplet's computing power yields an average 3.7% reduction in execution time.
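The following is a minimal Python sketch, not taken from the paper, that illustrates one way to read the reported average ("every 1 TFLOPS added to one chiplet reduces execution time by about 3.7%"). The multiplicative model, the baseline time, and the function name estimated_execution_time are assumptions for illustration only; a linear reading of the same average would be equally plausible.

# Illustrative sketch only: interprets the reported per-TFLOPS average
# as a simple compounding performance model. Baseline values and the
# choice of a multiplicative model are assumptions, not from the paper.

def estimated_execution_time(baseline_time_ms: float,
                             added_tflops_per_chiplet: float,
                             reduction_per_tflops: float = 0.037) -> float:
    """Estimate execution time after increasing one chiplet's compute.

    Applies the average per-TFLOPS reduction multiplicatively for each
    added TFLOPS; a linear model, baseline * (1 - r * n), is another
    plausible interpretation of the reported average.
    """
    return baseline_time_ms * (1.0 - reduction_per_tflops) ** added_tflops_per_chiplet

if __name__ == "__main__":
    baseline = 100.0  # hypothetical baseline execution time in milliseconds
    for extra_tflops in (0, 1, 2, 4):
        t = estimated_execution_time(baseline, extra_tflops)
        print(f"+{extra_tflops} TFLOPS per chiplet -> ~{t:.1f} ms "
              f"({(1 - t / baseline) * 100:.1f}% faster)")

Running this with the hypothetical 100 ms baseline simply reproduces the reported trend (one added TFLOPS gives roughly a 3.7% shorter execution time); the paper's actual figures come from its co-simulation of the chiplet cluster, not from a closed-form model like this.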