Wentai Zhang, Jiaxi Zhang, Minghua Shen, Nong Xiao, Guojie Luo
{"title":"Mapping Large-Scale DNNs on Asymmetric FPGAs: (Abstract Only)","authors":"Wentai Zhang, Jiaxi Zhang, Minghua Shen, Nong Xiao, Guojie Luo","doi":"10.1145/3174243.3174982","DOIUrl":null,"url":null,"abstract":"FPGAs are very attractive to accelerate the deep neural networks (DNNs). While single-FPGA can provide good performance for small-scale DNNs, support for large-scale DNNs is very limited due to they require higher resource demand. In this paper, we propose an efficient mapping approach for accelerating large-scale DNNs on an asymmetric multi-FPGA architecture. Relative to the state-of-the-art single-FPGA resource reuse for large-scale DNNs, we consider multi-FPGA fashion to strive for higher performance. In this fashion, the neural network mapping problem can be formulated as a resource allocation problem, and a dynamic programming-based partitioning is designed to solve this problem optimally. Notice that the network topology and communication bandwidth of multiple FPGAs are always used to guide the partitioning to boost the performance while satisfying the constraints of resource-performance trade-off in a single FPGA. Experimental results using the large-scale ResNet-152 demonstrate that our approach deploys sixteen FPGAs to provide an advantage of 16.4x GOPS over the state-of-the-art work.","PeriodicalId":164936,"journal":{"name":"Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3174243.3174982","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
FPGAs are attractive platforms for accelerating deep neural networks (DNNs). While a single FPGA can deliver good performance for small-scale DNNs, its support for large-scale DNNs is limited by their higher resource demands. In this paper, we propose an efficient mapping approach for accelerating large-scale DNNs on an asymmetric multi-FPGA architecture. Relative to state-of-the-art single-FPGA resource reuse for large-scale DNNs, we adopt a multi-FPGA approach to strive for higher performance. In this setting, the neural network mapping problem can be formulated as a resource allocation problem, and a dynamic programming-based partitioning is designed to solve it optimally. The network topology and communication bandwidth of the multiple FPGAs guide the partitioning to boost performance while satisfying the resource-performance trade-off constraints within each single FPGA. Experimental results on the large-scale ResNet-152 demonstrate that our approach, deployed across sixteen FPGAs, achieves a 16.4x GOPS advantage over the state-of-the-art work.
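The abstract does not include the partitioning algorithm itself. As a rough illustration of the kind of dynamic program the formulation suggests, the sketch below splits a linear chain of DNN layers into contiguous segments, one per FPGA, minimizing the pipeline bottleneck (the slowest segment) under per-FPGA resource capacities. All names, the cost model, and the linear-chain assumption are illustrative guesses, not the authors' implementation, which additionally accounts for asymmetric topology and inter-FPGA bandwidth.

```python
# Hypothetical sketch: DP partitioning of a layer chain across FPGAs.
# Assumes per-layer latency/resource estimates and per-FPGA capacities;
# this is NOT the paper's algorithm, only a minimal illustration.

def partition_layers(latency, resource, capacity, num_fpgas):
    """Split layers 0..n-1 into contiguous segments, one per FPGA,
    minimizing the maximum segment latency (pipeline bottleneck)
    subject to each FPGA's resource capacity."""
    n = len(latency)
    INF = float("inf")
    # dp[k][i]: best bottleneck when the first i layers use k FPGAs.
    dp = [[INF] * (n + 1) for _ in range(num_fpgas + 1)]
    cut = [[-1] * (n + 1) for _ in range(num_fpgas + 1)]
    dp[0][0] = 0.0
    for k in range(1, num_fpgas + 1):
        for i in range(1, n + 1):
            seg_lat, seg_res = 0.0, 0.0
            # Grow segment (j..i-1) backwards from layer i-1.
            for j in range(i - 1, -1, -1):
                seg_lat += latency[j]
                seg_res += resource[j]
                if seg_res > capacity[k - 1]:
                    break  # segment no longer fits on FPGA k-1
                cand = max(dp[k - 1][j], seg_lat)
                if cand < dp[k][i]:
                    dp[k][i], cut[k][i] = cand, j
    # Recover cut points from the traceback table.
    segments, i = [], n
    for k in range(num_fpgas, 0, -1):
        j = cut[k][i]
        segments.append((j, i))  # layers j..i-1 mapped to FPGA k-1
        i = j
    return dp[num_fpgas][n], list(reversed(segments))

# Example: 6 layers onto 3 (asymmetric) FPGAs with differing capacities.
bottleneck, plan = partition_layers(
    latency=[2.0, 1.5, 3.0, 2.5, 1.0, 2.0],
    resource=[4, 3, 6, 5, 2, 4],
    capacity=[8, 12, 8],
    num_fpgas=3,
)
print(bottleneck, plan)
```

Because each FPGA in `capacity` may differ, the same DP naturally handles the asymmetric case; the paper's version would further weight the cut points by the communication bandwidth available on each inter-FPGA link.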