{"title":"Universal Neural Network Acceleration via Real-Time Loop Blocking","authors":"Jiaqi Zhang, Xiangru Chen, S. Ray","doi":"10.1109/ICCD53106.2021.00053","DOIUrl":null,"url":null,"abstract":"There is a recent trend that the DNN workloads and accelerators are increasingly heterogeneous and dynamic. Existing DNN acceleration solutions fail to address these challenges because they either rely on complicated ad hoc mapping or clumpy exhaustive search. To this end, this paper first proposes a formalization model that can comprehensively describe the accelerator design space. Instead of enforcing certain customized dataflows, the proposed model explicitly captures the intrinsic hardware functions of a given accelerator. We connect these functions with the data reuse opportunities of the DNN computation and build a correspondence between DNN loop blocking and accelerator constraints. Based on this, we implement an algorithm that efficiently and effectively performs universal loop blocking for various DNNs and accelerators without manual specifications. 
The evaluation shows that our results manifest 2.1x and 1.5x speedup and energy efficiency over dataflow-defined algorithm as well as significant improvement in blocking latency compared with search-based methods.","PeriodicalId":154014,"journal":{"name":"2021 IEEE 39th International Conference on Computer Design (ICCD)","volume":"54 82 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 39th International Conference on Computer Design (ICCD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCD53106.2021.00053","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
DNN workloads and accelerators are becoming increasingly heterogeneous and dynamic. Existing DNN acceleration solutions fail to address these challenges because they rely either on complicated ad hoc mappings or on costly exhaustive search. To this end, this paper first proposes a formal model that comprehensively describes the accelerator design space. Instead of enforcing particular customized dataflows, the proposed model explicitly captures the intrinsic hardware functions of a given accelerator. We connect these functions to the data reuse opportunities of the DNN computation and build a correspondence between DNN loop blocking and accelerator constraints. Based on this, we implement an algorithm that efficiently and effectively performs universal loop blocking for various DNNs and accelerators without manual specification. The evaluation shows a 2.1x speedup and 1.5x energy efficiency improvement over dataflow-defined algorithms, as well as significantly lower blocking latency compared with search-based methods.
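For readers unfamiliar with the underlying technique, loop blocking (tiling) restructures a nested loop so that each small data tile is reused from a fast on-chip buffer before moving on. The sketch below illustrates the general idea on a matrix multiply; it is not the paper's algorithm, and the tile sizes `Ti`, `Tj`, `Tk` are hypothetical placeholders for values that, in the paper's setting, would be derived from the accelerator's buffer constraints.

```python
def matmul_blocked(A, B, Ti=4, Tj=4, Tk=4):
    """Compute C = A @ B with the i/j/k loops blocked.

    Each (Ti x Tk) tile of A and (Tk x Tj) tile of B is touched
    repeatedly within the inner loops, so on real hardware that
    working set would stay resident in a small on-chip buffer.
    """
    n, k = len(A), len(B)
    m = len(B[0])
    C = [[0.0] * m for _ in range(n)]
    # Outer loops step over tiles; tile sizes bound the buffer footprint.
    for i0 in range(0, n, Ti):
        for j0 in range(0, m, Tj):
            for k0 in range(0, k, Tk):
                # Inner loops compute one tile's partial products.
                for i in range(i0, min(i0 + Ti, n)):
                    for j in range(j0, min(j0 + Tj, m)):
                        s = 0.0
                        for kk in range(k0, min(k0 + Tk, k)):
                            s += A[i][kk] * B[kk][j]
                        C[i][j] += s
    return C
```

Any legal choice of tile sizes yields the same result; what changes is how much data reuse the buffer captures, which is exactly the trade-off a blocking algorithm must optimize against the accelerator's constraints.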