A Novel Scheme to Map Convolutional Networks to Network-on-Chip with Computing-In-Memory Nodes

Jiayi Liu, Kejie Huang

2020 International SoC Design Conference (ISOCC), published 2020-10-21. DOI: 10.1109/ISOCC50952.2020.9332940
Computing-In-Memory (CIM) has been widely used to accelerate deep learning inference. Networks-on-Chip (NoCs) are usually deployed together with CIM to make the hardware versatile. This paper proposes a bandwidth-aware mapping scheme that minimizes both the hop count and the bandwidth requirement. Simulation results show that the proposed scheme reduces hop count and bandwidth requirements by more than 33.57% and 46.13%, respectively.
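The abstract does not describe the paper's actual mapping algorithm. As an illustration of what a bandwidth-aware mapping onto a 2D-mesh NoC might look like, here is a minimal greedy sketch; the cost model (traffic volume times Manhattan hop distance) and all names are assumptions, not the authors' method.

```python
# Illustrative sketch only: greedily place CNN layers onto CIM nodes of a
# 2D-mesh NoC so that hop-weighted inter-layer traffic is minimized.
from itertools import product

def manhattan(a, b):
    """Hop distance between two mesh nodes under XY routing."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_map(layers, traffic, mesh_w, mesh_h):
    """Place layers one by one, choosing for each the free node that
    minimizes hop-weighted traffic to the layers already placed.
    traffic[(i, j)] is the data volume layer i sends to layer j."""
    free = sorted(product(range(mesh_w), range(mesh_h)))
    placement = {}
    for layer in layers:
        def cost(node):
            total = 0
            for (src, dst), vol in traffic.items():
                if src == layer and dst in placement:
                    total += vol * manhattan(node, placement[dst])
                elif dst == layer and src in placement:
                    total += vol * manhattan(node, placement[src])
            return total
        best = min(free, key=cost)
        placement[layer] = best
        free.remove(best)
    return placement

# Example: a 3-layer chain mapped onto a 2x2 mesh. Successive layers land on
# adjacent nodes, so each inter-layer transfer crosses a single hop.
layers = ["conv1", "conv2", "conv3"]
traffic = {("conv1", "conv2"): 10, ("conv2", "conv3"): 5}
placement = greedy_map(layers, traffic, 2, 2)
```

A real scheme would also have to respect per-link bandwidth capacities and the weight-storage capacity of each CIM node, which this sketch ignores.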