Zhaoyan Shen, Jinhao Wu, Xikun Jiang, Yuhao Zhang, Lei Ju, Zhiping Jia
{"title":"PRAP-PIM:一种用于基于ReRAM的PIM-DNN加速器的权重模式重用感知修剪方法","authors":"Zhaoyan Shen , Jinhao Wu , Xikun Jiang , Yuhao Zhang , Lei Ju , Zhiping Jia","doi":"10.1016/j.hcc.2023.100123","DOIUrl":null,"url":null,"abstract":"<div><p>Resistive Random-Access Memory (ReRAM) based Processing-in-Memory (PIM) frameworks are proposed to accelerate the working process of DNN models by eliminating the data movement between the computing and memory units. To further mitigate the space and energy consumption, DNN model weight sparsity and weight pattern repetition are exploited to optimize these ReRAM-based accelerators. However, most of these works only focus on one aspect of this software/hardware co-design framework and optimize them individually, which makes the design far from optimal. In this paper, we propose PRAP-PIM, which jointly exploits the weight sparsity and weight pattern repetition by using a weight pattern reusing aware pruning method. By relaxing the weight pattern reusing precondition, we propose a similarity-based weight pattern reusing method that can achieve a higher weight pattern reusing ratio. Experimental results show that PRAP-PIM achieves 1.64× performance improvement and 1.51× energy efficiency improvement in popular deep learning benchmarks, compared with the state-of-the-art ReRAM-based DNN accelerators.</p></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"3 2","pages":"Article 100123"},"PeriodicalIF":3.2000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"PRAP-PIM: A weight pattern reusing aware pruning method for ReRAM-based PIM DNN accelerators\",\"authors\":\"Zhaoyan Shen , Jinhao Wu , Xikun Jiang , Yuhao Zhang , Lei Ju , Zhiping Jia\",\"doi\":\"10.1016/j.hcc.2023.100123\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Resistive Random-Access Memory (ReRAM) based Processing-in-Memory (PIM) frameworks are proposed to accelerate the working process of DNN models by eliminating the data movement between the computing and memory units. To further mitigate the space and energy consumption, DNN model weight sparsity and weight pattern repetition are exploited to optimize these ReRAM-based accelerators. However, most of these works only focus on one aspect of this software/hardware co-design framework and optimize them individually, which makes the design far from optimal. In this paper, we propose PRAP-PIM, which jointly exploits the weight sparsity and weight pattern repetition by using a weight pattern reusing aware pruning method. By relaxing the weight pattern reusing precondition, we propose a similarity-based weight pattern reusing method that can achieve a higher weight pattern reusing ratio. 
Experimental results show that PRAP-PIM achieves 1.64× performance improvement and 1.51× energy efficiency improvement in popular deep learning benchmarks, compared with the state-of-the-art ReRAM-based DNN accelerators.</p></div>\",\"PeriodicalId\":100605,\"journal\":{\"name\":\"High-Confidence Computing\",\"volume\":\"3 2\",\"pages\":\"Article 100123\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2023-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"High-Confidence Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2667295223000211\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"High-Confidence Computing","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667295223000211","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
PRAP-PIM: A weight pattern reusing aware pruning method for ReRAM-based PIM DNN accelerators
Resistive Random-Access Memory (ReRAM) based Processing-in-Memory (PIM) frameworks have been proposed to accelerate DNN inference by eliminating data movement between the computing and memory units. To further reduce space and energy consumption, DNN weight sparsity and weight pattern repetition have been exploited to optimize these ReRAM-based accelerators. However, most existing works focus on only one aspect of this software/hardware co-design framework and optimize it in isolation, which leaves the overall design far from optimal. In this paper, we propose PRAP-PIM, which jointly exploits weight sparsity and weight pattern repetition through a weight pattern reusing aware pruning method. By relaxing the precondition for weight pattern reuse, we propose a similarity-based weight pattern reusing method that achieves a higher weight pattern reusing ratio. Experimental results show that, compared with state-of-the-art ReRAM-based DNN accelerators, PRAP-PIM achieves a 1.64× performance improvement and a 1.51× energy-efficiency improvement on popular deep learning benchmarks.
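To make the idea of similarity-based weight pattern reuse concrete, the sketch below is a minimal, hypothetical illustration (not the authors' actual algorithm): it magnitude-prunes a weight matrix, slices it into fixed-width column patterns, and lets a pattern reuse an earlier representative whenever their cosine similarity exceeds a threshold, so similar patterns could share one stored crossbar mapping. The function name, block size, and thresholds are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of similarity-based weight pattern reuse after pruning.
# All parameters (pattern_cols, prune_ratio, sim_threshold) are assumptions.

def prune_and_reuse_patterns(weights, pattern_cols=4, prune_ratio=0.5, sim_threshold=0.95):
    w = weights.copy()

    # Magnitude pruning: zero out the smallest |w| entries.
    threshold = np.quantile(np.abs(w), prune_ratio)
    w[np.abs(w) < threshold] = 0.0

    # Split the columns into fixed-width "patterns".
    n_patterns = w.shape[1] // pattern_cols
    patterns = [w[:, i * pattern_cols:(i + 1) * pattern_cols] for i in range(n_patterns)]

    # Greedy reuse: a pattern maps to an earlier representative if their
    # flattened cosine similarity is at least sim_threshold.
    representatives = []   # unique patterns that would actually be stored
    assignment = []        # representative index for each pattern
    for p in patterns:
        v = p.flatten()
        reused = False
        for idx, r in enumerate(representatives):
            rv = r.flatten()
            denom = np.linalg.norm(v) * np.linalg.norm(rv)
            sim = float(v @ rv / denom) if denom > 0 else 1.0
            if sim >= sim_threshold:
                assignment.append(idx)
                reused = True
                break
        if not reused:
            representatives.append(p)
            assignment.append(len(representatives) - 1)

    reuse_ratio = 1.0 - len(representatives) / max(len(patterns), 1)
    return representatives, assignment, reuse_ratio


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 64)).astype(np.float32)
    reps, assign, ratio = prune_and_reuse_patterns(W)
    print(f"patterns: {len(assign)}, stored: {len(reps)}, reuse ratio: {ratio:.2f}")
```

Relaxing the reuse precondition corresponds here to accepting approximately equal patterns (similarity above a threshold) rather than requiring exact matches, which is what allows a higher reuse ratio at the cost of a small approximation error.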