{"title":"How Much Cache is Enough? A Cache Behavior Analysis for Machine Learning GPU Architectures","authors":"S. López, Y. Nimkar, G. Kotas","doi":"10.1109/IGCC.2018.8752137","DOIUrl":null,"url":null,"abstract":"Graphic Processing Units (GPUs) are highly parallel, power hungry devices with large numbers of transistors devoted to the cache hierarchy. Machine learning is a target application field of these devices, which take advantage of their high levels of parallelism to hide long latency memory access dependencies. Even though parallelism is the main source of performance in these devices, a large number of transistors is still devoted to the cache memory hierarchy. Upon detailed analysis, we measure the real impact of the cache hierarchy on the overall performance. Targeting Machine Learning applications, we observed that most of the successful cache accesses happen in a very reduced number of blocks.With this in mind, we propose a different cache configuration for the GPU, resulting in 25% of the leakage power consumption and 10% of the dynamic energy per access of the original cache configuration, with minimal impact on the overall performance.","PeriodicalId":388554,"journal":{"name":"2018 Ninth International Green and Sustainable Computing Conference (IGSC)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 Ninth International Green and Sustainable Computing Conference (IGSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IGCC.2018.8752137","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Graphics Processing Units (GPUs) are highly parallel, power-hungry devices in which a large fraction of the transistor budget is devoted to the cache hierarchy. Machine learning is a target application field for these devices, which exploit their high levels of parallelism to hide long-latency memory accesses. Even though parallelism is the main source of performance, a large number of transistors is still spent on the cache memory hierarchy. Through a detailed analysis, we measure the real impact of the cache hierarchy on overall performance. Targeting machine learning applications, we observe that most successful cache accesses are concentrated in a very small number of blocks. With this in mind, we propose a different cache configuration for the GPU that consumes only 25% of the leakage power and 10% of the dynamic energy per access of the original cache configuration, with minimal impact on overall performance.
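To make the headline numbers concrete, the sketch below works through how a cache drawing 25% of the baseline leakage power and 10% of the baseline dynamic energy per access would affect total cache energy over a run. All absolute values (baseline leakage, per-access energy, runtime, access count) are illustrative assumptions, not figures from the paper.

```python
# Hypothetical illustration of the reported savings: the proposed cache uses
# 25% of the baseline leakage power and 10% of the baseline dynamic energy
# per access. The baseline numbers below are assumed for illustration only.

BASELINE_LEAKAGE_W = 1.0        # assumed baseline cache leakage power (W)
BASELINE_DYN_ENERGY_NJ = 0.5    # assumed baseline dynamic energy per access (nJ)
RUNTIME_S = 10.0                # assumed kernel runtime (s)
ACCESSES = 2e9                  # assumed number of cache accesses

def cache_energy_j(leakage_w, dyn_nj_per_access, runtime_s, accesses):
    """Total cache energy = leakage power * runtime + dynamic energy * accesses."""
    return leakage_w * runtime_s + dyn_nj_per_access * 1e-9 * accesses

baseline = cache_energy_j(BASELINE_LEAKAGE_W, BASELINE_DYN_ENERGY_NJ,
                          RUNTIME_S, ACCESSES)
proposed = cache_energy_j(0.25 * BASELINE_LEAKAGE_W, 0.10 * BASELINE_DYN_ENERGY_NJ,
                          RUNTIME_S, ACCESSES)

print(f"baseline cache energy: {baseline:.2f} J")
print(f"proposed cache energy: {proposed:.2f} J")
print(f"reduction: {100 * (1 - proposed / baseline):.1f}%")
```

Under these assumed inputs the overall cache energy reduction depends on how the workload splits between leakage (runtime-dominated) and dynamic (access-dominated) energy; the script simply combines the two components so the trade-off can be explored with other values.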