Incorporating pattern prediction technique for energy efficient filter cache design

K. Vivekanandarajah, T. Srikanthan, S. Bhattacharyya, Prasanna Venkatesh Kannan

The 3rd IEEE International Workshop on System-on-Chip for Real-Time Applications, 2003. Proceedings. Published 22 July 2003. DOI: 10.1109/IWSOC.2003.1213003
A filter cache sits at a higher level than the L1 (main) cache in the memory hierarchy and is much smaller, typically on the order of 256 bytes. Prediction algorithms, popularly based on the Next Fetch Prediction Table (NFPT), decide on each fetch whether to access the filter cache or the main cache. In this paper we introduce a new mechanism for predicting filter cache accesses, which relies on the hit/miss pattern of the instruction access stream over the most recent filter cache line accesses. While the NFPT makes predominantly incorrect hit predictions, the proposed pattern-table-based approach reduces these mispredictions. Predominantly correct prediction yields efficient cache access and avoids cache-miss penalties. Our extensive simulations across a wide range of benchmark applications show that the new prediction scheme is effective, improving prediction accuracy and reducing the energy consumption of the filter cache by as much as 25% compared with NFPT-based approaches. Further, the technique lends itself to an elegant hardware implementation, consisting only of a shift register and a look-up table (LUT), and is hence area- and energy-efficient in contrast to previously published prediction techniques.
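To make the mechanism concrete, below is a minimal C sketch of a pattern-table predictor of the kind the abstract describes: a shift register records the hit/miss outcomes of recent filter cache accesses, and that bit pattern indexes a LUT whose entry predicts the next outcome. The 8-bit history depth and the 2-bit saturating-counter update policy are assumptions for illustration; the abstract does not specify the paper's exact table sizing or update rule.

    /* Sketch of a pattern-table-based filter cache predictor.
     * Assumed parameters (not specified in the abstract):
     * an 8-bit hit/miss history and 2-bit saturating counters. */
    #include <stdbool.h>
    #include <stdint.h>

    #define HISTORY_BITS 8                    /* assumed shift-register width */
    #define TABLE_SIZE (1u << HISTORY_BITS)   /* one LUT entry per pattern */

    static uint8_t history;                   /* shift register: hit=1, miss=0 */
    static uint8_t pattern_table[TABLE_SIZE]; /* 2-bit saturating counters */

    /* Predict: true -> fetch from the filter cache, false -> go to main cache. */
    bool predict_filter_hit(void)
    {
        return pattern_table[history] >= 2;   /* weakly/strongly "hit" states */
    }

    /* Train on the actual outcome, then shift it into the history register. */
    void update_predictor(bool hit)
    {
        uint8_t *ctr = &pattern_table[history];
        if (hit && *ctr < 3)
            (*ctr)++;                         /* saturate at strongly-hit */
        else if (!hit && *ctr > 0)
            (*ctr)--;                         /* saturate at strongly-miss */
        history = (uint8_t)((history << 1) | (hit ? 1u : 0u));
    }

The appeal of this structure, as the abstract notes, is that the hardware cost is just the shift register plus the LUT, with no per-line tag comparisons or next-fetch address tracking as in an NFPT.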