K. Vivekanandarajah, T. Srikanthan, S. Bhattacharyya
{"title":"解码滤波器缓存节能指令缓存层次结构在超标量架构","authors":"K. Vivekanandarajah, T. Srikanthan, S. Bhattacharyya","doi":"10.1109/ASPDAC.2004.1337602","DOIUrl":null,"url":null,"abstract":"The power consumption of microprocessors has been increasing in step with the complexity of each progressive generation. In general purpose processors, this is primarily attributed to the high energy consumption of fetch and decode circuitry, pursuant to the high instruction issue rate required of these high performance processors. Predictive decode filter cache (DFC) has been shown to be effective in reducing the fetch and decode energy consumed by the instruction cache hierarchy of inorder single issue processors. We propose the architectural level enhancements to facilitate the incorporation of the DFC in wide issue superscalar processors for an energy efficient memory hierarchy. Extensive simulations on the modified superscalar architecture shows that the use of the (predictor based) DFC results in an average reduction of 17.33% and 25.09% fetch energy reduction in LI cache along with 37.2% and 46.6% reduction in number of decodes for 64 and 128 instruction DFC respectively. This fetch and decode energy savings are achieved with minimal reduction in the average instruction per cycle (IPC) of 0.54% and 0.73% for 64 and 128 instruction DFC for the selected set of spec2000 benchmarks.","PeriodicalId":426349,"journal":{"name":"ASP-DAC 2004: Asia and South Pacific Design Automation Conference 2004 (IEEE Cat. No.04EX753)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2004-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Decode filter cache for energy efficient instruction cache hierarchy in super scalar architectures\",\"authors\":\"K. Vivekanandarajah, T. Srikanthan, S. Bhattacharyya\",\"doi\":\"10.1109/ASPDAC.2004.1337602\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The power consumption of microprocessors has been increasing in step with the complexity of each progressive generation. In general purpose processors, this is primarily attributed to the high energy consumption of fetch and decode circuitry, pursuant to the high instruction issue rate required of these high performance processors. Predictive decode filter cache (DFC) has been shown to be effective in reducing the fetch and decode energy consumed by the instruction cache hierarchy of inorder single issue processors. We propose the architectural level enhancements to facilitate the incorporation of the DFC in wide issue superscalar processors for an energy efficient memory hierarchy. Extensive simulations on the modified superscalar architecture shows that the use of the (predictor based) DFC results in an average reduction of 17.33% and 25.09% fetch energy reduction in LI cache along with 37.2% and 46.6% reduction in number of decodes for 64 and 128 instruction DFC respectively. This fetch and decode energy savings are achieved with minimal reduction in the average instruction per cycle (IPC) of 0.54% and 0.73% for 64 and 128 instruction DFC for the selected set of spec2000 benchmarks.\",\"PeriodicalId\":426349,\"journal\":{\"name\":\"ASP-DAC 2004: Asia and South Pacific Design Automation Conference 2004 (IEEE Cat. 
No.04EX753)\",\"volume\":\"44 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2004-01-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ASP-DAC 2004: Asia and South Pacific Design Automation Conference 2004 (IEEE Cat. No.04EX753)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ASPDAC.2004.1337602\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ASP-DAC 2004: Asia and South Pacific Design Automation Conference 2004 (IEEE Cat. No.04EX753)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASPDAC.2004.1337602","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Decode filter cache for energy efficient instruction cache hierarchy in super scalar architectures
The power consumption of microprocessors has been increasing with the complexity of each successive generation. In general-purpose processors, this is primarily attributed to the high energy consumption of the fetch and decode circuitry, driven by the high instruction issue rate required of these high-performance processors. The predictive decode filter cache (DFC) has been shown to be effective in reducing the fetch and decode energy consumed by the instruction cache hierarchy of in-order, single-issue processors. We propose architectural-level enhancements that facilitate incorporating the DFC into wide-issue superscalar processors for an energy-efficient memory hierarchy. Extensive simulations on the modified superscalar architecture show that the predictor-based DFC yields average L1 cache fetch energy reductions of 17.33% and 25.09%, along with reductions of 37.2% and 46.6% in the number of decodes, for 64- and 128-instruction DFCs, respectively. These fetch and decode energy savings are achieved with a minimal loss in average instructions per cycle (IPC) of 0.54% and 0.73% for the 64- and 128-instruction DFCs on the selected set of SPEC2000 benchmarks.
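The abstract does not spell out the DFC access path, but the general idea of a predictor-guided decode filter cache can be illustrated with a short behavioral sketch. Everything below is an assumption for illustration only: the class and function names, the FIFO replacement, the predictor interface, and the fill policy are not taken from the paper.

```python
# Hypothetical behavioral sketch of a predictor-guided decode filter cache (DFC).
# All structures, sizes, and policies here are illustrative assumptions, not the
# design described in the paper.

class DecodeFilterCache:
    """Small cache holding already-decoded instructions, indexed by PC."""

    def __init__(self, num_entries=64):
        self.num_entries = num_entries
        self.entries = {}  # pc -> decoded instruction (assumed representation)

    def lookup(self, pc):
        return self.entries.get(pc)

    def fill(self, pc, decoded):
        if len(self.entries) >= self.num_entries:
            # Naive FIFO eviction; the real replacement policy is not specified here.
            self.entries.pop(next(iter(self.entries)))
        self.entries[pc] = decoded


def fetch_and_decode(pc, predictor_predicts_dfc_hit, dfc, l1_icache_fetch, decoder):
    """Serve one instruction, steering between the DFC and the L1 I-cache.

    If the predictor expects a DFC hit, the small DFC is probed first and the
    L1 access plus decode are skipped on a hit (saving fetch and decode energy).
    Otherwise the conventional L1 fetch + decode path is used and the DFC is
    filled for future reuse.
    """
    if predictor_predicts_dfc_hit:
        decoded = dfc.lookup(pc)
        if decoded is not None:
            return decoded  # L1 fetch energy and a decode are avoided
    raw = l1_icache_fetch(pc)  # fall back to the conventional path
    decoded = decoder(raw)
    dfc.fill(pc, decoded)
    return decoded
```

The design intent this sketch captures is that mispredictions only cost an extra (cheap) DFC probe before the normal path, which is consistent with the small IPC losses reported in the abstract.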