{"title":"Optimizing code allocation for hybrid on-chip memory in IoT systems","authors":"Zhe Sun , Zimeng Zhou , Fang-Wei Fu","doi":"10.1016/j.vlsi.2024.102195","DOIUrl":null,"url":null,"abstract":"<div><p>With the increasing application of IoT devices, the memory subsystem, as the performance and energy bottleneck of IoT systems, has received a lot of attention. One of the keys is on-chip memory which can bridge the performance gap between the CPU and main memory. While many off-the-shelf embedded processors utilize the hybrid on-chip memory architecture containing scratchpad memories (SPMs) and caches, most existing literature ignores the collaboration between caches and SPMs. This paper proposes static SPM allocation strategies for the architecture mentioned above in IoT systems, which try to minimize the overall instruction memory subsystem latency and/or energy consumption. We capture the intra- and inter-task cache conflict misses via a fine-grained temporal cache behavior model. Based on this cache conflict information, we propose an integer linear programming (ILP) algorithm to generate an optimal static function level SPM allocation for system performance. Furthermore, to improve the scalability of the proposed allocation scheme for an enormous task set, we offer the interference factor to calculate the interference impact quantitatively. Then, based on the interference factor, we present two approximate knapsack based heuristic algorithms to provide near optimal static allocation schemes at both function- and basic block-level granularities, which favors fast design space exploration. The experiment results demonstrate that the proposed solution achieves a 30.85% improvement in memory performance, and up to 31.39% reduction in energy consumption, compared to the existing SPM allocation scheme at the function level. In addition, the proposed basic block level allocation algorithm shows better performance than our function level allocation algorithm and other basic block level allocation algorithm.</p></div>","PeriodicalId":54973,"journal":{"name":"Integration-The Vlsi Journal","volume":null,"pages":null},"PeriodicalIF":2.2000,"publicationDate":"2024-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Integration-The Vlsi Journal","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167926024000592","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
Abstract
With the increasing deployment of IoT devices, the memory subsystem, as the performance and energy bottleneck of IoT systems, has received considerable attention. One key component is on-chip memory, which can bridge the performance gap between the CPU and main memory. While many off-the-shelf embedded processors adopt a hybrid on-chip memory architecture containing both scratchpad memories (SPMs) and caches, most existing literature ignores the collaboration between caches and SPMs. This paper proposes static SPM allocation strategies for this architecture in IoT systems, aiming to minimize the overall instruction memory subsystem latency and/or energy consumption. We capture intra- and inter-task cache conflict misses via a fine-grained temporal cache behavior model. Based on this cache conflict information, we propose an integer linear programming (ILP) algorithm that generates an optimal static function-level SPM allocation for system performance. Furthermore, to improve the scalability of the proposed allocation scheme for large task sets, we introduce an interference factor that quantifies the interference impact. Then, based on the interference factor, we present two approximate knapsack-based heuristic algorithms that provide near-optimal static allocation schemes at both function- and basic-block-level granularities, which favors fast design space exploration. The experimental results demonstrate that the proposed solution achieves a 30.85% improvement in memory performance and up to a 31.39% reduction in energy consumption compared to the existing function-level SPM allocation scheme. In addition, the proposed basic-block-level allocation algorithm outperforms both our function-level allocation algorithm and another basic-block-level allocation algorithm.
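To illustrate the kind of knapsack-style heuristic the abstract describes, the Python sketch below allocates functions to an SPM of fixed capacity with a 0/1 knapsack dynamic program, weighting each function's estimated latency saving by a hypothetical interference factor so that code suffering more cache conflict misses is favored for SPM placement. This is a minimal sketch under assumed inputs, not the paper's actual model: the Function fields, the (1 + interference) weighting, and the spm_allocate helper are illustrative assumptions.

```python
# A minimal sketch (not the authors' implementation) of a knapsack-based
# function-level SPM allocation heuristic. Assumed inputs: per-function code
# size, estimated latency saving from SPM placement, and an interference
# factor in [0, 1] approximating how heavily the function's cache lines
# conflict with other tasks.

from dataclasses import dataclass
from typing import List

@dataclass
class Function:
    name: str
    size: int               # code size in bytes
    latency_saving: float   # estimated cycles saved if placed in SPM
    interference: float     # assumed interference factor; higher = more conflicts

def spm_allocate(functions: List[Function], spm_capacity: int) -> List[str]:
    """Select functions for the SPM via 0/1 knapsack dynamic programming.

    The value of each candidate is its latency saving scaled by
    (1 + interference), so conflict-prone code is preferred for the SPM
    (an assumption made for this sketch).
    """
    n = len(functions)
    value = [f.latency_saving * (1.0 + f.interference) for f in functions]
    # dp[c] = best total value achievable with SPM capacity c
    dp = [0.0] * (spm_capacity + 1)
    choice = [[False] * (spm_capacity + 1) for _ in range(n)]
    for i, f in enumerate(functions):
        for c in range(spm_capacity, f.size - 1, -1):
            if dp[c - f.size] + value[i] > dp[c]:
                dp[c] = dp[c - f.size] + value[i]
                choice[i][c] = True
    # Backtrack to recover the selected functions.
    selected, c = [], spm_capacity
    for i in range(n - 1, -1, -1):
        if choice[i][c]:
            selected.append(functions[i].name)
            c -= functions[i].size
    return selected

if __name__ == "__main__":
    funcs = [
        Function("fft",    size=2048, latency_saving=9000.0, interference=0.7),
        Function("crc32",  size=512,  latency_saving=2500.0, interference=0.2),
        Function("sensor", size=1024, latency_saving=4000.0, interference=0.5),
    ]
    # Prints the functions chosen for SPM placement, here ['crc32', 'fft'].
    print(spm_allocate(funcs, spm_capacity=2560))
```

In the paper's exact formulation an ILP solver takes the place of such a heuristic and works from the fine-grained temporal cache behavior model; the heuristic trades that optimality for the fast design space exploration mentioned above, and the basic-block-level variant would apply the same idea at a finer allocation granularity.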
Journal description:
Integration's aim is to cover every aspect of the VLSI area, with an emphasis on cross-fertilization between various fields of science, and the design, verification, test and applications of integrated circuits and systems, as well as closely related topics in process and device technologies. Individual issues will feature peer-reviewed tutorials and articles as well as reviews of recent publications. The intended coverage of the journal can be assessed by examining the following (non-exclusive) list of topics:
Specification methods and languages; Analog/Digital Integrated Circuits and Systems; VLSI architectures; Algorithms, methods and tools for modeling, simulation, synthesis and verification of integrated circuits and systems of any complexity; Embedded systems; High-level synthesis for VLSI systems; Logic synthesis and finite automata; Testing, design-for-test and test generation algorithms; Physical design; Formal verification; Algorithms implemented in VLSI systems; Systems engineering; Heterogeneous systems.