A Hierarchically Reconfigurable SRAM-Based Compute-in-Memory Macro for Edge Computing
Runxi Wang, Xinfei Guo
2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), June 11, 2023. DOI: 10.1109/AICAS57966.2023.10168564
AI running at the edge requires silicon that meets demanding performance targets within aggressive power and area budgets. Frequently updated AI algorithms also demand processors flexible enough to exploit their advantages. Compute-in-memory (CIM) architectures are a promising energy-efficient solution that performs intensive computations in situ, where the data are stored. While prior works have made great progress in designing SRAM-based CIM macros with fixed functionality tailored to specific AI applications, they lack the flexibility needed for wider usage scenarios. In this paper, we propose a novel SRAM-based CIM macro that can be hierarchically configured to support various Boolean operations, arithmetic operations, and macro operations. In addition, we demonstrate with an example that the proposed design can be extended to support additional essential edge computations with minimal overhead. Compared with existing reconfigurable SRAM-based CIM macros, this work achieves a better balance of reconfigurability versus hardware cost by implementing flexibility at multiple design hierarchies.
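The abstract describes an SRAM macro whose cells can evaluate Boolean operations in situ, with arithmetic and macro operations built on top of them. A minimal behavioral sketch of that general idea is below; the `CIMArray` class and its method names are hypothetical illustrations, since the abstract does not disclose the macro's actual circuit or configuration interface.

```python
# Behavioral sketch of SRAM-based compute-in-memory (CIM). In a typical
# SRAM CIM scheme, activating two wordlines at once lets a precharged
# bitline evaluate AND of the two stored words, while the complementary
# bitline evaluates NOR; richer Boolean and arithmetic operations are
# composed from those primitives. All names here are illustrative.

class CIMArray:
    def __init__(self, rows, width=8):
        self.width = width
        self.mask = (1 << width) - 1          # wrap results to the word width
        self.rows = [r & self.mask for r in rows]

    # --- bitline-level Boolean primitives (two wordlines active) ---
    def op_and(self, i, j):
        # bitline stays high only where both cells store 1
        return self.rows[i] & self.rows[j]

    def op_nor(self, i, j):
        # complementary bitline stays high only where both cells store 0
        return ~(self.rows[i] | self.rows[j]) & self.mask

    # --- derived Boolean operations ---
    def op_or(self, i, j):
        return self.op_nor(i, j) ^ self.mask  # OR = NOT(NOR)

    def op_xor(self, i, j):
        # XOR = OR AND NOT(AND)
        return self.op_or(i, j) & ~self.op_and(i, j) & self.mask

    # --- a macro-level arithmetic operation built from the primitives ---
    def op_add(self, i, j):
        # carry-propagate addition from XOR (partial sum) and AND-shift
        # (carry), modulo 2**width
        a, b = self.rows[i], self.rows[j]
        while b:
            a, b = a ^ b, ((a & b) << 1) & self.mask
        return a
```

For example, with rows `[0b1100, 0b1010]` and `width=4`, `op_and` yields `0b1000`, `op_xor` yields `0b0110`, and `op_add` yields `6` (12 + 10 mod 16), mirroring how a reconfigurable macro could expose Boolean and arithmetic modes over the same stored data.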