{"title":"Dynamic-HDC: A Two-Stage Dynamic Inference Framework for Brain-Inspired Hyperdimensional Computing","authors":"Yu-Chuan Chuang;Cheng-Yang Chang;An-Yeu Wu","doi":"10.1109/JETCAS.2023.3328857","DOIUrl":null,"url":null,"abstract":"Brain-inspired hyperdimensional computing (HDC) has attracted attention due to its energy efficiency and noise resilience in various IoT applications. However, striking the right balance between accuracy and efficiency in HDC remains a challenge. Specifically, HDC represents data as high-dimensional vectors known as hypervectors (HVs), where each component of HVs can be a high-precision integer or a low-cost bipolar number (+1/−1). However, this choice presents HDC with a significant trade-off between accuracy and efficiency. To address this challenge, we propose a two-stage dynamic inference framework called Dynamic-HDC that offers IoT applications a more flexible solution rather than limiting them to choose between the two extreme options. Dynamic-HDC leverages the strategies of early exit and model parameter adaptation. Unlike prior works that use a single HDC model to classify all data, Dynamic-HDC employs a cascade of models for two-stage inference. The first stage involves a low-cost, low-precision bipolar model, while the second stage utilizes a high-cost, high-precision integer model. By doing so, Dynamic-HDC can save computational resources for easy samples by performing an early exit when the low-cost bipolar model exhibits high confidence in its classification. For difficult samples, the high-precision integer model is conditionally activated to achieve more accurate predictions. To further enhance the efficiency of Dynamic-HDC, we introduce dynamic dimension selection (DDS) and dynamic class selection (DCS). These techniques enable the framework to dynamically adapt the dimensions and the number of classes in the HDC model, further optimizing performance. We evaluate the effectiveness of Dynamic-HDC on three commonly used benchmarks in HDC research, namely MNIST, ISOLET, and UCIHAR. Our simulation results demonstrate that Dynamic-HDC with different configurations can reduce energy consumption by 19.8-51.1% and execution time by 22.5-49.9% with negligible 0.02-0.36 % accuracy degradation compared to a single integer model. Compared to a single bipolar model, Dynamic-HDC improves 3.1% accuracy with a slight 10% energy and 14% execution time overhead.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":null,"pages":null},"PeriodicalIF":3.7000,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10302281/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Abstract
Brain-inspired hyperdimensional computing (HDC) has attracted attention due to its energy efficiency and noise resilience in various IoT applications. However, striking the right balance between accuracy and efficiency in HDC remains a challenge. Specifically, HDC represents data as high-dimensional vectors known as hypervectors (HVs), where each component of an HV can be a high-precision integer or a low-cost bipolar number (+1/−1). This choice forces HDC into a significant trade-off between accuracy and efficiency. To address this challenge, we propose a two-stage dynamic inference framework called Dynamic-HDC that offers IoT applications a more flexible solution rather than limiting them to a choice between the two extremes. Dynamic-HDC leverages the strategies of early exit and model parameter adaptation. Unlike prior works that use a single HDC model to classify all data, Dynamic-HDC employs a cascade of models for two-stage inference. The first stage uses a low-cost, low-precision bipolar model, while the second stage uses a high-cost, high-precision integer model. Dynamic-HDC thus saves computational resources on easy samples by exiting early when the low-cost bipolar model is highly confident in its classification. For difficult samples, the high-precision integer model is conditionally activated to obtain more accurate predictions. To further enhance the efficiency of Dynamic-HDC, we introduce dynamic dimension selection (DDS) and dynamic class selection (DCS). These techniques enable the framework to dynamically adapt the dimensionality and the number of classes in the HDC model, further optimizing performance. We evaluate the effectiveness of Dynamic-HDC on three benchmarks commonly used in HDC research, namely MNIST, ISOLET, and UCIHAR. Our simulation results demonstrate that, compared to a single integer model, Dynamic-HDC with different configurations reduces energy consumption by 19.8-51.1% and execution time by 22.5-49.9% with a negligible 0.02-0.36% accuracy degradation. Compared to a single bipolar model, Dynamic-HDC improves accuracy by 3.1% at the cost of a modest 10% energy and 14% execution time overhead.
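To make the two-stage early-exit idea concrete, the following is a minimal NumPy sketch of a cascade inference routine, not the authors' implementation. The model contents, the cosine-similarity scoring, the top-1 vs. top-2 margin used as the confidence measure, and the exit threshold are all illustrative assumptions; the paper's actual confidence criterion, DDS, and DCS mechanisms are not reproduced here.

```python
import numpy as np

D, C = 10_000, 10                       # hypervector dimension, number of classes (assumed)
rng = np.random.default_rng(0)

# Stage-2 model: high-precision integer class hypervectors (placeholder values).
integer_model = rng.integers(-50, 51, size=(C, D))
# Stage-1 model: low-cost bipolar (+1/-1) counterpart, here taken as the sign.
bipolar_model = np.where(integer_model >= 0, 1, -1)

def cosine_scores(model, query):
    """Cosine similarity between a query HV and every class HV."""
    num = model @ query
    den = np.linalg.norm(model, axis=1) * np.linalg.norm(query) + 1e-12
    return num / den

def dynamic_hdc_predict(query, threshold=0.05):
    """Two-stage inference: exit early when the bipolar model is confident."""
    scores = cosine_scores(bipolar_model, np.sign(query))
    top2 = np.sort(scores)[-2:]
    confidence = top2[1] - top2[0]      # margin between best and runner-up class
    if confidence >= threshold:         # easy sample: early exit at stage 1
        return int(np.argmax(scores)), "bipolar"
    # Hard sample: conditionally activate the high-precision integer model.
    scores = cosine_scores(integer_model, query)
    return int(np.argmax(scores)), "integer"

query_hv = rng.integers(-50, 51, size=D)   # stand-in for an encoded input HV
print(dynamic_hdc_predict(query_hv))
```

In this sketch, the threshold controls the accuracy/efficiency balance: raising it sends more samples to the integer model (higher accuracy, higher cost), while lowering it lets more samples exit at the cheap bipolar stage.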
Journal Introduction:
The IEEE Journal on Emerging and Selected Topics in Circuits and Systems is published quarterly and solicits, with particular emphasis on emerging areas, special issues on topics that cover the entire scope of the IEEE Circuits and Systems (CAS) Society, namely the theory, analysis, design, tools, and implementation of circuits and systems, spanning their theoretical foundations, applications, and architectures for signal and information processing.