AdaptBit-HD: Adaptive Model Bitwidth for Hyperdimensional Computing
Justin Morris, Si Thu Kaung Set, Gadi Rosen, M. Imani, Baris Aksanli, T. Simunic
2021 IEEE 39th International Conference on Computer Design (ICCD), October 2021. DOI: 10.1109/ICCD53106.2021.00026
Brain-inspired Hyperdimensional (HD) computing is a novel computing paradigm that emulates the activity of neurons in high-dimensional space. The first step in HD computing is to map each data point into a high-dimensional space (e.g., D = 10,000 dimensions). This poses several problems: the size of the data can explode, and all subsequent operations must be performed in parallel across all D dimensions. Prior work alleviated this issue with model quantization, so that the hypervectors (HVs) can be stored in less space than the original data and lower-bitwidth operations can be used to save energy. However, prior work quantized all samples to the same bitwidth. We propose AdaptBit-HD, an adaptive model bitwidth architecture for accelerating HD computing. AdaptBit-HD processes the bits of the quantized model one bit at a time, saving energy whenever fewer bits suffice to identify the correct class. With AdaptBit-HD, we achieve both high accuracy, by utilizing all the bits when necessary, and high energy efficiency, by terminating execution at lower bitwidths when the design is confident in the output. We additionally design an end-to-end FPGA accelerator for AdaptBit-HD. Compared to 16-bit models, AdaptBit-HD is 14× more energy efficient, and compared to binary models, it is 1.1% more accurate, which is comparable to the accuracy of 16-bit models. This demonstrates that AdaptBit-HD achieves the accuracy of full-precision models with the energy efficiency of binary models.
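The bit-at-a-time early-termination idea described in the abstract can be illustrated with a minimal sketch. The Python snippet below is an assumption-laden illustration only: the MSB-first bit-plane decomposition, the dot-product similarity, the relative-margin confidence test, and all names (`adaptive_bitwidth_classify`, `margin_thresh`) are hypothetical and are not taken from the paper or its FPGA implementation.

```python
import numpy as np

def adaptive_bitwidth_classify(query_hv, class_hvs_q, n_bits=16, margin_thresh=0.2):
    """Hypothetical sketch of bit-serial HD inference with early termination.

    query_hv    : (D,) encoded query hypervector (integer-valued).
    class_hvs_q : (C, D) class hypervectors quantized to `n_bits` bits
                  (unsigned integers in [0, 2**n_bits)).
    The quantized class model is processed one bit plane at a time, most
    significant bit first; execution stops as soon as the leading class
    outscores the runner-up by a relative margin of `margin_thresh`.
    These choices are illustrative assumptions, not the exact AdaptBit-HD
    confidence rule.
    """
    C = class_hvs_q.shape[0]
    scores = np.zeros(C, dtype=np.float64)

    for b in range(n_bits - 1, -1, -1):           # MSB -> LSB
        bit_plane = (class_hvs_q >> b) & 1         # (C, D) binary slice
        # Partial similarity contributed by this bit plane, weighted by 2**b.
        scores += (2 ** b) * (bit_plane @ query_hv)

        best, second = np.partition(scores, -2)[-2:][::-1]
        if best > 0 and (best - second) / best >= margin_thresh:
            # Confident enough: skip the remaining, lower-order bit planes.
            return int(np.argmax(scores)), n_bits - b

    return int(np.argmax(scores)), n_bits          # used all bit planes


# Toy usage: a random 3-class model in D = 10,000 dimensions.
rng = np.random.default_rng(0)
D, C, n_bits = 10_000, 3, 16
class_hvs = rng.integers(0, 2 ** n_bits, size=(C, D), dtype=np.uint32)
query = rng.integers(0, 2, size=D)
label, bits_used = adaptive_bitwidth_classify(query, class_hvs, n_bits)
print(f"predicted class {label} after {bits_used} bit planes")
```

Processing the most significant bits first means that, under these assumptions, easy samples resolve after only a few high-order bit planes, which is where the energy savings claimed over fixed 16-bit models would come from.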