Advancing Hyperdimensional Computing Based on Trainable Encoding and Adaptive Training for Efficient and Accurate Learning

IF 2.2 · CAS Tier 4 (Computer Science) · JCR Q3 (Computer Science, Hardware & Architecture) · ACM Transactions on Design Automation of Electronic Systems · Pub Date: 2024-06-04 · DOI: 10.1145/3665891
Jiseung Kim, Hyunsei Lee, Mohsen Imani, Yeseong Kim
{"title":"推进基于可训练编码和自适应训练的超维计算,实现高效准确学习","authors":"Jiseung Kim, Hyunsei Lee, Mohsen Imani, Yeseong Kim","doi":"10.1145/3665891","DOIUrl":null,"url":null,"abstract":"<p>Hyperdimensional computing (HDC) is a computing paradigm inspired by the mechanisms of human memory, characterizing data through high-dimensional vector representations, known as hypervectors. Recent advancements in HDC have explored its potential as a learning model, leveraging its straightforward arithmetic and high efficiency. The traditional HDC frameworks are hampered by two primary static elements: randomly generated encoders and fixed learning rates. These static components significantly limit model adaptability and accuracy. The static, randomly generated encoders, while ensuring high-dimensional representation, fail to adapt to evolving data relationships, thereby constraining the model’s ability to accurately capture and learn from complex patterns. Similarly, the fixed nature of the learning rate does not account for the varying needs of the training process over time, hindering efficient convergence and optimal performance. This paper introduces \\(\\mathsf {TrainableHD} \\), a novel HDC framework that enables dynamic training of the randomly generated encoder depending on the feedback of the learning data, thereby addressing the static nature of conventional HDC encoders. \\(\\mathsf {TrainableHD} \\) also enhances the training performance by incorporating adaptive optimizer algorithms in learning the hypervectors. We further refine \\(\\mathsf {TrainableHD} \\) with effective quantization to enhance efficiency, allowing the execution of the inference phase in low-precision accelerators. Our evaluations demonstrate that \\(\\mathsf {TrainableHD} \\) significantly improves HDC accuracy by up to 27.99% (averaging 7.02%) without additional computational costs during inference, achieving a performance level comparable to state-of-the-art deep learning models. Furthermore, \\(\\mathsf {TrainableHD} \\) is optimized for execution speed and energy efficiency. Compared to deep learning on a low-power GPU platform like NVIDIA Jetson Xavier, \\(\\mathsf {TrainableHD} \\) is 56.4 times faster and 73 times more energy efficient. This efficiency is further augmented through the use of Encoder Interval Training (EIT) and adaptive optimizer algorithms, enhancing the training process without compromising the model’s accuracy.</p>","PeriodicalId":50944,"journal":{"name":"ACM Transactions on Design Automation of Electronic Systems","volume":null,"pages":null},"PeriodicalIF":2.2000,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Advancing Hyperdimensional Computing Based on Trainable Encoding and Adaptive Training for Efficient and Accurate Learning\",\"authors\":\"Jiseung Kim, Hyunsei Lee, Mohsen Imani, Yeseong Kim\",\"doi\":\"10.1145/3665891\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Hyperdimensional computing (HDC) is a computing paradigm inspired by the mechanisms of human memory, characterizing data through high-dimensional vector representations, known as hypervectors. Recent advancements in HDC have explored its potential as a learning model, leveraging its straightforward arithmetic and high efficiency. The traditional HDC frameworks are hampered by two primary static elements: randomly generated encoders and fixed learning rates. 
These static components significantly limit model adaptability and accuracy. The static, randomly generated encoders, while ensuring high-dimensional representation, fail to adapt to evolving data relationships, thereby constraining the model’s ability to accurately capture and learn from complex patterns. Similarly, the fixed nature of the learning rate does not account for the varying needs of the training process over time, hindering efficient convergence and optimal performance. This paper introduces \\\\(\\\\mathsf {TrainableHD} \\\\), a novel HDC framework that enables dynamic training of the randomly generated encoder depending on the feedback of the learning data, thereby addressing the static nature of conventional HDC encoders. \\\\(\\\\mathsf {TrainableHD} \\\\) also enhances the training performance by incorporating adaptive optimizer algorithms in learning the hypervectors. We further refine \\\\(\\\\mathsf {TrainableHD} \\\\) with effective quantization to enhance efficiency, allowing the execution of the inference phase in low-precision accelerators. Our evaluations demonstrate that \\\\(\\\\mathsf {TrainableHD} \\\\) significantly improves HDC accuracy by up to 27.99% (averaging 7.02%) without additional computational costs during inference, achieving a performance level comparable to state-of-the-art deep learning models. Furthermore, \\\\(\\\\mathsf {TrainableHD} \\\\) is optimized for execution speed and energy efficiency. Compared to deep learning on a low-power GPU platform like NVIDIA Jetson Xavier, \\\\(\\\\mathsf {TrainableHD} \\\\) is 56.4 times faster and 73 times more energy efficient. This efficiency is further augmented through the use of Encoder Interval Training (EIT) and adaptive optimizer algorithms, enhancing the training process without compromising the model’s accuracy.</p>\",\"PeriodicalId\":50944,\"journal\":{\"name\":\"ACM Transactions on Design Automation of Electronic Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2024-06-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Design Automation of Electronic Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1145/3665891\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Design Automation of Electronic Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3665891","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract


Hyperdimensional computing (HDC) is a computing paradigm inspired by the mechanisms of human memory, characterizing data through high-dimensional vector representations known as hypervectors. Recent advancements in HDC have explored its potential as a learning model, leveraging its straightforward arithmetic and high efficiency. Traditional HDC frameworks are hampered by two primary static elements: randomly generated encoders and fixed learning rates. These static components significantly limit model adaptability and accuracy. The static, randomly generated encoders, while ensuring high-dimensional representation, fail to adapt to evolving data relationships, thereby constraining the model's ability to accurately capture and learn from complex patterns. Similarly, a fixed learning rate does not account for the varying needs of the training process over time, hindering efficient convergence and optimal performance. This paper introduces TrainableHD, a novel HDC framework that enables dynamic training of the randomly generated encoder based on feedback from the learning data, thereby addressing the static nature of conventional HDC encoders. TrainableHD also enhances training performance by incorporating adaptive optimizer algorithms when learning the hypervectors. We further refine TrainableHD with effective quantization to enhance efficiency, allowing the inference phase to execute on low-precision accelerators. Our evaluations demonstrate that TrainableHD significantly improves HDC accuracy by up to 27.99% (averaging 7.02%) without additional computational costs during inference, achieving a performance level comparable to state-of-the-art deep learning models. Furthermore, TrainableHD is optimized for execution speed and energy efficiency: compared to deep learning on a low-power GPU platform such as the NVIDIA Jetson Xavier, TrainableHD is 56.4 times faster and 73 times more energy efficient. This efficiency is further augmented through the use of Encoder Interval Training (EIT) and adaptive optimizer algorithms, enhancing the training process without compromising the model's accuracy.
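To make the core idea concrete, below is a minimal, hypothetical sketch of a trainable HDC encoder in PyTorch: the random projection matrix that conventional HDC would freeze is registered as a learnable parameter and updated, together with the class hypervectors, by an adaptive optimizer (Adam). The class `TrainableHDC`, the tanh binding, the cosine-similarity scoring, and all hyperparameters are illustrative assumptions on our part, not the authors' reference implementation; the paper's exact encoding function and its Encoder Interval Training schedule may differ.

```python
# A minimal, hypothetical sketch of the TrainableHD idea, not the authors'
# implementation: an HDC classifier whose random-projection encoder is a
# learnable parameter updated by an adaptive optimizer (Adam) instead of
# being kept as a frozen random matrix.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrainableHDC(nn.Module):
    def __init__(self, n_features: int, dim: int, n_classes: int):
        super().__init__()
        # Conventional HDC would freeze this random projection; here it is
        # registered as a parameter so gradients from the loss can update it.
        self.encoder = nn.Parameter(torch.randn(n_features, dim) / dim ** 0.5)
        # One class hypervector per label, learned jointly with the encoder.
        self.class_hvs = nn.Parameter(torch.randn(n_classes, dim) / dim ** 0.5)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # Project features into the hyperdimensional space and bound the
        # result with tanh (one common nonlinear encoding; an assumption).
        return torch.tanh(x @ self.encoder)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between each sample hypervector and every class
        # hypervector serves as the class score.
        hv = F.normalize(self.encode(x), dim=-1)
        return hv @ F.normalize(self.class_hvs, dim=-1).T

model = TrainableHDC(n_features=64, dim=10_000, n_classes=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # adaptive optimizer

x = torch.randn(32, 64)          # toy feature batch
y = torch.randint(0, 10, (32,))  # toy labels
for step in range(100):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x) * 10.0, y)  # scale bounded scores to logits
    loss.backward()              # gradients also flow into the encoder
    optimizer.step()
```

After training, quantizing the learned `encoder` and `class_hvs` tensors to a low-precision integer format would be the natural route to the low-precision inference the abstract describes, since inference then reduces to one projection and one similarity search.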

Source Journal
ACM Transactions on Design Automation of Electronic Systems (Engineering & Technology / Computer Science: Software Engineering)
CiteScore: 3.20
Self-citation rate: 7.10%
Articles per year: 105
Review time: 3 months
Journal Introduction: TODAES is a premier ACM journal in design and automation of electronic systems. It publishes innovative work documenting significant research and development advances on the specification, design, analysis, simulation, testing, and evaluation of electronic systems, emphasizing a computer science/engineering orientation. Both theoretical analysis and practical solutions are welcome.