SpatialHD: Spatial Transformer Fused with Hyperdimensional Computing for AI Applications

M. Bettayeb, Eman Hassan, Baker Mohammad, H. Saleh
DOI: 10.1109/AICAS57966.2023.10168629
Published in: 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), 2023-06-11
Citations: 1

Abstract

Brain-inspired computing methods have shown remarkable efficiency and robustness compared to deep neural networks (DNN). In particular, HyperDimensional Computing (HDC) and the Vision Transformer (ViT) have demonstrated promising results in enabling effective and reliable cognitive learning. This paper proposes SpatialHD, the first framework that combines spatial transformer networks (STN) and HDC. First, SpatialHD exploits the STN, which explicitly allows spatial manipulation of data within the network. Then, it employs HDC to operate over the STN output by mapping feature maps into high-dimensional space, learning abstracted information, and classifying data. In addition, the STN output is resized to produce a smaller input feature map, which further reduces computational complexity and memory storage compared to HDC alone. Finally, to test the model's functionality, we applied SpatialHD to image classification on the MNIST and Fashion-MNIST datasets, using only 25% of each dataset for training. Our results show that SpatialHD improves accuracy by ≈ 8% and enhances efficiency by approximately 2.5x compared to base-HDC.
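To make the HDC stage concrete, the sketch below shows one common way feature maps can be encoded into high-dimensional space and classified, as the abstract describes. This is a minimal illustration of record-based HDC encoding, not the paper's exact pipeline: the dimensionality, feature-map size (assumed here to be a 7x7 map flattened to 49 values), quantization levels, and all function names (`encode`, `train`, `classify`) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 10_000       # hypervector dimensionality (typical HDC choice; assumption)
N_FEATURES = 49  # e.g. a downsized 7x7 STN output map, flattened (assumption)
N_LEVELS = 16    # quantization levels for feature values (assumption)
N_CLASSES = 10   # MNIST / Fashion-MNIST classes

# One random bipolar "position" hypervector per feature location.
position_hvs = rng.choice([-1, 1], size=(N_FEATURES, D))

# Correlated "level" hypervectors: nearby feature values map to similar codes,
# built by flipping a fixed fraction of bits between consecutive levels.
level_hvs = np.empty((N_LEVELS, D), dtype=int)
level_hvs[0] = rng.choice([-1, 1], size=D)
flip_per_step = D // (2 * (N_LEVELS - 1))
for i in range(1, N_LEVELS):
    level_hvs[i] = level_hvs[i - 1].copy()
    idx = rng.choice(D, size=flip_per_step, replace=False)
    level_hvs[i, idx] *= -1

def encode(features):
    """Encode a feature vector with values in [0, 1] as one bipolar hypervector."""
    levels = np.clip((features * (N_LEVELS - 1)).astype(int), 0, N_LEVELS - 1)
    bound = position_hvs * level_hvs[levels]  # bind each position with its value
    return np.sign(bound.sum(axis=0))         # bundle: elementwise majority vote

def train(samples, labels):
    """Bundle encoded training samples into one prototype hypervector per class."""
    protos = np.zeros((N_CLASSES, D))
    for x, y in zip(samples, labels):
        protos[y] += encode(x)
    return np.sign(protos)

def classify(protos, x):
    """Predict the class whose prototype is most similar (max dot product)."""
    return int(np.argmax(protos @ encode(x)))
```

Because both encoding and classification reduce to elementwise products, sums, and a dot product, shrinking the STN output map (smaller `N_FEATURES`) directly cuts the encoding work, which is the efficiency argument the abstract makes.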