F-CNN: Faster CNN Exploiting Data Re-Use with Statistical Analysis

Fatmah Alantali, Y. Halawani, B. Mohammad, M. Al-Qutayri
{"title":"F-CNN:更快的CNN利用统计分析数据重用","authors":"Fatmah Alantali, Y. Halawani, B. Mohammad, M. Al-Qutayri","doi":"10.1109/AICAS57966.2023.10168606","DOIUrl":null,"url":null,"abstract":"Many of the current edge computing devices need efficient implementation of Artificial Intelligence (AI) applications due to strict latency, security and power requirements. Nonetheless, such devices, face various challenges when executing AI applications due to their limited computing and energy resources. In particular, Convolutional Neural Networks (CNN) is a popular machine learning method that derives a high-level function from being trained on various visual input examples. This paper contributes to enabling the use of CNN on resource-constrained devices offline, where a trade-off between accuracy, running time and power efficiency is verified. The paper investigates the use of minimum pre-processing methods of input data to identify nonessential computations in the convolutional layers. In this work, Spatial locality of input data is considered along with an efficient pre-processing method to mitigate the accuracy loss caused by the computational re-use approach. This technique was tested on LeNet and CIFAR-10 structures and was responsible for 1.9% and 1.6% accuracy loss while reducing the processing time by 38.3% and 20.9% and reducing the energy by 38.3%, and 20.7%, respectively. The models were deployed and verified on Raspberry Pi 4 B platform using the MATLAB coder to measure time and energy.","PeriodicalId":296649,"journal":{"name":"2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"F-CNN: Faster CNN Exploiting Data Re-Use with Statistical Analysis\",\"authors\":\"Fatmah Alantali, Y. Halawani, B. Mohammad, M. Al-Qutayri\",\"doi\":\"10.1109/AICAS57966.2023.10168606\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Many of the current edge computing devices need efficient implementation of Artificial Intelligence (AI) applications due to strict latency, security and power requirements. Nonetheless, such devices, face various challenges when executing AI applications due to their limited computing and energy resources. In particular, Convolutional Neural Networks (CNN) is a popular machine learning method that derives a high-level function from being trained on various visual input examples. This paper contributes to enabling the use of CNN on resource-constrained devices offline, where a trade-off between accuracy, running time and power efficiency is verified. The paper investigates the use of minimum pre-processing methods of input data to identify nonessential computations in the convolutional layers. In this work, Spatial locality of input data is considered along with an efficient pre-processing method to mitigate the accuracy loss caused by the computational re-use approach. This technique was tested on LeNet and CIFAR-10 structures and was responsible for 1.9% and 1.6% accuracy loss while reducing the processing time by 38.3% and 20.9% and reducing the energy by 38.3%, and 20.7%, respectively. 
The models were deployed and verified on Raspberry Pi 4 B platform using the MATLAB coder to measure time and energy.\",\"PeriodicalId\":296649,\"journal\":{\"name\":\"2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)\",\"volume\":\"38 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AICAS57966.2023.10168606\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AICAS57966.2023.10168606","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Many of the current edge computing devices need efficient implementations of Artificial Intelligence (AI) applications due to strict latency, security, and power requirements. Nonetheless, such devices face various challenges when executing AI applications because of their limited computing and energy resources. In particular, the Convolutional Neural Network (CNN) is a popular machine learning method that learns a high-level function by being trained on a variety of visual input examples. This paper contributes to enabling the use of CNNs on resource-constrained devices offline, where a trade-off between accuracy, running time, and power efficiency is verified. The paper investigates the use of minimal pre-processing of the input data to identify non-essential computations in the convolutional layers. In this work, the spatial locality of the input data is considered along with an efficient pre-processing method to mitigate the accuracy loss caused by the computational re-use approach. The technique was tested on the LeNet and CIFAR-10 architectures, where it incurred accuracy losses of 1.9% and 1.6% while reducing the processing time by 38.3% and 20.9% and the energy by 38.3% and 20.7%, respectively. The models were deployed and verified on a Raspberry Pi 4 B platform using MATLAB Coder to measure time and energy.
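The abstract describes the approach only at a high level and gives no algorithmic details or code. As a purely illustrative sketch of the general idea of computational re-use driven by spatial locality (not the authors' actual algorithm), the Python/NumPy example below skips the dot products at a spatial position whenever the current input patch is close enough to the neighbouring one and copies the previously computed outputs instead. The function and parameter names (`conv2d_reuse`, `threshold`, the kernel shapes) are hypothetical, and the threshold is derived from a simple statistical pass over the image only to mimic the paper's "minimum pre-processing" step.

```python
import numpy as np

def conv2d_reuse(image, kernels, threshold):
    """Valid 2-D convolution over a single-channel image with output re-use.

    If the current input patch differs from the patch one position to the
    left by less than `threshold` (mean absolute difference), the previously
    computed outputs are copied instead of recomputed.
    Illustrative only; not the algorithm from the paper.
    """
    n_k, kh, kw = kernels.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((n_k, out_h, out_w), dtype=image.dtype)
    reused = 0

    for i in range(out_h):
        prev_patch = None
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]
            if prev_patch is not None and np.mean(np.abs(patch - prev_patch)) < threshold:
                out[:, i, j] = out[:, i, j - 1]  # re-use the neighbouring result
                reused += 1
            else:
                # Full computation: one dot product per kernel.
                out[:, i, j] = np.tensordot(kernels, patch, axes=([1, 2], [0, 1]))
            prev_patch = patch
    return out, reused


if __name__ == "__main__":
    # Smooth synthetic 32x32 input so that neighbouring patches really are similar.
    xv, yv = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
    image = (np.sin(4 * xv) + np.cos(3 * yv)).astype(np.float32)

    rng = np.random.default_rng(0)
    kernels = rng.standard_normal((6, 5, 5)).astype(np.float32)  # 6 filters of 5x5

    # "Statistical" pre-processing step (an assumption, not from the paper):
    # pick the threshold from the distribution of horizontal pixel differences.
    threshold = np.quantile(np.abs(np.diff(image, axis=1)), 0.7)

    out, reused = conv2d_reuse(image, kernels, threshold)
    total = out.shape[1] * out.shape[2]
    print(f"re-used {reused}/{total} output positions")
```

This kind of skipping is the trade-off the paper reports: a small accuracy drop (1.9% on LeNet, 1.6% on CIFAR-10) in exchange for 38.3% and 20.9% reductions in processing time. In a sketch like the one above, the threshold directly controls how aggressively results are re-used.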