Benchmarking the Accuracy of Algorithms for Memory-Constrained Image Classification

S. Müksch, Theo X. Olausson, John Wilhelm, Pavlos Andreadis
Published in: 2020 IEEE/ACM Symposium on Edge Computing (SEC), November 2020
DOI: 10.1109/SEC50012.2020.00059
Citations: 1

Abstract

Convolutional Neural Networks, or CNNs, are the state of the art for image classification, but typically come at the cost of a large memory footprint. This limits their usefulness in edge computing applications, where memory is often a scarce resource. Recently, there has been significant progress in the field of image classification on such memory-constrained devices, with novel contributions like the ProtoNN, Bonsai and FastGRNN algorithms. These have been shown to reach up to 98.2% accuracy on optical character recognition using MNIST-10, with a memory footprint as little as 6KB. However, their potential on more complex multi-class and multi-channel image classification has yet to be determined. In this paper, we compare CNNs with ProtoNN, Bonsai and FastGRNN when applied to 3-channel image classification using CIFAR-10. For our analysis, we use the existing Direct Convolution algorithm to implement the CNNs memory-optimally and propose new methods of adjusting the FastGRNN model to work with multi-channel images. We extend the evaluation of each algorithm to a memory size budget of 8KB, 16KB, 32KB, 64KB and 128KB to show quantitatively that Direct Convolution CNNs perform best for all chosen budgets, with a top performance of 65.7% accuracy at a memory footprint of 58.23KB.
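The memory saving of the Direct Convolution approach mentioned above comes from evaluating each output value straight from the input patch, rather than first materialising an im2col buffer as standard GEMM-based implementations do. A minimal sketch of this idea in Python follows; the function name, shapes, and loop structure are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def direct_conv2d(image, kernels):
    """Naive direct convolution: no im2col buffer is materialised.

    image:   (C_in, H, W)          input, e.g. a 3-channel CIFAR-10 image
    kernels: (C_out, C_in, K, K)   square filters, stride 1, no padding
    returns: (C_out, H-K+1, W-K+1) output feature maps
    """
    c_out, c_in, k, _ = kernels.shape
    _, h, w = image.shape
    out = np.zeros((c_out, h - k + 1, w - k + 1))
    for o in range(c_out):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # Multiply the K x K x C_in patch against the filter
                # directly; the only working memory beyond the output
                # array is this one scalar accumulation.
                out[o, i, j] = np.sum(image[:, i:i + k, j:j + k] * kernels[o])
    return out

# 3-channel 32x32 input (CIFAR-10 sized), four 3x3 filters
x = np.random.rand(3, 32, 32)
w = np.random.rand(4, 3, 3, 3)
y = direct_conv2d(x, w)
print(y.shape)  # (4, 30, 30)
```

An im2col-based implementation would instead allocate a patch matrix of shape `(C_in * K * K, 30 * 30)` before the multiply; on a device with a total budget of 8-128KB, avoiding that intermediate buffer is what makes the memory-optimal CNN implementations in the paper feasible.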