Dynamic Neural Accelerator for Reconfigurable & Energy-efficient Neural Network Inference

Nikolay Nez, Antonio N. Vilchez, H. Zohouri, Oleg Khavin, Sakyasingha Dasgupta
{"title":"Dynamic Neural Accelerator for Reconfigurable & Energy-efficient Neural Network Inference","authors":"Nikolay Nez, Antonio N. Vilchez, H. Zohouri, Oleg Khavin, Sakyasingha Dasgupta","doi":"10.1109/HCS52781.2021.9566886","DOIUrl":null,"url":null,"abstract":"Unique Challenges for AI Inference Hardware at the Edge • Peak TOPS or TOPS/Watt are not ideal measures of performance at the edge. Cannot prioritize performance over power efficiency (throughput/watt) • Many AI Hardware rely on batching to improve utilization. Unsuitable for streaming data (batch size 1) use-case at the edge • AI hardware architectures that fully cache network parameters using large on-chip SRAM cannot be scaled down easily to sizes applicable for edge workloads. • Need adaptability to new workloads and the ability to deploy multiple AI models • AI-specific accelerator needs to operate within heterogenous compute environments • Need for efficient compiler & scheduling to maximize compute utilization • Need for high software robustness and usability","PeriodicalId":246531,"journal":{"name":"2021 IEEE Hot Chips 33 Symposium (HCS)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Hot Chips 33 Symposium (HCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HCS52781.2021.9566886","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Unique challenges for AI inference hardware at the edge:

• Peak TOPS or TOPS/Watt are not ideal measures of performance at the edge; performance cannot be prioritized over power efficiency, i.e. achieved throughput per watt (see the first sketch after this list).
• Many AI hardware designs rely on batching to improve utilization, which is unsuitable for the streaming-data (batch size 1) use case at the edge (see the second sketch after this list).
• AI hardware architectures that fully cache network parameters in large on-chip SRAM cannot easily be scaled down to sizes appropriate for edge workloads.
• Adaptability to new workloads and the ability to deploy multiple AI models are required.
• An AI-specific accelerator needs to operate within heterogeneous compute environments.
• An efficient compiler and scheduler are needed to maximize compute utilization.
• High software robustness and usability are needed.
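The first bullet can be made concrete with a small worked example. The sketch below is illustrative only: the peak-TOPS, utilization, and power figures are hypothetical and are not numbers from the talk. It shows why a device with a high peak-TOPS rating can deliver worse real efficiency than a smaller device that keeps its compute units busy on single-stream inference.

```python
# Illustrative sketch with hypothetical numbers (not from the talk):
# why peak TOPS (or peak TOPS/W) can mislead at the edge, and why
# achieved throughput per watt at batch size 1 is the metric that matters.

def effective_tops(peak_tops: float, utilization: float) -> float:
    """Achieved throughput = peak throughput x fraction of compute actually busy."""
    return peak_tops * utilization

def throughput_per_watt(peak_tops: float, utilization: float, power_w: float) -> float:
    """Efficiency as seen by an edge workload running at batch size 1."""
    return effective_tops(peak_tops, utilization) / power_w

# Hypothetical accelerator A: high peak rating, poor utilization without batching.
a = throughput_per_watt(peak_tops=32.0, utilization=0.15, power_w=15.0)
# Hypothetical accelerator B: lower peak rating, but high utilization on a single stream.
b = throughput_per_watt(peak_tops=8.0, utilization=0.80, power_w=5.0)

print(f"A: {a:.2f} effective TOPS/W")  # ~0.32
print(f"B: {b:.2f} effective TOPS/W")  # ~1.28
```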
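The second bullet is likewise a latency-arithmetic argument. The sketch below, again with hypothetical timings, shows why batching helps utilization but hurts streaming workloads: a frame from a live sensor must wait for the batch to fill before any computation starts.

```python
# Illustrative sketch with hypothetical timings: per-frame latency when
# batching a live stream versus running each frame immediately (batch size 1).

def batched_latency_ms(batch_size: int, frame_interval_ms: float, compute_ms: float) -> float:
    """Worst-case latency for the first frame in a batch: wait for the
    remaining (batch_size - 1) frames to arrive, then run the whole batch."""
    return (batch_size - 1) * frame_interval_ms + compute_ms

frame_interval = 33.3  # a 30 FPS camera delivers a new frame every ~33 ms

# Batch size 1: latency is just the compute time.
print(batched_latency_ms(batch_size=1, frame_interval_ms=frame_interval, compute_ms=10.0))  # 10.0 ms
# Batch size 8: most of the latency is spent waiting for frames to accumulate.
print(batched_latency_ms(batch_size=8, frame_interval_ms=frame_interval, compute_ms=25.0))  # ~258 ms
```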