nn-Meter: towards accurate latency prediction of deep-learning model inference on diverse edge devices

L. Zhang, S. Han, Jianyu Wei, Ningxin Zheng, Ting Cao, Yuqing Yang, Yunxin Liu
{"title":"nn-Meter: towards accurate latency prediction of deep-learning model inference on diverse edge devices","authors":"L. Zhang, S. Han, Jianyu Wei, Ningxin Zheng, Ting Cao, Yuqing Yang, Yunxin Liu","doi":"10.1145/3458864.3467882","DOIUrl":null,"url":null,"abstract":"With the recent trend of on-device deep learning, inference latency has become a crucial metric in running Deep Neural Network (DNN) models on various mobile and edge devices. To this end, latency prediction of DNN model inference is highly desirable for many tasks where measuring the latency on real devices is infeasible or too costly, such as searching for efficient DNN models with latency constraints from a huge model-design space. Yet it is very challenging and existing approaches fail to achieve a high accuracy of prediction, due to the varying model-inference latency caused by the runtime optimizations on diverse edge devices. In this paper, we propose and develop nn-Meter, a novel and efficient system to accurately predict the inference latency of DNN models on diverse edge devices. The key idea of nn-Meter is dividing a whole model inference into kernels, i.e., the execution units on a device, and conducting kernel-level prediction. nn-Meter builds atop two key techniques: (i) kernel detection to automatically detect the execution unit of model inference via a set of well-designed test cases; and (ii) adaptive sampling to efficiently sample the most beneficial configurations from a large space to build accurate kernel-level latency predictors. Implemented on three popular platforms of edge hardware (mobile CPU, mobile GPU, and Intel VPU) and evaluated using a large dataset of 26,000 models, nn-Meter significantly outperforms the prior state-of-the-art.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"71","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3458864.3467882","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 71

Abstract

With the recent trend of on-device deep learning, inference latency has become a crucial metric in running Deep Neural Network (DNN) models on various mobile and edge devices. To this end, latency prediction of DNN model inference is highly desirable for many tasks where measuring the latency on real devices is infeasible or too costly, such as searching for efficient DNN models with latency constraints from a huge model-design space. Yet it is very challenging and existing approaches fail to achieve a high accuracy of prediction, due to the varying model-inference latency caused by the runtime optimizations on diverse edge devices. In this paper, we propose and develop nn-Meter, a novel and efficient system to accurately predict the inference latency of DNN models on diverse edge devices. The key idea of nn-Meter is dividing a whole model inference into kernels, i.e., the execution units on a device, and conducting kernel-level prediction. nn-Meter builds atop two key techniques: (i) kernel detection to automatically detect the execution unit of model inference via a set of well-designed test cases; and (ii) adaptive sampling to efficiently sample the most beneficial configurations from a large space to build accurate kernel-level latency predictors. Implemented on three popular platforms of edge hardware (mobile CPU, mobile GPU, and Intel VPU) and evaluated using a large dataset of 26,000 models, nn-Meter significantly outperforms the prior state-of-the-art.
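The kernel-level prediction idea described above can be illustrated with a short sketch. All names below (Kernel, KERNEL_PREDICTORS, predict_model_latency) and the coefficient values are illustrative placeholders, not nn-Meter's actual API or trained predictors; the sketch only shows the structure of the approach: decompose a model into the kernels the device backend actually executes, estimate each kernel's latency from its configuration, and sum the estimates.

```python
# Minimal sketch of kernel-level latency prediction, assuming hypothetical
# names and per-kernel cost functions. In nn-Meter, kernels are discovered by
# kernel detection (generated test cases probe the backend's fusion rules) and
# each kernel predictor is a regressor trained on latencies measured for
# configurations chosen by adaptive sampling.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Kernel:
    """One execution unit on the target device, e.g. a fused conv-bn-relu."""
    op_type: str                                              # kernel type
    config: Dict[str, float] = field(default_factory=dict)    # HW, CIN, COUT, ...


# Hypothetical per-kernel-type predictors (placeholders for trained regressors).
KERNEL_PREDICTORS: Dict[str, Callable[[Dict[str, float]], float]] = {
    "conv-bn-relu":   lambda c: 0.02 * c["HW"] ** 2 * c["CIN"] * c["COUT"] * 1e-6,
    "dwconv-bn-relu": lambda c: 0.05 * c["HW"] ** 2 * c["CIN"] * 1e-6,
    "fc":             lambda c: 0.01 * c["CIN"] * c["COUT"] * 1e-6,
}


def predict_model_latency(kernels: List[Kernel]) -> float:
    """Whole-model latency (ms) = sum of predicted latencies of its kernels."""
    return sum(KERNEL_PREDICTORS[k.op_type](k.config) for k in kernels)


if __name__ == "__main__":
    # A toy two-kernel model: one fused convolution block plus a classifier.
    model = [
        Kernel("conv-bn-relu", {"HW": 112, "CIN": 32, "COUT": 64}),
        Kernel("fc", {"CIN": 1280, "COUT": 1000}),
    ]
    print(f"predicted latency: {predict_model_latency(model):.3f} ms")
```

The design choice this reflects is that prediction happens at kernel granularity rather than per operator or per whole model, so the runtime's operator fusion on a given device is captured explicitly before any latency is estimated.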