Understanding Serverless Inference in Mobile-Edge Networks: A Benchmark Approach

IEEE Transactions on Cloud Computing · Impact Factor 5.0 · Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) · CAS Tier 2 (Computer Science)
Publication date: 2024-12-24 · DOI: 10.1109/TCC.2024.3521657 · Volume 13, Issue 1, pp. 198-212
Junhong Chen; Yanying Lin; Shijie Peng; Shuaipeng Wu; Kenneth Kent; Hao Dai; Kejiang Ye; Yang Wang
Citations: 0

Abstract

Although the emerging serverless paradigm has the potential to become a dominant way of deploying cloud-service tasks across millions of mobile and IoT devices, the overhead characteristics of executing these tasks on such a volume of mobile devices remain largely unclear. To address this issue, this paper conducts a deep analysis based on the OpenFaaS platform, a popular open-source serverless platform for mobile edge environments, to investigate the overhead of performing deep learning inference tasks on mobile devices. To thoroughly evaluate the inference overhead, we develop a performance benchmark, named ESBench, with which a set of comprehensive experiments is conducted on a collection of simulated mobile devices associated with an edge cluster. Our investigation reveals that the performance of deep learning inference tasks is significantly influenced by model size and resource contention on mobile devices, leading to up to $3\times$ performance degradation. Moreover, we observe that the network environment can negatively impact the performance of mobile inference, increasing CPU overhead under poor network conditions. Based on our findings, we further propose recommendations for designing efficient serverless platforms and resource management strategies, as well as for deploying serverless computing in the mobile edge environment.
Source Journal

IEEE Transactions on Cloud Computing (Computer Science – Software)
CiteScore: 9.40 · Self-citation rate: 6.20% · Articles per year: 167
Journal Description: The IEEE Transactions on Cloud Computing (TCC) is dedicated to the multidisciplinary field of cloud computing. It is committed to the publication of articles that present innovative research ideas, application results, and case studies in cloud computing, focusing on key technical issues related to theory, algorithms, systems, applications, and performance.
Latest Articles in This Journal

Smart-to-Compress: A Predictive and Game-Theoretic Framework for Data Reduction Decisions
Side Channel Attacks on Resource-Constrained Devices Enabled Through Secure Cloud Outsourcing
Security Weaknesses of a Lightweight Privacy-Preserving Edge Computing Based Ciphertext Retrieval Scheme
Real-Time Adaptive Workflow Scheduling With Graph Learning and Transformer-Driven Reinforcement in Heterogeneous Clouds
Transfer Learning-Enabled System for Drone Medicine Delivery Based on Spatio-Temporal Remote Sensing Data in Edge Cloud Networks