Reducing the cost of cold start time in serverless function executions using granularity trees

IF 6.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Future Generation Computer Systems-The International Journal of Escience | Pub Date: 2024-11-05 | DOI: 10.1016/j.future.2024.107604
Mahrad Hanaforoosh, Mohammad Abdollahi Azgomi, Mehrdad Ashtiani
{"title":"Reducing the cost of cold start time in serverless function executions using granularity trees","authors":"Mahrad Hanaforoosh,&nbsp;Mohammad Abdollahi Azgomi,&nbsp;Mehrdad Ashtiani","doi":"10.1016/j.future.2024.107604","DOIUrl":null,"url":null,"abstract":"<div><div>In serverless computing, cold starts significantly impede performance. This paper presents a granularity tree-based scheduling strategy, dynamically adjusting serverless function deployment by package dependencies to mitigate cold starts and optimize resource usage. This approach notably reduces cold start and response times. Empirical results from evaluating functions across various datasets show the strategy outperforms existing methods. Specifically, it consistently delivers lower response times and decreases resource consumption, demonstrating its effectiveness in managing computational resources while ensuring swift function invocation. In particular scenarios, the proposed scheduler impressively reduced response times from 8134.1 ms to 392.8 ms and idle memory usage from 15.2 GB to 11.2 GB per machine. In other scenarios, it reduced response times from 12,152.7 ms to 504.2 ms while maintaining a 100% function execution percentage. These quantified improvements underscore the significant enhancements in cold start mitigation and overall system performance, highlighting the potential of granularity tree-based scheduling in enhancing serverless computing architectures by effectively balancing rapid response with reduced resource usage.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"164 ","pages":"Article 107604"},"PeriodicalIF":6.2000,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Future Generation Computer Systems-The International Journal of Escience","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167739X24005685","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

In serverless computing, cold starts significantly impede performance. This paper presents a granularity tree-based scheduling strategy, dynamically adjusting serverless function deployment by package dependencies to mitigate cold starts and optimize resource usage. This approach notably reduces cold start and response times. Empirical results from evaluating functions across various datasets show the strategy outperforms existing methods. Specifically, it consistently delivers lower response times and decreases resource consumption, demonstrating its effectiveness in managing computational resources while ensuring swift function invocation. In particular scenarios, the proposed scheduler impressively reduced response times from 8134.1 ms to 392.8 ms and idle memory usage from 15.2 GB to 11.2 GB per machine. In other scenarios, it reduced response times from 12,152.7 ms to 504.2 ms while maintaining a 100% function execution percentage. These quantified improvements underscore the significant enhancements in cold start mitigation and overall system performance, highlighting the potential of granularity tree-based scheduling in enhancing serverless computing architectures by effectively balancing rapid response with reduced resource usage.
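To make the core idea concrete, the following is a minimal, hypothetical sketch in Python of a granularity tree as the abstract describes it: inner nodes group functions by shared package dependencies, so a warm container pre-loaded with a node's package set can serve every function in that node's subtree. All names here (GranularityNode, insert, the example packages and functions) are illustrative assumptions for exposition, not the paper's actual data structures or API.

```python
from dataclasses import dataclass, field

@dataclass
class GranularityNode:
    """One grouping in a (hypothetical) granularity tree.

    A warm container pre-loaded with `packages` can serve any function
    attached to this node or to any node in its subtree, because every
    dependency set in the subtree is a subset of `packages`.
    """
    packages: frozenset                              # dependencies preloaded at this granularity
    functions: list = field(default_factory=list)    # functions served directly at this node
    children: list = field(default_factory=list)     # strictly finer-grained groupings

    def insert(self, func_name, deps):
        """Attach a function at the deepest node whose package set covers its deps."""
        deps = frozenset(deps)
        assert deps <= self.packages, "the root must cover all dependencies"
        for child in self.children:
            if deps <= child.packages:
                return child.insert(func_name, deps)
        if deps < self.packages:
            # No existing child covers these dependencies: split off a finer node,
            # so future containers for this function preload fewer packages.
            child = GranularityNode(packages=deps, functions=[func_name])
            self.children.append(child)
            return child
        self.functions.append(func_name)
        return self


# Usage: the root holds the union of packages available on one machine.
root = GranularityNode(packages=frozenset({"numpy", "pandas", "requests"}))
root.insert("f_resize", {"numpy"})
root.insert("f_report", {"numpy", "pandas"})
root.insert("f_fetch", {"requests"})
```

A scheduler walking such a tree can trade memory for latency: keeping warm containers at coarser nodes covers more functions and so avoids more cold starts, at the cost of more idle preloaded packages, while finer nodes cut idle memory but serve fewer invocations. The sketch omits the rebalancing a real implementation would need when a later function's dependency set subsumes an existing node's.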
Source journal metrics:
CiteScore: 19.90
Self-citation rate: 2.70%
Articles published: 376
Review time: 10.6 months
Journal overview: Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications. Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration. Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.