Learning-Based Dynamic Memory Allocation Schemes for Apache Spark Data Processing

IF 5.3 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · IEEE Transactions on Cloud Computing · Pub Date: 2023-11-10 · DOI: 10.1109/TCC.2023.3329129
Danlin Jia;Li Wang;Natalia Valencia;Janki Bhimani;Bo Sheng;Ningfang Mi
{"title":"Learning-Based Dynamic Memory Allocation Schemes for Apache Spark Data Processing","authors":"Danlin Jia;Li Wang;Natalia Valencia;Janki Bhimani;Bo Sheng;Ningfang Mi","doi":"10.1109/TCC.2023.3329129","DOIUrl":null,"url":null,"abstract":"Apache Spark is an in-memory analytic framework that has been adopted in the industry and research fields. Two memory managers, Static and Unified, are available in Spark to allocate memory for caching Resilient Distributed Datasets (RDDs) and executing tasks. However, we find that the static memory manager (SMM) lacks flexibility, while the unified memory manager (UMM) puts heavy pressure on the garbage collection of the JVM on which Spark resides. To address these issues, we design a learning-based bidirectional usage-bounded memory allocation scheme to support dynamic memory allocation with the consideration of both memory demands and latency introduced by garbage collection. We first develop an auto-tuning memory manager (ATuMm) that adopts an intuitive feedback-based learning solution. However, ATuMm is a slow learner that can only alter the states of Java Virtual Memory (JVM) Heap in a limited range. That is, ATuMm decides to increase or decrease the boundary between the execution and storage memory pools by a fixed portion of JVM Heap size. To overcome this shortcoming, we further develop a new reinforcement learning-based memory manager (Q-ATuMm) that uses a Q-learning intelligent agent to dynamically learn and tune the partition of JVM Heap. We implement our new memory managers in Spark 2.4.0 and evaluate them by conducting experiments in a real Spark cluster. Our experimental results show that our memory manager can reduce the total garbage collection time and thus further improve Spark applications’ performance (i.e., reduced latency) compared to the existing Spark memory management solutions. By integrating our machine learning-driven memory manager into Spark, we can further obtain around 1.3x times reduction in the latency.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":null,"pages":null},"PeriodicalIF":5.3000,"publicationDate":"2023-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Cloud Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10315019/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Apache Spark is an in-memory analytics framework that has been widely adopted in both industry and research. Spark provides two memory managers, Static and Unified, to allocate memory for caching Resilient Distributed Datasets (RDDs) and executing tasks. However, we find that the static memory manager (SMM) lacks flexibility, while the unified memory manager (UMM) puts heavy pressure on the garbage collection of the JVM on which Spark resides. To address these issues, we design a learning-based, bidirectional, usage-bounded memory allocation scheme that supports dynamic memory allocation while considering both memory demands and the latency introduced by garbage collection. We first develop an auto-tuning memory manager (ATuMm) that adopts an intuitive feedback-based learning solution. However, ATuMm is a slow learner that can only alter the state of the Java Virtual Machine (JVM) heap within a limited range: it increases or decreases the boundary between the execution and storage memory pools by a fixed portion of the JVM heap size. To overcome this shortcoming, we further develop a reinforcement learning-based memory manager (Q-ATuMm) that uses a Q-learning agent to dynamically learn and tune the partitioning of the JVM heap. We implement both memory managers in Spark 2.4.0 and evaluate them through experiments on a real Spark cluster. Our results show that our memory managers reduce total garbage collection time and thus improve the performance (i.e., latency) of Spark applications compared to Spark's existing memory management solutions. By integrating the machine learning-driven memory manager into Spark, we obtain around a 1.3× reduction in latency.
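The abstract compresses two mechanisms that are easier to see in code. For context, in Spark 2.4 the static manager is selected via `spark.memory.useLegacyMode`, while the unified manager's execution/storage boundary is governed by `spark.memory.fraction` and `spark.memory.storageFraction`; ATuMm and Q-ATuMm instead move that boundary at runtime. Below is a minimal, hypothetical Python sketch of a Q-learning agent in the spirit of Q-ATuMm. The state discretization, the fixed-step action set (which mirrors ATuMm's fixed-portion adjustments), and the reward signal (negative GC time) are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only: state, actions, and reward are assumptions,
# not the published Q-ATuMm design.
import random
from collections import defaultdict

# Fixed-step actions: shift the storage/execution boundary by a fixed
# fraction of the JVM heap (as ATuMm does), or leave it in place.
ACTIONS = (-0.1, 0.0, +0.1)

class BoundaryTuner:
    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # Q[(state, action)] -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def discretize(self, storage_used, exec_used, gc_ms):
        # Bucket observed pool utilization and GC latency into a coarse state.
        return (round(storage_used, 1), round(exec_used, 1), min(gc_ms // 100, 9))

    def choose(self, state):
        if random.random() < self.epsilon:  # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

# One feedback cycle: observe, act, run a stage, reward = negative GC time.
tuner = BoundaryTuner()
s = tuner.discretize(storage_used=0.52, exec_used=0.31, gc_ms=240)
a = tuner.choose(s)
# ... apply the boundary shift `a` inside the memory manager, run the next stage ...
s_next = tuner.discretize(storage_used=0.48, exec_used=0.40, gc_ms=120)
tuner.update(s, a, reward=-120, next_state=s_next)
```

Framed this way, ATuMm corresponds to always applying a feedback-chosen fixed step, whereas the Q-learning agent learns from accumulated reward which step actually pays off in each observed memory/GC state.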
Source journal
IEEE Transactions on Cloud Computing (Computer Science - Software)
CiteScore: 9.40
Self-citation rate: 6.20%
Annual article count: 167
About the journal: The IEEE Transactions on Cloud Computing (TCC) is dedicated to the multidisciplinary field of cloud computing. It publishes articles that present innovative research ideas, application results, and case studies in cloud computing, focusing on key technical issues related to theory, algorithms, systems, applications, and performance.
Latest articles in this journal
WorkloadDiff: Conditional Denoising Diffusion Probabilistic Models for Cloud Workload Prediction
A Lightweight Privacy-Preserving Ciphertext Retrieval Scheme Based on Edge Computing
Generative Adversarial Privacy for Multimedia Analytics Across the IoT-Edge Continuum
Corrections to “DNN Surgery: Accelerating DNN Inference on the Edge through Layer Partitioning”
FedPAW: Federated Learning With Personalized Aggregation Weights for Urban Vehicle Speed Prediction