Temporal Based Intelligent LRU Cache Construction

Pavan Nittur, Anuradha Kanukotla, Narendra Mutyala
{"title":"基于时序的智能LRU缓存构建","authors":"Pavan Nittur, Anuradha Kanukotla, Narendra Mutyala","doi":"10.1109/HiPC50609.2020.00045","DOIUrl":null,"url":null,"abstract":"In the Android platform, the cache-slots store applications upon their launch, which it later uses for prefetching. The Least Recently Used (LRU) based caching algorithm which governs these cache-slots can fail to maintain essential applications in the slot, especially in scenarios like memory-crunch, temporal-burst or volatile environment situations. The construction of these cache-slots can be ameliorated by selectively storing user critical applications before their launch. This reform would require a successful forecast of the user-app-launch pattern using intelligent machine learning agents without hindering the smooth execution of parallel processes. In this paper, we propose a sophisticated Temporal based Intelligent Process Management (TIPM) system, which learns to predict a Smart Application List (SAL) based on the usage pattern. Using SAL, we construct Intelligent LRU cache-slots, that retains essential user applications in the memory and provide improved launch rates. Our experimental results from testing TIPM with different users demonstrate significant improvement in cache-hit rate (95%) and yielding a gain of 26% to the current baseline (LRU), thereby making it a valuable enhancement to the platform.","PeriodicalId":375004,"journal":{"name":"2020 IEEE 27th International Conference on High Performance Computing, Data, and Analytics (HiPC)","volume":"111 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Temporal Based Intelligent LRU Cache Construction\",\"authors\":\"Pavan Nittur, Anuradha Kanukotla, Narendra Mutyala\",\"doi\":\"10.1109/HiPC50609.2020.00045\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the Android platform, the cache-slots store applications upon their launch, which it later uses for prefetching. The Least Recently Used (LRU) based caching algorithm which governs these cache-slots can fail to maintain essential applications in the slot, especially in scenarios like memory-crunch, temporal-burst or volatile environment situations. The construction of these cache-slots can be ameliorated by selectively storing user critical applications before their launch. This reform would require a successful forecast of the user-app-launch pattern using intelligent machine learning agents without hindering the smooth execution of parallel processes. In this paper, we propose a sophisticated Temporal based Intelligent Process Management (TIPM) system, which learns to predict a Smart Application List (SAL) based on the usage pattern. Using SAL, we construct Intelligent LRU cache-slots, that retains essential user applications in the memory and provide improved launch rates. 
Our experimental results from testing TIPM with different users demonstrate significant improvement in cache-hit rate (95%) and yielding a gain of 26% to the current baseline (LRU), thereby making it a valuable enhancement to the platform.\",\"PeriodicalId\":375004,\"journal\":{\"name\":\"2020 IEEE 27th International Conference on High Performance Computing, Data, and Analytics (HiPC)\",\"volume\":\"111 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE 27th International Conference on High Performance Computing, Data, and Analytics (HiPC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HiPC50609.2020.00045\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 27th International Conference on High Performance Computing, Data, and Analytics (HiPC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HiPC50609.2020.00045","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

On the Android platform, cache slots store applications upon launch and are later used for prefetching. The Least Recently Used (LRU) caching algorithm that governs these cache slots can fail to retain essential applications, especially under memory crunch, temporal bursts, or volatile usage environments. The construction of these cache slots can be improved by selectively storing user-critical applications before they are launched. This requires accurately forecasting the user's app-launch pattern with intelligent machine-learning agents, without hindering the smooth execution of parallel processes. In this paper, we propose a Temporal based Intelligent Process Management (TIPM) system, which learns to predict a Smart Application List (SAL) from the usage pattern. Using SAL, we construct intelligent LRU cache slots that retain essential user applications in memory and improve launch rates. Our experimental results from testing TIPM with different users show a cache-hit rate of 95%, a gain of 26% over the current LRU baseline, making TIPM a valuable enhancement to the platform.
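The failure mode the abstract describes follows directly from recency-only eviction. Below is a minimal Kotlin sketch, not taken from the paper, with hypothetical app names and a capacity of three, showing how a temporal burst of one-off launches evicts an essential, frequently used app from a plain LRU cache:

    // Baseline: a plain LRU cache over app names. LinkedHashMap with
    // accessOrder = true reorders entries on access, and removeEldestEntry
    // evicts the least recently used one once capacity is exceeded.
    class LruAppCache(private val capacity: Int) :
        LinkedHashMap<String, String>(capacity, 0.75f, true) {

        override fun removeEldestEntry(eldest: MutableMap.MutableEntry<String, String>): Boolean =
            size > capacity
    }

    fun main() {
        val cache = LruAppCache(3)
        cache["messenger"] = "proc-1"   // essential, frequently used app
        cache["browser"] = "proc-2"
        cache["maps"] = "proc-3"

        // Temporal burst: three one-off launches evict everything above,
        // including the essential app, because LRU tracks recency only.
        cache["game-a"] = "proc-4"
        cache["game-b"] = "proc-5"
        cache["game-c"] = "proc-6"

        println("messenger" in cache)   // prints false
    }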
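The paper does not specify its learning model in the abstract, so the sketch below stands in for the TIPM predictor with a deliberately naive heuristic: count launches per hour of day and return the top-k apps as the Smart Application List. The class name TemporalPredictor, the hour-of-day feature, and the parameter k are all illustrative assumptions, not the authors' design:

    // A heavily simplified stand-in for the TIPM predictor: tally launches per
    // (hour of day, app) and return the k most frequent apps for the current
    // hour as the Smart Application List (SAL).
    class TemporalPredictor(private val k: Int) {
        private val counts = HashMap<Int, HashMap<String, Int>>()

        // Record one observed launch, e.g. recordLaunch("messenger", 9).
        fun recordLaunch(app: String, hourOfDay: Int) {
            val byApp = counts.getOrPut(hourOfDay) { HashMap() }
            byApp[app] = (byApp[app] ?: 0) + 1
        }

        // Predict the SAL for a given hour: the k most frequently launched apps.
        fun predictSal(hourOfDay: Int): List<String> =
            counts[hourOfDay].orEmpty()
                .entries
                .sortedByDescending { it.value }
                .take(k)
                .map { it.key }
    }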
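Given a predicted SAL, one plausible reading of "intelligent LRU cache-slot construction" is an LRU cache whose SAL entries are pinned against eviction. The following sketch assumes that pin-set semantics, which the abstract implies but does not spell out:

    // A sketch (not the authors' code) of an "intelligent" LRU slot: apps on
    // the predicted SAL are pinned and skipped during eviction, so a burst of
    // one-off launches cannot push them out of the cache.
    class SalAwareLruCache(private val capacity: Int) {
        // accessOrder = true keeps iteration order from least to most recently used.
        private val entries = LinkedHashMap<String, String>(capacity, 0.75f, true)
        private var pinned: Set<String> = emptySet()

        // Refresh the pinned set whenever the predictor emits a new SAL.
        fun updateSal(sal: List<String>) { pinned = sal.toSet() }

        fun put(app: String, proc: String) {
            entries[app] = proc
            if (entries.size > capacity) {
                // Evict the least recently used unpinned app; if every entry
                // is pinned, the cache temporarily exceeds capacity.
                val victim = entries.keys.firstOrNull { it !in pinned }
                victim?.let { entries.remove(it) }
            }
        }

        operator fun contains(app: String): Boolean = entries.containsKey(app)
    }

Under this policy, the burst from the first sketch can no longer displace a pinned app; the cost is that a mispredicted SAL wastes slots, which is why the reported gains hinge on forecast accuracy.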
Latest articles in this journal
HiPC 2020 ORGANIZATION
HiPC 2020 Industry Sponsors
PufferFish: NUMA-Aware Work-stealing Library using Elastic Tasks
Algorithms for Preemptive Co-scheduling of Kernels on GPUs
27th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC 2020) Technical program