Using Deep Reinforcement Learning (DRL) for minimizing power consumption in Video-on-Demand (VoD) storage systems

Impact Factor: 6.2 | CAS Tier 2 (Computer Science) | JCR Q1 (COMPUTER SCIENCE, THEORY & METHODS) | Future Generation Computer Systems: The International Journal of eScience | Pub Date: 2024-11-05 | DOI: 10.1016/j.future.2024.107582
Minseok Song, Mingoo Kwon
{"title":"利用深度强化学习(DRL)最大限度降低视频点播(VoD)存储系统的功耗","authors":"Minseok Song,&nbsp;Mingoo Kwon","doi":"10.1016/j.future.2024.107582","DOIUrl":null,"url":null,"abstract":"<div><div>As video streaming services such as Netflix become popular, resolving the problem of high power consumption arising from both large data size and high bandwidth in video storage systems has become important. However, because various factors, such as the power characteristics of heterogeneous storage devices, variable workloads, and disk array models, influence storage power consumption, reducing power consumption with deterministic policies is ineffective. To address this, we present a new deep reinforcement learning (DRL)-based file placement algorithm for replication-based video storage systems, which aims to minimize overall storage power consumption. We first model the video storage system with time-varying streaming workloads as the DRL environment, in which the agent aims to find power-efficient file placement. We then propose a proximal policy optimization (PPO) algorithm, consisting of (1) an action space that determines the placement of each file; (2) an observation space that allows the agent to learn a power-efficient placement based on the current I/O bandwidth utilization; (3) a reward model that assigns a greater penalty for increased power consumption for each action; and (4) an action masking model that supports effective learning by preventing agents from selecting unnecessary actions. Extensive simulations were performed to evaluate the proposed scheme under various solid-state disk (SSD) models and replication configurations. 
Results show that our scheme reduces storage power consumption by 5% to 25.8% (average 12%) compared to existing benchmark methods known to be effective for file placement.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"164 ","pages":"Article 107582"},"PeriodicalIF":6.2000,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Using Deep Reinforcement Learning (DRL) for minimizing power consumption in Video-on-Demand (VoD) storage systems\",\"authors\":\"Minseok Song,&nbsp;Mingoo Kwon\",\"doi\":\"10.1016/j.future.2024.107582\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>As video streaming services such as Netflix become popular, resolving the problem of high power consumption arising from both large data size and high bandwidth in video storage systems has become important. However, because various factors, such as the power characteristics of heterogeneous storage devices, variable workloads, and disk array models, influence storage power consumption, reducing power consumption with deterministic policies is ineffective. To address this, we present a new deep reinforcement learning (DRL)-based file placement algorithm for replication-based video storage systems, which aims to minimize overall storage power consumption. We first model the video storage system with time-varying streaming workloads as the DRL environment, in which the agent aims to find power-efficient file placement. 
We then propose a proximal policy optimization (PPO) algorithm, consisting of (1) an action space that determines the placement of each file; (2) an observation space that allows the agent to learn a power-efficient placement based on the current I/O bandwidth utilization; (3) a reward model that assigns a greater penalty for increased power consumption for each action; and (4) an action masking model that supports effective learning by preventing agents from selecting unnecessary actions. Extensive simulations were performed to evaluate the proposed scheme under various solid-state disk (SSD) models and replication configurations. Results show that our scheme reduces storage power consumption by 5% to 25.8% (average 12%) compared to existing benchmark methods known to be effective for file placement.</div></div>\",\"PeriodicalId\":55132,\"journal\":{\"name\":\"Future Generation Computer Systems-The International Journal of Escience\",\"volume\":\"164 \",\"pages\":\"Article 107582\"},\"PeriodicalIF\":6.2000,\"publicationDate\":\"2024-11-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Future Generation Computer Systems-The International Journal of Escience\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0167739X24005466\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Future Generation Computer Systems-The International Journal of 
Escience","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167739X24005466","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

As video streaming services such as Netflix become popular, resolving the problem of high power consumption arising from both large data size and high bandwidth in video storage systems has become important. However, because various factors, such as the power characteristics of heterogeneous storage devices, variable workloads, and disk array models, influence storage power consumption, reducing power consumption with deterministic policies is ineffective. To address this, we present a new deep reinforcement learning (DRL)-based file placement algorithm for replication-based video storage systems, which aims to minimize overall storage power consumption. We first model the video storage system with time-varying streaming workloads as the DRL environment, in which the agent aims to find power-efficient file placement. We then propose a proximal policy optimization (PPO) algorithm, consisting of (1) an action space that determines the placement of each file; (2) an observation space that allows the agent to learn a power-efficient placement based on the current I/O bandwidth utilization; (3) a reward model that assigns a greater penalty for increased power consumption for each action; and (4) an action masking model that supports effective learning by preventing agents from selecting unnecessary actions. Extensive simulations were performed to evaluate the proposed scheme under various solid-state disk (SSD) models and replication configurations. Results show that our scheme reduces storage power consumption by 5% to 25.8% (average 12%) compared to existing benchmark methods known to be effective for file placement.
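The four PPO components named in the abstract (a per-file placement action space, a bandwidth-utilization observation space, a power-penalty reward, and an action mask) can be pictured as a minimal DRL environment. The sketch below is an illustrative reconstruction, not the paper's implementation: the class name, the linear idle/active power model, and all device parameters are assumptions introduced here.

```python
class VodPlacementEnv:
    """Hypothetical sketch of the abstract's DRL environment.

    One episode places each video file replica on one of N SSDs.
    Action: index of the SSD that receives the current file.
    Observation: per-SSD I/O bandwidth utilization plus the bandwidth
    demand of the file being placed.
    Reward: negative increase in total storage power caused by the action.
    Action mask: SSDs without enough spare bandwidth are invalid choices.
    """

    def __init__(self, ssd_bandwidth, idle_power, active_power, file_demands):
        self.bw_cap = list(ssd_bandwidth)     # MB/s capacity per SSD (assumed)
        self.p_idle = list(idle_power)        # watts at zero load (assumed)
        self.p_active = list(active_power)    # watts at full load (assumed)
        self.files = list(file_demands)       # MB/s demand per file replica
        self.reset()

    def reset(self):
        self.util = [0.0] * len(self.bw_cap)  # allocated bandwidth per SSD
        self.t = 0                            # index of the next file to place
        return self._obs()

    def _obs(self):
        demand = self.files[self.t] if self.t < len(self.files) else 0.0
        return [u / c for u, c in zip(self.util, self.bw_cap)] + [demand]

    def _power(self, util):
        # Illustrative linear power model: idle power plus a load-proportional
        # part up to the active power at 100% bandwidth utilization.
        total = 0.0
        for u, c, pi, pa in zip(util, self.bw_cap, self.p_idle, self.p_active):
            load = min(max(u / c, 0.0), 1.0)
            total += pi + (pa - pi) * load
        return total

    def action_mask(self):
        # An SSD is a valid target only if it can absorb the file's demand;
        # masking such actions keeps the agent from wasting exploration.
        d = self.files[self.t]
        return [u + d <= c for u, c in zip(self.util, self.bw_cap)]

    def step(self, action):
        assert self.action_mask()[action], "masked (invalid) action selected"
        before = self._power(self.util)
        self.util[action] += self.files[self.t]
        after = self._power(self.util)
        self.t += 1
        # Greater power increase -> greater penalty, as in the reward model.
        return self._obs(), -(after - before), self.t == len(self.files)
```

A mask-aware PPO agent would typically set the logits of invalid actions to negative infinity before sampling, so only placements permitted by `action_mask()` are ever chosen; the environment above merely rejects masked actions outright.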
Source journal: Future Generation Computer Systems
CiteScore: 19.90
Self-citation rate: 2.70%
Articles per year: 376
Review time: 10.6 months
Aims and scope: Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications. Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration. Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.
Latest articles in this journal:
Identifying runtime libraries in statically linked linux binaries
High throughput edit distance computation on FPGA-based accelerators using HLS
In silico framework for genome analysis
Adaptive ensemble optimization for memory-related hyperparameters in retraining DNN at edge
Convergence-aware optimal checkpointing for exploratory deep learning training jobs