Reinforcement learning - based adaptation and scheduling methods for multi-source DASH

IF 1.2 | Q4, COMPUTER SCIENCE, INFORMATION SYSTEMS | Computer Science and Information Systems | Pub Date: 2023-07-25 | DOI: 10.2298/csis220927055n
Nghia T. Nguyen, Long Luu, Phuong Vo, Sang Nguyen, Cuong T. Do, Ngoc-Thanh Nguyen
{"title":"基于强化学习的多源DASH自适应调度方法","authors":"Nghia T. Nguyen, Long Luu, Phuong Vo, Sang Nguyen, Cuong T. Do, Ngoc-Thanh Nguyen","doi":"10.2298/csis220927055n","DOIUrl":null,"url":null,"abstract":"Dynamic adaptive streaming over HTTP (DASH) has been widely used in video\n streaming recently. In DASH, the client downloads video chunks in order from\n a server. The rate adaptation function at the video client enhances the\n user?s quality-of-experience (QoE) by choosing a suitable quality level for\n each video chunk to download based on the network condition. Today networks\n such as content delivery networks, edge caching networks, content centric\n networks, etc. usually replicate video contents on multiple cache nodes. We\n study video streaming from multiple sources in this work. In multi-source\n streaming, video chunks may arrive out of order due to different conditions\n of the network paths. Hence, to guarantee a high QoE, the video client needs\n not only rate adaptation, but also chunk scheduling. Reinforcement learning\n (RL) has emerged as the state-of-the-art control method in various fields\n in recent years. This paper proposes two algorithms for streaming from\n multiple sources: RL-based adaptation with greedy scheduling (RLAGS) and\n RL-based adaptation and scheduling (RLAS). We also build a simulation\n environment for training and evaluation. The efficiency of the proposed\n algorithms is proved via extensive simulations with real-trace data.","PeriodicalId":50636,"journal":{"name":"Computer Science and Information Systems","volume":"95 1","pages":"157-173"},"PeriodicalIF":1.2000,"publicationDate":"2023-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Reinforcement learning - based adaptation and scheduling methods for multi-source DASH\",\"authors\":\"Nghia T. Nguyen, Long Luu, Phuong Vo, Sang Nguyen, Cuong T. Do, Ngoc-Thanh Nguyen\",\"doi\":\"10.2298/csis220927055n\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Dynamic adaptive streaming over HTTP (DASH) has been widely used in video\\n streaming recently. In DASH, the client downloads video chunks in order from\\n a server. The rate adaptation function at the video client enhances the\\n user?s quality-of-experience (QoE) by choosing a suitable quality level for\\n each video chunk to download based on the network condition. Today networks\\n such as content delivery networks, edge caching networks, content centric\\n networks, etc. usually replicate video contents on multiple cache nodes. We\\n study video streaming from multiple sources in this work. In multi-source\\n streaming, video chunks may arrive out of order due to different conditions\\n of the network paths. Hence, to guarantee a high QoE, the video client needs\\n not only rate adaptation, but also chunk scheduling. Reinforcement learning\\n (RL) has emerged as the state-of-the-art control method in various fields\\n in recent years. This paper proposes two algorithms for streaming from\\n multiple sources: RL-based adaptation with greedy scheduling (RLAGS) and\\n RL-based adaptation and scheduling (RLAS). We also build a simulation\\n environment for training and evaluation. 
The efficiency of the proposed\\n algorithms is proved via extensive simulations with real-trace data.\",\"PeriodicalId\":50636,\"journal\":{\"name\":\"Computer Science and Information Systems\",\"volume\":\"95 1\",\"pages\":\"157-173\"},\"PeriodicalIF\":1.2000,\"publicationDate\":\"2023-07-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Science and Information Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.2298/csis220927055n\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Science and Information Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.2298/csis220927055n","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Dynamic adaptive streaming over HTTP (DASH) has recently been widely used in video streaming. In DASH, the client downloads video chunks in order from a server. The rate adaptation function at the video client enhances the user's quality of experience (QoE) by choosing a suitable quality level for each video chunk to download based on the network condition. Today's networks, such as content delivery networks, edge caching networks, and content-centric networks, usually replicate video content on multiple cache nodes. In this work, we study video streaming from multiple sources. In multi-source streaming, video chunks may arrive out of order due to the differing conditions of the network paths. Hence, to guarantee a high QoE, the video client needs not only rate adaptation but also chunk scheduling. Reinforcement learning (RL) has emerged as a state-of-the-art control method in various fields in recent years. This paper proposes two algorithms for streaming from multiple sources: RL-based adaptation with greedy scheduling (RLAGS) and RL-based adaptation and scheduling (RLAS). We also build a simulation environment for training and evaluation. The efficiency of the proposed algorithms is demonstrated via extensive simulations with real-trace data.
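The abstract does not specify the scheduling rule, but a greedy scheduler in this setting typically assigns each missing chunk to the network path expected to deliver it soonest. The following Python sketch is illustrative only: it assumes a simple throughput-based completion-time estimate, and the names Source and schedule_next_chunk are hypothetical rather than taken from the paper.

from dataclasses import dataclass
from typing import List

@dataclass
class Source:
    # One cache node serving the video, with a smoothed throughput estimate.
    name: str
    est_throughput_bps: float   # bits per second, estimated from past downloads
    queued_bits: float = 0.0    # bits assigned to this source but not yet delivered

    def est_finish_time(self, chunk_bits: float) -> float:
        # Seconds until this source would finish one more chunk of the given size.
        return (self.queued_bits + chunk_bits) / self.est_throughput_bps

def schedule_next_chunk(sources: List[Source], chunk_bits: float) -> Source:
    # Greedy rule: send the next missing chunk to the source expected to
    # deliver it earliest, then record the added load on that source.
    best = min(sources, key=lambda s: s.est_finish_time(chunk_bits))
    best.queued_bits += chunk_bits
    return best

if __name__ == "__main__":
    paths = [Source("edge-cache", 4e6), Source("origin-server", 2.5e6)]
    for i in range(6):
        chosen = schedule_next_chunk(paths, chunk_bits=2e6)  # roughly a 2 Mb chunk
        print(f"chunk {i} -> {chosen.name}")

Under these assumptions, the faster path absorbs chunks until its backlog makes the slower path competitive. Judging by the acronym expansions, RLAGS pairs an RL-based rate adaptation policy with a greedy scheduler of this kind, while RLAS learns the scheduling decisions as well.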
Source journal: Computer Science and Information Systems (COMPUTER SCIENCE, INFORMATION SYSTEMS; COMPUTER SCIENCE, SOFTWARE ENGINEERING)
CiteScore: 2.30 | Self-citation rate: 21.40% | Articles per year: 76 | Review time: 7.5 months
About the journal: Computer Science and Information Systems (ComSIS) is an international refereed journal, published in Serbia. The objective of ComSIS is to communicate important research and development results in the areas of computer science, software engineering, and information systems.