Demand Aware Edge Caching Architecture for evolved Multimedia Broadcast Multicast Service to Reduce Latency and bandwidth Savings

Sridharan Natarajan, Debabrata Das
{"title":"Demand Aware Edge Caching Architecture for evolved Multimedia Broadcast Multicast Service to Reduce Latency and bandwidth Savings","authors":"Sridharan Natarajan, Debarata Das","doi":"10.1109/CONECCT.2018.8482387","DOIUrl":null,"url":null,"abstract":"Network overload due to rampant increase in multimedia data consumption in LTE is the major cause of concern for telecom operators. To overcome the above said issue, operators are employing content caching in the LTE core network nodes like GW or in the access network nodes like eNodeB or a separate caching servers in the access or core networks and implementing efficient content distribution methods such as evolved multimedia broadcast and multicast service (eMBMS). To efficiently use the above methods, operators need to identify the right content and right location among the network of cached servers for caching. In this paper, we propose a novel idea for choosing the right content to cache based on the popularity rank calculated using the MooD (MBMS operation on Demand) framework. Furthermore, we proposed two new distribution architecture for the cached content in the LTE network based on the current demand estimation for the content. Our results reveal that, there are significant savings achieved in terms of reduced processing cost in the LTE network nodes and less latency from the proposed method against the existing concepts of non-caching standard architecture. We achieved around 27% improvement in latency and bandwidth from the proposed methods when the request for the content follows Zipfian type of distribution.","PeriodicalId":430389,"journal":{"name":"2018 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CONECCT.2018.8482387","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Network overload caused by the rampant increase in multimedia data consumption over LTE is a major concern for telecom operators. To overcome this issue, operators employ content caching in LTE core network nodes such as the gateway (GW), in access network nodes such as the eNodeB, or in separate caching servers in the access or core networks, and they implement efficient content distribution methods such as evolved Multimedia Broadcast Multicast Service (eMBMS). To use these methods effectively, operators must identify the right content to cache and the right location for it among the network of caching servers. In this paper, we propose a novel approach for choosing the content to cache based on a popularity rank calculated using the MooD (MBMS operation on Demand) framework. Furthermore, we propose two new distribution architectures for the cached content in the LTE network based on the estimated current demand for that content. Our results reveal significant savings in processing cost at the LTE network nodes and lower latency with the proposed method compared with the existing non-caching standard architecture. We achieve around 27% improvement in latency and bandwidth with the proposed methods when content requests follow a Zipfian distribution.
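The abstract does not describe the caching algorithm in detail; as a rough illustration of why popularity-rank caching pays off when demand is Zipf-skewed, the sketch below simulates Zipf-distributed content requests and an edge cache that holds only the top-ranked items. All numeric parameters (catalog size, cache capacity, Zipf exponent, request count) are illustrative assumptions, not values from the paper, and the a-priori ranking stands in for the demand counts that MooD would collect from consumption reports.

```python
# A minimal sketch (not the authors' implementation) of popularity-rank-based
# edge caching under Zipf-distributed content requests.
import numpy as np

CATALOG_SIZE = 1000    # number of distinct content items (assumed)
CACHE_SLOTS = 100      # capacity of the edge cache (assumed)
ZIPF_EXPONENT = 0.8    # skew of the request popularity (assumed)
NUM_REQUESTS = 100_000

rng = np.random.default_rng(seed=42)

# Zipf popularity: P(request item of rank r) is proportional to 1 / r^s.
ranks = np.arange(1, CATALOG_SIZE + 1)
popularity = 1.0 / ranks ** ZIPF_EXPONENT
popularity /= popularity.sum()

# Draw requests according to the Zipf law (rank 1 = most popular item).
requests = rng.choice(ranks, size=NUM_REQUESTS, p=popularity)

# Demand-aware policy: keep only the top-ranked items at the edge.
# In MooD the rank would be derived from per-content demand reports.
cached = set(ranks[:CACHE_SLOTS])

hits = np.isin(requests, list(cached)).sum()
print(f"Edge cache hit ratio: {hits / NUM_REQUESTS:.2%}")
print(f"Fetched from the core network: {(NUM_REQUESTS - hits) / NUM_REQUESTS:.2%}")
```

With a skewed (Zipfian) request pattern, caching only 10% of the catalog at the edge already serves a large share of requests locally, which is the intuition behind the latency and bandwidth savings reported above.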