{"title":"Demand Aware Edge Caching Architecture for evolved Multimedia Broadcast Multicast Service to Reduce Latency and bandwidth Savings","authors":"Sridharan Natarajan, Debarata Das","doi":"10.1109/CONECCT.2018.8482387","DOIUrl":null,"url":null,"abstract":"Network overload due to rampant increase in multimedia data consumption in LTE is the major cause of concern for telecom operators. To overcome the above said issue, operators are employing content caching in the LTE core network nodes like GW or in the access network nodes like eNodeB or a separate caching servers in the access or core networks and implementing efficient content distribution methods such as evolved multimedia broadcast and multicast service (eMBMS). To efficiently use the above methods, operators need to identify the right content and right location among the network of cached servers for caching. In this paper, we propose a novel idea for choosing the right content to cache based on the popularity rank calculated using the MooD (MBMS operation on Demand) framework. Furthermore, we proposed two new distribution architecture for the cached content in the LTE network based on the current demand estimation for the content. Our results reveal that, there are significant savings achieved in terms of reduced processing cost in the LTE network nodes and less latency from the proposed method against the existing concepts of non-caching standard architecture. We achieved around 27% improvement in latency and bandwidth from the proposed methods when the request for the content follows Zipfian type of distribution.","PeriodicalId":430389,"journal":{"name":"2018 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CONECCT.2018.8482387","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Network overload caused by the rapid growth of multimedia data consumption in LTE is a major concern for telecom operators. To address this, operators employ content caching in LTE core network nodes such as the GW, in access network nodes such as the eNodeB, or in separate caching servers in the access or core networks, and they deploy efficient content distribution methods such as evolved multimedia broadcast and multicast service (eMBMS). To use these methods effectively, operators need to identify the right content to cache and the right location for it among the networked caching servers. In this paper, we propose a novel approach for selecting the right content to cache based on a popularity rank calculated using the MooD (MBMS operation on Demand) framework. Furthermore, we propose two new distribution architectures for cached content in the LTE network based on the current demand estimate for that content. Our results show that the proposed method achieves significant savings in processing cost at the LTE network nodes and lower latency compared with the existing non-caching standard architecture. We achieved around 27% improvement in latency and bandwidth with the proposed methods when content requests follow a Zipfian distribution.
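The abstract does not describe the caching algorithm in detail; the following Python sketch only illustrates why popularity-ranked caching pays off when requests follow a Zipfian distribution, as assumed in the paper's evaluation. All parameters (catalogue size, cache size, latency values, Zipf exponent) are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Minimal sketch (not the authors' implementation): estimate latency and
# backhaul savings when the most popular items are cached at the edge and
# content requests follow a Zipfian popularity distribution.

rng = np.random.default_rng(0)

NUM_ITEMS = 1000        # catalogue size (assumed)
CACHE_SIZE = 100        # items the edge cache can hold (assumed)
NUM_REQUESTS = 100_000  # simulated requests (assumed)
ZIPF_EXPONENT = 1.0     # Zipf skew parameter (assumed)

# Zipfian popularity: probability of requesting the item of rank r ~ 1 / r^s
ranks = np.arange(1, NUM_ITEMS + 1)
popularity = 1.0 / ranks**ZIPF_EXPONENT
popularity /= popularity.sum()

# Demand-aware policy sketch: cache the top-ranked items by estimated popularity.
cached = np.arange(CACHE_SIZE)  # item indices 0..CACHE_SIZE-1 are the most popular

requests = rng.choice(NUM_ITEMS, size=NUM_REQUESTS, p=popularity)
hit_ratio = np.isin(requests, cached).mean()

# Illustrative latency figures (assumed, not taken from the paper):
EDGE_LATENCY_MS = 10.0   # served from an eNodeB-side cache
CORE_LATENCY_MS = 50.0   # fetched through the core network / origin server

avg_latency_cached = hit_ratio * EDGE_LATENCY_MS + (1 - hit_ratio) * CORE_LATENCY_MS
avg_latency_no_cache = CORE_LATENCY_MS

print(f"cache hit ratio:        {hit_ratio:.2%}")
print(f"avg latency w/ cache:   {avg_latency_cached:.1f} ms")
print(f"avg latency w/o cache:  {avg_latency_no_cache:.1f} ms")
print(f"backhaul traffic saved: {hit_ratio:.2%}")
```

With a skewed (Zipfian) request pattern, a small cache holding only the top-ranked items captures a large fraction of requests, which is the intuition behind selecting cache content by popularity rank.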