Delay-aware VNF placement and chaining based on a flexible resource allocation approach
Abdelhamid Alleg, T. Ahmed, M. Mosbah, R. Riggio, R. Boutaba
2017 13th International Conference on Network and Service Management (CNSM), November 2017. DOI: 10.23919/CNSM.2017.8255993
Citations: 77
Abstract
Network Function Virtualization (NFV) is a promising technology that is receiving significant attention in both academia and industry. The NFV paradigm proposes to decouple Network Functions (NFs) from dedicated hardware equipment, offering better sharing of physical resources and providing more flexibility to network operators. However, in such an environment, efficient management mechanisms are crucial to address the problem of Placement and Chaining of Virtual Network Functions (PC-VNF). In this paper, we introduce a PC-VNF model based on a flexible resource allocation approach that takes service latency requirements into account, in addition to traditional connectivity and resource-utilization constraints. This is particularly important for emerging 5G services such as ultra-reliable low-latency communications and massive machine-type communications, where end-to-end performance must meet user expectations as well as service requirements to deliver the desired QoS/QoE. Our main goal is to determine the optimal VNF placement that minimizes resource consumption while guaranteeing a specified latency (i.e., end-to-end delay) and avoiding Service Level Agreement (SLA) violations, by constraining the resources allocated to each VNF so that it reaches its required performance. Results show that our approach achieves the required latency with better resource utilization than classical approaches, reducing resource consumption by up to 40% and increasing the request acceptance rate by recovering 15% to 60% of otherwise rejected requests.
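To make the "flexible resource allocation" idea concrete, the sketch below works through a toy version of the trade-off the abstract describes: given a service chain with a fixed end-to-end delay budget, allocate just enough CPU to each VNF to hit the budget instead of a fixed worst-case share. This is only an illustrative assumption, not the paper's formulation; the delay model (processing delay = workload / allocated CPU), the function name allocate_cpu, and the example numbers are all hypothetical.

```python
import math

def allocate_cpu(workloads, delay_budget, link_delay):
    """Toy delay-aware allocation for a VNF chain (not the paper's MILP).

    Assumes VNF i adds processing delay workloads[i] / cpu[i] and that the
    link delay of the already-chosen substrate path is fixed. Minimizing
    sum(cpu) subject to sum(w_i / c_i) <= budget gives c_i proportional to
    sqrt(w_i) (a standard Lagrangian argument for this convex problem).
    """
    processing_budget = delay_budget - link_delay
    if processing_budget <= 0:
        raise ValueError("delay budget already exhausted by link delays")
    scale = sum(math.sqrt(w) for w in workloads) / processing_budget
    return [math.sqrt(w) * scale for w in workloads]

# Hypothetical 3-VNF chain (e.g. firewall -> DPI -> NAT) with a 20 ms budget,
# 5 ms of which is consumed by propagation on the substrate path.
workloads = [4.0, 9.0, 1.0]          # abstract "work" units per packet
cpu = allocate_cpu(workloads, delay_budget=20.0, link_delay=5.0)
total_delay = 5.0 + sum(w / c for w, c in zip(workloads, cpu))
print("CPU shares:", [round(c, 3) for c in cpu])          # [0.8, 1.2, 0.4]
print("End-to-end delay:", round(total_delay, 3), "ms")   # 20.0 ms, on budget
```

Under this toy model the chain exactly meets its delay target with the smallest total CPU, which captures the intuition of right-sizing allocations to the latency requirement rather than over-provisioning each VNF; the paper's actual contribution is an optimization model that makes this decision jointly with placement and chaining on the substrate network.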