SPaaS-NFV: Enabling Stream-Processing-as-a-Service for NFV
Yu-Huei Tseng, G. Aravinthan, Sofiane Imadali, D. Houatra, Bruno Mongazon-Cazavet
DOI: 10.1109/SC2.2018.00021
2018 IEEE 8th International Symposium on Cloud and Service Computing (SC2), November 2018

Network Function Virtualization (NFV), as a new network paradigm, provides an opportunity to accelerate network innovation in next-generation mobile networks (MNs). Monitoring, in this context, supports analysis and gives better real-time insight into network functions; data on performance indicators and runtime conditions can be streamed to enhance this process. To enable automatic deployment of stream processing services, we present the SPaaS-NFV framework, implemented to demonstrate the concept of stream-processing-as-a-service for NFV. SPaaS-NFV automates the deployment of stream processing services upon receiving a user's request in JSON. The framework lets users focus on data and business insights without having to build stream processing infrastructure and tooling in the NFV environment.
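The abstract says deployment is driven by a JSON request but does not publish the schema. The sketch below shows what handling such a request might look like; the field names (`service_name`, `source`, `processing`, `sink`) and the engine/sink choices are illustrative assumptions, not SPaaS-NFV's actual interface.

```python
import json

# Hypothetical SPaaS-NFV deployment request; all field names below are
# illustrative assumptions, not the framework's published schema.
REQUEST = """
{
  "service_name": "vnf-kpi-analytics",
  "source": {"type": "kafka", "topic": "vnf-metrics"},
  "processing": {"engine": "flink", "parallelism": 2},
  "sink": {"type": "elasticsearch", "index": "kpi-insights"}
}
"""

def validate_request(raw: str) -> dict:
    """Parse a deployment request and check the fields a deployer would need."""
    req = json.loads(raw)
    for field in ("service_name", "source", "processing", "sink"):
        if field not in req:
            raise ValueError(f"missing field: {field}")
    return req

req = validate_request(REQUEST)
print(req["service_name"])  # vnf-kpi-analytics
```

A deployer component would then translate the validated request into concrete resources (e.g., a stream topology plus its data source and sink) in the NFV environment.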
Enhanced Cost Analysis of Multiple Virtual Machines Live Migration in VMware Environments
M. E. Elsaid, Shawish Ahmed, C. Meinel
DOI: 10.1109/SC2.2018.00010

Live migration is an important feature in modern software-defined datacenters and cloud computing environments: dynamic resource management, load balancing, power saving, and fault tolerance all depend on it. Despite its importance, the cost of live migration cannot be ignored and may degrade service availability. This cost includes migration time, downtime, CPU overhead, and network and power consumption. Many research articles discuss live migration cost from different angles: analyzing the cost and relating it to the parameters that control it, proposing new migration algorithms that minimize the cost, and predicting the migration cost. To the best of our knowledge, most papers on the migration cost problem focus on open-source hypervisors, and among the articles that focus on VMware environments, none has proposed models of migration time, network overhead, and power consumption for single- and multiple-VM live migration. In this paper, we propose empirical models for live migration time, network overhead, and power consumption for single- and multiple-VM migration. The proposed models are obtained using a VMware-based testbed.
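The paper's fitted empirical models are not reproduced in the abstract. As a rough stand-in, the sketch below implements the commonly used first-order pre-copy approximation of migration time (each round re-sends the memory dirtied during the previous round until the remainder fits in a stop-and-copy window); the parameter values are placeholders, not measurements from the paper's testbed.

```python
def precopy_migration_time(mem_mb, bw_mbps, dirty_mbps, stop_mb=64, max_rounds=30):
    """First-order pre-copy model (not the paper's fitted model).

    Returns (total_migration_time_s, downtime_s). Converges only when the
    page dirty rate is below the available migration bandwidth.
    """
    total_s, remaining = 0.0, float(mem_mb)
    for _ in range(max_rounds):
        round_s = remaining / bw_mbps       # time to copy the current dirty set
        total_s += round_s
        remaining = dirty_mbps * round_s    # pages dirtied while copying
        if remaining <= stop_mb:
            break
    downtime_s = remaining / bw_mbps        # final stop-and-copy phase
    return total_s + downtime_s, downtime_s

# Placeholder example: a 4 GB VM over a 1000 MB/s link, dirtying 100 MB/s.
total, downtime = precopy_migration_time(4096, 1000, 100)
```

For multiple concurrent migrations, a first approximation divides the available bandwidth among the in-flight VMs, which is one reason multi-VM migration cost does not scale linearly with VM count.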
QoS-Aware Service Composition Using HTN Planner
Y. Song, Qibo Sun, Ao Zhou, Shangguang Wang, Jinglin Li
DOI: 10.1109/SC2.2018.00022

Hierarchical Task Network (HTN) planning is an AI planning technique that can be employed to implement service composition. Current HTN-based service composition systems fail to solve the problem comprehensively because they consider only functional properties and ignore QoS constraints. In this paper, we address this issue by exploiting the HTN planner JSHOP2: we implement an automatic service composition system that extends JSHOP2 to consider both functional and non-functional properties. Experiments on the system demonstrate its effectiveness.
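The abstract gives no JSHOP2 code, so the toy sketch below only illustrates the core idea: decompose a task HTN-style via alternative methods, then accept a decomposition only if its aggregated QoS meets the constraint. The tasks, services, and QoS numbers are invented, and the greedy per-task service choice is far simpler than a real planner's search.

```python
# Alternative decompositions (methods) per compound task, HTN-style.
METHODS = {
    "book_trip": [["book_flight", "book_hotel"], ["book_train", "book_hotel"]],
}
# Candidate services per primitive task: (name, latency_ms, cost). All invented.
SERVICES = {
    "book_flight": [("FlightSvcA", 300, 9.0), ("FlightSvcB", 120, 14.0)],
    "book_train":  [("TrainSvc",   200, 5.0)],
    "book_hotel":  [("HotelSvc",   150, 7.0)],
}

def plan(task, max_latency_ms):
    """Return (service names, total cost) of the cheapest decomposition whose
    total latency satisfies the QoS bound, or None if no method qualifies."""
    best = None
    for decomposition in METHODS.get(task, []):
        # Greedily bind the cheapest service to each primitive task, then
        # verify the aggregated non-functional (QoS) constraint.
        choice = [min(SERVICES[t], key=lambda s: s[2]) for t in decomposition]
        latency = sum(s[1] for s in choice)
        cost = sum(s[2] for s in choice)
        if latency <= max_latency_ms and (best is None or cost < best[1]):
            best = ([s[0] for s in choice], cost)
    return best
```

With a 400 ms bound, the flight-based method (450 ms with the cheapest flight) is rejected and the train-based method (350 ms) is selected, showing how a QoS constraint can steer the choice among functionally equivalent decompositions.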
Design of the Cost Effective Execution Worker Scheduling Algorithm for FaaS Platform Using Two-Step Allocation and Dynamic Scaling
Youngho Kim, Gyuil Cha
DOI: 10.1109/SC2.2018.00027

Function as a Service (FaaS) has become widely prevalent in cloud computing with the evolution of the cloud computing paradigm and the growing demand for event-based computing models. We analyzed the preparation load required before a function actually executes, from the assignment of a function execution worker to the loading of the function on the FaaS platform, by running a dummy function on a simple FaaS prototype. The analysis shows that the first worker allocation costs 1,850 ms even when a lightweight container is used, and re-allocating a worker on the same node costs 470 ms. These results show that such a function service is not yet efficient enough to serve as a high-efficiency computing platform. We propose a new worker scheduling algorithm that appropriately distributes the workers' preparation load so that a FaaS platform becomes suitable for high-efficiency computing: the algorithm distributes worker allocation tasks over two steps before a request occurs and predicts in advance the number of workers that must be allocated. Applying the proposed worker scheduling algorithm to a FaaS platform under development, we estimate that worker allocation requests can be processed at less than 3% of the allocation cost of the prototype. We therefore expect that function services can become high-efficiency computing platforms through this significant improvement in worker allocation cost.
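A minimal sketch of the two-step idea follows: workers are created and kept warm ahead of time (the slow step), so binding a function to a worker at request time (the fast step) is cheap. The moving-average demand predictor below is our own illustrative assumption, not the paper's prediction scheme.

```python
from collections import deque

class WorkerScheduler:
    """Two-step allocation sketch: pre-warm workers from predicted demand
    (step 1, slow), then bind requests to warm workers (step 2, cheap)."""

    def __init__(self, window=3):
        self.warm = deque()                    # pre-allocated, idle workers
        self.history = deque(maxlen=window)    # recent per-tick request counts
        self.next_id = 0

    def observe(self, requests_in_tick):
        """Record demand and pre-warm workers for the predicted load.

        The moving-average predictor here is an illustrative placeholder.
        """
        self.history.append(requests_in_tick)
        predicted = round(sum(self.history) / len(self.history))
        while len(self.warm) < predicted:
            self.warm.append(f"worker-{self.next_id}")   # step 1: slow path
            self.next_id += 1

    def dispatch(self, fn_name):
        """Serve a request: cheap bind if a warm worker exists, else pay
        the full cold-allocation cost."""
        if self.warm:
            return (self.warm.popleft(), "warm")
        self.next_id += 1
        return (f"worker-{self.next_id - 1}", "cold")
```

After `observe(2)`, two warm workers exist, so the first two dispatches are cheap and only the third falls back to a cold allocation, which is how the pre-request allocation step hides the 1,850 ms first-allocation cost from the request path.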
SPDK Vhost-NVMe: Accelerating I/Os in Virtual Machines on NVMe SSDs via User Space Vhost Target
Ziye Yang, Changpeng Liu, Yanbo Zhou, Xiaodong Liu, Gang Cao
DOI: 10.1109/SC2.2018.00016

Nowadays, more and more NVMe SSDs (PCIe SSDs accessed via the NVMe protocol) are deployed and virtualized by cloud providers to improve the I/O experience in virtual machines rented by tenants. Although read and write IOPS and latency on NVMe SSDs have greatly improved, existing software cannot efficiently exploit the capabilities of these devices, and the situation is even worse on virtualized platforms. Applications in guest VMs face a long I/O stack when accessing NVMe SSDs, whose overhead can be divided into three parts: (1) I/O execution on the emulated NVMe device in the guest operating system (OS); (2) context-switch (e.g., VM_Exit) and data-movement overhead between the guest OS and the host OS; and (3) I/O execution overhead on the physical NVMe SSDs in the host OS. To address the long I/O stack, we propose SPDK-vhost-NVMe, an I/O service target built on user-space NVMe drivers that collaborates with the hypervisor to accelerate NVMe I/Os inside VMs. Our approach eliminates unnecessary VM_Exit overhead and shrinks the I/O execution stack in the host OS, improving storage I/O performance in the guest OS. Compared with QEMU's native NVMe emulation, SPDK-vhost-NVMe achieves up to 6X higher IOPS and 70% lower latency for some read workloads generated by FIO, and a 5X performance improvement on some db_bench test cases (e.g., random read) on RocksDB. Even compared with the optimized SPDK vhost-scsi and vhost-blk solutions, SPDK-vhost-NVMe remains competitive in per-core performance.
Anticipatory User Plane Management for 5G
Sebastian Peters, M. A. Khan
DOI: 10.1109/SC2.2018.00009

The 5G user plane is bound to play a significant role in fulfilling the dynamic demand created by the heterogeneous device layer, with novel concepts introducing flexible deployment of user plane functions (UPFs) and per-user traffic management. This paper focuses on dynamic control of 5G's SDN-based transport network to optimize user plane management for mobile users. We propose Anticipatory User Plane Management for 5G, which aims at optimized, learning-based, and foresighted user plane management that reduces the user plane reconfiguration latency caused by user mobility. In particular, we contribute two approaches that exploit prediction of user behavior to improve post-handover procedures: (i) selecting suitable intermediate UPFs based on anticipated user behavior, and (ii) pre-configuring the user data plane by means of a novel UPF mode.
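The abstract does not specify the prediction machinery, so the sketch below only illustrates the shape of approach (i): learn which cell a user typically moves to next, then pre-select the UPF serving that cell before the handover occurs. The first-order frequency predictor and the cell-to-UPF mapping are invented for illustration.

```python
from collections import Counter, defaultdict

class MobilityPredictor:
    """First-order mobility model: predict the most frequent next cell."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, cell, next_cell):
        self.transitions[cell][next_cell] += 1

    def predict(self, cell):
        """Most frequently observed successor of `cell` (None if unseen)."""
        nxt = self.transitions.get(cell)
        return nxt.most_common(1)[0][0] if nxt else None

# Invented mapping from serving cell to the UPF anchoring its traffic.
UPF_FOR_CELL = {"cell-A": "upf-1", "cell-B": "upf-2", "cell-C": "upf-2"}

def preselect_upf(predictor, current_cell):
    """Return the UPF to pre-configure before the anticipated handover."""
    target = predictor.predict(current_cell)
    return UPF_FOR_CELL.get(target)

p = MobilityPredictor()
for nxt in ["cell-B", "cell-B", "cell-C"]:
    p.observe("cell-A", nxt)
print(preselect_upf(p, "cell-A"))  # upf-2 (cell-B is the likeliest next cell)
```

Pre-configuring `upf-2` before the user leaves `cell-A` is what removes the reconfiguration latency from the post-handover critical path.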
Unikernels vs Containers: An In-Depth Benchmarking Study in the Context of Microservice Applications
Tom Goethals, Merlijn Sebrechts, A. Atrey, B. Volckaert, F. Turck
DOI: 10.1109/SC2.2018.00008

Unikernels are a relatively recent way to build and quickly deploy extremely small virtual machines that avoid much of the functional and operational software overhead of containers or full virtual machines by leaving out unnecessary parts. This paradigm aims to replace bulky virtual machines on the one hand, and to open up new classes of hardware for virtualization and networking applications on the other. In recent years, the toolchains used to create unikernels have grown from proofs of concept into platforms that can run both new and existing software written in various programming languages. This paper studies the performance (both execution time and memory footprint) of unikernels versus Docker containers for REST services and heavy processing workloads written in Java, Go, and Python. From the results of the performance evaluations, predictions can be made about which cases would benefit from unikernels over containers.
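To give a flavor of the execution-time/memory-footprint methodology (and only in spirit; this is not the paper's benchmark suite), a minimal harness for one workload might look like the following. A real unikernel-vs-container comparison would measure whole-image boot time and process RSS rather than Python heap usage.

```python
import time
import tracemalloc

def benchmark(workload, *args):
    """Run a workload once, returning (result, wall time, peak heap bytes).

    tracemalloc only tracks Python-level allocations; it stands in here for
    the full memory-footprint measurement a real study would perform.
    """
    tracemalloc.start()
    t0 = time.perf_counter()
    result = workload(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

def cpu_heavy(n):
    # A stand-in for the paper's heavy processing workload.
    return sum(i * i for i in range(n))

value, seconds, peak_bytes = benchmark(cpu_heavy, 100_000)
```

Running the same harness inside a container and inside a unikernel-packaged runtime, over both CPU-bound and REST workloads, is the kind of apples-to-apples setup the paper's comparison requires.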
Cloud Native 5G Virtual Network Functions: Design Principles and Use Cases
Sofiane Imadali, Ayoub Bousselmi
DOI: 10.1109/SC2.2018.00019

The advent of 5G, with its ever more stringent requirements for bandwidth, latency, and quality of service, pushes the boundaries of legacy Mobile Network Operators' (MNOs') technologies. Network Function Virtualization (NFV) is one promising attempt at solving those challenges: at its essence, NFV is about running network functions as software workloads on commodity hardware to optimize deployment costs and simplify the life-cycle management of network functions. However, with the advent of open-source cloud native tools and architectures, early VM-based NFV designs may need to be upgraded to better benefit from these trends. We review current NFV management solutions and propose a definition of the cloud native toolbox in the context of NFV. We then present 5GaaS, a cloud native software platform allowing MNOs to expose their assets (networking resources, mobile services, and cloud computing) to Over-The-Top players. We also introduce our open-source Cloud Native VNF API design as an application of the proposed design principles and discuss, from a standards perspective, the feasibility of our prototype.