
2018 IEEE 8th International Symposium on Cloud and Service Computing (SC2): Latest Publications

SPaaS-NFV: Enabling Stream-Processing-as-a-Service for NFV
Pub Date : 2018-11-01 DOI: 10.1109/SC2.2018.00021
Yu-Huei Tseng, G. Aravinthan, Sofiane Imadali, D. Houatra, Bruno Mongazon-Cazavet
Network Function Virtualization (NFV), as a new network paradigm, provides an opportunity to accelerate network innovation in the next-generation mobile network (MN). Monitoring, in this case, supports analysis and gives better real-time insight into network functions. Data related to performance indicators and runtime conditions can be streamed to enhance this process. To enable automatic deployment of the stream processing service, we present the SPaaS-NFV framework, which is implemented to demonstrate the concept of stream-processing-as-a-service for NFV. SPaaS-NFV can automate the deployment of stream processing services by receiving the user's request in JSON. This framework lets users focus on data and business insights, without the worry of building stream processing infrastructure and tooling in the NFV environment.
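To make the idea concrete, a minimal sketch of how a JSON deployment request might be turned into a stream-processing deployment plan is shown below; the request schema, field names, and engine choice are illustrative assumptions, not the actual SPaaS-NFV interface.

```python
import json

# Hypothetical request: the actual SPaaS-NFV schema is not given in the abstract,
# so the field names (job_name, source, engine, workers) are illustrative only.
EXAMPLE_REQUEST = """
{
  "job_name": "vnf-kpi-monitor",
  "source": {"type": "kafka", "topic": "vnf-metrics"},
  "engine": "flink",
  "workers": 2
}
"""

def plan_deployment(request_json: str) -> dict:
    """Turn a user's JSON request into a deployment plan for a stream engine."""
    req = json.loads(request_json)
    return {
        "service_name": req["job_name"],
        "engine_image": f"{req['engine']}:latest",  # container image to deploy
        "replicas": req.get("workers", 1),          # default to a single worker
        "input": req["source"],
    }

if __name__ == "__main__":
    print(json.dumps(plan_deployment(EXAMPLE_REQUEST), indent=2))
```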
Citations: 0
Enhanced Cost Analysis of Multiple Virtual Machines Live Migration in VMware Environments
Pub Date : 2018-11-01 DOI: 10.1109/SC2.2018.00010
M. E. Elsaid, Shawish Ahmed, C. Meinel
Live migration is an important feature in modern software-defined datacenters and cloud computing environments. Dynamic resource management, load balancing, power saving and fault tolerance all depend on the live migration feature. Despite its importance, the cost of live migration cannot be ignored and may result in degraded service availability. Live migration cost includes the migration time, downtime, CPU overhead, network overhead and power consumption. Many research articles discuss the live migration cost problem from different angles, such as analyzing the cost and relating it to the parameters that control it, proposing new migration algorithms that minimize the cost, and predicting the migration cost. To the best of our knowledge, most of the papers that discuss the migration cost problem focus on open source hypervisors. Among the research articles that focus on VMware environments, none of the published articles has proposed models of migration time, network overhead and power consumption for single and multiple VM live migration. In this paper, we propose empirical models for the live migration time, network overhead and power consumption of single and multiple VM migration. The proposed models are obtained using a VMware-based testbed.
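As an illustration of what an empirical cost model can look like, the sketch below fits a simple linear migration-time model to testbed-style measurements; the linear form and the sample data points are assumptions for illustration, not the models or measurements from the paper.

```python
# The paper's actual model form and coefficients are not given in the abstract; the
# linear form used here (time = a * memory + b) and the sample points are assumptions.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b, returned as (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    return a, mean_y - a * mean_x

# Hypothetical testbed measurements: (active memory in GB, observed migration time in s)
memory_gb = [1, 2, 4, 8, 16]
migration_s = [4.1, 7.8, 15.2, 30.5, 61.0]

a, b = fit_linear(memory_gb, migration_s)
print(f"predicted migration time for a 12 GB VM: {a * 12 + b:.1f} s")
```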
Citations: 2
QoS-Aware Service Composition Using HTN Planner
Pub Date : 2018-11-01 DOI: 10.1109/SC2.2018.00022
Y. Song, Qibo Sun, Ao Zhou, Shangguang Wang, Jinglin Li
Hierarchical Task Network (HTN) planning is an AI planning technique that can be employed to implement service composition. Current HTN-based service composition systems fail to solve the problem comprehensively because they consider only functional properties and ignore QoS constraints. In this paper, we address this issue by exploiting the HTN planner JSHOP2. We implement an automatic service composition system by extending JSHOP2 to take both functional and non-functional properties into account. Furthermore, we conduct experiments on the system, and the experiments show its effectiveness.
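The sketch below illustrates the QoS-aware selection step in isolation: for each task produced by a decomposition, pick a concrete service that satisfies the QoS constraints. JSHOP2 itself is a Java planner; the candidate services, QoS values, and selection policy here are hypothetical.

```python
# Candidate services per abstract task, with hypothetical QoS attributes.
candidates = {
    "book_flight": [
        {"name": "FlightSvcA", "latency_ms": 120, "cost": 0.05, "availability": 0.999},
        {"name": "FlightSvcB", "latency_ms": 40,  "cost": 0.20, "availability": 0.990},
    ],
    "book_hotel": [
        {"name": "HotelSvcA", "latency_ms": 300, "cost": 0.02, "availability": 0.950},
        {"name": "HotelSvcB", "latency_ms": 80,  "cost": 0.10, "availability": 0.999},
    ],
}

def select(task, max_latency_ms=200, min_availability=0.99):
    """Pick the cheapest candidate that meets the QoS constraints, or None."""
    feasible = [s for s in candidates[task]
                if s["latency_ms"] <= max_latency_ms
                and s["availability"] >= min_availability]
    return min(feasible, key=lambda s: s["cost"]) if feasible else None

plan = [select(task) for task in ("book_flight", "book_hotel")]
print([s["name"] for s in plan if s is not None])
```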
Citations: 1
Design of the Cost Effective Execution Worker Scheduling Algorithm for FaaS Platform Using Two-Step Allocation and Dynamic Scaling
Pub Date : 2018-11-01 DOI: 10.1109/SC2.2018.00027
Youngho Kim, Gyuil Cha
Function as a Service (FaaS) has become widely prevalent in the cloud computing area with the evolution of the cloud computing paradigm and the growing demand for event-based computing models. We have analyzed the preparation load required for the actual execution of a function, from assignment of a function execution worker to loading a function on the FaaS platform, by testing the execution of a dummy function on a simple FaaS prototype. According to the analysis results, the first worker allocation costs 1,850 ms even when a lightweight container is used, and re-allocating a worker on the same node costs 470 ms. This result shows that the function service is not yet suitable for use as a high-efficiency computing platform. We propose a new worker scheduling algorithm that appropriately distributes the workers' preparation load related to function execution, so that the FaaS platform becomes suitable for high-efficiency computing environments. The proposed algorithm distributes the worker allocation tasks into two steps performed before a request occurs, and predicts in advance the number of workers that need to be allocated. When applying the proposed worker scheduling algorithm to the FaaS platform under development, we estimate that a worker allocation request can be processed with an allocation cost of less than 3% of that of the FaaS prototype. Therefore, the function service is expected to become a high-efficiency computing platform through this significant improvement in worker allocation cost.
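A minimal sketch of the two ideas named in the abstract, two-step worker preparation and demand prediction, is given below; the class, the method names, and the moving-average predictor are illustrative assumptions rather than the paper's implementation.

```python
from collections import deque

class WorkerPool:
    """Illustrative pool: step 1 pre-allocates bare runtimes, step 2 binds functions."""

    def __init__(self):
        self.warm = deque()                # runtimes allocated but with no function loaded
        self.recent_demand = deque(maxlen=10)

    def predict_workers(self):
        """Moving average of recent per-interval demand, used for the next pre-allocation."""
        if not self.recent_demand:
            return 1
        return max(1, round(sum(self.recent_demand) / len(self.recent_demand)))

    def pre_allocate(self, n):
        """Step 1 (before any request arrives): allocate generic runtimes in advance."""
        for _ in range(n):
            self.warm.append({"runtime": "container", "function": None})

    def assign(self, function_name):
        """Step 2 (when a request arrives): bind the function to an already-warm runtime."""
        worker = self.warm.popleft() if self.warm else {"runtime": "container", "function": None}
        worker["function"] = function_name
        return worker

pool = WorkerPool()
pool.recent_demand.extend([3, 5, 4])           # observed demand in the last intervals
pool.pre_allocate(pool.predict_workers())      # warm runtimes prepared in advance
print(pool.assign("image-resize"))
```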
Citations: 5
Title Page i
Pub Date : 2018-11-01 DOI: 10.1109/sc2.2018.00001
{"title":"Title Page i","authors":"","doi":"10.1109/sc2.2018.00001","DOIUrl":"https://doi.org/10.1109/sc2.2018.00001","url":null,"abstract":"","PeriodicalId":340244,"journal":{"name":"2018 IEEE 8th International Symposium on Cloud and Service Computing (SC2)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132962047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SPDK Vhost-NVMe: Accelerating I/Os in Virtual Machines on NVMe SSDs via User Space Vhost Target
Pub Date : 2018-11-01 DOI: 10.1109/SC2.2018.00016
Ziye Yang, Changpeng Liu, Yanbo Zhou, Xiaodong Liu, Gang Cao
Nowadays, more and more NVMe SSDs (PCIe SSDs accessed via the NVMe protocol) are deployed and virtualized by cloud providers to improve the I/O experience in virtual machines rented by tenants. Although the IOPS and latency of reads and writes on NVMe SSDs are greatly improved, existing software cannot efficiently exploit the capabilities of those NVMe SSDs, and the situation is even worse on virtualized platforms. Applications in guest VMs face a long I/O stack when accessing NVMe SSDs, whose overhead can be divided into three parts: (1) I/O execution on the emulated NVMe device in the guest operating system (OS); (2) context switches (e.g., VM_Exit) and data movement overhead between the guest OS and host OS; and (3) I/O execution overhead in the host OS on the physical NVMe SSDs. To address the long I/O stack issue, we propose SPDK-vhost-NVMe, an I/O service target relying on user space NVMe drivers, which collaborates with the hypervisor to accelerate NVMe I/Os inside VMs. Our approach eliminates unnecessary VM_Exit overhead and also shrinks the I/O execution stack in the host OS. With SPDK-vhost-NVMe, the performance of storage I/Os in the guest OS can be improved. Compared with the QEMU native NVMe emulation solution, SPDK-vhost-NVMe achieves a 6X improvement in IOPS and a 70% reduction in latency for some read workloads generated by FIO. SPDK-vhost-NVMe also shows a 5X performance improvement in some db_benchmark test cases (e.g., random read) on RocksDB. Even compared with other optimized SPDK vhost-scsi and vhost-blk solutions, SPDK-vhost-NVMe remains competitive in per-core performance.
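As a hint of how such guest-side measurements can be gathered, the sketch below runs an FIO random-read job against a virtual NVMe device and extracts IOPS and mean completion latency; the device path and job parameters are examples, and the JSON field layout assumes a reasonably recent fio release.

```python
import json
import subprocess

def run_fio(device="/dev/nvme0n1", runtime_s=30):
    """Run a 4 KiB random-read FIO job on the device; return (IOPS, mean latency in us)."""
    cmd = [
        "fio", "--name=randread", f"--filename={device}",
        "--rw=randread", "--bs=4k", "--iodepth=32", "--numjobs=1",
        "--ioengine=libaio", "--direct=1", "--time_based",
        f"--runtime={runtime_s}", "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    read_stats = json.loads(out)["jobs"][0]["read"]
    # "clat_ns" assumes a recent fio release; older versions report "clat" in microseconds.
    return read_stats["iops"], read_stats["clat_ns"]["mean"] / 1000.0

if __name__ == "__main__":
    iops, lat_us = run_fio()
    print(f"randread: {iops:.0f} IOPS, {lat_us:.1f} us mean completion latency")
```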
Citations: 17
Anticipatory User Plane Management for 5G
Pub Date : 2018-11-01 DOI: 10.1109/SC2.2018.00009
Sebastian Peters, M. A. Khan
The 5G user plane is bound to play a significant role in fulfilling the dynamic demand created by the heterogeneous device layer, with novel concepts introducing the flexible deployment of user plane functions and per-user traffic management. The focus of this paper lies on the dynamic control of 5G's SDNized transport network to optimize user plane management for mobile users. For this purpose we propose the concept of Anticipatory User Plane Management for 5G, which aims at optimized, learning-based and foresighted user plane management, reducing the user plane reconfiguration latency caused by users' mobility. In particular, we contribute two different approaches that exploit the prediction of user behavior to improve post-handover procedures: i) suitable selection of intermediate UPFs based on the anticipated user behavior, and ii) pre-configuration of the user data plane by means of a novel UPF mode.
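The sketch below illustrates the anticipatory idea in a simplified form: predict the user's next cell from its handover history and pre-select the UPF serving that cell. The first-order Markov predictor and the cell-to-UPF mapping are assumptions for illustration, not the paper's prediction model.

```python
from collections import Counter, defaultdict

transitions = defaultdict(Counter)        # current cell -> Counter of observed next cells

def observe(history):
    """Learn first-order transition counts from a sequence of visited cells."""
    for cur, nxt in zip(history, history[1:]):
        transitions[cur][nxt] += 1

def predict_next(cell):
    """Most frequently observed successor of the current cell, if any."""
    return transitions[cell].most_common(1)[0][0] if transitions[cell] else None

# Hypothetical mapping of radio cells to the UPF instances that serve them.
cell_to_upf = {"cell-A": "upf-1", "cell-B": "upf-1", "cell-C": "upf-2"}

observe(["cell-A", "cell-B", "cell-C", "cell-B", "cell-C"])
nxt = predict_next("cell-B")
print(f"predicted next cell: {nxt}, pre-configure UPF: {cell_to_upf.get(nxt)}")
```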
Citations: 10
Publisher's Information
Pub Date : 2018-11-01 DOI: 10.1109/sc2.2018.00030
{"title":"Publisher's Information","authors":"","doi":"10.1109/sc2.2018.00030","DOIUrl":"https://doi.org/10.1109/sc2.2018.00030","url":null,"abstract":"","PeriodicalId":340244,"journal":{"name":"2018 IEEE 8th International Symposium on Cloud and Service Computing (SC2)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116368976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unikernels vs Containers: An In-Depth Benchmarking Study in the Context of Microservice Applications
Pub Date : 2018-11-01 DOI: 10.1109/SC2.2018.00008
Tom Goethals, Merlijn Sebrechts, A. Atrey, B. Volckaert, F. Turck
Unikernels are a relatively recent way to create and quickly deploy extremely small virtual machines that, by leaving out unnecessary parts, do not require as much functional and operational software overhead as containers or virtual machines. This paradigm aims to replace bulky virtual machines on one hand, and to open up new classes of hardware for virtualization and networking applications on the other. In recent years, the tool chains used to create unikernels have grown from proofs of concept to platforms that can run both new and existing software written in various programming languages. This paper studies the performance (both execution time and memory footprint) of unikernels versus Docker containers in the context of REST services and heavy processing workloads, written in Java, Go, and Python. From the results of the performance evaluations, predictions can be made about which cases could benefit from the use of unikernels over containers.
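A minimal sketch of the kind of REST latency comparison such a study involves is shown below: timing repeated GET requests against the same service deployed once as a container and once as a unikernel. The endpoint URLs are placeholders for whatever the two deployments expose.

```python
import time
import urllib.request

def mean_latency_ms(url, requests=200):
    """Mean response time in milliseconds over a number of sequential GETs."""
    start = time.perf_counter()
    for _ in range(requests):
        with urllib.request.urlopen(url) as resp:
            resp.read()
    return (time.perf_counter() - start) / requests * 1000.0

# Placeholder endpoints for the two deployments of the same REST service.
deployments = [("container", "http://10.0.0.10:8080/ping"),
               ("unikernel", "http://10.0.0.11:8080/ping")]

for name, url in deployments:
    print(f"{name}: {mean_latency_ms(url):.2f} ms per request")
```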
Citations: 15
Cloud Native 5G Virtual Network Functions: Design Principles and Use Cases
Pub Date : 2018-11-01 DOI: 10.1109/SC2.2018.00019
Sofiane Imadali, Ayoub Bousselmi
The advent of 5G and its ever increasing, stringent requirements for bandwidth, latency, and quality of service push the boundaries of legacy Mobile Network Operators' technologies. Network Function Virtualization (NFV) is one promising attempt at solving those challenges. At its essence, NFV is about running network functions as software workloads on commodity hardware to optimize deployment costs and simplify the life-cycle management of network functions. However, with the advent of open source cloud native tools and architectures, early VM-based NFV designs may need to be upgraded to better benefit from new trends. We review current NFV management solutions and give a definition of the cloud native toolbox in the context of NFV. We then present 5GaaS, a cloud native software platform allowing MNOs to expose their assets, namely networking resources, mobile services, and cloud computing, to Over-The-Top players. We also introduce our open source Cloud Native VNF API design as an application of the proposed design principles and describe, from a standards perspective, the feasibility of our prototype.
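As one possible reading of "cloud native" deployment, the sketch below creates a VNF workload as a Kubernetes Deployment via the official Python client; the abstract does not name a specific orchestrator, so Kubernetes, the image name, and the labels are assumptions.

```python
from kubernetes import client, config  # official Kubernetes Python client

def deploy_vnf(name="sample-vnf", image="example/vnf:latest", replicas=2):
    """Create a Deployment running the VNF container image (placeholders throughout)."""
    config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
    container = client.V1Container(
        name=name, image=image,
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name, labels={"app": name}),
        spec=client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": name}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    deploy_vnf()
```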
Citations: 17