
Latest publications: 2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)

A 360° Video Adaptive Streaming Scheme Based on Multiple Video Qualities
Pub Date : 2020-12-01 DOI: 10.1109/UCC48980.2020.00063
Jie Zhang, Yi Zhong, Yi Han, Dongdong Li, Chenxi Yu, Junchang Mo
As an emerging multimedia service, virtual reality (VR) video streaming faces two challenges: extremely large bandwidth requirements and strict delay requirements. Therefore, improving the utilization of network resources is of great significance to the application and development of VR video and to delivering a better quality of experience (QoE). Currently, many VR video streaming solutions are based on 360° video, a compromise between restricted bandwidth and the streaming-delay requirement. In tile-based 360° video streaming over HTTP/2, the video is split into tiles encoded at multiple quality levels. When bandwidth is insufficient, a limited number of quality levels leads to large quality differences between adjacent zones, which also limits the optimization of quality-level adaptation and results in a lower QoE. In this paper, we propose a new tile-based 360° video adaptive streaming scheme based on multiple video quality levels. The proposed method provides more video quality levels, dividing the 360° video into zones according to the viewpoint position and assigning them different quality levels based on bandwidth conditions during streaming, so as to achieve a smooth video quality distribution and ensure high QoE through a high video bitrate, few quality switches, and minimized stall time. Experimental results show that the QoE of the proposed method is improved by approximately 28% compared with an existing adaptive 360° video streaming scheme.
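The zone-based quality assignment the abstract describes can be pictured as a greedy upgrade loop: start every zone at the lowest quality and repeatedly upgrade the zone nearest the viewpoint while the bandwidth budget allows, keeping adjacent zones within one level of each other for a smooth quality distribution. This is a minimal sketch under assumed inputs (angular distances, one bitrate per quality level, a bandwidth budget), not the authors' actual algorithm:

```python
def assign_zone_qualities(zone_distances, bitrates, budget):
    """Greedy sketch: zones ordered by distance from the viewpoint are
    upgraded one quality level at a time while the total bitrate fits
    the budget; a zone may lead its neighbours by at most one level
    (smoothness constraint)."""
    n = len(zone_distances)
    order = sorted(range(n), key=lambda z: zone_distances[z])  # nearest first
    levels = [0] * n                      # start everything at lowest quality
    spent = n * bitrates[0]
    upgraded = True
    while upgraded:
        upgraded = False
        for z in order:
            if levels[z] + 1 >= len(bitrates):
                continue                  # already at the top level
            neighbours = [levels[(z - 1) % n], levels[(z + 1) % n]]
            if levels[z] + 1 - min(neighbours) > 1:
                continue                  # would break the smoothness rule
            delta = bitrates[levels[z] + 1] - bitrates[levels[z]]
            if spent + delta <= budget:
                levels[z] += 1
                spent += delta
                upgraded = True
    return levels
```

With four zones, three quality levels of bitrate 1/2/4, and a budget of 10, the viewport zone ends at the top level while the rest settle one level below it.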
Citations: 5
Explaining probabilistic Artificial Intelligence (AI) models by discretizing Deep Neural Networks
Pub Date : 2020-12-01 DOI: 10.1109/UCC48980.2020.00070
R. Saleem, Bo Yuan, Fatih Kurugollu, A. Anjum
Artificial Intelligence (AI) models can learn from data and make decisions without any human intervention. However, deploying such models is challenging and risky because we do not know how internal decision-making happens in these models. In particular, high-risk decisions such as medical diagnosis or automated navigation demand explainability and verification of the decision-making process in AI algorithms. This paper aims to explain AI models by discretizing the black-box process model of deep neural networks using partial differential equations (PDEs). The resulting PDE-based deterministic models would minimize the time and computational cost of the decision-making process and reduce uncertainty, making predictions more trustworthy.
Citations: 0
Exploring the Potential of using Power as a First Class Parameter for Resource Allocation in Apache Mesos Managed Clouds
Pub Date : 2020-12-01 DOI: 10.1109/UCC48980.2020.00040
Pradyumna Kaushik, S. Raghavendra, M. Govindaraju, Devesh Tiwari
We propose a resource allocation policy that uses (a) Power as a first-class parameter, indicating the computational intensity of a task and its potential impact on peak power draw, and (b) Power Tolerance, indicating a task's sensitivity to performance degradation caused by resource contention. Through experimentation and analysis, we present coarse-grained and fine-grained Power Tolerance assignment methods that can be employed to make smarter trade-offs between peak power and performance. Our experiments show that (a) cloud operators can benefit from a uniform, workload-wide Power Tolerance setting to achieve a significant reduction in peak power consumption, and (b) fine-grained Power Tolerance assignment methods show potential for making smarter peak power and performance trade-offs.
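One way to picture Power Tolerance driving placement is to encode a task's tolerance as the maximum fraction of a node's power cap it will accept under contention, and place tasks best-fit among the nodes whose projected draw respects every resident task's ceiling. This encoding and the names below are illustrative assumptions, not the paper's Mesos policy:

```python
def place_task(nodes, power, tolerance):
    """Best-fit sketch: a node is feasible when its projected power draw
    stays under the utilisation ceiling implied by the most
    contention-sensitive task (smallest tolerance, as a fraction of the
    cap); among feasible nodes, pick the one with the least leftover
    headroom. Returns the node index, or None if nothing fits."""
    best, best_headroom = None, None
    for i, node in enumerate(nodes):
        projected = node["load"] + power
        ceiling = node["cap"] * min(node["tolerances"] + [tolerance])
        if projected > ceiling:
            continue
        headroom = node["cap"] - projected
        if best is None or headroom < best_headroom:
            best, best_headroom = i, headroom
    return best
```

A task with low tolerance effectively reserves headroom on whichever node it lands on, which is how tolerance-aware placement can trade peak power against contention.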
Citations: 1
Decentralized Kubernetes Federation Control Plane
Pub Date : 2020-12-01 DOI: 10.1109/UCC48980.2020.00056
L. Larsson, H. Gustafsson, C. Klein, E. Elmroth
This position paper presents our vision for a distributed, decentralized Kubernetes federation control plane. The goal is to support federations of thousands of Kubernetes clusters, enabling next-generation edge cloud use-cases. Our review of the literature and experience with current state-of-the-art centralized Kubernetes federation controllers shows that they cannot scale to a sufficient size, and that centralization constitutes an unacceptable single point of failure. Our proposed system maintains cluster autonomy, allows clusters to collaboratively handle error conditions, and scales to support edge cloud use-cases. Our approach is based on a database of conflict-free replicated data types (CRDTs), shared among all clusters in the federation, together with algorithms that make use of that data.
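A grow-only counter is the simplest CRDT and shows why a shared CRDT database suits a decentralized federation: each cluster updates only its own slot, and merges take element-wise maxima, so replicas converge regardless of message ordering and without any central coordinator. This sketch is generic, not the paper's actual schema:

```python
class GCounter:
    """Grow-only counter CRDT. Each cluster increments its own slot;
    merge takes the per-slot maximum, which is commutative, associative,
    and idempotent, so all replicas converge."""

    def __init__(self):
        self.slots = {}  # cluster_id -> count

    def increment(self, cluster_id, n=1):
        self.slots[cluster_id] = self.slots.get(cluster_id, 0) + n

    def value(self):
        return sum(self.slots.values())

    def merge(self, other):
        for cid, n in other.slots.items():
            self.slots[cid] = max(self.slots.get(cid, 0), n)
```

Two clusters can increment concurrently and exchange state in either order; after merging, both hold the same value, which is the property that lets the federation drop the single point of failure.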
Citations: 9
Keynotes [7 abstracts]
Pub Date : 2020-12-01 DOI: 10.1109/ucc48980.2020.00018
Citations: 0
Doctoral Symposium Technical Program Committee
Pub Date : 2020-12-01 DOI: 10.1109/ucc48980.2020.00017
Citations: 0
Scission: Performance-driven and Context-aware Cloud-Edge Distribution of Deep Neural Networks
Pub Date : 2020-12-01 DOI: 10.1109/UCC48980.2020.00044
Luke Lockhart, P. Harvey, Pierre Imai, P. Willis, B. Varghese
Partitioning and distributing deep neural networks (DNNs) across end-devices, edge resources, and the cloud has a potential twofold advantage: preserving the privacy of the input data, and reducing the ingress bandwidth demand beyond the edge. However, for a given DNN, identifying the optimal partition configuration that maximizes performance is a significant challenge, because the combination of target hardware resources and the sequence of DNN layers to distribute across them must be determined while accounting for user-defined partitioning objectives and constraints. This paper presents Scission, a tool for automated benchmarking of DNNs on a given set of target device, edge, and cloud resources to determine optimal partitions that maximize DNN performance. The decision-making approach is context-aware, capitalizing on the hardware capabilities of the target resources, their locality, the characteristics of the DNN layers, and the network condition. Experimental studies are carried out on 18 DNNs. Given the complexity and the number of dimensions affecting the search space, the decisions made by Scission cannot be made manually. Scission's benchmarking overheads allow it to respond to operational changes periodically rather than in real time. Scission is available for public download.
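Given per-layer benchmark timings of the kind Scission collects, choosing a two-tier device/cloud split point can be sketched as an exhaustive search over every cut, charging one transfer for the activation that crosses it. The function and the numbers below are illustrative assumptions, not Scission's API or data:

```python
def best_split(device_ms, cloud_ms, out_mb, input_mb, bw_mbps):
    """Try every split point k: layers [0, k) run on-device and layers
    [k, n) in the cloud, paying one transfer of the tensor crossing the
    split (the raw input when k == 0, nothing when k == n).
    Returns (best_k, latency_ms)."""
    n = len(device_ms)
    best = None
    for k in range(n + 1):
        sent = input_mb if k == 0 else (out_mb[k - 1] if k < n else 0.0)
        # MB -> megabits (*8), seconds -> ms (*1000)
        latency = sum(device_ms[:k]) + sum(cloud_ms[k:]) + sent * 8 * 1000 / bw_mbps
        if best is None or latency < best[1]:
            best = (k, latency)
    return best
```

The real search is harder than this sketch suggests: with three tiers, many candidate resources, and per-partition constraints, the space grows combinatorially, which is why the paper argues the decision cannot be made manually.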
Citations: 19
Joint Host-Network Power Scaling with Minimizing VM Migration in SDN-enabled Cloud Data Centers
Pub Date : 2020-12-01 DOI: 10.1109/UCC48980.2020.00020
Tuhin Chakraborty, A. Toosi, C. Kopp, Peter James Stuckey, Julien Mahet
In recent times, both industry and academia have paid significant attention to the power management of cloud data centers (CDCs) due to their typically very high electrical energy consumption. While servers remain the components with the highest power consumption, network stacks can also account for about 10-20 percent of a data center's total energy usage. Dynamic Virtual Machine (VM) consolidation, performed via live migration of VMs, is one way to reduce the number of active servers. However, migration operations in a data center bring several system- and service-level overheads, including downtime, elephant flows over the network, and potentially higher failure rates. In this work, we propose algorithms that minimize the number of VM migrations needed to attain optimized joint host-network power consumption in a cloud data center. We present the trade-off between the number of migrations, the joint host-network power consumption, and the computational time complexity of the proposed algorithms. The proposed algorithms are evaluated in an SDN-enabled framework built with Mininet and ONOS. Experimental results show that our algorithms reduce power consumption by about 11 percent while completing 18 to 25 percent more VM migrations than the baseline algorithm, which only minimizes migrations without guaranteeing the lowest power consumption.
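The consolidation step can be illustrated with a first-fit-decreasing drain heuristic: repeatedly try to empty the least-loaded host by moving its VMs into spare capacity elsewhere, counting each move as one migration. This is an illustrative baseline only; the paper's algorithms additionally optimize network power and bound migrations:

```python
def consolidate(hosts, capacity):
    """Greedy sketch: try to drain each lightly loaded host by first-fit
    placing its VMs (largest first) onto the other hosts' spare capacity;
    a fully drained host can be powered off.
    hosts: list of hosts, each a list of VM loads.
    Returns (migrations, active_hosts)."""
    active = sorted([sorted(h, reverse=True) for h in hosts], key=sum)
    migrations = 0
    i = 0
    while i < len(active):
        donor = active[i]
        targets = [h for j, h in enumerate(active) if j != i]
        free = {id(h): capacity - sum(h) for h in targets}
        plan, ok = [], True
        for vm in donor:
            for h in targets:
                if free[id(h)] >= vm:
                    plan.append((vm, h))
                    free[id(h)] -= vm
                    break
            else:
                ok = False  # this VM fits nowhere; keep the host active
                break
        if ok:
            for vm, h in plan:  # apply the drain plan atomically
                h.append(vm)
            migrations += len(plan)
            active.pop(i)
        else:
            i += 1
    return migrations, len(active)
```

With hosts loaded [2], [3, 4], [5, 4] and a capacity of 10, one migration suffices to power off a host, which is the kind of migration-count versus power trade-off the paper quantifies.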
Citations: 1
Auto-scaling of Web Applications in Clouds: A Tail Latency Evaluation
Pub Date : 2020-12-01 DOI: 10.1109/UCC48980.2020.00037
M. Aslanpour, A. Toosi, R. Gaire, M. A. Cheema
Mechanisms that dynamically add and remove Virtual Machines (VMs) to reduce cost while minimizing latency are called auto-scaling. Latency improvements are mainly achieved by minimizing the "average" response time, while unpredictable load fluctuations in Web applications, known as flash crowds, can cause very high latencies for users' requests. Requests affected by a flash crowd suffer long latencies, known as outliers. Such outliers are largely inevitable as long as auto-scaling solutions continue to improve the average, rather than the "tail", of latencies. In this paper, we study possible sources of tail latency in auto-scaling mechanisms for Web applications. Based on extensive evaluations on a real cloud platform, we identified the following sources of tail latency: 1) large requests, i.e., data-intensive ones; 2) long scaling intervals; 3) instant analysis of scaling parameters; 4) conservative, i.e., tight, threshold tuning; 5) load-unaware surplus-VM selection policies used when executing a scale-down decision; 6) the cooldown feature, although cost-effective; and 7) VM start-up delay. We also found that after the average latency is improved by auto-scaling mechanisms, the tail may behave differently, demanding dedicated tail-aware auto-scaling solutions.
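The cooldown effect listed as source 6) can be seen in a toy threshold autoscaler: after each scaling action, further actions are suppressed for a few intervals, so a sustained flash crowd is answered with fewer VMs than a cooldown-free policy would provide. Thresholds and parameters here are illustrative, not the evaluated system's configuration:

```python
def autoscale(cpu_samples, vms=2, upper=70, lower=30, cooldown=3):
    """Threshold sketch: scale out when average CPU exceeds `upper`,
    scale in below `lower`, and suppress further actions for `cooldown`
    intervals after each one. Returns the VM count per interval."""
    history, wait = [], 0
    for cpu in cpu_samples:
        if wait > 0:
            wait -= 1          # cooling down: ignore the signal
        elif cpu > upper:
            vms += 1
            wait = cooldown
        elif cpu < lower and vms > 1:
            vms -= 1
            wait = cooldown
        history.append(vms)
    return history
```

For the overload trace [80, 85, 90, 95, 20], a cooldown of 2 yields [3, 3, 3, 4, 4] while no cooldown yields [3, 4, 5, 6, 5]: the cooldown saves cost but leaves the system under-provisioned during the surge, which is exactly how it inflates tail latency.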
Citations: 7
Message from the B2D2LM 2020 Workshop Chairs
Pub Date : 2020-12-01 DOI: 10.1109/ucc48980.2020.00011
Shuihua Wang
Due to the proliferation of biomedical imaging modalities such as Photoacoustic Tomography and Computed Tomography (CT), massive amounts of biomedical data are generated daily. How can we utilize such big data to build better health profiles and better predictive models, so that we can better diagnose and treat diseases and provide a better life for humans? In past years, many successful learning methods, such as deep learning, were proposed to answer this crucial question, which has social, economic, and legal implications. However, several significant problems plague the processing of big biomedical data: data heterogeneity, data incompleteness, data imbalance, and high dimensionality. Worse, many data sets exhibit several of these problems at once. Most existing learning methods can only deal with homogeneous, complete, class-balanced, and moderate-dimensional data. Therefore, data preprocessing techniques, including data representation learning, dimensionality reduction, and missing-value imputation, should be developed to enhance the applicability of deep learning methods in real-world applications of biomedicine.
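As a concrete instance of the preprocessing the chairs call for, column-wise mean imputation fills each missing entry with the mean of that feature's observed values before training. A minimal sketch; real biomedical pipelines would use more robust imputers:

```python
def impute_mean(rows):
    """Fill each missing entry (None) with the mean of that column's
    observed values; a column with no observations falls back to 0.0.
    rows: list of equal-length feature rows."""
    cols = list(zip(*rows))
    means = []
    for col in cols:
        seen = [v for v in col if v is not None]
        means.append(sum(seen) / len(seen) if seen else 0.0)
    return [[v if v is not None else means[j] for j, v in enumerate(row)]
            for row in rows]
```

Mean imputation preserves each column's average but shrinks its variance, which is one reason downstream models may prefer model-based imputers on heavily incomplete biomedical data.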
Citations: 0