
Latest publications from the 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)

Job Classification in Cloud Computing: The Classification Effects on Energy Efficiency
Auday Aldulaimy, R. Zantout, A. Zekri, W. Itani
One of the major recent challenges in cloud computing is to enhance the energy efficiency of cloud data centers. Such enhancements can be achieved by improving the resource allocation and management algorithms. In this paper, a model that identifies common patterns among the jobs submitted to the cloud is proposed. This model predicts the type of each submitted job and, accordingly, classifies the set of users' jobs into four subsets, each containing jobs with similar requirements. In addition to the jobs' common patterns and requirements, the users' history is considered in the job type prediction model. The goal of job classification is to derive a strategy that helps improve energy efficiency. After classification, the best-fitting virtual machine is allocated to each job. The virtual machines are then placed on physical machines according to a novel strategy called the Mixed Type Placement strategy. Its core idea is to place virtual machines of jobs of different types on the same physical machine whenever possible, based on the Knapsack Problem, because jobs of different types do not intensively use the same compute or storage resources of the physical machine. This reduces the number of active physical machines, which leads to a major reduction in the total energy consumption of the data center. Simulation results show that the presented strategy outperforms both Genetic Algorithm and Round Robin from an energy efficiency perspective.
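The mixed-type idea can be illustrated with a short greedy sketch. This is an illustrative approximation, not the paper's knapsack formulation; the job types, demands, and capacities below are hypothetical. Each VM is preferentially packed onto a physical machine that already hosts a different job type, so complementary workloads share hardware.

```python
# Illustrative sketch of mixed-type VM placement: co-locate VMs of
# different job types on the same physical machine (PM) when capacities
# allow, so CPU-bound and storage-bound jobs share hardware instead of
# competing. All names and numbers are hypothetical, not from the paper.

def place_vms(vms, pm_capacity):
    """Greedy first-fit that prefers a PM hosting a *different* job type.

    vms: list of (job_type, cpu_demand, storage_demand)
    pm_capacity: (cpu, storage) per physical machine
    Returns a list of PMs, each a dict with usage, hosted types, and VMs.
    """
    pms = []

    def fits(pm, cpu, sto):
        return (pm["cpu"] + cpu <= pm_capacity[0]
                and pm["sto"] + sto <= pm_capacity[1])

    for jtype, cpu, sto in vms:
        # Pass 1: PMs whose hosted job types differ from this VM's type.
        candidates = [pm for pm in pms
                      if jtype not in pm["types"] and fits(pm, cpu, sto)]
        # Pass 2: any active PM with room.
        if not candidates:
            candidates = [pm for pm in pms if fits(pm, cpu, sto)]
        if candidates:
            pm = candidates[0]
        else:  # open a new PM only when no active PM can host the VM
            pm = {"cpu": 0, "sto": 0, "types": set(), "vms": []}
            pms.append(pm)
        pm["cpu"] += cpu
        pm["sto"] += sto
        pm["types"].add(jtype)
        pm["vms"].append((jtype, cpu, sto))
    return pms

# Two CPU-bound and two storage-bound VMs fit on one PM when mixed,
# but grouping by type would need two PMs.
vms = [("cpu", 4, 1), ("storage", 1, 4), ("cpu", 4, 1), ("storage", 1, 4)]
print(len(place_vms(vms, pm_capacity=(10, 10))))  # active PMs used
```

In this example, mixing types packs all four VMs onto a single machine, whereas placing the two CPU-bound VMs and the two storage-bound VMs on separate machines would keep two machines active.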
DOI: 10.1109/UCC.2015.97
Citations: 9
Towards Power Consumption Modeling for Servers at Scale
Timothy W. Harton, C. Walker, M. O'Sullivan
As of 2010, data centers consumed 1.5% of global electricity production, and this share is expected to keep growing [1]. There is a need for a near real-time power consumption modeling/monitoring system that can be used at scale within a Software Defined Data Center (SDDC). The power consumption models, and the information they provide, can then be used to make better data center orchestration decisions, e.g., whether to migrate virtual machines to reduce power consumption. We propose a scalable system that would 1) create initial power consumption models, as needed, for data center components, and 2) continually refine these models while the components are in use. The models will be used for near real-time monitoring of power consumption, as well as for predicting power consumption before and after potential orchestration decisions. The first step towards this goal of whole-data-center power modeling and prediction is to predict the power consumption of a single server effectively, based on high-level utilization statistics from that server. In this paper we present a novel method for modeling the whole-system power consumption of a server under varying random levels of CPU utilization, using a scalable random-forest-based model that utilizes statistics available at the data center management level.
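As a rough illustration of the modeling approach, the sketch below fits a miniature random-forest-style regressor (bootstrap-aggregated depth-1 regression trees) that predicts server power from CPU utilization. The trace, idle/peak wattages, and function names are synthetic assumptions for demonstration, not the paper's model or data.

```python
import random

def fit_stump(xs, ys):
    """Depth-1 regression tree: best single threshold minimizing squared error."""
    best = None
    for t in sorted(set(xs))[:-1]:  # splitting at max(xs) leaves the right side empty
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    if best is None:  # degenerate sample: predict its mean
        m = sum(ys) / len(ys)
        return lambda x: m
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

def fit_forest(xs, ys, n_trees=30, seed=1):
    """Bootstrap-aggregated stumps: a toy random-forest regressor."""
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in xs]  # bootstrap resample
        trees.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return lambda x: sum(tree(x) for tree in trees) / len(trees)

# Synthetic trace: whole-system power grows with CPU utilization
# (idle ~100 W, full load ~250 W, plus deterministic "noise").
utils = [i / 20 for i in range(21)]
watts = [100 + 150 * u + 5 * ((i * 7) % 3 - 1) for i, u in enumerate(utils)]
model = fit_forest(utils, watts)
print(model(0.1) < model(0.9))  # higher utilization => higher predicted power
```

A production model would, as the abstract notes, use richer management-level statistics (memory, disk, network) as additional features rather than CPU utilization alone.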
DOI: 10.1109/UCC.2015.50
Citations: 3
Towards a Multi-objective VM Reassignment for Large Decentralised Data Centres
Takfarinas Saber, Anthony Ventresque, I. Brandić, James Thorburn, L. Murphy
Optimising the IT infrastructure of large, often geographically distributed, organisations goes beyond the classical virtual machine reassignment problem, for two reasons: (i) the data centres of these organisations are composed of a number of hosting departments which have different preferences on what to host and where to host it, and (ii) the top-level managers in these data centres make complex decisions and need to manipulate possible solutions favouring different objectives to find the right balance. This challenge has not yet been comprehensively addressed in the literature, and in this paper we demonstrate that a multi-objective VM reassignment is feasible for large decentralised data centres. We show on a realistic data set that our solution outperforms other classical multi-objective algorithms for VM reassignment in terms of the quantity of solutions (by about 15% on average) and the quality of the solution set (by over 6% on average).
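Evaluating such algorithms by the quantity and quality of solutions presupposes a Pareto-dominance comparison between candidate reassignments. A minimal sketch, with hypothetical objective vectors (energy cost, migration cost, load imbalance), all to be minimized:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the solutions not dominated by any other."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical VM-reassignment candidates scored on
# (energy cost, migration cost, load imbalance).
candidates = [(10, 5, 3), (8, 7, 4), (12, 4, 2), (9, 6, 5), (11, 8, 6)]
print(pareto_front(candidates))  # (11, 8, 6) is dominated by (10, 5, 3)
```

The size of the nondominated set is one natural reading of "quantity of solutions" in the abstract; "quality" then compares how close the sets come to a reference front.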
DOI: 10.1109/UCC.2015.21
Citations: 16
Performance Study of Cloud Computing Back-End Solutions for Mobile Applications
Guilherme Macedo, Christina Thorpe
Cloud Computing provides essential tools for building modern mobile applications. In order to leverage the advantages of the Cloud for developing and scaling applications, mobile developers must perform a technical analysis of the options currently available on the market. The objective of this paper is to investigate the various considerations of hosting mobile applications' back-end in the Cloud, more specifically, the ease of deployment and the application performance. We conducted a comprehensive performance analysis of three popular Platform-as-a-Service providers. Results show that there are important differences in the performance and other aspects of deployment that should be considered by mobile application developers.
DOI: 10.1109/UCC.2015.52
Citations: 1
PRACTISE -- Demonstrating a Neural Network Based Framework for Robust Prediction of Data Center Workload
T. Scherer, Ji Xue, Feng Yan, R. Birke, L. Chen, E. Smirni
We present a web-based tool to demonstrate PRACTISE, a neural-network-based framework for efficient and accurate prediction of server workload time series in data centers. For the evaluation, we focus on resource utilization traces of CPU, memory, disk, and network. Compared with ARIMA and baseline neural network models, PRACTISE achieves significantly smaller average prediction errors. We demonstrate the benefits of PRACTISE in two scenarios: i) using recorded resource utilization traces from private cloud data centers, and ii) using real-time data collected from live data center systems.
DOI: 10.1109/UCC.2015.65
Citations: 2
Workflow Scheduling on Power Constrained VMs
D. Shepherd, Ilia Pietri, R. Sakellariou
With energy consumption being an issue of growing concern in large-scale cloud data centers, providers may wish to impose restrictions on the power usage of the hosts. This raises the challenge of operating cloud resources under power limits which may vary over time. Motivated by such a constraint, this paper considers the problem of scheduling scientific workflows in an environment where the number of VMs available is limited by a time-varying power cap. A simple scheduling algorithm for such cases is proposed and experimentally evaluated.
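One way to picture such a scheduler is a time-stepped list scheduler that starts ready tasks only while the cap leaves free VM slots. The sketch below is an illustrative simplification (unit time steps, one VM per task, a hypothetical four-task workflow and cap function), not the algorithm evaluated in the paper.

```python
def schedule(tasks, deps, cap):
    """Time-stepped list scheduler under a time-varying VM cap.

    tasks: {name: duration in steps}
    deps:  {name: set of prerequisite task names}
    cap:   function t -> max number of concurrently running VMs at time t
    Returns {name: (start, finish)}.
    """
    done, running, out = set(), {}, {}
    t = 0
    while len(done) < len(tasks):
        # Retire tasks that have finished by time t.
        for name in [n for n, finish in running.items() if finish <= t]:
            done.add(name)
            del running[name]
        # Start ready tasks while the power cap leaves free VM slots.
        ready = [n for n in tasks
                 if n not in done and n not in running
                 and deps.get(n, set()) <= done]
        for name in ready:
            if len(running) >= cap(t):
                break
            running[name] = t + tasks[name]
            out[name] = (t, t + tasks[name])
        t += 1
    return out

# Hypothetical workflow; the power cap drops from 2 VMs to 1 at t >= 2,
# so task d must wait for a free slot even after its dependencies finish.
tasks = {"a": 1, "b": 2, "c": 2, "d": 1}
deps = {"d": {"a", "b"}}
cap = lambda t: 2 if t < 2 else 1
sched = schedule(tasks, deps, cap)
print(max(finish for _, finish in sched.values()))  # makespan
```

In this run the cap drop at t = 2 delays d until c releases its VM at t = 3, giving a makespan of 4 steps; with a constant cap of 2 the same workflow would finish earlier.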
DOI: 10.1109/UCC.2015.74
Citations: 3
OS-Independent Live Migration Scheme for Bare-Metal Clouds
Takaaki Fukai, Yushi Omote, Takahiro Shinagawa, Kazuhiko Kato
Bare-metal clouds are an emerging and attractive platform for cloud users who demand extreme computing performance. They lease physical machines rather than virtual machines, eliminating virtualization overhead and delivering maximum hardware performance. Bare-metal clouds are therefore suitable for applications that require intensive, consistent, and predictable performance, such as big-data and high-performance computing applications. Unfortunately, existing bare-metal clouds do not support live migration because they lack virtualization layers. Live migration is an essential feature for bare-metal cloud vendors: it enables proactive maintenance and fault tolerance that avoid long application downtime when the underlying physical hardware is about to fail. Existing live migration approaches incur either virtualization overhead or OS dependence and are therefore unsuitable for bare-metal clouds. This paper introduces an OS-independent live migration scheme for bare-metal clouds. We utilize a very thin hypervisor layer that does not virtualize hardware and directly exposes physical hardware to the guest OS. During live migration, the hypervisor carefully monitors and controls access to physical devices to capture, transfer, and restore the device states while the guest OS is still controlling the devices. After live migration, the hypervisor does almost nothing, eliminating the virtualization overhead and providing bare-metal performance for the guest OS. Experimental results confirmed that the network performance of our system is comparable to that of bare-metal machines.
DOI: 10.1109/UCC.2015.23
Citations: 6
Realizing Business Continuity Planning over FELIX Infrastructure
A. Takefusa, J. Haga, U. Toseef, T. Ikeda, T. Kudoh, J. Tanaka, K. Pentikousis
FELIX federates existing Future Internet (FI) experimental facilities across continents to build a test environment for large-scale SDN experiments. The management framework developed by FELIX allows the execution of experimental network services in a distributed environment comprised of heterogeneous resources. The demonstration described in this paper showcases the implementation of the FELIX architecture over the federated experimental facilities across Japan and Europe leveraging on both the infrastructure resources and the FELIX management stack. The presented use-case also provides an important experimental scenario for data center operators who are developing Business Continuity Planning for IT services.
DOI: 10.1109/UCC.2015.72
Citations: 2
Algorithmic Strategies for Sensing-as-a-Service in the Internet-of-Things Era
S. Chattopadhyay, A. Banerjee
The objective of this thesis is to design efficient algorithms and architectures for enabling a Sensing as a Service paradigm in the era of the Internet of Things. With the widespread deployment of sensor architectures and sensor-enabled applications around the globe, our planet is witnessing unprecedented instrumentation. The emerging paradigm of Sensing as a Service is replete with open challenges, ranging from systematic sensor deployment and regulated data collection to efficient data aggregation, scalable execution, and proper participation. This dissertation aims to address some of these open challenges and attempts to carve out a niche proposition by handling these problems from a purely algorithmic perspective. The objective is to examine each of the crucial pieces outlined above in the light of algorithmic design and come up with efficient mechanisms that are both practical and theoretically well-founded. The experiments are planned on real-world data and are therefore expected to allow us to examine the efficacy of our proposals in a realistic setting.
DOI: 10.1109/UCC.2015.62
Citations: 1
Improving Resource Efficiency in Internet Cafés by Virtualization and Optimal User Allocation
I. Hamling, M. O'Sullivan, C. Walker, Clemens Thielen
The concept of using distributed computing to supply video games to end users has been growing in popularity. Internet cafés are one potential application for this concept. We consider a cloud-based model for Internet cafés where servers provide virtual machines with different specifications in order to meet different kinds of user demand (web browsing, low end gaming, medium end gaming, and high end gaming). In an Internet café, users arrive throughout the day with different demands and different durations for which they stay. Given the user demand over time and a fixed hardware set-up of servers, the task then consists of choosing which users to accept and how to allocate the accepted users to the servers in order to maximize the total profit of the Internet café. We formulate an integer programming model for computing an optimal choice of users to accept together with an efficient allocation of accepted users to servers. Computational results show that, when allocating users efficiently, using a cloud-based setting with servers providing virtual machines that exactly meet the users' demands can greatly improve resource efficiency in Internet cafés compared to classical zoning models that use desktop computers. At the same time, the total profit obtained from accepting users can be improved significantly due to the added flexibility when using an optimized user acceptance strategy.
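A generic form of such an acceptance-and-allocation integer program can be sketched as follows. The notation is hypothetical and simplified to a single time period; a full model would index the capacity constraints by time to reflect the arrivals and stay durations described above. Let the binary variable x_us be 1 if accepted user u is allocated to server s, p_u the profit from accepting u, r_u the resource demand of the VM type u requests, and C_s the capacity of server s:

```latex
\max \sum_{u}\sum_{s} p_u \, x_{us}
\quad \text{s.t.} \quad
\sum_{s} x_{us} \le 1 \;\; \forall u,
\qquad
\sum_{u} r_u \, x_{us} \le C_s \;\; \forall s,
\qquad
x_{us} \in \{0, 1\}.
```

The first constraint lets each user be served by at most one server, the second keeps every server within capacity, and a user with no assigned server is simply rejected.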
{"title":"Improving Resource Efficiency in Internet Cafés by Virtualization and Optimal User Allocation","authors":"I. Hamling, M. O'Sullivan, C. Walker, Clemens Thielen","doi":"10.1109/UCC.2015.17","DOIUrl":"https://doi.org/10.1109/UCC.2015.17","url":null,"abstract":"The concept of using distributed computing to supply video games to end users has been growing in popularity. Internet cafés are one potential application for this concept. We consider a cloud-based model for Internet cafés where servers provide virtual machines with different specifications in order to meet different kinds of user demand (web browsing, low end gaming, medium end gaming, and high end gaming). In an Internet café, users arrive throughout the day with different demands and different durations for which they stay. Given the user demand over time and a fixed hardware set-up of servers, the task then consists of choosing which users to accept and how to allocate the accepted users to the servers in order to maximize the total profit of the Internet café. We formulate an integer programming model for computing an optimal choice of users to accept together with an efficient allocation of accepted users to servers. Computational results show that, when allocating users efficiently, using a cloud-based setting with servers providing virtual machines that exactly meet the users' demands can greatly improve resource efficiency in Internet cafés compared to classical zoning models that use desktop computers. 
At the same time, the total profit obtained from accepting users can be improved significantly due to the added flexibility when using an optimized user acceptance strategy.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132524562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1