"Enabling Cloud Application Portability"
D. Antoniades, N. Loulloudes, Athanasios Foudoulis, Chrystalla Sofokleous, Demetris Trihinas, G. Pallis, M. Dikaiakos, H. Kornmayer. DOI: 10.1109/UCC.2015.56
The Cloud Application Management Framework (CAMF) enables Cloud application developers to design, deploy and manage their applications through an intuitive blueprint design. In this paper we show how Cloud application developers can use CAMF to build portable applications that can be deployed on different IaaS providers with minimal effort. Towards this goal, we introduce the Cloud Application Requirement Language (CARL). CARL defines the application's software and hardware requirements, information that is then included in the TOSCA description of the Cloud application alongside the application blueprint. CAMF's Information Service uses both artifacts to provide IaaS-specific configurations that fulfill the user's requirements.
{"title":"Enabling Cloud Application Portability","authors":"D. Antoniades, N. Loulloudes, Athanasios Foudoulis, Chrystalla Sofokleous, Demetris Trihinas, G. Pallis, M. Dikaiakos, H. Kornmayer","doi":"10.1109/UCC.2015.56","DOIUrl":"https://doi.org/10.1109/UCC.2015.56","url":null,"abstract":"The Cloud Application Management Framework (CAMF) enables Cloud application developers to design, deploy and manage their applications through an intuitive blueprint design. In this paper we show how Cloud application developers can utilize CAMF in order to have portable applications that can be deployed on different IaaS with minimal effort. Towards this goal, we introduce the Cloud Application Requirement Language (CARL). CARL can be used for defining the application software and hardware requirements, information that is then included into the TOSCA description of the Cloud application, alongside the application blueprint. CAMF's Information Service utilizes both these artifacts to provide IaaS specific configurations that fulfill the user's requirements.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114658214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"An Analysis of the Voluntary Aspect in Cloud Federations"
M. M. Assis, L. Bittencourt. DOI: 10.1109/UCC.2015.89
As the cloud computing paradigm consolidates and gains wide adoption, limitations inherent to the technology emerge that hamper its use to full effect. To overcome such limitations, service providers can organize themselves into associations that surpass existing barriers to quality of service and scalability. Among the possible organizations of multiple clouds, a Cloud Federation is an architecture regulated by a federative contract among the organizations that voluntarily participate in the federation. However, there is no consensus on, or classification of, which voluntary characteristics exist and how they arise and affect a multi-cloud organization. In this paper we discuss aspects of voluntary behavior in multi-cloud organizations and examine how these aspects affect the characteristics of a cloud federation.
{"title":"An Analysis of the Voluntary Aspect in Cloud Federations","authors":"M. M. Assis, L. Bittencourt","doi":"10.1109/UCC.2015.89","DOIUrl":"https://doi.org/10.1109/UCC.2015.89","url":null,"abstract":"With both the consolidation and the wide adoption of the cloud computing paradigm, some limitations inherent to this technology appear, hampering the effective and plain use of the paradigm to its full extent. To overcome such limitations, service providers can organize themselves into associations with the objective to surpass existing barriers in offering quality of service and scalability. Within the possible organization of multiple clouds, the Cloud Federation is an architecture that is regulated by a federative contract promoted by voluntary organizations participating in the federation. However, there is no consensus or classification of which voluntary characteristics exist and how they take place and impact in a multicloud organization. In this paper we discuss aspects of voluntary behavior in multiple cloud organizations and bring to discussion how these aspects affect the cloud federation characteristics.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"80 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120925786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"A Negotiation-Based Resource Allocation Model in IaaS-Markets"
Benedikt Pittl, W. Mach, E. Schikuta. DOI: 10.1109/UCC.2015.20
IaaS providers typically trade resources using the inflexible supermarket approach: a provider offers a resource at a fixed price and consumers buy it without negotiating (take it or leave it). An alternative is an auction-based approach. Auctions have well-defined rules that ensure fair and transparent resource allocation, but these rules limit the flexibility of consumers and providers. In this paper we present a negotiation-based resource allocation mechanism following the offer-counteroffer negotiation protocol. On the one hand, this mechanism resembles the supermarket approach, since consumer and provider communicate directly; on the other hand, it also resembles an auction, since the price is determined dynamically. For evaluation we developed a Bazaar-Extension for CloudSim that runs negotiations and supports developing and simulating new negotiation strategies and market scenarios. We also introduce a negotiation strategy based on basic economic principles and apply it to an exemplary resource allocation scenario. The scenario shows that negotiation-based resource allocation can improve the welfare of both consumer and provider.
{"title":"A Negotiation-Based Resource Allocation Model in IaaS-Markets","authors":"Benedikt Pittl, W. Mach, E. Schikuta","doi":"10.1109/UCC.2015.20","DOIUrl":"https://doi.org/10.1109/UCC.2015.20","url":null,"abstract":"Usually, IaaS providers use the inflexible supermarket approach for trading resources: a provider offers a resource for a fixed price and consumers can buy the offered resources without negotiating with the provider (take it or leave it). Another possibility is an auction based approach. Auctions have well defined rules which are necessary to ensure fair and transparent resource allocation. However, these rules are limiting flexibility of consumers and providers. In this paper we present a negotiation based resource allocation mechanism following the offer-counteroffer negotiation protocol paradigm. On the one hand, this allocation mechanisms is similar to the supermarket approach as consumer and provider are able to communicate directly. On the other hand, the approach shows also similarities to auctions as the price is determined in a dynamic way. For justification and evaluation we developed a so called Bazaar-Extension for CloudSim which allows to run negotiations and to develop and simulate new negotiation strategies and market scenarios. Further a negotiation strategy considering basic economical principles is introduced in this paper which was used for an exemplary resource allocation scenario. The scenario shows that negotiation based resource allocation can improve the well-being of consumer and provider.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116344195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Evaluation of High Density GPUs as Sustainable Smart City Infrastructure"
Lei Shang, C. Lin, M. Atif, Allan Williams. DOI: 10.1109/UCC.2015.86
The Internet of Things (IoT) is driving the big data revolution in smart cities. To make informed, accurate and real-time decisions, smart cities have to invest in powerful computing infrastructure with a minimal total cost of ownership. Smart city infrastructure will need to process data from various scientific and engineering domains, such as weather variability, traffic management and disease control, in real time while keeping operational costs to a minimum. In this paper we build a case for using general-purpose GPUs (GPGPUs) as an alternative to traditional CPU-based computing. GPUs are an attractive option for smart city infrastructure because they provide efficient computing capacity compared with CPU-only solutions. However, we find that naive deployment of applications on high-density GPU systems results in lower scalability and performance. We show that a NUMA- and GPU-affinity-aware parallel execution model can lead to substantial speed-ups. Our results show that smart cities can save over 45% in infrastructure power and over 90% in data centre space if high-density GPU solutions are used.
{"title":"Evaluation of High Density GPUs as Sustainable Smart City Infrastructure","authors":"Lei Shang, C. Lin, M. Atif, Allan Williams","doi":"10.1109/UCC.2015.86","DOIUrl":"https://doi.org/10.1109/UCC.2015.86","url":null,"abstract":"Internet of things (IoT) is driving the big data revolution in smart cities. In order to make informed, accurate and real-time decisions, smart cities have to invest in powerful computing infrastructure with the minimal total cost of ownership. Smart city infrastructure will need to process data from various scientific and engineering domains like weather variability, traffic management, disease control etc in real-time while keeping the operational costs to minimum. In this paper we build a case for using General Purpose GPUs (GPGPU) as an alternate to the traditional CPU based computing. Utilising the GPUs in development of smart city infrastructure is an attractive alternate as it provides an efficient computing capacity when compared with traditional CPU only solutions. However, we find that naive deployment of applications on high-density GPUs results in lower scalability and performance. We show that designing a NUMA and GPU affinity aware parallel execution model can lead to substantial speed-ups. Our results show that smart cities can save over 45% in infrastructure power and over 90% in data centre space if high-density GPU solutions are used.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122219335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Towards Trusted eHealth Services in the Cloud"
A. Michalas, Rafael Dowsley. DOI: 10.1109/UCC.2015.108
As adoption of eHealth solutions advances, new computing paradigms such as cloud computing bring the potential to improve efficiency in managing medical health records and help reduce costs. However, these opportunities introduce new security risks that cannot be ignored. In this paper we present a forward-looking design for a privacy-preserving eHealth cloud system. The proposed solution is based on a Symmetric Searchable Encryption scheme that allows patients of an electronic healthcare system to store encrypted versions of their medical data and search directly over them without first decrypting them. As a result, the proposed protocol offers better protection than currently available solutions and paves the way for the next generation of eHealth systems.
{"title":"Towards Trusted eHealth Services in the Cloud","authors":"A. Michalas, Rafael Dowsley","doi":"10.1109/UCC.2015.108","DOIUrl":"https://doi.org/10.1109/UCC.2015.108","url":null,"abstract":"As adoption of eHealth solutions advances, new computing paradigms - such as cloud computing - bring the potential to improve efficiency in managing medical health records and help reduce costs. However, these opportunities introduce new security risks which can not be ignored. In this paper, we present a forward-looking design for a privacy-preserving eHealth cloud system. The proposed solution, is based on a Symmetric Searchable Encryption scheme that allows patients of an electronic healthcare system to securely store encrypted versions of their medical data and search directly on them without having to decrypt them first. As a result, the proposed protocol offers better protection than the current available solutions and paves the way for the next generation of eHealth systems.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130303060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"The GENiC Architecture for Integrated Data Centre Energy Management"
D. Pesch, A. Mcgibney, P. Sobonski, S. Rea, T. Scherer, L. Chen, Antonius P. J. Engbersen, D. Mehta, B. O’Sullivan, Enric Pages, J. Townley, Dhanaraja Kasinathan, J. Torrens, V. Zavřel, J. Hensen. DOI: 10.1109/UCC.2015.96
We present an architecture for integrated data centre energy management developed in the EC-funded GENiC project. The architecture was devised to create a platform that integrates functions for workload management, cooling, power management and control of heat recovery in future, highly efficient data centres. It follows a distributed systems approach that allows components developed by several entities to be integrated through defined interfaces and data formats. We also present use cases for the architecture, a brief description of the project's prototype implementation, evaluation metrics and some lessons learned.
{"title":"The GENiC Architecture for Integrated Data Centre Energy Management","authors":"D. Pesch, A. Mcgibney, P. Sobonski, S. Rea, T. Scherer, L. Chen, Antonius P. J. Engbersen, D. Mehta, B. O’Sullivan, Enric Pages, J. Townley, Dhanaraja Kasinathan, J. Torrens, V. Zavřel, J. Hensen","doi":"10.1109/UCC.2015.96","DOIUrl":"https://doi.org/10.1109/UCC.2015.96","url":null,"abstract":"We present an architecture for integrated data centre energy management developed in the EC funded GENiC project. The architecture was devised to create a platform that can integrate functions for workload management, cooling, power management and control of heat recovery for future, highly efficient data centres. The architecture is based on a distributed systems approach that allows the integration of components developed by several entities through defined interfaces and data formats. We also present use cases for the architecture, a brief description of the project's prototypical implementation, evaluation metrics and some lessons learned.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129007431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"A Distributed File System with Storage-Media Awareness"
H. Herodotou. DOI: 10.1109/UCC.2015.67
Improvements in memory, storage devices and network technologies are constantly exploited by distributed systems to meet the increasing data storage and I/O demands of modern large-scale data analytics. Some systems use memory and SSDs as a cache for local storage, while others combine local with network-attached storage to increase performance. However, no prior work has considered all storage layers together in a distributed setting. We present a novel design for a distributed file system that is aware of storage media (e.g., memory, SSDs, HDDs, NAS) with different capacities and performance characteristics. The storage media are explicitly exposed to users, allowing them to choose the distribution and placement of replicas in the cluster based on their own performance and fault-tolerance requirements. Meanwhile, the system offers a variety of pluggable policies for automating data management with the dual goal of increased performance and better cluster utilization. These two features combined open new research opportunities for data-intensive processing systems.
{"title":"A Distributed File System with Storage-Media Awareness","authors":"H. Herodotou","doi":"10.1109/UCC.2015.67","DOIUrl":"https://doi.org/10.1109/UCC.2015.67","url":null,"abstract":"Improvements in memory, storage devices, and network technologies are constantly exploited by distributed systems in order to meet the increasing data storage and I/O demands of modern large-scale data analytics. Some systems use memory and SSDs as a cache for local storage while others combine local with network-attached storage to increase performance. However, no work has ever looked at all layers together in a distributed setting. We present a novel design for a distributed file system that is aware of storage media (e.g., memory, SSDs, HDDs, NAS) with different capacities and performance characteristics. The storage media are explicitly exposed to users, allowing them to choose the distribution and placement of replicas in the cluster based on their own performance and fault tolerance requirements. Meanwhile, the system offers a variety of pluggable policies for automating data management with the dual goal of increased performance and better cluster utilization. These two features combined inspire new research opportunities for data-intensive processing systems.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126132651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"A Deadline Constrained Critical Path Heuristic for Cost-Effectively Scheduling Workflows"
Vahid Arabnejad, K. Bubendorfer, Bryan K. F. Ng, K. Chard. DOI: 10.1109/UCC.2015.41
Effective use of elastic, heterogeneous cloud resources presents a unique multi-objective scheduling challenge with respect to cost and time constraints. In this paper we introduce a novel deadline-constrained scheduling algorithm, Deadline Constrained Critical Path (DCCP), that schedules workloads on dynamically provisioned cloud resources. The DCCP algorithm consists of two stages, (i) task prioritization and (ii) task assignment, and builds on the concept of Constrained Critical Paths to execute a set of tasks on the same instance in order to reduce data movement between instances. We evaluated the normalized cost and success rate of DCCP and compared the results with IC-PCP. Overall, DCCP schedules at lower cost and exhibits a higher success rate in meeting deadline constraints.
{"title":"A Deadline Constrained Critical Path Heuristic for Cost-Effectively Scheduling Workflows","authors":"Vahid Arabnejad, K. Bubendorfer, Bryan K. F. Ng, K. Chard","doi":"10.1109/UCC.2015.41","DOIUrl":"https://doi.org/10.1109/UCC.2015.41","url":null,"abstract":"Effective use of elastic heterogeneous cloud resources represents a unique multi-objective scheduling challenge with respect to cost and time constraints. In this paper we introduce a novel deadline constrained scheduling algorithm, Deadline Constrained Critical Path (DCCP), that manages the scheduling of workloads on dynamically provisioned cloud resources. The DCCP algorithm consists of two stages: (i) task prioritization, and (ii) task assignment, and builds upon the concept of Constrained Critical Paths to execute a set of tasks on the same instance in order to fulfil our goal of reducing data movement between instances. We evaluated the normalized cost and success rate of DCCP and compared these results with IC-PCP. Overall, DCCP schedules with lower cost and exhibits a higher success rate in meeting deadline constraints.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133191196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Power Consumption of Virtualization Technologies: An Empirical Investigation"
Roberto Morabito. DOI: 10.1109/UCC.2015.93
Virtualization is growing rapidly, driven by the increasing number of alternative solutions in this area and the wide range of application fields. Until now, hypervisor-based virtualization has been the de facto standard for server virtualization. Recently, container-based virtualization, an alternative to hypervisors, has gained attention because of its lightweight characteristics, and cloud providers have already adopted it to deliver their services. However, existing research on containers leaves a gap in the area of power consumption. This paper presents a performance comparison, in terms of power consumption, of four virtualization technologies: KVM and Xen, which are hypervisor-based, and Docker and LXC, which are container-based. The aim of this empirical investigation, carried out on a testbed, is to understand how these technologies react to particular workloads. Our initial results show that, regardless of the number of virtual entities running, both kinds of virtualization behave similarly in the idle state and under CPU/memory stress tests. In contrast, the network performance results show differences between the two technologies.
{"title":"Power Consumption of Virtualization Technologies: An Empirical Investigation","authors":"Roberto Morabito","doi":"10.1109/UCC.2015.93","DOIUrl":"https://doi.org/10.1109/UCC.2015.93","url":null,"abstract":"Virtualization is growing rapidly as a result of the increasing number of alternative solutions in this area, and of the wide range of application field. Until now, hypervisor-based virtualization has been the de facto solution to perform server virtualization. Recently, container-based virtualization -- an alternative to hypervisors -- has gained more attention because of lightweight characteristics, attracting cloud providers that have already made use of it to deliver their services. However, a gap in the existing research on containers exists in the area of power consumption. This paper presents the results of a performance comparison in terms of power consumption of four different virtualization technologies: KVM and Xen, which are based on hypervisor virtualization, Docker and LXC which are based on container virtualization. The aim of this empirical investigation, carried out by means of a testbed, is to understand how these technologies react to particular workloads. Our initial results show how, despite of the number of virtual entities running, both kinds of virtualization alternatives behave similarly in idle state and in CPU/Memory stress test. Contrarily, the results on network performance show differences between the two technologies.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121846039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}