A major recent challenge in cloud computing is enhancing the energy efficiency of cloud data centers. Such enhancements can be achieved by improving resource allocation and management algorithms. In this paper, a model that identifies common patterns in the jobs submitted to the cloud is proposed. This model predicts the type of a submitted job and, accordingly, classifies the set of users' jobs into four subsets, each containing jobs with similar requirements. In addition to the jobs' common patterns and requirements, the users' history is considered in the job-type prediction model. The goal of job classification is to enable a placement strategy that improves energy efficiency. Following job classification, the best-fit virtual machine is allocated to each job. The virtual machines are then placed on physical machines according to a novel strategy called the Mixed Type Placement strategy. Its core idea is to place virtual machines of jobs of different types on the same physical machine whenever possible, based on the Knapsack Problem, because jobs of different types do not intensively use the same compute or storage resources in the physical machine. This strategy reduces the number of active physical machines, which leads to a major reduction in the data center's total energy consumption. Simulation results show that the presented strategy outperforms both Genetic Algorithm and Round Robin placement from an energy efficiency perspective.
{"title":"Job Classification in Cloud Computing: The Classification Effects on Energy Efficiency","authors":"Auday Aldulaimy, R. Zantout, A. Zekri, W. Itani","doi":"10.1109/UCC.2015.97","DOIUrl":"https://doi.org/10.1109/UCC.2015.97","url":null,"abstract":"One of the recent and major challenges in cloud computing is to enhance the energy efficiency in cloud data centers. Such enhancements can be done by improving the resource allocation and management algorithms. In this paper, a model that identifies common patterns for the jobs submitted to the cloud is proposed. This model is able to predict the type of the job submitted, and accordingly, the set of users' jobs is classified into four subsets. Each subset contains jobs that have similar requirements. In addition to the jobs' common pattern and requirements, the users' history is considered in the jobs' type prediction model. The goal of job classification is to find a way to propose useful strategy that helps improve energy efficiency. Following the process of jobs' classification, the best fit virtual machine is allocated to each job. Then, the virtual machines are placed to the physical machines according to a novel strategy called Mixed Type Placement strategy. The core idea of the proposed strategy is to place virtual machines of the jobs of different types in the same physical machine whenever possible, based on Knapsack Problem. This is because different types of jobs do not intensively use the same compute or storage resources in the physical machine. This strategy reduces the number of active physical machines which leads to major reduction in the total energy consumption in the data center. A simulation of the results shows that the presented strategy outperforms both Genetic Algorithm and Round Robin from an energy efficiency perspective.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"147 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124227570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
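The Mixed Type Placement idea — packing VMs of jobs of different types onto the same physical machine, knapsack-style, because they stress different resources — can be sketched as a greedy first-fit heuristic. The function name, two-resource capacity model, and the numbers below are illustrative assumptions, not the paper's exact formulation.

```python
# Greedy sketch of mixed-type VM placement: each physical machine (PM) has
# separate CPU and I/O capacity; CPU-bound and I/O-bound VMs stress different
# resources, so mixing types packs more VMs per PM. Illustrative only.

def place_vms(vms, cpu_cap=100, io_cap=100):
    """vms: list of (name, cpu_demand, io_demand). Returns a list of PMs,
    each a list of VM names, filled first-fit over already-open PMs."""
    pms = []  # each entry: [vm_names, cpu_used, io_used]
    for name, cpu, io in vms:
        for pm in pms:
            if pm[1] + cpu <= cpu_cap and pm[2] + io <= io_cap:
                pm[0].append(name)
                pm[1] += cpu
                pm[2] += io
                break
        else:
            pms.append([[name], cpu, io])
    return [pm[0] for pm in pms]

# Two CPU-heavy and two I/O-heavy VMs all fit on a single PM when mixed,
# because their demands are complementary.
mixed = place_vms([("cpu1", 45, 5), ("io1", 5, 45),
                   ("cpu2", 45, 5), ("io2", 5, 45)])
```

With same-type-only placement, the two CPU-heavy VMs alone would nearly fill one PM's CPU, forcing a second active PM; mixing types keeps one PM active, which is exactly the energy argument the abstract makes.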
As of 2010, data centers used 1.5% of global electricity production, and this share is expected to keep growing [1]. There is a need for a near real-time power consumption modeling/monitoring system that can be used at scale within a Software Defined Data Center (SDDC). The power consumption models and the information they provide can then be used to make better data center orchestration decisions, e.g., whether to migrate virtual machines to reduce power consumption. We propose a scalable system that 1) creates initial power consumption models, as needed, for data center components, and 2) continually refines them while the components are in use. The models will be used for near real-time monitoring of power consumption, as well as for predicting power consumption before and after potential orchestration decisions. The first step towards this goal of whole data center power modeling and prediction is to predict the power consumption of one server effectively, based on high-level utilization statistics from that server. In this paper, we present a novel method for modeling the whole-system power consumption of a server under varying random levels of CPU utilization, with a scalable random-forest-based model that uses statistics available at the data center management level.
{"title":"Towards Power Consumption Modeling for Servers at Scale","authors":"Timothy W. Harton, C. Walker, M. O'Sullivan","doi":"10.1109/UCC.2015.50","DOIUrl":"https://doi.org/10.1109/UCC.2015.50","url":null,"abstract":"As of 2010 data centers use 1.5% of global electricity production and this is expected to keep growing [1]. There is a need for a near real-time power consumption modeling/monitoring system that could be used at scale within a Software Defined Data Center (SDDC). The power consumption models and information they provide can then be used to make better decisions for data center orchestration, e.g., whether to migrate virtual machines to reduce power consumption. We propose a scalable system that would 1) create initial power consumption models, as needed, for data center components, and 2) could be continually refined while the components are in use. The models will be used for the near real-time monitoring of power consumption, as well as predicting power consumption before and after potential orchestration decisions. The first step towards this goal of whole data center power modeling and prediction is to be able to predict the power consumption of one server effectively, based on high level utilization statistics from that server. In this paper we present a novel method for modeling whole system power consumption for a server, under varying random levels of CPU utilization, with a scalable random forest based model, that utilizes statistics available at the data center management level.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"69 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121926792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
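For context on utilization-based power modeling: learned models such as the random forest above are commonly compared against the classic linear baseline P(u) = P_idle + (P_peak − P_idle)·u. The sketch below shows that baseline, not the paper's model; the wattage constants are illustrative assumptions.

```python
# Classic linear server power model from CPU utilization u in [0, 1]:
#   P(u) = P_idle + (P_peak - P_idle) * u
# Often used as the baseline that learned (e.g. random-forest) models are
# measured against. The constants here are assumed, not measured values.

P_IDLE = 120.0  # watts drawn at 0% CPU utilization (illustrative)
P_PEAK = 250.0  # watts drawn at 100% CPU utilization (illustrative)

def linear_power(util):
    """Predict whole-system power (watts) from CPU utilization in [0, 1]."""
    if not 0.0 <= util <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    return P_IDLE + (P_PEAK - P_IDLE) * util

half_load = linear_power(0.5)  # 185.0 W under these assumed constants
```

A random-forest model can capture the non-linearities (e.g. memory and disk activity) that this single-feature linear fit misses, which is the gap the paper targets.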
Takfarinas Saber, Anthony Ventresque, I. Brandić, James Thorburn, L. Murphy
Optimising the IT infrastructure of large, often geographically distributed organisations goes beyond the classical virtual machine reassignment problem, for two reasons: (i) the data centres of these organisations are composed of a number of hosting departments that have different preferences on what to host and where to host it, and (ii) the top-level managers of these data centres make complex decisions and need to manipulate candidate solutions favouring different objectives to find the right balance. This challenge has not yet been comprehensively addressed in the literature, and in this paper we demonstrate that multi-objective VM reassignment is feasible for large decentralised data centres. We show on a realistic data set that our solution outperforms other classical multi-objective algorithms for VM reassignment in both the quantity of solutions (by about 15% on average) and the quality of the solution set (by over 6% on average).
{"title":"Towards a Multi-objective VM Reassignment for Large Decentralised Data Centres","authors":"Takfarinas Saber, Anthony Ventresque, I. Brandić, James Thorburn, L. Murphy","doi":"10.1109/UCC.2015.21","DOIUrl":"https://doi.org/10.1109/UCC.2015.21","url":null,"abstract":"Optimising the IT infrastructure of large, often geographically distributed, organisations goes beyond the classical virtual machine reassignment problem, for two reasons: (i) the data centres of these organisations are composed of a number of hosting departments which have different preferences on what to host and where to host it, (ii) the top-level managers in these data centres make complex decisions and need to manipulate possible solutions favouring different objectives to find the right balance. This challenge has not yet been comprehensively addressed in the literature and in this paper we demonstrate that a multi-objective VM reassignment is feasible for large decentralised data centres. We show on a realistic data set that our solution outperforms other classical multi-objective algorithms for VM reassignment in terms of quantity of solutions (by about 15% on average) and quality of the solutions set (by over 6% on average).","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131305447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
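Comparing sets of reassignment solutions across several objectives typically rests on Pareto dominance. A minimal sketch of filtering a candidate set down to its non-dominated front, assuming all objectives are minimized (the objective names in the example are illustrative):

```python
# Pareto front extraction for minimization objectives. Solution a dominates
# solution b if a is no worse in every objective and strictly better in at
# least one; the front is the set of solutions nothing dominates.

def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Objectives: (energy cost, migration cost) -- both to be minimized.
front = pareto_front([(10, 5), (8, 7), (12, 4), (9, 9), (11, 6)])
```

Here (9, 9) and (11, 6) are dominated by (8, 7) and (10, 5) respectively, so the front keeps the three trade-off points a decision maker would actually choose among — the "quantity of solutions" metric in the abstract counts exactly such non-dominated points.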
Cloud Computing provides essential tools for building modern mobile applications. To leverage the advantages of the Cloud for developing and scaling applications, mobile developers must perform a technical analysis of the options currently available on the market. The objective of this paper is to investigate the main considerations in hosting a mobile application's back-end in the Cloud, specifically ease of deployment and application performance. We conducted a comprehensive performance analysis of three popular Platform-as-a-Service providers. Results show that there are important differences in performance and other aspects of deployment that mobile application developers should consider.
{"title":"Performance Study of Cloud Computing Back-End Solutions for Mobile Applications","authors":"Guilherme Macedo, Christina Thorpe","doi":"10.1109/UCC.2015.52","DOIUrl":"https://doi.org/10.1109/UCC.2015.52","url":null,"abstract":"Cloud Computing provides essential tools for building modern mobile applications. In order to leverage the advantages of the Cloud for developing and scaling applications, mobile developers must perform a technical analysis of the options currently available on the market. The objective of this paper is to investigate the various considerations of hosting mobile applications' back-end in the Cloud, more specifically, the ease of deployment and the application performance. We conducted a comprehensive performance analysis of three popular Platform-as-a-Service providers. Results show that there are important differences in the performance and other aspects of deployment that should be considered by mobile application developers.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133372037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. Scherer, Ji Xue, Feng Yan, R. Birke, L. Chen, E. Smirni
We present a web-based tool to demonstrate PRACTISE, a neural-network-based framework for efficient and accurate prediction of server workload time series in data centers. For the evaluation, we focus on resource utilization traces of CPU, memory, disk, and network. Compared with ARIMA and baseline neural network models, PRACTISE achieves significantly smaller average prediction errors. We demonstrate the benefits of PRACTISE in two scenarios: i) using recorded resource utilization traces from private cloud data centers, and ii) using real-time data collected from live data center systems.
{"title":"PRACTISE -- Demonstrating a Neural Network Based Framework for Robust Prediction of Data Center Workload","authors":"T. Scherer, Ji Xue, Feng Yan, R. Birke, L. Chen, E. Smirni","doi":"10.1109/UCC.2015.65","DOIUrl":"https://doi.org/10.1109/UCC.2015.65","url":null,"abstract":"We present a web based tool to demonstrate PRACTISE, a neural network based framework for efficient and accurate prediction of server workload time series in data centers. For the evaluation, we focus on resource utilization traces of CPU, memory, disk, and network. Compared with ARIMA and baseline neural network models, PRACTISE achieves significantly smaller average prediction errors. We demonstrate the benefits of PRACTISE in two scenarios: i) using recorded resource utilization traces from private cloud data centers, and ii) using real-time data collected from live data center systems.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116357647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
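Workload predictors like PRACTISE are judged by their error against simple baselines. A minimal sketch of one such baseline — a sliding-window moving-average forecaster scored with mean absolute error (MAE) — on a made-up CPU utilization trace; this is an illustration of the evaluation setup, not the paper's models:

```python
# Moving-average baseline for a utilization time series: predict the next
# value as the mean of the last `window` observations, then score with MAE.
# Learned predictors are expected to beat this kind of baseline.

def moving_average_forecast(series, window):
    """One-step-ahead forecasts for series[window:]."""
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series))]

def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(predicted)

cpu = [50, 52, 48, 51, 49, 53, 47]  # % CPU utilization (made-up trace)
preds = moving_average_forecast(cpu, window=3)
error = mae(cpu[3:], preds)         # baseline error on this toy trace
```

The same MAE computation applied to an ARIMA or neural-network forecaster gives the "average prediction errors" that the abstract compares.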
With energy consumption an issue of growing concern in large-scale cloud data centers, providers may wish to impose restrictions on the power usage of their hosts. This raises the challenge of operating cloud resources under power limits that may vary over time. Motivated by such a constraint, this paper considers the problem of scheduling scientific workflows in an environment where the number of VMs available is limited by a time-varying power cap. A simple scheduling algorithm for such cases is proposed and experimentally evaluated.
{"title":"Workflow Scheduling on Power Constrained VMs","authors":"D. Shepherd, Ilia Pietri, R. Sakellariou","doi":"10.1109/UCC.2015.74","DOIUrl":"https://doi.org/10.1109/UCC.2015.74","url":null,"abstract":"With energy consumption being an issue of growing concern in large-scale cloud data centers, providers may wish to impose restrictions on the power usage of the hosts. This raises the challenge of operating cloud resources under power limits which may vary over time. Motivated by such a constraint, this paper considers the problem of scheduling scientific workflows in an environment where the number of VMs available is limited by a time-varying power cap. A simple scheduling algorithm for such cases is proposed and experimentally evaluated.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"418 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116579957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
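A scheduler of the kind described — the number of concurrently active VMs limited by a time-varying power cap — can be sketched as a simple greedy loop. The per-VM power draw and the cap profile below are assumptions for illustration, not figures from the paper:

```python
# Greedy sketch: at each time step, start as many pending unit-length tasks
# as the current power cap allows, assuming each active VM draws a fixed
# POWER_PER_VM watts. Illustrative simplification of power-capped scheduling.

POWER_PER_VM = 200  # watts drawn per active VM (assumed)

def schedule(num_tasks, power_caps):
    """power_caps: watts available at each time step. Returns the number of
    tasks completed per step; raises if the cap profile ends too early."""
    completed_per_step = []
    remaining = num_tasks
    for cap in power_caps:
        vms = min(remaining, cap // POWER_PER_VM)  # VM count under the cap
        completed_per_step.append(vms)
        remaining -= vms
        if remaining == 0:
            return completed_per_step
    raise RuntimeError(f"{remaining} tasks left when the cap profile ended")

# 10 unit tasks under a cap that dips mid-way: 1000 W allows 5 VMs, 400 W
# allows only 2, then 800 W lets the last 3 finish.
plan = schedule(10, [1000, 400, 800])
```

Real workflow tasks have dependencies and varying lengths, which is where the paper's algorithm goes beyond this sketch; the cap-driven throttling per time step is the shared core idea.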
Bare-metal clouds are an emerging and attractive platform for cloud users who demand extreme computing performance. Bare-metal clouds lease physical machines rather than virtual machines, eliminating virtualization overhead and providing maximum hardware performance. They are therefore suitable for applications that require intensive, consistent, and predictable performance, such as big-data and high-performance computing applications. Unfortunately, existing bare-metal clouds do not support live migration because they lack virtualization layers. Live migration is an essential feature for bare-metal cloud vendors: it enables proactive maintenance and fault tolerance that avoid long application downtime when the underlying physical hardware is about to fail. Existing live migration approaches incur either virtualization overhead or OS dependence and are therefore unsuitable for bare-metal clouds. This paper introduces an OS-independent live migration scheme for bare-metal clouds. We utilize a very thin hypervisor layer that does not virtualize hardware and directly exposes physical hardware to the guest OS. During live migration, the hypervisor carefully monitors and controls access to physical devices to capture, transfer, and restore device states while the guest OS is still controlling the devices. After live migration, the hypervisor does almost nothing, eliminating virtualization overhead and providing bare-metal performance to the guest OS. Experimental results confirmed that the network performance of our system is comparable to that of bare-metal machines.
{"title":"OS-Independent Live Migration Scheme for Bare-Metal Clouds","authors":"Takaaki Fukai, Yushi Omote, Takahiro Shinagawa, Kazuhiko Kato","doi":"10.1109/UCC.2015.23","DOIUrl":"https://doi.org/10.1109/UCC.2015.23","url":null,"abstract":"Bare-metal clouds are an emerging and attractive platform for cloud users who demand extreme computer performance. Bare-metal clouds lease physical machines rather than virtual machines, eliminating a virtualization overhead and providing maximum computer hardware performance. Therefore, bare-metal clouds are suitable for applications that require intensive, consistent, and predictable performance, such as big-data and high-performance computing applications. Unfortunately, existing bare-metal clouds do not support live migration because they lack virtualization layers. Live migration is an essential feature for bare-metal cloud vendors to perform proactive maintenance and fault tolerance that can avoid long user application downtime when underlying physical hardware is about to fail. Existing live migration approaches require either a virtualization overhead or OS-dependence and are therefore unsuitable for bare-metal clouds. This paper introduces an OS-independent live migration scheme for bare-metal clouds. We utilize a very thin hypervisor layer that does not virtualize hardware and directly exposes physical hardware to a guest OS. During live migration, the hypervisor carefully monitors and controls access to physical devices to capture, transfer, and restore the device states while the guest OS is still controlling the devices. After live migration, the hypervisor does almost nothing to eliminate the virtualization overhead and provide bare-metal performance for the guest OS. Experimental results confirmed that network performance of our system was comparable with that of bare-metal machines.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123775337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Takefusa, J. Haga, U. Toseef, T. Ikeda, T. Kudoh, J. Tanaka, K. Pentikousis
FELIX federates existing Future Internet (FI) experimental facilities across continents to build a test environment for large-scale SDN experiments. The management framework developed by FELIX allows the execution of experimental network services in a distributed environment comprising heterogeneous resources. The demonstration described in this paper showcases the implementation of the FELIX architecture over federated experimental facilities across Japan and Europe, leveraging both the infrastructure resources and the FELIX management stack. The presented use case also provides an important experimental scenario for data center operators who are developing Business Continuity Planning for IT services.
{"title":"Realizing Business Continuity Planning over FELIX Infrastructure","authors":"A. Takefusa, J. Haga, U. Toseef, T. Ikeda, T. Kudoh, J. Tanaka, K. Pentikousis","doi":"10.1109/UCC.2015.72","DOIUrl":"https://doi.org/10.1109/UCC.2015.72","url":null,"abstract":"FELIX federates existing Future Internet (FI) experimental facilities across continents to build a test environment for large-scale SDN experiments. The management framework developed by FELIX allows the execution of experimental network services in a distributed environment comprised of heterogeneous resources. The demonstration described in this paper showcases the implementation of the FELIX architecture over the federated experimental facilities across Japan and Europe leveraging on both the infrastructure resources and the FELIX management stack. The presented use-case also provides an important experimental scenario for data center operators who are developing Business Continuity Planning for IT services.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122226327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The objective of this thesis is to design efficient algorithms and architectures for enabling a Sensing as a Service paradigm in the era of the Internet of Things. With the widespread deployment of sensor architectures and sensor-enabled applications around the globe, our planet is witnessing unprecedented instrumentation. The emerging paradigm of Sensing as a Service is replete with open challenges, from systematic sensor deployment and regulated data collection to efficient data aggregation, scalable execution, and proper participation. This dissertation aims to address some of these open challenges from a purely algorithmic perspective: to examine each of the crucial pieces outlined above in the light of algorithmic design and devise mechanisms that are both practical and theoretically well-founded. The experiments are planned on real-world data and are hence expected to allow us to examine the efficacy of our proposals in a realistic setting.
{"title":"Algorithmic Strategies for Sensing-as-a-Service in the Internet-of-Things Era","authors":"S. Chattopadhyay, A. Banerjee","doi":"10.1109/UCC.2015.62","DOIUrl":"https://doi.org/10.1109/UCC.2015.62","url":null,"abstract":"The objective of this thesis is to design efficient algorithms and architectures for enabling a Sensing as a Service paradigm in the recent era of Internet-of-things. With the widespread deployment of sensor architectures and sensor-enabled applications all around the globe, our planet today is witnessing an unprecedented instrumentation. The emerging paradigm of Sensing as a Service is replete with many open challenges, starting from systematic sensor deployment, regulated data collection, efficient data aggregation, scalable execution and proper participation. This dissertation aims to address some of these open challenges and attempts to carve a niche proposition by handling these problems from a purely algorithmic perspective. The objective is to examine each of the crucial pieces outlined above in the light of algorithmic design and come up with efficient mechanisms that are both practical and theoretically well-founded. The experiments are planned on real world data and hence, are expected to allow us to examine the efficacy of our proposals in a realistic setting.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128777347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I. Hamling, M. O'Sullivan, C. Walker, Clemens Thielen
The concept of using distributed computing to supply video games to end users has been growing in popularity, and Internet cafés are one potential application. We consider a cloud-based model for Internet cafés in which servers provide virtual machines with different specifications to meet different kinds of user demand (web browsing, low-end gaming, medium-end gaming, and high-end gaming). In an Internet café, users arrive throughout the day with different demands and stay for different durations. Given the user demand over time and a fixed hardware setup of servers, the task consists of choosing which users to accept and how to allocate the accepted users to the servers in order to maximize the total profit of the Internet café. We formulate an integer programming model for computing an optimal choice of users to accept together with an efficient allocation of accepted users to servers. Computational results show that, when users are allocated efficiently, a cloud-based setting with servers providing virtual machines that exactly meet users' demands can greatly improve resource efficiency in Internet cafés compared to classical zoning models that use desktop computers. At the same time, the total profit obtained from accepting users can be improved significantly, owing to the added flexibility of an optimized user acceptance strategy.
{"title":"Improving Resource Efficiency in Internet Cafés by Virtualization and Optimal User Allocation","authors":"I. Hamling, M. O'Sullivan, C. Walker, Clemens Thielen","doi":"10.1109/UCC.2015.17","DOIUrl":"https://doi.org/10.1109/UCC.2015.17","url":null,"abstract":"The concept of using distributed computing to supply video games to end users has been growing in popularity. Internet cafés are one potential application for this concept. We consider a cloud-based model for Internet cafés where servers provide virtual machines with different specifications in order to meet different kinds of user demand (web browsing, low end gaming, medium end gaming, and high end gaming). In an Internet café, users arrive throughout the day with different demands and different durations for which they stay. Given the user demand over time and a fixed hardware set-up of servers, the task then consists of choosing which users to accept and how to allocate the accepted users to the servers in order to maximize the total profit of the Internet café. We formulate an integer programming model for computing an optimal choice of users to accept together with an efficient allocation of accepted users to servers. Computational results show that, when allocating users efficiently, using a cloud-based setting with servers providing virtual machines that exactly meet the users' demands can greatly improve resource efficiency in Internet cafés compared to classical zoning models that use desktop computers. At the same time, the total profit obtained from accepting users can be improved significantly due to the added flexibility when using an optimized user acceptance strategy.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132524562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
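The paper solves user acceptance and allocation exactly with integer programming. For contrast, a common heuristic alternative — accept users in order of profit per unit of capacity demanded — can be sketched as below. The single-resource capacity model, user names, and numbers are illustrative simplifications, not the paper's formulation.

```python
# Greedy acceptance heuristic for one server with limited capacity: sort
# users by profit density (profit / capacity demand) and accept while
# capacity remains. A knapsack-style simplification of the paper's ILP,
# which handles multiple servers and time-varying arrivals exactly.

def accept_users(users, capacity):
    """users: list of (name, capacity_demand, profit).
    Returns (accepted names, total profit)."""
    accepted, total_profit = [], 0
    for name, demand, profit in sorted(users,
                                       key=lambda u: u[2] / u[1],
                                       reverse=True):
        if demand <= capacity:
            accepted.append(name)
            capacity -= demand
            total_profit += profit
    return accepted, total_profit

# Browsing is cheap; high-end gaming pays more but demands much more
# capacity (all values assumed for illustration).
chosen, total = accept_users(
    [("web", 1, 2), ("low", 2, 3), ("mid", 4, 7), ("high", 8, 12)],
    capacity=10)
```

Greedy can leave profit on the table when a heavy, high-profit user is crowded out by lighter ones, which is precisely why an exact integer program is worth formulating for the café operator's problem.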