S. Ruehl, Malte Rupprecht, Bjorn Morr, Matthias Reinhardt, S. Verclas
Software-as-a-Service (SaaS) is a delivery model whose basic idea is to provide applications to the customer on demand over the Internet. SaaS thereby promotes multi-tenancy as a tool to exploit economies of scale: a single application instance serves multiple customers. However, a major drawback of SaaS is customers' hesitation to share infrastructure, application code, or data with other tenants, since one of the major threats of multi-tenancy is information disclosure caused by a system malfunction, system error, or malicious actions. So far, the only approach in research to counteract this hesitation has been to enhance the isolation between tenants using the same instance. Our approach (presented in earlier work) tackles this hesitation differently: it allows customers to choose whether, or even with whom, they want to share the application. The approach enables customers to define their constraints for individual application components and the underlying infrastructure. The contribution of this paper is an analysis of the real-world applicability of the mixed-tenancy approach, conducted experimentally by applying it to OpenERP, an open-source enterprise resource planning system used in industry. The conclusion drawn from this experiment is that the mixed-tenancy approach is technically realizable for real-world cases. However, there are scenarios where it is not economically worthwhile for the operator.
"Mixed-Tenancy in the Wild - Applicability of Mixed-Tenancy for Real-World Enterprise SaaS-Applications". 2014 IEEE 7th International Conference on Cloud Computing. DOI: 10.1109/CLOUD.2014.119.
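The sharing model described above can be sketched as a grouping problem: tenants state whom they accept as co-tenants, and instances are formed as mutually acceptable groups. Everything below (tenant names, acceptance sets, the grouping heuristic) is a hypothetical illustration, not the paper's deployment algorithm:

```python
# Hypothetical sketch of the mixed-tenancy idea: each tenant states, per
# component, which other tenants it tolerates sharing with; application
# instances are then formed as groups that every member accepts.
def group_tenants(tenants, accepts):
    """accepts[t] = set of tenants t tolerates sharing with (incl. itself)."""
    groups = []
    for t in tenants:
        for g in groups:
            # t may join g only if acceptance is mutual with every member.
            if all(u in accepts[t] and t in accepts[u] for u in g):
                g.add(t)
                break
        else:
            groups.append({t})  # no acceptable group: dedicated instance
    return groups

accepts = {
    "A": {"A", "B"},        # A shares only with B
    "B": {"A", "B", "C"},   # B accepts everyone
    "C": {"B", "C"},        # C refuses to share with A
}
groups = group_tenants(["A", "B", "C"], accepts)
print(len(groups))  # A and B share one instance; C needs its own
```

A first-fit grouping like this is only a heuristic; minimizing the number of instances under such constraints is a graph-coloring-like problem in general.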
This paper presents a model-driven approach to define and then coordinate the execution of protocols for secure outsourcing of computation over large datasets in cloud computing environments. First, we present our Outsourcing Protocol Definition Language (OPDL), used to define machine-processable protocols in an abstract and declarative way while leaving implementation details to the underlying runtime components. The proposed language aims to simplify the design of these protocols while allowing their verification and the generation of cloud service compositions to coordinate protocol execution. We evaluated the expressiveness of OPDL by using it to define a set of representative secure outsourcing protocols from the literature.
"A Model Driven Framework for Secure Outsourcing of Computation to the Cloud". M. Nassar, A. Erradi, Farida Sabry, Q. Malluhi. 2014 IEEE 7th International Conference on Cloud Computing. DOI: 10.1109/CLOUD.2014.145.
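OPDL itself is not reproduced in the abstract; the toy sketch below only illustrates the separation it describes between a declarative protocol description and the runtime components that implement it. All step names and the additive-blinding scheme are invented for illustration:

```python
# Hypothetical sketch of the declarative idea behind OPDL: the protocol is an
# abstract, machine-processable list of steps; a runtime maps each step to a
# concrete handler, keeping specification and implementation apart.
PROTOCOL = [  # declarative description, no implementation details
    {"step": "encode", "actor": "client"},
    {"step": "outsource", "actor": "cloud"},
    {"step": "decode", "actor": "client"},
]

HANDLERS = {  # implementation details live in the runtime components
    "encode": lambda xs: [x + 7 for x in xs],       # toy additive blinding
    "outsource": lambda xs: [x * 2 for x in xs],    # the delegated computation
    "decode": lambda xs: [x - 14 for x in xs],      # unblind: 2*(x+7)-14 = 2*x
}

def execute(protocol, data):
    # Coordinate execution by interpreting the declarative step list.
    for step in protocol:
        data = HANDLERS[step["step"]](data)
    return data

print(execute(PROTOCOL, [1, 2, 3]))  # same result as doubling directly
```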
Ahsanul Haque, Brandon Parker, L. Khan, B. Thuraisingham
Big Data Stream mining has inherent challenges that are not present in traditional data mining. Not only does a Big Data Stream receive a large volume of data continuously, but it may also have different types of features. Moreover, the concepts and features tend to evolve throughout the stream. Traditional data mining techniques are not sufficient to address these challenges. In our current work, we have designed a multi-tiered ensemble-based method, HSMiner, to address the aforementioned challenges and label instances in an evolving Big Data Stream. However, this method requires building a large number of AdaBoost ensembles, one for each of the numeric features, after receiving each new data chunk, which is very costly. Thus, HSMiner may face scalability issues when classifying a Big Data Stream. To address this problem, we propose three approaches to build this large number of AdaBoost ensembles using MapReduce-based parallelism. We compare these approaches from different design aspects and empirically show that they help our base method achieve significant scalability and speedup.
"Evolving Big Data Stream Classification with MapReduce". 2014 IEEE 7th International Conference on Cloud Computing. DOI: 10.1109/CLOUD.2014.82.
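The per-feature ensemble construction can be illustrated with a toy in-process map/reduce simulation. The weak learners (decision stumps), round count, and data below are invented, and a real deployment would run actual MapReduce jobs rather than Python dicts:

```python
# Hypothetical sketch (not the paper's code): after each data chunk, a map
# phase routes every (feature, value, label) record to its feature key, and a
# reduce phase trains one small AdaBoost ensemble per numeric feature.
from collections import defaultdict
import math

def train_stump(rows, weights):
    """Pick the threshold on one feature minimizing weighted error."""
    best = None
    for thr in sorted({x for x, _ in rows}):
        for sign in (1, -1):
            err = sum(w for (x, y), w in zip(rows, weights)
                      if (sign if x >= thr else -sign) != y)
            if best is None or err < best[0]:
                best = (err, thr, sign)
    return best

def adaboost_one_feature(rows, rounds=3):
    n = len(rows)
    weights = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, thr, sign = train_stump(rows, weights)
        err = min(max(err, 1e-10), 1 - 1e-10)        # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, sign))
        # Re-weight: boost the misclassified instances.
        weights = [w * math.exp(-alpha * y * (sign if x >= thr else -sign))
                   for (x, y), w in zip(rows, weights)]
        z = sum(weights)
        weights = [w / z for w in weights]
    return ensemble

def map_phase(chunk):
    # Map: emit (feature_index, (value, label)) for every instance.
    for features, label in chunk:
        for j, x in enumerate(features):
            yield j, (x, label)

def reduce_phase(grouped):
    # Reduce: one independent ensemble per feature (parallelizable).
    return {j: adaboost_one_feature(rows) for j, rows in grouped.items()}

chunk = [((0.2, 5.0), -1), ((0.9, 4.1), 1), ((0.8, 6.3), 1), ((0.1, 6.0), -1)]
grouped = defaultdict(list)
for j, rec in map_phase(chunk):
    grouped[j].append(rec)
ensembles = reduce_phase(grouped)
print(len(ensembles))  # one ensemble per numeric feature
```

Because the per-feature reducers are independent, this is the kind of work that parallelizes naturally across MapReduce workers.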
Virtualization technology allows dynamic allocation of VMs to servers, which reduces server demand and increases the energy efficiency of data centers. Dynamic control strategies migrate VMs between servers depending on their actual workload, a concept that promises further improvements in VM allocation efficiency. In this paper we evaluate the applicability of DSAP in a deterministic environment. DSAP is a linear program that calculates VM allocations and live migrations from workload patterns known a priori. Efficiency is evaluated by simulations as well as on an experimental test-bed infrastructure, and results are compared against alternative control approaches that we studied in preliminary work. Our findings are that dynamic allocation can reduce server demand at a reasonable service quality, but countermeasures are required to keep the number of live migrations under control.
"Evaluating Dynamic Resource Allocation Strategies in Virtualized Data Centers". A. Wolke, Lukas Ziegler. 2014 IEEE 7th International Conference on Cloud Computing. DOI: 10.1109/CLOUD.2014.52.
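DSAP's exact linear program is not given in the abstract. As a rough illustration of the allocation-versus-migration trade-off it optimizes when workloads are known a priori, here is a greedy stand-in (first-fit decreasing per time step) with made-up capacities and demands:

```python
# Hypothetical sketch (not the DSAP formulation): re-pack VMs onto servers at
# every time step of an a-priori-known workload, then count how many live
# migrations that plan would trigger. DSAP solves this jointly as an LP.
CAPACITY = 100  # assumed server capacity in CPU units

def pack(demands):
    """First-fit decreasing: returns ({vm: server}, servers_used)."""
    servers, placement = [], {}
    for vm in sorted(demands, key=demands.get, reverse=True):
        for s, load in enumerate(servers):
            if load + demands[vm] <= CAPACITY:
                servers[s] += demands[vm]
                placement[vm] = s
                break
        else:
            servers.append(demands[vm])       # open a new server
            placement[vm] = len(servers) - 1
    return placement, len(servers)

# Workload known a priori: one {vm: demand} dict per time step.
timeline = [{"a": 60, "b": 60, "c": 30}, {"a": 30, "b": 30, "c": 30}]
prev, migrations, servers_used = None, 0, []
for demands in timeline:
    placement, n = pack(demands)
    if prev:
        migrations += sum(1 for vm in placement if placement[vm] != prev[vm])
    prev = placement
    servers_used.append(n)
print(servers_used, migrations)
```

Even this toy shows the paper's tension: consolidating from two servers down to one saves energy but costs a live migration, which is why countermeasures against excessive migrations matter.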
Hadoop is an emerging framework for parallel big data processing. While it is becoming popular, Hadoop is too complex for regular users to fully understand all the system parameters and tune them appropriately. Especially when processing a batch of jobs, the default Hadoop settings may cause inefficient resource utilization and unnecessarily prolong execution time. This paper considers an extremely important setting, the slot configuration, which by default is fixed and static. We propose an enhanced Hadoop system called FRESH which can derive the best slot setting, dynamically configure slots, and appropriately assign tasks to the available slots. The experimental results show that when serving a batch of MapReduce jobs, FRESH significantly improves the makespan as well as the fairness among jobs.
"FRESH: Fair and Efficient Slot Configuration and Scheduling for Hadoop Clusters". Jiayin Wang, Yi Yao, Ying Mao, B. Sheng, N. Mi. 2014 IEEE 7th International Conference on Cloud Computing. DOI: 10.1109/CLOUD.2014.106.
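A minimal sketch of the intuition behind dynamic slot configuration (this is not FRESH's actual algorithm; the cost model and numbers are invented): instead of a static map/reduce slot split, pick the split that minimizes an estimated makespan for the pending workload.

```python
# Hypothetical sketch: choose the per-node split of a fixed slot budget
# between map and reduce slots by minimizing a crude makespan estimate,
# rather than keeping the static default split.
def estimate_makespan(map_work, reduce_work, map_slots, reduce_slots):
    # Crude model: the two phases run back to back, each at the
    # parallelism its slot count allows.
    return map_work / map_slots + reduce_work / reduce_slots

def best_split(map_work, reduce_work, total_slots):
    candidates = ((m, total_slots - m) for m in range(1, total_slots))
    return min(candidates,
               key=lambda c: estimate_makespan(map_work, reduce_work, *c))

# 8 slots per node, workload dominated by map tasks.
split = best_split(map_work=120.0, reduce_work=40.0, total_slots=8)
print(split)  # map-heavy workload gets more map slots than the 50/50 default
```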
Soramichi Akiyama, Takahiro Hirofuchi, Ryousei Takano, S. Honiden
Virtualization techniques greatly benefit cloud computing. Live migration enables a datacenter to dynamically relocate virtual machines (VMs) without disrupting the services running on them. Efficient live migration is the key to improving the energy efficiency and resource utilization of a datacenter through dynamic placement of VMs. Recent studies have achieved efficient live migration by deleting the page cache of the guest OS to shrink its memory footprint before a migration. However, these studies do not solve the problem of the IO performance penalty after a migration caused by the loss of the page cache. We propose an advanced memory transfer mechanism for live migration that skips transferring the page cache to shorten total migration time, while restoring it transparently to the guest OS via the SAN to prevent an IO performance penalty. To start a migration, our mechanism collects the mapping information between the page cache and disk blocks. During a migration, the source host skips transferring the page cache but transfers the other memory content, while the destination host reads the same data as the page cache from the disk blocks via the SAN. Experiments with web server and database workloads showed that our mechanism reduced total migration time with a significantly small IO performance penalty.
"Fast Live Migration with Small IO Performance Penalty by Exploiting SAN in Parallel". 2014 IEEE 7th International Conference on Cloud Computing. DOI: 10.1109/CLOUD.2014.16.
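The planning step can be sketched as a simple partition of the guest's memory: pages backed by the page cache are read from the SAN by the destination, everything else goes over the network. The page IDs, block numbers, and page size below are invented, and this is only an illustration of the idea, not the authors' implementation:

```python
# Hypothetical sketch: given the mapping from guest page-cache pages to disk
# blocks, split a migration into the memory the source must send over the
# network and the blocks the destination can fetch from the SAN in parallel.
PAGE = 4096  # assumed page size in bytes

def plan_transfer(all_pages, cache_map):
    """cache_map: {page_id: disk_block} for pages backed by the page cache."""
    send_over_network = [p for p in all_pages if p not in cache_map]
    read_from_san = sorted(cache_map.values())
    return send_over_network, read_from_san

pages = list(range(10))                 # guest has 10 memory pages
cache_map = {2: 700, 3: 701, 7: 900}    # 3 of them are clean page cache
net, san = plan_transfer(pages, cache_map)
saved = len(cache_map) * PAGE           # bytes kept off the migration link
print(len(net), len(san), saved)        # 7 pages via network, 3 blocks via SAN
```

Note the sketch assumes the cached pages are clean (identical to their disk blocks); dirty pages would still have to travel over the network.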
Literature about cloud computing often assumes that the resource demands of computing clouds (and of the virtual machines that constitute them) are unpredictable in the short term. There are, however, specific use cases in which resource demands can be anticipated. This paper discusses dissertation work in progress showing that, in certain predictable environments, preemptive virtual machine migration can improve both computational resource utilization and the overall user experience. A novel algorithm that reacts to anticipated future resource demands based on the past behavior of virtual machines is presented, and simulations are used to quantify the performance improvements.
"Virtual Machine Placement in Predictable Computing Clouds". R. Rauscher, R. Acharya. 2014 IEEE 7th International Conference on Cloud Computing. DOI: 10.1109/CLOUD.2014.148.
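The anticipation step can be sketched as a forecast from recent history that triggers migration before an overload occurs. The moving-average predictor, window size, and numbers below are invented stand-ins; the paper's actual algorithm is not reproduced here:

```python
# Hypothetical sketch of preemptive migration: forecast each VM's next demand
# from its recent history and flag a migration when the combined forecast
# would exceed the host's capacity, i.e. before the overload happens.
def forecast(history, window=3):
    recent = history[-window:]          # simple moving average as predictor
    return sum(recent) / len(recent)

def should_migrate(vm_histories, host_capacity):
    predicted = sum(forecast(h) for h in vm_histories.values())
    return predicted > host_capacity

host = {"vm1": [20, 30, 40, 50], "vm2": [35, 40, 45, 50]}  # rising demands
print(should_migrate(host, host_capacity=80))  # migrate preemptively
```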
Chien-An Lai, Qingyang Wang, Joshua Kimball, Jack Li, Junhee Park, C. Pu
The performance unpredictability associated with migrating applications into cloud computing infrastructures has impeded this migration. For example, CPU contention between co-located applications has been shown to exhibit counter-intuitive behavior. In this paper, we investigate IO performance interference through an experimental study of consolidated n-tier applications sharing the same disk. Surprisingly, we found that specifying a fixed disk allocation, e.g., limiting the number of Input/Output Operations Per Second (IOPS) per VM, results in significantly lower performance than fully sharing the disk across VMs. Moreover, we observe that severe performance interference among VMs cannot be totally eliminated even with a sharing strategy (e.g., response times for constant workloads still increase by over 1,100%). Using a micro-benchmark (Filebench) and an n-tier application benchmark system (RUBBoS), we demonstrate the existence of disk contention in consolidated environments and show how performance loss occurs when co-located database systems flush their logs from memory to disk to maintain database consistency. Potential solutions to these isolation issues are (1) increasing the log buffer size to amortize the disk IO cost and (2) decreasing the number of write threads to alleviate disk contention. We validate these methods experimentally and find a 64% and 57% reduction in response time (or, more generally, a reduction in performance interference) for constant and increasing workloads, respectively.
"IO Performance Interference among Consolidated n-Tier Applications: Sharing Is Better Than Isolation for Disks". 2014 IEEE 7th International Conference on Cloud Computing. DOI: 10.1109/CLOUD.2014.14.
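Mitigation (1) can be illustrated with a toy flush counter: a larger log buffer turns many small disk flushes into fewer large ones, amortizing the per-flush IO cost. The record sizes and buffer sizes below are made up for illustration:

```python
# Hypothetical sketch of mitigation (1): count how often a database would
# flush its log buffer to disk for the same workload under two buffer sizes.
def count_flushes(record_sizes, buffer_size):
    buffered, flushes = 0, 0
    for size in record_sizes:
        if buffered + size > buffer_size:
            flushes += 1              # flush the full buffer to disk
            buffered = 0
        buffered += size
    return flushes + (1 if buffered else 0)  # final flush on commit

workload = [100] * 64                 # 64 log records of 100 bytes each
small = count_flushes(workload, buffer_size=400)
large = count_flushes(workload, buffer_size=3200)
print(small, large)                   # 8x buffer -> 8x fewer disk flushes
```

Fewer flushes means fewer chances for co-located databases to contend for the shared disk at the same moment, which is the interference mechanism the paper identifies.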
Zhenyun Zhuang, C. Tran, H. Ramachandra, B. Sridharan
Cloud computing promises a cost-effective and administration-effective solution to the traditional need for computing resources. While shared hardware and software bring efficiency to users, the multi-tenancy characteristics also bring unique challenges to the backend cloud platforms. In particular, the JVM mechanisms used by Java applications, coupled with OS-level features, give rise to a set of problems that are not present in other deployment scenarios. In this work, we consider the problem of ensuring high performance of mission-critical Java applications in multi-tenant cloud environments. Based on our experiences with LinkedIn's platforms, we identify and solve a set of problems caused by multi-tenancy, and we share the lessons we learned along the way.
"Ensuring High-Performance of Mission-Critical Java Applications in Multi-tenant Cloud Platforms". 2014 IEEE 7th International Conference on Cloud Computing. DOI: 10.1109/CLOUD.2014.88.
Deploying an application to a cloud environment has recently become very popular, since it offers many advantages such as improved reliability and scalability. These cloud environments provide a wide range of resources at different levels of functionality, which must be appropriately configured by stakeholders for the application to run properly. Handling this variability during the configuration and deployment stages is a complex and error-prone process, usually handled in an ad hoc manner by existing solutions. In this paper, we propose a software product line based approach to address these issues. Combined with a domain model used to select a suitable cloud environment, our approach supports stakeholders in configuring the selected cloud environment in a consistent way and automates the deployment of such configurations through the generation of executable deployment scripts. To evaluate the soundness of the proposed approach, we conduct an experiment involving 10 participants with different levels of experience in cloud configuration and deployment. The experiment shows that using our approach significantly reduces configuration time and, most importantly, provides a reliable way to find a correct and suitable cloud configuration. Moreover, our empirical evaluation shows that our approach is effective and scales to a significant number of cloud environments.
"Automated Selection and Configuration of Cloud Environments Using Software Product Lines Principles". Clément Quinton, Daniel Romero, L. Duchien. 2014 IEEE 7th International Conference on Cloud Computing. DOI: 10.1109/CLOUD.2014.29.
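The consistency checking at the heart of a product-line approach can be sketched as a tiny feature model with cross-tree constraints: a configuration is rejected before any deployment script is generated if it violates a requires or excludes relation. The feature names and constraints below are invented examples, not taken from the paper:

```python
# Hypothetical sketch of product-line-style configuration checking: validate
# a chosen set of cloud features against requires/excludes constraints before
# generating deployment scripts.
FEATURES = {"tomcat", "jetty", "mysql", "postgres", "ssl"}
REQUIRES = [("ssl", "tomcat")]                        # assumed constraint
EXCLUDES = [("tomcat", "jetty"), ("mysql", "postgres")]  # mutually exclusive

def is_valid(config):
    if not config <= FEATURES:                # unknown feature selected
        return False
    if any(a in config and b not in config for a, b in REQUIRES):
        return False                          # missing required feature
    if any(a in config and b in config for a, b in EXCLUDES):
        return False                          # conflicting features chosen
    return True

print(is_valid({"tomcat", "mysql", "ssl"}))   # consistent configuration
print(is_valid({"jetty", "ssl"}))             # ssl requires tomcat: invalid
```

Real feature models are checked with SAT/CSP solvers; this exhaustive-rule version only conveys the idea.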