C. Mei, Daniel Taylor, Chenyu Wang, A. Chandra, J. Weissman
Mobile devices, such as smartphones and tablets, are becoming the universal interface to online services and applications. However, such devices have limited computational power and battery life, which limits their ability to execute resource-intensive applications. Computation outsourcing to external resources has been proposed as a technique to alleviate this problem. Most existing work on mobile outsourcing has focused on either single-application optimization or outsourcing to fixed, local resources, under the assumption that wide-area latency is prohibitively high. This neglects the opportunity to improve outsourcing performance by exploiting relationships among multiple applications and by optimizing server provisioning. In this paper, we present the design and implementation of an Android/Amazon EC2-based mobile application outsourcing framework, leveraging the cloud for scalability, elasticity, and multi-user code/data sharing. Using this framework, we empirically demonstrate that the cloud is not only feasible but desirable as an offloading platform for latency-tolerant applications. We propose using data mining techniques to detect data sharing across multiple applications, and we develop novel scheduling algorithms that exploit such data sharing for better outsourcing performance. Additionally, our platform is designed to dynamically scale to support a large number of mobile users concurrently. Experiments show that our proposed techniques and algorithms substantially improve application performance, while achieving high efficiency in terms of computation resource and network usage.
"Sharing-Aware Cloud-Based Mobile Outsourcing." 2012 IEEE Fifth International Conference on Cloud Computing, June 24, 2012. doi:10.1109/CLOUD.2012.48
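The cross-application data sharing described above can be illustrated with a minimal sketch (the function, app names, and data items are hypothetical; the paper's actual detection uses data mining over outsourced requests): index each data item by the applications that request it, and keep the items requested by more than one application so the scheduler can co-locate those applications on a shared server.

```python
def shared_data(requests):
    """Map each data item to the set of apps requesting it, then keep
    items requested by more than one app (candidates for sharing)."""
    item_to_apps = {}
    for app, items in requests.items():
        for item in items:
            item_to_apps.setdefault(item, set()).add(app)
    return {item: apps for item, apps in item_to_apps.items() if len(apps) > 1}

# Hypothetical per-app request sets observed by the framework.
requests = {
    "ocr":        {"dict.db", "model.bin"},
    "translator": {"dict.db", "phrases.db"},
    "speech":     {"model.bin"},
}
shared = shared_data(requests)  # dict.db and model.bin are each used by two apps
```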
Energy efficiency has now become one of the major design constraints for current and future cloud data center operators. One way to conserve energy is to transition idle servers into a lower power state (e.g., suspend). Accordingly, virtual machine (VM) placement and dynamic VM scheduling algorithms have been proposed to facilitate the creation of idle times. However, these algorithms are rarely integrated into a holistic approach and experimentally evaluated in a realistic environment. In this paper we present the energy management algorithms and mechanisms of a novel holistic energy-aware VM management framework for private clouds called Snooze. We conduct an extensive evaluation of the energy and performance implications of our system on 34 power-metered machines of the Grid'5000 experimentation testbed under dynamic web workloads. The results show that the energy saving mechanisms allow Snooze to dynamically scale data center energy consumption proportionally to the load, thus achieving substantial energy savings with only limited impact on application performance.
"Energy Management in IaaS Clouds: A Holistic Approach," Eugen Feller, C. Rohr, D. Margery, C. Morin. 2012 IEEE Fifth International Conference on Cloud Computing, June 24, 2012. doi:10.1109/CLOUD.2012.50
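As a toy illustration of the idle-time mechanism (not Snooze's actual algorithm; the threshold and host names are invented), a controller can mark hosts whose utilization falls below a threshold as suspend candidates, so that data center power scales with load:

```python
def plan_power_states(utilization, idle_threshold=0.05):
    """Mark nearly idle hosts for suspension; keep loaded hosts on.
    A real system would first migrate away any remaining VMs."""
    return {host: ("suspend" if u < idle_threshold else "on")
            for host, u in utilization.items()}

states = plan_power_states({"node1": 0.72, "node2": 0.01, "node3": 0.0})
# node1 stays on; node2 and node3 are suspend candidates
```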
Cloud services make use of data center resources so that hosted applications can utilize them as needed. To offer a large amount of computational resources, cloud service providers manage tens of geographically distributed data centers. Since each data center is made up of hundreds of thousands of physical machines, energy consumption is a major concern for cloud service providers. Electricity costs impose significant financial overheads on these companies and push up prices for cloud users. This paper presents an energy-price-driven request dispatcher that forwards client requests to data centers in an electricity-cost-saving way. In our technique, mapping nodes, which act as authoritative DNS servers, forward client requests to data centers where the electricity price is relatively low. We additionally develop a policy that gradually shifts client requests to electrically cheaper data centers, taking into account application latency requirements and data center loads. Our simulation-based results show that our technique can reduce electricity costs by 15% relative to randomly dispatching client requests.
"Energy-Price-Driven Request Dispatching for Cloud Data Centers," Takumi Sakamoto, H. Yamada, H. Horie, K. Kono. 2012 IEEE Fifth International Conference on Cloud Computing, June 24, 2012. doi:10.1109/CLOUD.2012.115
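The dispatching policy can be sketched as follows (a simplified stand-in for the paper's mapping nodes; the latency bound, prices, and region names are invented): among data centers whose estimated latency to the client is acceptable, forward the request to the one with the lowest electricity price.

```python
def dispatch(latency_ms, price_per_kwh, max_latency_ms=100):
    """Pick the cheapest data center that still meets the latency bound;
    fall back to the nearest one if none qualifies."""
    eligible = [dc for dc, lat in latency_ms.items() if lat <= max_latency_ms]
    if not eligible:
        return min(latency_ms, key=latency_ms.get)
    return min(eligible, key=lambda dc: price_per_kwh[dc])

latency = {"us-east": 40, "us-west": 90, "eu-central": 150}
prices  = {"us-east": 0.12, "us-west": 0.07, "eu-central": 0.05}
choice = dispatch(latency, prices)  # us-west: cheapest within the latency bound
```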
Pei Fan, Zhenbang Chen, Ji Wang, Zibin Zheng, Michael R. Lyu
Nowadays, more and more scientific applications are moving to cloud computing. The optimal deployment of scientific applications is critical for providing good service to users. Scientific applications are usually topology-aware, so considering the topology of a scientific application during its deployment will benefit the application's performance. However, it is challenging to automatically discover and exploit the communication pattern of a scientific application while deploying it on the cloud. To address this challenge, we propose a framework that discovers the communication topology of a scientific application through pre-execution and multi-scale graph clustering, based on which the deployment can be optimized. Comprehensive experiments are conducted using a well-known MPI benchmark, comparing the performance of our method with that of other methods. The experimental results show the effectiveness of our topology-aware deployment method.
"Topology-Aware Deployment of Scientific Applications in Cloud Computing." 2012 IEEE Fifth International Conference on Cloud Computing, June 24, 2012. doi:10.1109/CLOUD.2012.70
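A heavily simplified placement sketch (greedy pairing rather than the paper's multi-scale graph clustering; process names and traffic volumes are invented) shows the idea of turning a discovered communication graph into a deployment: co-locate the process pairs that exchange the most data.

```python
def colocate_heaviest(traffic):
    """Greedily assign the heaviest-communicating process pairs to the
    same host (a crude stand-in for multi-scale graph clustering)."""
    placement, next_host = {}, 0
    for (a, b), _volume in sorted(traffic.items(), key=lambda kv: -kv[1]):
        if a not in placement and b not in placement:
            placement[a] = placement[b] = next_host
            next_host += 1
    return placement

# Pairwise message volume discovered by pre-execution (hypothetical).
traffic = {("p0", "p1"): 900, ("p2", "p3"): 800, ("p0", "p2"): 10}
placement = colocate_heaviest(traffic)  # chatty pairs share a host
```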
M. Salehi, P. Radha Krishna, Krishnamurty Sai Deepak, R. Buyya
Energy efficiency is one of the main challenges that data centers face nowadays. A considerable portion of the energy consumed in these environments is wasted on idling resources. To avoid this wastage, offering services with a variety of SLAs (with different prices and priorities) is a common practice. The question we investigate in this research is how the energy consumption of a data center that offers various SLAs can be reduced. To answer this question, we propose an adaptive energy management policy that employs virtual machine (VM) preemption to adjust energy consumption based on user performance requirements. We have implemented our proposed energy management policy in Haizea, a real scheduling platform for virtualized data centers. Experimental results reveal 18% energy conservation (up to 4000 kWh in 30 days) compared with baseline policies, without any major increase in SLA violations.
"Preemption-Aware Energy Management in Virtualized Data Centers." 2012 IEEE Fifth International Conference on Cloud Computing, June 24, 2012. doi:10.1109/CLOUD.2012.147
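The preemption mechanism can be sketched with a toy victim-selection routine (illustrative only; Haizea's actual lease scheduler is far richer, and the VM records here are invented): when capacity is needed for a higher-priority lease or to power down a host, suspend the lowest-priority VMs first.

```python
def choose_victims(running_vms, needed_capacity):
    """Suspend lowest-priority VMs first until enough capacity is freed."""
    victims, freed = [], 0
    for vm in sorted(running_vms, key=lambda v: v["priority"]):
        if freed >= needed_capacity:
            break
        victims.append(vm["id"])
        freed += vm["size"]
    return victims

running = [
    {"id": "best-effort-1", "priority": 0, "size": 2},
    {"id": "best-effort-2", "priority": 0, "size": 2},
    {"id": "production-1",  "priority": 9, "size": 4},
]
victims = choose_victims(running, needed_capacity=3)
# the two best-effort VMs are preempted; the production VM is untouched
```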
T. He, Shiyao Chen, Hyoil Kim, L. Tong, Kang-Won Lee
We consider the problem of opportunistically scheduling low-priority tasks onto underutilized computation resources in the cloud left by high-priority tasks. To avoid conflicts with high-priority tasks, the scheduler must suspend the low-priority tasks (causing waiting), or move them to other underutilized servers (causing migration), if the high-priority tasks resume. The goal of opportunistic scheduling is to schedule the low-priority tasks onto intermittently available server resources while minimizing the combined cost of waiting and migration. Moreover, we aim to support multiple parallel low-priority tasks with synchronization constraints. Under the assumption that servers' availability to low-priority tasks can be modeled as ON/OFF Markov chains, we have shown that the optimal solution requires solving a Markov Decision Process (MDP) of exponential complexity, and efficient solutions are known only in the case of homogeneously behaving servers. In this paper, we propose an efficient heuristic scheduling policy by formulating the problem as restless Multi-Armed Bandits (MAB) under relaxed synchronization. We prove the indexability of the problem and provide closed-form formulas to compute the indices. Our evaluation using real data center traces shows that the performance closely matches the prediction of the Markov chain model, and the proposed index policy achieves consistently good performance under various server dynamics compared with existing policies.
"Scheduling Parallel Tasks onto Opportunistically Available Cloud Resources." 2012 IEEE Fifth International Conference on Cloud Computing, June 24, 2012. doi:10.1109/CLOUD.2012.15
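The server-availability assumption can be made concrete with a small simulation (the transition probabilities are invented): each server alternates between ON (available to low-priority tasks) and OFF following a two-state Markov chain, whose long-run availability is p_off_on / (p_on_off + p_off_on).

```python
import random

def simulate_on_off(p_on_off, p_off_on, steps, seed=0):
    """Simulate a server's availability to low-priority tasks as a
    two-state Markov chain; return the fraction of ON steps."""
    rng = random.Random(seed)
    state, on_steps = 1, 0  # start ON
    for _ in range(steps):
        on_steps += state
        if state == 1 and rng.random() < p_on_off:
            state = 0
        elif state == 0 and rng.random() < p_off_on:
            state = 1
    return on_steps / steps

# Long-run availability should approach p_off_on / (p_on_off + p_off_on) = 0.75.
avail = simulate_on_off(p_on_off=0.1, p_off_on=0.3, steps=100_000)
```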
Christoph Fehling, Thilo Ewald, F. Leymann, Michael Pauly, Jochen Rütschlin, D. Schumm
The industry-driven evolution of cloud computing tends to obfuscate the common underlying architectural concepts of cloud offerings and their implications for hosted applications. Patterns are one way to document such architectural principles and to make good solutions to recurring (architectural) cloud challenges reusable. To capture cloud computing best practices from existing cloud applications and provider-specific documentation, we propose an elaborated pattern format enabling abstraction of concepts and reusability of knowledge across use cases. We present a detailed step-by-step pattern identification process supported by a pattern authoring toolkit. We continuously apply this process to identify a large set of cloud patterns. In this paper, we introduce two new cloud patterns we recently identified in industrial scenarios. The approach aims at cloud architects, developers, and researchers alike, who may apply this pattern identification process to create traceable and well-structured pieces of knowledge in their individual fields of expertise. As an entry point, we recap challenges introduced by cloud computing in various domains.
"Capturing Cloud Computing Knowledge and Experience in Patterns." 2012 IEEE Fifth International Conference on Cloud Computing, June 24, 2012. doi:10.1109/CLOUD.2012.124
Cloud computing offers users the ability to access large pools of computational and storage resources on demand without the burden of managing and maintaining their own IT assets. Today's cloud providers charge users based upon the amount of resources used or reserved, with only minimal guarantees of the quality-of-service (QoS) experienced by the users' applications. As virtualization technologies proliferate among cloud providers, consolidating multiple user applications onto multi-core servers increases revenue and improves resource utilization. However, consolidation introduces performance interference between co-located workloads, which significantly impacts application QoS. A critical requirement for effective consolidation is the ability to predict the impact on application performance in the presence of interference ranging from on-chip resources, e.g., CPU and last-level cache (LLC)/memory bandwidth sharing, to storage devices and network bandwidth contention. In this work, we propose an interference model which predicts the application QoS metric. The key distinctive feature is the consideration of time-variant inter-dependency among different levels of resource interference. We use applications from a test suite and SPECWeb2005 to illustrate the effectiveness of our model, achieving an average prediction error of less than 8%. Furthermore, we demonstrate how the proposed interference model can be used to optimize the cloud provider's metric (here, the number of successfully executed applications), realizing better workload placement decisions while maintaining the users' application QoS.
"A Performance Interference Model for Managing Consolidated Workloads in QoS-Aware Clouds," Qian Zhu, Teresa Tung. 2012 IEEE Fifth International Conference on Cloud Computing, June 24, 2012. doi:10.1109/CLOUD.2012.25
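A deliberately tiny stand-in for such an interference model (the weights are hypothetical and would be fitted offline; the paper's real model also captures time-variant inter-dependency between resources) predicts slowdown as a weighted sum of co-runner pressure on each shared resource:

```python
def predict_slowdown(pressures, weights):
    """Predicted slowdown = 1 + weighted sum of co-runner pressure on
    each shared resource (CPU, LLC/memory bandwidth, disk, network)."""
    return 1.0 + sum(weights[r] * pressures.get(r, 0.0) for r in weights)

weights   = {"cpu": 0.4, "llc": 0.3, "disk": 0.2, "net": 0.1}  # illustrative
pressures = {"cpu": 0.5, "llc": 0.2}                           # from co-located VMs
slow = predict_slowdown(pressures, weights)  # 1.0 + 0.20 + 0.06 = 1.26
```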
In this paper, we outline a tiered approach to auditing information in the cloud. The approach provides perspectives on auditable events that may include compositions of independently formed audit trails. Filtering and reasoning over the audit trails can expose potential security vulnerabilities and performance attributes of interest to stakeholders.
"A Tiered Strategy for Auditing in the Cloud," Rui Xie, R. Gamble. 2012 IEEE Fifth International Conference on Cloud Computing, June 24, 2012. doi:10.1109/CLOUD.2012.144
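The composition-and-filtering step can be sketched as follows (the event records and the predicate are invented): merge independently produced audit trails into one time-ordered stream, then keep only the events a stakeholder cares about.

```python
def merge_and_filter(trails, predicate):
    """Compose independently formed audit trails into one stream
    ordered by timestamp, keeping only events of interest."""
    merged = sorted((e for trail in trails for e in trail), key=lambda e: e["ts"])
    return [e for e in merged if predicate(e)]

# Two hypothetical audit trails from different cloud layers.
t1 = [{"ts": 1, "event": "login_ok"}, {"ts": 4, "event": "login_fail"}]
t2 = [{"ts": 2, "event": "vm_start"}, {"ts": 3, "event": "login_fail"}]
suspicious = merge_and_filter([t1, t2], lambda e: e["event"] == "login_fail")
```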
In this paper, we define practical schemes to protect the cloud consumer's identity (ID) during message exchanges (connection anonymity) in SaaS. We describe the typical/target scenario for service consumption and provide a detailed privacy assessment. This is used to identify different levels of interactions between consumers and providers, as well as to evaluate how privacy is affected. We propose a multi-layered anonymity framework, where different anonymity techniques are employed together to protect ID, location, behavior and data privacy, during each level of consumer-provider interaction. We also define two schemes for generating and managing anonymous credentials, which are used to implement the proposed framework. These schemes provide two options of connection anonymity: traceable (anonymity can be disclosed, if required) and untraceable (anonymity cannot be disclosed). The consumer and provider will be able to choose which is more suitable to their needs and regulatory environments.
"Defining and Implementing Connection Anonymity for SaaS Web Services," Vinícius M. Pacheco, R. Puttini. 2012 IEEE Fifth International Conference on Cloud Computing, June 24, 2012. doi:10.1109/CLOUD.2012.88
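A toy sketch of the traceable/untraceable distinction (not the paper's credential schemes, which rest on real cryptographic constructions; class and method names are invented): a traceable credential is recorded in an escrow map so the issuer can later disclose its owner, while an untraceable one is never recorded anywhere.

```python
import secrets

class CredentialIssuer:
    """Issue anonymous tokens; keep an escrow mapping only for
    credentials issued in traceable mode."""
    def __init__(self):
        self._escrow = {}

    def issue(self, consumer_id, traceable):
        token = secrets.token_hex(32)  # random token, unlinkable to the consumer
        if traceable:
            self._escrow[token] = consumer_id
        return token

    def disclose(self, token):
        """Reveal the owner if required (e.g., by regulation);
        returns None for untraceable credentials."""
        return self._escrow.get(token)

issuer = CredentialIssuer()
t = issuer.issue("alice", traceable=True)
u = issuer.issue("bob", traceable=False)
```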