Knowing the number of virtual machines (VMs) that a cloud's physical hardware can (further) support is critical, as it has implications for provisioning and hardware procurement. However, current methods for estimating the maximum number of VMs possible on given hardware usually just take the ratio of a VM's specifications to the underlying cloud hardware's specifications. Such naive, linear estimation methods mostly yield impractical limits on how many VMs the hardware can actually support: we found that at the limits produced by this naive division method, user experience on the VMs is severely degraded. In this paper, we demonstrate through experimental results the significant gap between the limits derived using this estimation method and the actual situation. We believe that for a more practicable estimate of the underlying infrastructure's limits, the dominant workload of the VMs should also be factored in.
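The naive division method criticized in this abstract can be stated in a few lines; the sketch below is our own illustration, and the host and VM specification values are made-up assumptions, not figures from the paper:

```python
# Naive linear estimate: divide each host resource by the per-VM
# requirement and take the most constrained dimension.
# All specification values below are illustrative assumptions.

def naive_vm_limit(host: dict, vm: dict) -> int:
    """Upper bound on VM count by pure spec division."""
    return min(host[r] // vm[r] for r in vm)

host = {"vcpus": 32, "ram_gb": 128, "disk_gb": 2000}
vm = {"vcpus": 2, "ram_gb": 4, "disk_gb": 40}

print(naive_vm_limit(host, vm))  # -> 16, bounded by vCPUs
```

The paper's point is that a host "supporting" 16 such VMs by this arithmetic may already deliver unacceptable user experience well below that count, depending on the dominant workload.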
{"title":"Virtual Numbers for Virtual Machines?","authors":"Yu Shyang Tan, R. Ko, V. Mendiratta","doi":"10.1109/CLOUD.2014.147","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.147","url":null,"abstract":"Knowing the number of virtual machines (VMs) that a cloud physical hardware can (further) support is critical as it has implications on provisioning and hardware procurement. However, current methods for estimating the maximum number of VMs possible on a given hardware is usually the ratio of the specifications of a VM to the underlying cloud hardware's specifications. Such naive and linear estimation methods mostly yield impractical limits as to how many VMs the hardware can actually support. It was found that if we base on the naive division method, user experience on VMs at those limits would be severely degraded. In this paper, we demonstrate through experimental results, the significant gap between the limits derived using the estimation method mentioned above and the actual situation. We believe for a more practicable estimation of the limits of the underlying infrastructure, dominant workload of VMs should also be factored in.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121494818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ahmed El-Rheddane, N. D. Palma, A. Tchana, D. Hagimont
Today's systems are often distributed, and connecting their different components can be challenging. Message-Oriented Middleware (MOM) is a popular tool to ensure simple and reliable communication. With the ever-growing loads of today's applications, MOMs need to be scalable. But as the load changes, static scalability often underuses the resources it requires. This paper presents an elastic message queuing system that leverages the cloud's on-demand resource provisioning to use just enough resources to handle the current load. We detail when and how provisioning decisions are made, and show the results of our system's evaluation on the Amazon EC2 public cloud. This work is based on Joram, an open-source JMS-compliant MOM, and is now part of its distribution on the OW2 consortium's website.
{"title":"Elastic Message Queues","authors":"Ahmed El-Rheddane, N. D. Palma, A. Tchana, D. Hagimont","doi":"10.1109/CLOUD.2014.13","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.13","url":null,"abstract":"Today's systems are often distributed, and connecting their different components can be challenging. Message-Oriented-Middleware (MOM) is a popular tool to insure simple and reliable communication. With the ever growing loads of today's applications, MOMs needs to be scalable. But as the load changes, static scalability often underuses the resources it requires. This paper presents an elastic message queuing system leveraging cloud's on-demand resource provisioning, which allows the use of just enough resources to handle the current load. We will detail when and how provisioning decisions are made, and show the result of our system's evaluation on Amazon EC2 public cloud. This work is based on Joram, an open-source JMS compliant MOM and is now part of its distribution on OW2 consortium's website.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"158 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121282228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud-based infrastructures enable applications to collect and analyze massive amounts of data. Sometimes these applications are the product of green-field engineering, but frequently they are the product of the evolution of traditional RDBMS-based implementations. In either case, NoSQL databases, endowed with high availability, elasticity and scalability through their easy deployment on cloud-computing platforms, have become an attractive data-storage solution for these big-data applications. Unfortunately, to date, there is little methodological and tool support for migrating existing applications to these new platforms. In this paper, we describe a hybrid architecture for location-aware applications on a hierarchical cloud, a methodology for mapping relational (including spatio-temporal) data to HBase, and a process for migrating legacy applications to the new architecture.
{"title":"Federating Web-Based Applications on a Hierarchical Cloud","authors":"Dan Han, Eleni Stroulia","doi":"10.1109/CLOUD.2014.136","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.136","url":null,"abstract":"Cloud-based infrastructures enable applications to collect and analyze massive amounts of data. Sometimes these applications are the product of green-field engineering, but frequently they are the product of the evolution of traditional RDBMS-based implementations. In any case, NoSQL databases, endowed with high availability, elasticity and scalability through their easy deployment on cloud-computing platforms, have become an attractive data-storage solution for these big-data applications. Unfortunately, to date, there is little methodological and tool support for migrating existing applications to these new platforms. In this paper, we describe a hybrid architecture for location-aware applications on hierarchical cloud, a methodology for mapping relational (including spatio-temporal) data to HBase, and a process for migrating legacy applications to the new architecture.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"87 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115769015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The pay-per-use model adopted by public cloud service providers has shaped the perception of how a cloud should provide resources to end-users, i.e., on-demand access to an unlimited amount of resources. However, not all clouds are equal. While such provisioning models work for well-endowed public clouds, they may not always work well in private clouds with limited budgets and resources, such as research and education clouds. Private clouds also stand to be greatly impacted by issues such as user resource hogging and the misuse of resources for nefarious activities. These problems are usually caused by challenges such as (1) limited physical servers/budget, (2) a growing number of users and (3) the inability to gracefully and automatically relinquish resources from inactive users. Currently, cloud resource management frameworks used for private cloud setups, such as OpenStack and CloudStack, use only the pay-per-use model as the basis for provisioning resources to users. In this paper, we propose OpenStack Café, a novel methodology that adopts the concepts of 'time' and 'booking systems' to manage the resources of private clouds. By allowing users to book resources over specific time-slots, our proposed solution can efficiently and automatically help administrators manage users' access to resources, addressing the issue of resource hogging and gracefully relinquishing resources back to the pool in resource-constrained private cloud setups. Work is currently in progress to adopt Café into OpenStack as a feature, and results from our prototype show promise. We also present insights into lessons learnt during the design and implementation of our proposed methodology.
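The time-slot booking idea can be illustrated with a minimal admission check; the data model and names below are our own assumptions for illustration, not the Café implementation:

```python
# Minimal time-slot booking check: a new reservation is admitted only
# if concurrent reservations never exceed the resource pool size.
# Names and data model are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Booking:
    start: int  # slot index, inclusive
    end: int    # slot index, exclusive
    vms: int    # VMs reserved

def can_book(existing: list, new: Booking, pool_size: int) -> bool:
    for t in range(new.start, new.end):
        in_use = sum(b.vms for b in existing if b.start <= t < b.end)
        if in_use + new.vms > pool_size:
            return False
    return True

bookings = [Booking(0, 4, 6), Booking(2, 6, 3)]
print(can_book(bookings, Booking(3, 5, 2), pool_size=12))  # True: peak use 11 of 12
print(can_book(bookings, Booking(3, 5, 4), pool_size=12))  # False: slot 3 would need 13
```

Because every reservation carries an end time, reclaiming capacity from inactive users is automatic: expired bookings simply stop counting toward `in_use`.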
{"title":"'Time' for Cloud? Design and Implementation of a Time-Based Cloud Resource Management System","authors":"R. Ko, Yu Shyang Tan, Grace P. Y. Ng","doi":"10.1109/CLOUD.2014.77","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.77","url":null,"abstract":"The current pay-per-use model adopted by public cloud service providers has influenced the perception on how a cloud should provide its resources to end-users, i.e. on-demand and access to an unlimited amount of resources. However, not all clouds are equal. While such provisioning models work for well-endowed public clouds, they may not always work well in private clouds with limited budget and resources such as research and education clouds. Private clouds also stand to be impacted greatly by issues such as user resource hogging and the misuse of resources for nefarious activities. These problems are usually caused by challenges such as (1) limited physical servers/ budget, (2) growing number of users and (3) the inability to gracefully and automatically relinquish resources from inactive users. Currently, cloud resource management frameworks used for private cloud setups, such as OpenStack and CloudStack, only uses the pay-per-use model as the basis when provisioning resources to users. In this paper, we propose OpenStack Café, a novel methodology adopting the concepts of 'time' and booking systems' to manage resources of private clouds. By allowing users to book resources over specific time-slots, our proposed solution can efficiently and automatically help administrators manage users' access to resource, addressing the issue of resource hogging and gracefully relinquish resources back to the pool in resource-constrained private cloud setups. Work is currently in progress to adopt Café into OpenStack as a feature, and results of our prototype show promises. We also present some insights to lessons learnt during the design and implementation of our proposed methodology in this paper.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130824515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yvonne Thoß, Christoph Pohl, M. Hoffmann, Josef Spillner, A. Schill
Cloud computing is one of the most promising computing paradigms, and the demand for and supply of online-delivered software have been growing continuously. However, reports have shown that various Software-as-a-Service (SaaS) solutions have not succeeded in meeting cloud users' expectations. Users need to be satisfied with the service quality, which is determined by the fulfillment of both functional and non-functional requirements. Currently, evaluating cloud service quality against individual requirements is left to the user. Unfortunately, the number of non-functional quality criteria is very large and some are laborious to determine. We believe that an information system could enable users without specific knowledge to comprehensively and rapidly evaluate the quality of their cloud. In this paper we first present a quality model for SaaS. In addition, we demonstrate how the quality information should be structured and visualized in a user-friendly way. With the transparency thus provided, users are able to establish quality awareness in the long term.
{"title":"User-Friendly Visualization of Cloud Quality","authors":"Yvonne Thoß, Christoph Pohl, M. Hoffmann, Josef Spillner, A. Schill","doi":"10.1109/CLOUD.2014.122","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.122","url":null,"abstract":"The cloud computing paradigm is one of the most promising of its kind. The demand and supply of online delivered software have been continuously growing. However, reports have shown that various Software as a service solutions have not succeeded in reaching cloud user expectations. Hence, users need to be satisfied with the service quality which is determined by the fulfillment of both functional and non-functional requirements. Currently, the evaluation of the cloud service quality consider-ing individual requirements is up to the user. Unfortunately, the amount of non-functional quality criteria is very high and some are laborious to determine. We believe that an information system could support and enable users without specific knowledge to evaluate comprehensively and rapidly the quality of their cloud. In this paper we first present a quality model for SaaS. In addition, we demonstrate how the quality information should be structured and visualized in a user-friendly way. With the provided transparency users are able to establish quality awareness in the long term.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"631 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113982115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data security is a major concern in cloud computing. After clients outsource their data to the cloud, will they lose control of it? Prior research has proposed various schemes for clients to confirm the existence of their data on cloud servers, with the goal of ensuring data integrity. This paper investigates a complementary problem: when clients delete data, how can they be sure that the deleted data will never resurface if they do not perform the actual removal themselves? How can they confirm the non-existence of data that is no longer in their possession? One obvious solution is to encrypt the outsourced data, but this poses a significant technical challenge: a huge amount of key material may have to be maintained if we allow fine-grained deletion. In this paper, we explore the feasibility of relieving clients of this burden by outsourcing keys (after encryption) to the cloud. We propose a novel multi-layered key structure, called the Recursively Encrypted Red-black Key tree (RERK), which ensures that no key material is leaked, yet allows the client to manipulate keys by performing tree operations in collaboration with the servers. We implement our solution on Amazon EC2. The experimental results show that our solution can efficiently support the deletion of outsourced data in cloud computing.
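The underlying principle, deletion by key destruction, can be sketched independently of the RERK structure; the toy cipher and all names below are our own illustrative assumptions, and this is emphatically not the paper's construction:

```python
# Sketch of deletion-by-key-destruction: each data block is encrypted
# under its own key; securely discarding the key renders the block
# unrecoverable even if the ciphertext survives on the server.
# The XOR "cipher" below is a toy for illustration only -- not secure.
import os
from hashlib import sha256

def xor_stream(key: bytes, data: bytes) -> bytes:
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

keys = {i: os.urandom(32) for i in range(4)}                       # per-block keys
blocks = {i: xor_stream(keys[i], f"block-{i}".encode()) for i in range(4)}

del keys[2]                              # fine-grained delete: destroy one key
print(xor_stream(keys[0], blocks[0]))    # b'block-0' -- still recoverable
print(2 in keys)                         # False -- block 2 is now unreadable
```

The challenge the paper addresses is precisely that this scheme needs one key per deletable unit; RERK's contribution is letting the cloud hold that key material, encrypted, without learning it.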
{"title":"On Deletion of Outsourced Data in Cloud Computing","authors":"Zhen Mo, Qingjun Xiao, Yian Zhou, Shigang Chen","doi":"10.1109/CLOUD.2014.54","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.54","url":null,"abstract":"Data security is a major concern in cloud computing. After clients outsource their data to the cloud, will they lose control of the data? Prior research has proposed various schemes for clients to confirm the existence of their data on the cloud servers, and the goal is to ensure data integrity. This paper investigates a complementary problem: When clients delete data, how can they be sure that the deleted data will never resurface in the future if the clients do not perform the actual data removal themselves? How to confirm the non-existence of their data when the data is not in their possession? One obvious solution is to encrypt the outsourced data, but this solution has a significant technical challenge because a huge amount of key materials may have to be maintained if we allow fine-grained deletion. In this paper, we explore the feasibility of relieving clients from such a burden by outsourcing keys (after encryption) to the cloud. We propose a novel multi-layered key structure, called Recursively Encrypted Red-black Key tree (RERK), that ensures no key materials will be leaked, yet the client is able to manipulate keys by performing tree operations in collaboration with the servers. We implement our solution on the Amazon EC2. The experimental results show that our solution can efficiently support the deletion of outsourced data in cloud computing.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114767660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When we evaluate the performance of a virtualized system, we should consider two aspects: performance interference and the number of virtual machine (VM) combinations. Performance interference is caused by the sharing of physical resources among VMs and by virtual machine monitor (VMM) scheduling, and these effects should be incorporated precisely into performance evaluation. We also need to evaluate many VM combinations to find the optimal consolidation of servers; it is therefore important to apply an efficient evaluation method that reduces evaluation time. We propose a layered performance model to address both aspects. We regard a virtualized system as a combination of two layers: one consisting of the VMs and the other consisting of the VMM and physical resources, and we construct a performance model for each layer. We apply a white-box approach to the VM-layer model and a black-box approach to the VMM/physical-resource-layer model. The white-box model is flexible enough to represent many VM combinations, while the black-box model captures performance interference. By allocating each aspect to the appropriate model, our proposed model evaluates performance precisely and efficiently. We discuss the effectiveness of the proposed model with a case study of storage I/O contention.
{"title":"Performance Modeling to Divide Performance Interference of Virtualization and Virtual Machine Combination","authors":"Daichi Kimura, Eriko Numata, Masato Kawatsu","doi":"10.1109/CLOUD.2014.43","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.43","url":null,"abstract":"When we evaluate performance of a virtualized system, we should consider two aspects, performance interference and the number of virtual machine (VM) combinations. Performance interference is caused by sharing physical resources among VMs and virtual machine monitor (VMM) scheduling. We should precisely incorporate these effects into performance evaluation. We also need to evaluate many VM combinations to find the optimal consolidation of servers, therefore, it is important we apply an efficient evaluation method to reduce evaluation time. We propose a layered performance model to address these aspects. We regard a virtualized system as a combination of two layers: one consisting of VMs and the other consisting of VMM and physical resources. We construct a performance model for each layer. We apply the white-box approach to the VM layer model and the black-box approach to the VMM/physical resource layer model. The white-box model is flexible for representing many VM combinations and the black-box model incorporates performance interference. By allocating each aspect to each model, our proposed model evaluates performance precisely and efficiently. We discuss the effectiveness of our proposed model with a case study of storage I/O contention.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"138 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124337387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In cloud environments, high-availability characteristics are established through failover software (e.g., HAProxy, Keepalived or Pacemaker). Though these tools enable automatic recovery of cloud services from outages, recovery can still be very slow if the software is not configured adequately. In this paper we develop a "Recovery Time Test" to determine whether and how recovery time depends on the failover software's configuration, and to quantify the factor by which a given configuration can decrease recovery time. As a proof of concept, we applied the Recovery Time Test to an OpenStack cloud environment controlled by the Pacemaker failover software. With a bad configuration, Pacemaker's mean recovery time can range between 110 and 160 seconds; we found that with a proper configuration it can be reduced significantly, to between 15 and 20 seconds.
{"title":"Impact of Pacemaker Failover Configuration on Mean Time to Recovery for Small Cloud Clusters","authors":"Konstantin Benz, T. Bohnert","doi":"10.1109/CLOUD.2014.59","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.59","url":null,"abstract":"In cloud environments High Availability characteristics are established by the usage of failover software (like e.g. HAProxy, Keepalive or Pacemaker). Though these tools enable automatic recovery of cloud services from outages, the recovery can still be very slow if it is not configured adequately. In this paper we developed a \"Recovery Time Test\" to determine if recovery time depends on configuration of the failover software and how recovery time depends on configuration settings. Another goal of the Recovery Time Test is to determine the factor by which recovery time can be decreased by a given configuration. As proof of concept, we applied the Recovery Time Test to an OpenStack cloud environment which is controlled by the Pacemaker failover software. Pacemaker mean recovery time can take a value between 110 and 160 seconds, if the tool is configured badly. We found that with a proper configuration Pacemaker mean recovery time can be reduced significantly to a value between 15 and 20 seconds.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121955908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Renato L. F. Cunha, M. Assunção, C. Cardonha, M. Netto
An important feature of cloud computing is its elasticity: the ability to have resource capacity dynamically modified according to the current system load. Auto-scaling is challenging because it must account for two conflicting objectives: minimising the system capacity made available to users and maximising QoS, which typically translates to short response times. Current auto-scaling techniques are based solely on load forecasts and ignore users' perception of cloud services. As a consequence, providers tend to provision a volume of resources significantly larger than necessary to keep users satisfied. In this article, we propose a scheduling algorithm and an auto-scaling triggering technique that exploit user patience to identify the critical times when auto-scaling is needed and the appropriate amount of capacity by which the cloud platform should either grow or shrink. The proposed technique helps service providers reduce resource-allocation costs while maintaining the same QoS for users. Our experiments show that it is possible to reduce resource-hours by up to approximately 8% compared to auto-scaling based on system utilisation.
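The idea of triggering on user patience rather than raw utilisation can be sketched roughly as follows; the patience model and thresholds are our own assumptions for illustration, not the authors' algorithm:

```python
# Sketch: scale out only when estimated user patience is at risk,
# instead of reacting to utilisation alone. Thresholds are illustrative.

def patience_score(expected_s: float, observed_s: float) -> float:
    """1.0 = fully patient; decays as response time exceeds expectation."""
    return min(1.0, expected_s / observed_s)

def scaling_decision(scores: list, low: float = 0.6, high: float = 0.95) -> str:
    avg = sum(scores) / len(scores)
    if avg < low:
        return "scale_out"   # users are losing patience: add capacity
    if avg > high:
        return "scale_in"    # users are comfortably served: shed capacity
    return "hold"

# Three recent requests against a 1-second expectation:
print(scaling_decision([patience_score(1.0, rt) for rt in (0.8, 2.5, 3.0)]))
# -> scale_out
```

The contrast with utilisation-based triggers is that a busy but still-satisfying system returns "hold" here, which is where the resource-hour savings come from.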
{"title":"Exploiting User Patience for Scaling Resource Capacity in Cloud Services","authors":"Renato L. F. Cunha, M. Assunção, C. Cardonha, M. Netto","doi":"10.1109/CLOUD.2014.67","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.67","url":null,"abstract":"An important feature of cloud computing is its elasticity, that is, the ability to have resource capacity dynamically modified according to the current system load. Auto-scaling is challenging because it must account for two conflicting objectives: minimising system capacity available to users and maximising QoS, which typically translates to short response times. Current auto-scaling techniques are based solely on load forecasts and ignore the perception that users have from cloud services. As a consequence, providers tend to provision a volume of resources that is significantly larger than necessary to keep users satisfied. In this article, we propose a scheduling algorithm and an auto-scaling triggering technique that explore user patience in order to identify critical times when auto-scaling is needed and the appropriate volume of capacity by which the cloud platform should either extend or shrink. The proposed technique assists service providers in reducing costs related to resource allocation while keeping the same QoS to users. Our experiments show that it is possible to reduce resource-hour by up to approximately 8% compared to auto-scaling based on system utilisation.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123199244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As large amounts of data are outsourced to the cloud, it is imperative to enforce a secure, efficient and privacy-aware access control scheme there. Decentralized Attribute-Based Encryption (ABE) is a variant of multi-authority ABE that is regarded as better suited to access control in a large-scale cloud. A decentralized ABE scheme should require neither a central Attribute Authority (AA) nor any cooperative computation, yet most existing schemes are not efficient enough. Moreover, existing schemes introduce a Global Identifier (GID) to resist collusion attacks from users, but corrupt AAs can trace a user by his GID, resulting in the leakage of the user's identity privacy. In this paper, we design a privacy-preserving decentralized access control framework for cloud storage systems, and propose a decentralized CP-ABE access control scheme with privacy-preserving secret key extraction. Our scheme requires no central AA and no coordination among the authorities. We adopt the Pedersen commitment scheme and oblivious commitment-based envelope protocols as the main cryptographic primitives to address the privacy problem; thus users receive secret keys only for valid identity attributes while the AAs learn nothing about those attributes. Our theoretical analysis and extensive experiments demonstrate the presented scheme's security strength and effectiveness in terms of scalability, computation and storage.
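The Pedersen commitment named as a primitive here has a simple commit/open shape, sketched below in toy form; the tiny group parameters are our own illustrative choices and are nowhere near secure (real deployments need large, properly generated groups where log_g(h) is unknown):

```python
# Toy Pedersen commitment C = g^m * h^r mod p (illustration only).
# Hiding: C reveals nothing about m because r randomizes it.
# Binding: the committer cannot open C to a different m.
p = 1019                     # small prime, illustrative only
g, h = 2, 5                  # assumed independent bases (toy choice)

def commit(m: int, r: int) -> int:
    """Commit to message m with blinding factor r."""
    return (pow(g, m, p) * pow(h, r, p)) % p

def verify(C: int, m: int, r: int) -> bool:
    """Open the commitment by revealing (m, r)."""
    return C == commit(m, r)

C = commit(42, 7)            # the verifier sees C, learns nothing about 42
print(verify(C, 42, 7))      # True  -- correct opening accepted
print(verify(C, 41, 7))      # False -- opening to a different value rejected
```

In the paper's setting the committed value plays the role of identity material the AAs must not learn, which is why a hiding-and-binding commitment is the natural primitive.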
{"title":"Privacy-Preserving Decentralized Access Control for Cloud Storage Systems","authors":"Jianwei Chen, Huadong Ma","doi":"10.1109/CLOUD.2014.74","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.74","url":null,"abstract":"Along with a large amount of data being outsourced to the cloud, it is imperative to enforce a secure, efficient and privacy-aware access control scheme on the cloud. Decentralized Attribute-based Encryption (ABE) is a variant of multi-authority ABE scheme which is regarded as being more suited to access control in a large-scale cloud. Constructing a decentralized ABE scheme should not need a central Attribute Authority (AA) and any cooperative computing, where most schemes are not efficient enough. Moreover, they introduced a Global Identifier (GID) to resist the collusion attack from users, but corrupt AAs can trace a user by his GID, resulting in the leakage of the user's identity privacy. In this paper, we design a privacy-preserving decentralized access control framework for cloud storage systems, and propose a decentralized CP-ABE access control scheme with the privacy preserving secret key extraction. Our scheme does not require any central AA and coordination among multi-authorities. We adopt Pedersen commitment scheme and oblivious commitment based envelope protocols as the main cryptographic primitives to address the privacy problem, thus the users receive secret keys only for valid identity attributes while the AAs learn nothing about the attributes. Our theoretical analysis and extensive experiment demonstrate the presented scheme's security strength and effectiveness in terms of scalability, computation and storage.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129584925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}