SciCumulus: A Lightweight Cloud Middleware to Explore Many Task Computing Paradigm in Scientific Workflows
Daniel de Oliveira, Eduardo S. Ogasawara, F. Baião, M. Mattoso
Most large-scale scientific experiments modeled as scientific workflows produce large amounts of data and require workflow parallelism to reduce execution time. Some existing Scientific Workflow Management Systems (SWfMS) exploit parallelism techniques such as parameter sweep and data fragmentation. In those systems, several computing resources are used to accomplish many computational tasks in homogeneous environments, such as multiprocessor machines or cluster systems. Cloud computing has become a popular high-performance computing model in which (virtualized) resources are provided as services over the Web. Some scientists are starting to adopt the cloud model in scientific domains and are moving their scientific workflows (programs and data) from local environments to the cloud. Nevertheless, it is still difficult for scientists to express a parallel computing paradigm for a workflow on the cloud, and capturing distributed provenance data in the cloud is also an issue. Existing approaches for executing scientific workflows with parallel processing focus mainly on homogeneous environments, whereas in the cloud the scientist has to manage new aspects such as initialization of virtualized instances, scheduling across different cloud environments, the impact of data transfer, and management of instance images. In this paper we propose SciCumulus, a cloud middleware that exploits parameter sweep and data fragmentation parallelism in scientific workflow activities, with provenance support. It sits between the SWfMS and the cloud and is designed with cloud specificities in mind. We have evaluated our approach by executing simulated experiments to analyze the overhead imposed by clouds on workflow execution time.
{"title":"SciCumulus: A Lightweight Cloud Middleware to Explore Many Task Computing Paradigm in Scientific Workflows","authors":"Daniel de Oliveira, Eduardo S. Ogasawara, F. Baião, M. Mattoso","doi":"10.1109/CLOUD.2010.64","DOIUrl":"https://doi.org/10.1109/CLOUD.2010.64","url":null,"abstract":"Most of the large-scale scientific experiments modeled as scientific workflows produce a large amount of data and require workflow parallelism to reduce workflow execution time. Some of the existing Scientific Workflow Management Systems (SWfMS) explore parallelism techniques - such as parameter sweep and data fragmentation. In those systems, several computing resources are used to accomplish many computational tasks in homogeneous environments, such as multiprocessor machines or cluster systems. Cloud computing has become a popular high performance computing model in which (virtualized) resources are provided as services over the Web. Some scientists are starting to adopt the cloud model in scientific domains and are moving their scientific workflows (programs and data) from local environments to the cloud. Nevertheless, it is still difficult for the scientist to express a parallel computing paradigm for the workflow on the cloud. Capturing distributed provenance data at the cloud is also an issue. Existing approaches for executing scientific workflows using parallel processing are mainly focused on homogeneous environments whereas, in the cloud, the scientist has to manage new aspects such as initialization of virtualized instances, scheduling over different cloud environments, impact of data transferring and management of instance images. In this paper we propose SciCumulus, a cloud middleware that explores parameter sweep and data fragmentation parallelism in scientific workflow activities (with provenance support). It works between the SWfMS and the cloud. SciCumulus is designed considering cloud specificities. We have evaluated our approach by executing simulated experiments to analyze the overhead imposed by clouds on the workflow execution time.","PeriodicalId":375404,"journal":{"name":"2010 IEEE 3rd International Conference on Cloud Computing","volume":"154 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116579116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enterprise Architecture Frameworks for Enabling Cloud Computing
D. Ebneter, S. G. Grivas, Tripathi Uttam Kumar, H. Wache
Cloud computing has emerged as a strong factor driving companies to remarkable business success. Far from being just an IT-level support solution, cloud computing is triggering changes in companies' core business models by making them more efficient and cost-effective. This has generated interest among many companies in adopting cloud computing for their existing and new business processes. In this research we present an approach a company can use to analyze whether its operations can be positively impacted by moving to the cloud. We further describe the approach by which the company can make that transition to the cloud.
{"title":"Enterprise Architecture Frameworks for Enabling Cloud Computing","authors":"D. Ebneter, S. G. Grivas, Tripathi Uttam Kumar, H. Wache","doi":"10.1109/CLOUD.2010.47","DOIUrl":"https://doi.org/10.1109/CLOUD.2010.47","url":null,"abstract":"Cloud computing has emerged as a strong factor driving companies to remarkable business success. Far from just being an IT level support solution cloud computing is triggering changes in their core business models by making them more efficient and cost-effective. This has generated an interest for a lot of companies to try and adopt cloud computing for their existing and new business process. In this research we present an approach which a company can use to analyze if its operations can be positively impacted by moving to the cloud. Further we describe our approach using which the company can make that transition to the cloud.","PeriodicalId":375404,"journal":{"name":"2010 IEEE 3rd International Conference on Cloud Computing","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130411833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Lifetime Supporting Framework for Cloud Applications
S. Hosono, He Huang, T. Hara, Y. Shimomura, T. Arai
This paper proposes a framework that integrates the development and operation environments for cloud applications. Adopting a lifecycle-management perspective, the framework is equipped with tools and platforms that seamlessly integrate the lifetime phases: requirement analysis, architecture design, application implementation, operation, and improvement. These are grounded in theories from design engineering, enabling identification of constraints arising in the development process and of dependencies among functional modules. A case study shows the feasibility of the design principles and indicates the framework's potential as an Application Platform as a Service (APaaS) that can form an ecosystem of datacenter operators, systems integrators, and application providers.
{"title":"A Lifetime Supporting Framework for Cloud Applications","authors":"S. Hosono, He Huang, T. Hara, Y. Shimomura, T. Arai","doi":"10.1109/CLOUD.2010.63","DOIUrl":"https://doi.org/10.1109/CLOUD.2010.63","url":null,"abstract":"This paper proposes a framework, which integrates the development and operation environments for cloud applications. Adopting perspectives on lifecycle management, the framework is equipped with tools and platforms, which seamlessly integrate lifetime phases: requirement analysis, architecture design, application implementation, operation and improvement. These are predicated on theories in design engineering, enabling identification of constraints arising in the development process and of dependencies among functional modules. A case study shows the feasibilities of the design principles, and indicates possibilities for the framework to be an Application Platform as a Service (APaaS), which can form an eco-system of datacenter operators, systems integrators and application providers.","PeriodicalId":375404,"journal":{"name":"2010 IEEE 3rd International Conference on Cloud Computing","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131541863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Seamless Support of Multimedia Distributed Applications Through a Cloud
S. Ferretti, V. Ghini, F. Panzieri, E. Turrini
We describe a cross-layer architecture we are developing to offer mobility support to wireless devices executing multimedia applications that require seamless communications. The architecture is based on pairs of proxies, which enable the adaptive and concurrent use of different network interfaces during communication. A cloud computing environment is used as the infrastructure to dynamically set up (and release) the proxies on the server side, in accordance with the pay-as-you-go principle of cloud-based services.
{"title":"Seamless Support of Multimedia Distributed Applications Through a Cloud","authors":"S. Ferretti, V. Ghini, F. Panzieri, E. Turrini","doi":"10.1109/CLOUD.2010.16","DOIUrl":"https://doi.org/10.1109/CLOUD.2010.16","url":null,"abstract":"We describe a cross-layer architecture we are developing in order to offer mobility support to wireless devices executing multimedia applications which require seamless communications. This architecture is based on the use of pairs of proxies, which enable the adaptive and concurrent use of different network interfaces during the communications. A cloud computing environment is used as the infrastructure to set up (and release) dynamically the proxies on the server-side, in accordance with the pay-as-you-go principle of cloud based services.","PeriodicalId":375404,"journal":{"name":"2010 IEEE 3rd International Conference on Cloud Computing","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121493386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding Performance Interference of I/O Workload in Virtualized Cloud Environments
Xing Pu, Ling Liu, Yiduo Mei, Sankaran Sivathanu, Younggyun Koh, C. Pu
Server virtualization offers the ability to slice large, underutilized physical servers into smaller, parallel virtual machines (VMs), enabling diverse applications to run in isolated environments on a shared hardware platform. Effective management of virtualized cloud environments introduces new and unique challenges, such as efficient CPU scheduling for virtual machines and effective allocation of virtual machines to handle both CPU-intensive and I/O-intensive workloads. Although a fair number of research projects have been dedicated to measuring, scheduling, and resource management of virtual machines, there is still a lack of in-depth understanding of the performance factors that impact the efficiency and effectiveness of resource multiplexing and resource scheduling among virtual machines. In this paper, we present an experimental study of the performance interference in parallel processing of CPU- and network-intensive workloads in the Xen Virtual Machine Monitor (VMM). We conduct extensive experiments to measure the performance interference among VMs running network I/O workloads that are either CPU bound or network bound. Based on our experiments and observations, we draw four key conclusions that are critical to effective management of virtualized cloud environments for both cloud service providers and cloud consumers. First, running network-intensive workloads in isolated environments on a shared hardware platform can lead to high overheads due to extensive context switches and events in the driver domain and the VMM. Second, co-locating CPU-intensive workloads in isolated environments on a shared hardware platform can incur high CPU contention due to the demand for fast memory page exchanges in the I/O channel. Third, running CPU-intensive and network-intensive workloads together incurs the least resource contention, delivering higher aggregate performance. Last but not least, identifying the factors that determine the total demand for exchanged memory pages is critical to an in-depth understanding of the interference overheads in the I/O channel of the driver domain and the VMM.
{"title":"Understanding Performance Interference of I/O Workload in Virtualized Cloud Environments","authors":"Xing Pu, Ling Liu, Yiduo Mei, Sankaran Sivathanu, Younggyun Koh, C. Pu","doi":"10.1109/CLOUD.2010.65","DOIUrl":"https://doi.org/10.1109/CLOUD.2010.65","url":null,"abstract":"Server virtualization offers the ability to slice large, underutilized physical servers into smaller, parallel virtual machines (VMs), enabling diverse applications to run in isolated environments on a shared hardware platform. Effective management of virtualized cloud environments introduces new and unique challenges, such as efficient CPU scheduling for virtual machines, effective allocation of virtual machines to handle both CPU intensive and I/O intensive workloads. Although a fair number of research projects have dedicated to measuring, scheduling, and resource management of virtual machines, there still lacks of in-depth understanding of the performance factors that can impact the efficiency and effectiveness of resource multiplexing and resource scheduling among virtual machines. In this paper, we present our experimental study on the performance interference in parallel processing of CPU and network intensive workloads in the Xen Virtual Machine Monitors (VMMs). We conduct extensive experiments to measure the performance interference among VMs running network I/O workloads that are either CPU bound or network bound. Based on our experiments and observations, we conclude with four key findings that are critical to effective management of virtualized cloud environments for both cloud service providers and cloud consumers. First, running network-intensive workloads in isolated environments on a shared hardware platform can lead to high overheads due to extensive context switches and events in driver domain and VMM. Second, co-locating CPU-intensive workloads in isolated environments on a shared hardware platform can incur high CPU contention due to the demand for fast memory pages exchanges in I/O channel. Third, running CPU-intensive workloads and network-intensive workloads in conjunction incurs the least resource contention, delivering higher aggregate performance. Last but not the least, identifying factors that impact the total demand of the exchanged memory pages is critical to the in-depth understanding of the interference overheads in I/O channel in the driver domain and VMM.","PeriodicalId":375404,"journal":{"name":"2010 IEEE 3rd International Conference on Cloud Computing","volume":"06 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130652974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud Computing: A Digital Libraries Perspective
Pradeep B. Teregowda, B. Urgaonkar, C. Lee Giles
Provisioning and maintaining infrastructure for Web-based digital library search engines such as CiteSeerX presents several challenges. CiteSeerX provides autonomous citation indexing, full-text indexing, and extensive document metadata for documents crawled from the web across computer and information science and related fields. Infrastructure virtualization and cloud computing are particularly attractive choices for CiteSeerX, which is challenged by growth in the size of the indexed document collection, by new features, and most prominently by usage. In this paper, we discuss the constraints and choices faced by information retrieval systems like CiteSeerX by exploring in detail the issues involved in placing CiteSeerX on current cloud infrastructure offerings. We also implement an ad hoc virtualized storage system for experimenting with the adoption of cloud infrastructure services. Our results show that a cloud implementation of CiteSeerX may be a feasible alternative for its continued operation and growth.
{"title":"Cloud Computing: A Digital Libraries Perspective","authors":"Pradeep B. Teregowda, B. Urgaonkar, C. Lee Giles","doi":"10.1109/CLOUD.2010.49","DOIUrl":"https://doi.org/10.1109/CLOUD.2010.49","url":null,"abstract":"Provisioning and maintenance of infrastructure for Web based digital library search engines such as CiteSeer$^x$ present several challenges. CiteSeer$^x$ provides autonomous citation indexing, full text indexing, and extensive document metadata from document scrawled from the web across computer and information sciences and related fields. Infrastructure virtualization and cloud computing are particularly attractive choices for CiteSeer$^x$, which is challenged by both growth in the size of the indexed document collection, new features and most prominently usage. In this paper, we discuss constraints and choices faced by information retrieval systems like CiteSeer$^x$ by exploring in detail aspects of placing CiteSeer$^x$ into current cloud infrastructure offerings. We also implement an ad-hoc virtualized storage system for experimenting with adoption of cloud infrastructure services. Our results show that a cloud implementation of CiteSeer$^x$ may be a feasible alternative for its continued operation and growth","PeriodicalId":375404,"journal":{"name":"2010 IEEE 3rd International Conference on Cloud Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130844119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data Intensive Query Processing for Large RDF Graphs Using Cloud Computing Tools
M. Husain, L. Khan, Murat Kantarcioglu, B. Thuraisingham
Cloud computing is the newest paradigm in the IT world and hence the focus of new research. Companies hosting cloud computing services face the challenge of handling data-intensive applications, and semantic web technologies can be an ideal candidate to use together with cloud computing tools to provide a solution. These technologies have been standardized by the World Wide Web Consortium (W3C); one such standard is the Resource Description Framework (RDF). With the explosion of semantic web technologies, large RDF graphs are commonplace, and current frameworks do not scale to them. In this paper, we describe a framework that we built using Hadoop, a popular open-source framework for cloud computing, to store and retrieve large numbers of RDF triples. We describe a scheme to store RDF data in the Hadoop Distributed File System. We present an algorithm that uses a cost model to generate the best possible query plan for answering a SPARQL Protocol and RDF Query Language (SPARQL) query, and we use Hadoop's MapReduce framework to answer the queries. Our results show that we can store large RDF graphs in Hadoop clusters built with cheap commodity-class hardware. Furthermore, we show that our framework is scalable and efficient and can easily handle billions of RDF triples, unlike traditional approaches.
{"title":"Data Intensive Query Processing for Large RDF Graphs Using Cloud Computing Tools","authors":"M. Husain, L. Khan, Murat Kantarcioglu, B. Thuraisingham","doi":"10.1109/CLOUD.2010.36","DOIUrl":"https://doi.org/10.1109/CLOUD.2010.36","url":null,"abstract":"Cloud computing is the newest paradigm in the IT world and hence the focus of new research. Companies hosting cloud computing services face the challenge of handling data intensive applications. Semantic web technologies can be an ideal candidate to be used together with cloud computing tools to provide a solution. These technologies have been standardized by the World Wide Web Consortium (W3C). One such standard is the Resource Description Framework (RDF). With the explosion of semantic web technologies, large RDF graphs are common place. Current frameworks do not scale for large RDF graphs. In this paper, we describe a framework that we built using Hadoop, a popular open source framework for Cloud Computing, to store and retrieve large numbers of RDF triples. We describe a scheme to store RDF data in Hadoop Distributed File System. We present an algorithm to generate the best possible query plan to answer a SPARQL Protocol and RDF Query Language (SPARQL) query based on a cost model. We use Hadoop's MapReduce framework to answer the queries. Our results show that we can store large RDF graphs in Hadoop clusters built with cheap commodity class hardware. Furthermore, we show that our framework is scalable and efficient and can easily handle billions of RDF triples, unlike traditional approaches.","PeriodicalId":375404,"journal":{"name":"2010 IEEE 3rd International Conference on Cloud Computing","volume":"81 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131205957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Provisioning Web Services from Resource Constrained Mobile Devices
Mahbub Hassan, Weiliang Zhao, Jian Yang
Increasing processing power, storage, and support for multiple network interfaces make mobile devices promising hosts for services and participants in service discovery networks. A few efforts have been made to facilitate the provisioning of mobile Web services, but they have not addressed how to host heavy-duty services on mobile devices with limited processing power and memory. In this paper, we propose a framework that partitions the workload of complex services across a distributed environment while keeping the Web service interfaces on the mobile devices. The mobile device is the integration point, with the support of backend nodes and other Web services; functions that require the resources of the mobile device or interaction with the mobile user are executed locally. The framework supports hosting mobile Web services involving complex business processes by partitioning the tasks and delegating the heavy-duty tasks to remote servers. We have analyzed the proposed framework using a sample prototype, and the experimental results show a significant performance improvement when the framework is used to host mobile Web services.
{"title":"Provisioning Web Services from Resource Constrained Mobile Devices","authors":"Mahbub Hassan, Weiliang Zhao, Jian Yang","doi":"10.1109/CLOUD.2010.30","DOIUrl":"https://doi.org/10.1109/CLOUD.2010.30","url":null,"abstract":"The increasing processing power, storage and support of multiple network interfaces are promising the mobile devices to host services and participate in service discovery network. A few efforts have been taken to facilitate provisioning mobile Web services. However they have not addressed the issue about how to host heavy-duty services on mobile devices with limited computing resources in terms of processing power and memory. In this paper, we propose a framework which partitions the workload of complex services in a distributed environment and keeps the Web service interfaces on mobile devices. The mobile device is the integration point with the support of backend nodes and other Web services. The functions which require the resources of the mobile device and interaction with the mobile user are executed locally. The framework provides support for hosting mobile Web services involving complex business processes by partitioning the tasks and delegating the heavy-duty tasks to remote servers. We have analyzed the proposed framework using a sample prototype. The experimental results have shown a significant performance improvement by deploying the proposed framework in hosting mobile Web services.","PeriodicalId":375404,"journal":{"name":"2010 IEEE 3rd International Conference on Cloud Computing","volume":"369 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132935116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Remote Swap Management Framework in a Virtual Machine Cluster
T. Okuda, Y. Nagai, Y. Okamoto, Eiji Kawai
Virtual machine clusters tend to accumulate large amounts of unused memory. We propose a remote swap management framework for a VM cluster that dynamically configures remote swap space for running VMs according to their memory usage. We explain the functional requirements and the design of the framework, and demonstrate its effectiveness using a prototype implementation.
{"title":"A Remote Swap Management Framework in a Virtual Machine Cluster","authors":"T. Okuda, Y. Nagai, Y. Okamoto, Eiji Kawai","doi":"10.1109/CLOUD.2010.13","DOIUrl":"https://doi.org/10.1109/CLOUD.2010.13","url":null,"abstract":"Virtual Machine Clusters tend to increase the amount of unused memory. We propose a remote swap management framework in a VM cluster which configures a remote swap dynamically to running VMs according to memory usage. We explain the functional requirements and the design of the framework, and demonstrate the effectiveness of the framework using prototype implementation.","PeriodicalId":375404,"journal":{"name":"2010 IEEE 3rd International Conference on Cloud Computing","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123882211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards an Anonymous Access Control and Accountability Scheme for Cloud Computing
Meiko Jensen, Sven Schäge, Jörg Schwenk
An important aspect of trust in cloud computing is preventing the cloud provider from misusing the user's data. In this work-in-progress paper, we propose data anonymization as an approach to this problem. Since anonymization directly complicates cloud usage accounting, we also propose a solution for anonymous yet reliable access control and accountability based on ring and group signatures.
{"title":"Towards an Anonymous Access Control and Accountability Scheme for Cloud Computing","authors":"Meiko Jensen, Sven Schäge, Jörg Schwenk","doi":"10.1109/CLOUD.2010.61","DOIUrl":"https://doi.org/10.1109/CLOUD.2010.61","url":null,"abstract":"An important aspect of trust in cloud computing consists in preventing the cloud provider from misusing the user's data. In this work-in-progress paper, we propose the approach of data anonymization to solve this problem. As this directly leads to problems of cloud usage accounting, we also propose a solution for anonymous yet reliable access control and accountability based on ring and group signatures.","PeriodicalId":375404,"journal":{"name":"2010 IEEE 3rd International Conference on Cloud Computing","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115865080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}