This paper presents the architecture and the organization of a Mashup Container that supports the deployment and the execution of Event-Driven Mashups (i.e., Composite Services in which the Services interact through events rather than through the classical Call-Response paradigm) following the Platform as a Service model in the Cloud Computing paradigm. We describe the two main modules of the container, namely the Deployment Module and the Service Execution Platform, and focus our attention on the performance of the latter. In particular, we discuss the results of an evaluation test that we ran in a virtualized (VMware-based) environment supporting scalability and fault tolerance.
Michele Stecca and M. Maresca, "An Architecture for a Mashup Container in Virtualized Environments," 2010 IEEE 3rd International Conference on Cloud Computing, July 2010. DOI: 10.1109/CLOUD.2010.34
The Low Latency Fault Tolerance (LLFT) middleware provides fault tolerance for distributed applications deployed within a cloud computing or data center environment, using the leader/follower replication approach. The LLFT middleware consists of a Low Latency Messaging Protocol, a Leader-Determined Membership Protocol, and a Virtual Determinizer Framework. The Messaging Protocol provides a reliable, totally ordered message delivery service by employing a direct group-to-group multicast in which the ordering is determined by the primary replica in the group. The Membership Protocol provides a fast reconfiguration and recovery service when a replica becomes faulty and when a replica joins or leaves a group. The Virtual Determinizer Framework captures ordering information at the primary replica and enforces the same ordering at the backup replicas for major sources of non-determinism. The LLFT middleware maintains strong replica consistency, offers application transparency, and achieves low end-to-end latency.
Wenbing Zhao, P. Melliar-Smith, and L. Moser, "Fault Tolerance Middleware for Cloud Computing," 2010 IEEE 3rd International Conference on Cloud Computing, July 2010. DOI: 10.1109/CLOUD.2010.26
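The core ordering idea in the abstract above, a primary replica that determines the total delivery order for its group, can be sketched in a few lines. This is a minimal illustration under assumed class names (`PrimaryReplica`, `BackupReplica` are hypothetical), not the LLFT protocol itself: the primary stamps each message with a sequence number, and backups deliver strictly in stamp order, buffering any message that arrives early.

```python
# Minimal sketch of primary-determined total ordering (hypothetical classes;
# not the LLFT protocol): the primary stamps messages, backups deliver in
# stamp order and buffer out-of-order arrivals.

class PrimaryReplica:
    def __init__(self):
        self.next_seq = 0

    def order(self, msg):
        """Stamp a message with the next sequence number before multicast."""
        stamped = (self.next_seq, msg)
        self.next_seq += 1
        return stamped

class BackupReplica:
    def __init__(self):
        self.expected = 0
        self.buffer = {}          # out-of-order messages awaiting delivery
        self.delivered = []

    def receive(self, stamped):
        seq, msg = stamped
        self.buffer[seq] = msg
        # Deliver every consecutive message starting at the expected number.
        while self.expected in self.buffer:
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1

primary = PrimaryReplica()
backup = BackupReplica()
stamps = [primary.order(m) for m in ["a", "b", "c"]]
# Simulate out-of-order network arrival; delivery order is still a, b, c.
for s in (stamps[2], stamps[0], stamps[1]):
    backup.receive(s)
print(backup.delivered)  # ['a', 'b', 'c']
```

Because every backup applies the primary's stamps the same way, all replicas deliver messages in one agreed order, which is what keeps the replicas' states consistent.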
Monitoring plays a significant role in improving the quality of service in cloud computing. It helps clouds to scale resource utilization adaptively, to identify defects in services for service developers, and to discover usage patterns of numerous end users. However, due to the heterogeneity of components in clouds and the complexity arising from the wealth of runtime information, monitoring in clouds faces many new challenges. In this paper, we propose a runtime model for cloud monitoring (RMCM), which denotes an intuitive representation of a running cloud by focusing on common monitoring concerns. Raw monitoring data gathered by multiple monitoring techniques are organized by RMCM to present a more intuitive profile of a running cloud. We applied RMCM in the implementation of a flexible monitoring framework, which can achieve a balance between runtime overhead and monitoring capability via adaptive management of monitoring facilities. Our experience of utilizing the monitoring framework on a real cloud demonstrates the feasibility and effectiveness of our approach.
Jin Shao, Hao Wei, Qianxiang Wang, and Hong Mei, "A Runtime Model Based Monitoring Approach for Cloud," 2010 IEEE 3rd International Conference on Cloud Computing, July 2010. DOI: 10.1109/CLOUD.2010.31
We describe a cross-layer architecture we are developing in order to offer mobility support to wireless devices executing multimedia applications which require seamless communications. This architecture is based on the use of pairs of proxies, which enable the adaptive and concurrent use of different network interfaces during communications. A cloud computing environment is used as the infrastructure to dynamically set up (and release) the proxies on the server side, in accordance with the pay-as-you-go principle of cloud-based services.
S. Ferretti, V. Ghini, F. Panzieri, and E. Turrini, "Seamless Support of Multimedia Distributed Applications Through a Cloud," 2010 IEEE 3rd International Conference on Cloud Computing, July 2010. DOI: 10.1109/CLOUD.2010.16
S. Hosono, He Huang, T. Hara, Y. Shimomura, T. Arai
This paper proposes a framework which integrates the development and operation environments for cloud applications. Adopting a lifecycle management perspective, the framework is equipped with tools and platforms which seamlessly integrate the lifetime phases: requirements analysis, architecture design, application implementation, operation, and improvement. These are predicated on theories in design engineering, enabling the identification of constraints arising in the development process and of dependencies among functional modules. A case study shows the feasibility of the design principles, and indicates the potential for the framework to become an Application Platform as a Service (APaaS), which can form an ecosystem of datacenter operators, systems integrators, and application providers.
"A Lifetime Supporting Framework for Cloud Applications," 2010 IEEE 3rd International Conference on Cloud Computing, July 2010. DOI: 10.1109/CLOUD.2010.63
Daniel de Oliveira, Eduardo S. Ogasawara, F. Baião, M. Mattoso
Most of the large-scale scientific experiments modeled as scientific workflows produce a large amount of data and require workflow parallelism to reduce workflow execution time. Some of the existing Scientific Workflow Management Systems (SWfMS) explore parallelism techniques such as parameter sweep and data fragmentation. In those systems, several computing resources are used to accomplish many computational tasks in homogeneous environments, such as multiprocessor machines or cluster systems. Cloud computing has become a popular high performance computing model in which (virtualized) resources are provided as services over the Web. Some scientists are starting to adopt the cloud model in scientific domains and are moving their scientific workflows (programs and data) from local environments to the cloud. Nevertheless, it is still difficult for the scientist to express a parallel computing paradigm for the workflow on the cloud. Capturing distributed provenance data in the cloud is also an issue. Existing approaches for executing scientific workflows using parallel processing are mainly focused on homogeneous environments, whereas in the cloud the scientist has to manage new aspects such as the initialization of virtualized instances, scheduling over different cloud environments, the impact of data transfer, and the management of instance images. In this paper we propose SciCumulus, a cloud middleware that explores parameter sweep and data fragmentation parallelism in scientific workflow activities (with provenance support). It works between the SWfMS and the cloud, and is designed with cloud specificities in mind. We have evaluated our approach by executing simulated experiments to analyze the overhead imposed by clouds on the workflow execution time.
Daniel de Oliveira, Eduardo S. Ogasawara, F. Baião, and M. Mattoso, "SciCumulus: A Lightweight Cloud Middleware to Explore Many Task Computing Paradigm in Scientific Workflows," 2010 IEEE 3rd International Conference on Cloud Computing, July 2010. DOI: 10.1109/CLOUD.2010.64
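The parameter-sweep parallelism described above, running the same workflow activity once per parameter combination while recording provenance, can be sketched as follows. All names (`activity`, `sweep`, the parameter grid) are illustrative assumptions, not the SciCumulus API; real executions would land on cloud instances rather than local threads.

```python
# Hypothetical sketch of parameter-sweep parallelism with a simple
# provenance record per execution. Names are illustrative only.

from concurrent.futures import ThreadPoolExecutor
from itertools import product

def activity(alpha, size):
    """Stand-in for a workflow activity executed on a cloud instance."""
    return alpha * size

def sweep(param_grid, workers=4):
    # Build one parameter combination per point of the Cartesian grid.
    combos = [dict(zip(param_grid, values))
              for values in product(*param_grid.values())]
    provenance = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(activity, **c): c for c in combos}
        for fut, combo in futures.items():
            # Record which parameters produced which result.
            provenance.append({"params": combo, "result": fut.result()})
    return provenance

records = sweep({"alpha": [1, 2], "size": [10, 20]})
print(len(records))  # 4 executions, one per parameter combination
```

The provenance list is the key point: each result stays linked to the exact parameters that produced it, which is what later lets a scientist trace any output back through the sweep.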
M. Husain, L. Khan, Murat Kantarcioglu, B. Thuraisingham
Cloud computing is the newest paradigm in the IT world and hence the focus of new research. Companies hosting cloud computing services face the challenge of handling data intensive applications. Semantic web technologies can be an ideal candidate to be used together with cloud computing tools to provide a solution. These technologies have been standardized by the World Wide Web Consortium (W3C). One such standard is the Resource Description Framework (RDF). With the explosion of semantic web technologies, large RDF graphs are commonplace. Current frameworks do not scale for large RDF graphs. In this paper, we describe a framework that we built using Hadoop, a popular open source framework for Cloud Computing, to store and retrieve large numbers of RDF triples. We describe a scheme to store RDF data in the Hadoop Distributed File System. We present an algorithm to generate the best possible query plan to answer a SPARQL Protocol and RDF Query Language (SPARQL) query based on a cost model. We use Hadoop's MapReduce framework to answer the queries. Our results show that we can store large RDF graphs in Hadoop clusters built with cheap commodity-class hardware. Furthermore, we show that our framework is scalable and efficient and can easily handle billions of RDF triples, unlike traditional approaches.
M. Husain, L. Khan, Murat Kantarcioglu, and B. Thuraisingham, "Data Intensive Query Processing for Large RDF Graphs Using Cloud Computing Tools," 2010 IEEE 3rd International Conference on Cloud Computing, July 2010. DOI: 10.1109/CLOUD.2010.36
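The core mechanism the abstract above relies on, answering a SPARQL query by running joins as MapReduce jobs over triples, can be illustrated with a toy reduce-side join. This is a minimal sketch of the general technique under invented sample data, not the paper's cost-based query planner: the map phase tags each triple matching a query pattern with its join variable, and the reduce phase joins the tags grouped per subject.

```python
# Illustrative reduce-side join (invented sample data, not the paper's
# algorithm) for the two-pattern query: ?x worksFor acme . ?x livesIn ?city

from collections import defaultdict

triples = [
    ("alice", "worksFor", "acme"),
    ("bob",   "worksFor", "acme"),
    ("alice", "livesIn",  "rome"),
    ("bob",   "livesIn",  "oslo"),
]

def map_phase(triple):
    # Emit (join-key, tagged-value) pairs for triples matching a pattern.
    s, p, o = triple
    if p == "worksFor" and o == "acme":
        yield (s, ("works", None))
    if p == "livesIn":
        yield (s, ("lives", o))

def reduce_phase(key, values):
    # Join: keep subjects that matched the first pattern, emit their cities.
    works = any(tag == "works" for tag, _ in values)
    cities = [v for tag, v in values if tag == "lives"]
    if works:
        for city in cities:
            yield (key, city)

# Shuffle step: group mapper output by key, as MapReduce would.
grouped = defaultdict(list)
for t in triples:
    for k, v in map_phase(t):
        grouped[k].append(v)

answers = [a for k, vs in sorted(grouped.items()) for a in reduce_phase(k, vs)]
print(answers)  # [('alice', 'rome'), ('bob', 'oslo')]
```

At scale the same three steps run distributed: mappers scan triple files in HDFS, the framework shuffles by join key, and reducers emit the join results, so the join cost is dominated by the shuffle rather than any single machine's memory.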
Increasing processing power, storage, and support for multiple network interfaces are enabling mobile devices to host services and participate in service discovery networks. A few efforts have been made to facilitate the provisioning of mobile Web services. However, they have not addressed how to host heavy-duty services on mobile devices with limited computing resources in terms of processing power and memory. In this paper, we propose a framework which partitions the workload of complex services in a distributed environment and keeps the Web service interfaces on mobile devices. The mobile device is the integration point, with the support of backend nodes and other Web services. The functions which require the resources of the mobile device and interaction with the mobile user are executed locally. The framework provides support for hosting mobile Web services involving complex business processes by partitioning the tasks and delegating the heavy-duty tasks to remote servers. We have analyzed the proposed framework using a sample prototype. The experimental results show a significant performance improvement from deploying the proposed framework in hosting mobile Web services.
Mahbub Hassan, Weiliang Zhao, and Jian Yang, "Provisioning Web Services from Resource Constrained Mobile Devices," 2010 IEEE 3rd International Conference on Cloud Computing, July 2010. DOI: 10.1109/CLOUD.2010.30
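The partitioning idea above, keeping the service interface on the device while delegating heavy-duty tasks to backend nodes, reduces to a dispatch decision per request. The sketch below is a hedged illustration with entirely hypothetical names (`delegate`, `MobileServiceFrontend`, the task set); the paper's framework partitions real business processes, not a lookup table.

```python
# Hypothetical sketch: the device-side frontend runs lightweight,
# user-facing work locally and offloads heavy-duty tasks to a backend node.

def delegate(task, payload):
    """Stand-in for a remote call to a backend node."""
    return f"remote:{task}({payload})"

class MobileServiceFrontend:
    # Tasks considered too heavy for the device (illustrative set).
    HEAVY = {"transcode", "index"}

    def handle(self, task, payload):
        if task in self.HEAVY:
            return delegate(task, payload)   # offload heavy-duty work
        return f"local:{task}({payload})"    # run lightweight work on-device

svc = MobileServiceFrontend()
print(svc.handle("transcode", "video1"))  # remote:transcode(video1)
print(svc.handle("echo", "hi"))           # local:echo(hi)
```

Either way the caller talks only to the device's Web service interface, which is what keeps the device the single integration point.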
Virtual machine clusters tend to increase the amount of unused memory. We propose a remote swap management framework for a VM cluster which dynamically configures remote swap space for running VMs according to their memory usage. We explain the functional requirements and the design of the framework, and demonstrate its effectiveness using a prototype implementation.
T. Okuda, Y. Nagai, Y. Okamoto, and Eiji Kawai, "A Remote Swap Management Framework in a Virtual Machine Cluster," 2010 IEEE 3rd International Conference on Cloud Computing, July 2010. DOI: 10.1109/CLOUD.2010.13
An important aspect of trust in cloud computing is preventing the cloud provider from misusing the user's data. In this work-in-progress paper, we propose the approach of data anonymization to solve this problem. As this directly leads to problems of cloud usage accounting, we also propose a solution for anonymous yet reliable access control and accountability based on ring and group signatures.
Meiko Jensen, Sven Schäge, and Jörg Schwenk, "Towards an Anonymous Access Control and Accountability Scheme for Cloud Computing," 2010 IEEE 3rd International Conference on Cloud Computing, July 2010. DOI: 10.1109/CLOUD.2010.61