W. Lloyd, S. Pallickara, O. David, M. Arabi, K. Rojas
Abstraction of physical hardware using infrastructure-as-a-service (IaaS) clouds leads to the simplistic view that resources are homogeneous and that infinite scaling is possible with linear increases in performance. Support for autonomic scaling of multi-tier service oriented applications requires determination of when, what, and where to scale. "When" is addressed by hotspot detection schemes using techniques including performance modeling and time series analysis. "What" relates to determining the quantity and size of new resources to provision. "Where" involves identification of the best location(s) to provision new resources. In this paper we investigate primarily "where" new infrastructure should be provisioned, and secondly "what" the infrastructure should be. Dynamic scaling of infrastructure for service oriented applications requires rapid response to changes in demand to meet application quality-of-service requirements. We investigate the performance and resource cost implications of VM placement when dynamically scaling server infrastructure of service oriented applications. We evaluate dynamic scaling in the context of providing modeling-as-a-service for two environmental science models.
"Dynamic Scaling for Service Oriented Applications: Implications of Virtual Machine Placement on IaaS Clouds," 2014 IEEE International Conference on Cloud Engineering (IC2E), 11 March 2014. doi:10.1109/IC2E.2014.40
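The "where" decision above can be illustrated with a minimal placement sketch: score candidate hosts for a new VM and pick the one that fits with the most remaining headroom. This is a hypothetical least-loaded heuristic, not the placement strategy evaluated in the paper; host names, capacities, and the scoring function are all illustrative assumptions.

```python
# Hypothetical sketch of a "where" decision: score candidate hosts for a new
# VM and pick the one with the most free capacity left after placement.
# This is NOT the paper's algorithm; all names and numbers are made up.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_free: float   # free CPU cores
    mem_free: float   # free memory (GB)

def place(hosts, cpu_req, mem_req):
    """Return the host that can fit the VM with the most leftover capacity."""
    candidates = [h for h in hosts
                  if h.cpu_free >= cpu_req and h.mem_free >= mem_req]
    if not candidates:
        return None  # "what" to do next (e.g. provision a new host) is out of scope
    def headroom(h):
        return (h.cpu_free - cpu_req) + (h.mem_free - mem_req)
    return max(candidates, key=headroom)

hosts = [Host("h1", 2.0, 4.0), Host("h2", 8.0, 16.0), Host("h3", 1.0, 2.0)]
best = place(hosts, cpu_req=2.0, mem_req=4.0)   # h3 cannot fit; h2 has most headroom
```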
P. Dettori, D. Frank, Seetharami R. Seelam, P. Feillet
Cloud offers numerous technical middleware services such as databases, caches, messaging systems, and storage, but very few business middleware services as first-tier managed services. Business middleware such as business process management, business rules, operational decision management, content management, and business analytics, if deployed in a cloud environment, is typically only available in a hosted (black-box) model. This is partly due to where the cloud is in its evolution, and mostly due to the relatively higher complexity of business middleware vs. technical middleware in deployment, provisioning, usage, etc. Business middleware consists of multiple functions for business process design and modeling, execution, optimization, monitoring, and analysis. These functions and their associated complexity have inhibited the wholesale migration of existing business middleware to the cloud. To better understand the complexity of bringing business middleware to the cloud and to develop a systematic cloud enablement approach, we studied the deployment of IBM's Operational Decision Manager (ODM) business middleware product as a managed service (Cloud Decision Service) in IBM's BlueMix cloud platform. Our study indicates that complex middleware must be componentized along functional boundaries, and that these functions must be provided to different business users and developers with a cloud experience. In addition, middleware services must leverage other cloud services, and they should provide interfaces so that they can be consumed by Java applications as well as by polyglot applications (JavaScript, Ruby, Python, etc.). Applications can bind to and use our Cloud Decision Service in a matter of seconds. In contrast, it takes hours to days to set up such a service in the traditional packaged software model. Based on the lessons learned from this experiment, we develop a blueprint for enabling high-value business middleware as managed cloud services.
"Blueprint for Business Middleware as a Managed Cloud Service," 2014 IEEE International Conference on Cloud Engineering (IC2E), 11 March 2014. doi:10.1109/IC2E.2014.68
Cloud storage services and NoSQL systems typically guarantee only Eventual Consistency. Knowing the degree of inconsistency increases transparency and comparability; it also eases application development. As every change to the system implementation, configuration, and deployment may affect the consistency guarantees of a storage system, long-term experiments are necessary to analyze how consistency behavior evolves over time. Building on our original publication on consistency benchmarking, we describe extensions to our benchmarking approach and report the surprising development of consistency behavior in Amazon S3 over the last two years. Based on our findings, we argue that consistency behavior should be monitored continuously and that deployment decisions should be reconsidered periodically. For this purpose, we propose a new method called Indirect Consistency Monitoring, which allows tracking all application-relevant changes in consistency behavior far more cost-efficiently than continuously running consistency benchmarks.
David Bermbach, S. Tai, "Benchmarking Eventual Consistency: Lessons Learned from Long-Term Experimental Studies," 2014 IEEE International Conference on Cloud Engineering (IC2E), 11 March 2014. doi:10.1109/IC2E.2014.37
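The basic measurement behind such consistency benchmarks can be sketched as: write a fresh value, then poll reads until the new value becomes visible, and report the elapsed time as the inconsistency window. The store below is a toy simulation with a fixed visibility lag; a real benchmark would issue these operations against S3 or a similar service.

```python
# Minimal sketch of measuring an eventual-consistency "inconsistency window":
# write a fresh value, then poll reads until the new value is returned.
# LaggyStore is a toy stand-in for a real eventually consistent store.
import time

class LaggyStore:
    """Toy store whose writes become visible only after a fixed lag."""
    def __init__(self, lag):
        self.lag = lag
        self.value = None
        self.write_time = None

    def write(self, value):
        self.value = value
        self.write_time = time.monotonic()

    def read(self):
        # Before the lag expires, readers still see the stale (None) view.
        if self.write_time is None or time.monotonic() - self.write_time < self.lag:
            return None
        return self.value

def inconsistency_window(store, value, poll_interval=0.005):
    """Write, then poll until the write is visible; return elapsed seconds."""
    store.write(value)
    start = time.monotonic()
    while store.read() != value:
        time.sleep(poll_interval)
    return time.monotonic() - start

window = inconsistency_window(LaggyStore(lag=0.05), "v1")  # roughly the 50 ms lag
```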
Service brokers are commonly used in the cloud computing paradigm to represent service requesters in selecting a service provider. They act as an intermediary between the two parties. One model of the cloud computing paradigm involves three layers: the user, the SaaS provider, and the Cloud provider. The selection of service providers is challenging due to the different levels of Quality of Service that each provider can deliver. In this paper we propose a unique mechanism that allows communication between service brokers in different layers in order to further improve this selection. In addition, we introduce a metric, efficiency, which service brokers can use to deterministically compare service providers with each other.
E. Lim, Philippe Thiran, "Communication of Technical QoS among Cloud Brokers," 2014 IEEE International Conference on Cloud Engineering (IC2E), 11 March 2014. doi:10.1109/IC2E.2014.92
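The abstract does not reproduce the paper's definition of the efficiency metric, but the idea of a deterministic provider comparison can be sketched with a hypothetical stand-in: QoS delivered per unit cost. All scores and prices below are illustrative assumptions.

```python
# Hypothetical "efficiency" metric: delivered QoS per unit cost.
# This is an illustrative stand-in, NOT the paper's actual definition.
def efficiency(qos_score, cost):
    """Higher is better: more QoS delivered per unit of cost."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return qos_score / cost

# Made-up provider offers: A has stronger QoS, B is cheaper.
providers = {
    "A": {"qos_score": 0.90, "cost": 10.0},
    "B": {"qos_score": 0.80, "cost": 6.0},
}
# A broker can rank providers deterministically by this single number.
best = max(providers, key=lambda p: efficiency(**providers[p]))
```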
Many application classes, such as archiving, backup of thousands of nodes in an organization, video sharing, etc., require highly reliable and scalable storage systems. Since it is now feasible to build such storage systems with advanced open source technologies, the challenge becomes how to best utilize those technologies to build and operate such a storage system with optimized cost and performance. The focus of this work is to provide an effective solution and key insights for this challenge within the context of the OpenStack Object Storage (Swift) platform.
Ning Zhang, C. Kant, "Building Cost-Effective Storage Clouds," 2014 IEEE International Conference on Cloud Engineering (IC2E), 11 March 2014. doi:10.1109/IC2E.2014.39
Felipe Díaz Sánchez, S. A. Zahr, M. Gagnaire, Jean-Pierre Laisné, I. Marshall
Cloud Brokers enable interoperability and portability of applications across multiple Cloud Providers. Meanwhile, emerging Cloud Providers are beginning to support more and more unbundled Cloud Instance offerings. Thus, consumers may configure at will the CPU, network bandwidth, memory, and hard disk capacities of their Cloud Instances. These facts enable the standardization of interoperable Cloud Instance configurations. In this paper, CompatibleOne is presented as an approach to deliver Cloud Computing as a commodity. For this, the requirements for turning a product into a commodity have been identified and mapped onto the CompatibleOne architecture components. Our approach shows the practical feasibility of delivering Cloud Computing as a commodity.
"CompatibleOne: Bringing Cloud as a Commodity," 2014 IEEE International Conference on Cloud Engineering (IC2E), 11 March 2014. doi:10.1109/IC2E.2014.62
With the development of cloud computing, many services that provide a desktop environment over a network are coming into widespread use. In recent years, demand for globalized product development and cost reduction has been increasing, leading to a growing demand for using these services in product design and development activities. A challenge in this situation is to make 3D-CAD and CAE software available over remote desktop services in the cloud, namely the Engineering Cloud. In this paper, we propose a lossy image compression method for 3D-CAD and CAE software with a high compression ratio. This method extracts constant gradients by a frequency transform, exploiting the nature of artificial images in which local variations in pixel value are constant. We demonstrate that this method achieves a 1.4 times improvement in compression ratio compared with conventional JPEG. We also apply this method to a remote desktop system and demonstrate that bandwidth is reduced by 43% relative to the JPEG case.
Daichi Shimada, M. Hashima, Yuichi Sato, "Image Compression for Remote Desktop for Engineering Cloud," 2014 IEEE International Conference on Cloud Engineering (IC2E), 11 March 2014. doi:10.1109/IC2E.2014.55
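The core intuition, that artificial (CAD-like) images have locally constant gradients, can be sketched without the paper's frequency transform: model a pixel block as a plane a + b·x + c·y and keep only the three plane coefficients. This toy is a simplification for illustration only; the actual method works in the frequency domain.

```python
# Illustrative sketch (NOT the paper's codec): model an image block as a
# constant-gradient plane a + b*x + c*y. For a block whose gradient really
# is constant, three coefficients reconstruct it exactly, which is why
# such blocks compress far better than under generic JPEG coding.
def fit_plane(block):
    """Fit a plane to a block assumed to have exactly constant gradients."""
    a = block[0][0]                   # offset
    b = block[0][1] - block[0][0]     # horizontal gradient
    c = block[1][0] - block[0][0]     # vertical gradient
    return a, b, c

def reconstruct(a, b, c, h, w):
    return [[a + b * x + c * y for x in range(w)] for y in range(h)]

# Synthetic 4x4 block with a perfectly constant gradient (offset 10, dx 2, dy 3).
block = [[10 + 2 * x + 3 * y for x in range(4)] for y in range(4)]
a, b, c = fit_plane(block)
recon = reconstruct(a, b, c, 4, 4)
error = max(abs(p - q)
            for row_p, row_q in zip(block, recon)
            for p, q in zip(row_p, row_q))   # 0 for a truly planar block
```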
David Breitgand, Zvi Dubitzky, Amir Epstein, Oshrit Feder, A. Glikson, Inbar Shapira, G. T. Carughi
One of the key enablers of cloud provider competitiveness is the ability to over-commit shared infrastructure at higher ratios than competitors, without compromising non-functional requirements such as performance. A widely recognized impediment to achieving this goal is so-called "Virtual Machine sprawl", a phenomenon in which customers order Virtual Machines (VMs) on the cloud, use them extensively, and then leave them inactive for prolonged periods of time. Since a typical cloud provisioning system treats new VM provisioning requests according to the nominal virtual hardware specification, it often happens that the nominal resources of a cloud/pool are exhausted quickly while physical host utilization remains low. We present a novel cloud resource scheduler called Pulsar that extends the OpenStack Nova Filter Scheduler. The key design principle of Pulsar is adaptivity: it recognizes that the effective safely attainable over-commit ratio varies with time due to workload variability, and it dynamically adapts the effective over-commit ratio to these changes. We evaluate Pulsar via extensive simulations and demonstrate its performance on an actual OpenStack-based testbed running popular workloads.
"An Adaptive Utilization Accelerator for Virtualized Environments," 2014 IEEE International Conference on Cloud Engineering (IC2E), 11 March 2014. doi:10.1109/IC2E.2014.63
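The adaptivity principle can be sketched as a simple feedback rule: raise the effective over-commit ratio while observed host utilization stays low, and back off when utilization climbs. This is a hedged illustration of the idea only; the thresholds, step size, and bounds below are invented, and Pulsar's actual control logic is not reproduced here.

```python
# Hedged sketch of adaptive over-commit (NOT Pulsar's actual algorithm):
# adjust the effective CPU over-commit ratio based on measured utilization,
# within fixed bounds. All thresholds and steps are made-up parameters.
def adapt_overcommit(current_ratio, observed_util,
                     low=0.3, high=0.8, step=0.25,
                     min_ratio=1.0, max_ratio=4.0):
    """Return a new over-commit ratio given the observed host utilization."""
    if observed_util < low:       # hosts mostly idle: safe to pack more VMs
        current_ratio += step
    elif observed_util > high:    # hosts under pressure: back off
        current_ratio -= step
    return max(min_ratio, min(max_ratio, current_ratio))

ratio = 2.0
ratio = adapt_overcommit(ratio, observed_util=0.2)   # idle period: ratio rises
ratio = adapt_overcommit(ratio, observed_util=0.9)   # load spike: ratio falls back
```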
Mark Shtern, R. Sandel, Marin Litoiu, Chris Bachalo, V. Theodorou
Distributed Denial of Service attacks are a growing threat to organizations and, as defense mechanisms become more advanced, hackers are aiming at the application layer. For example, application-layer Low and Slow Distributed Denial of Service attacks are becoming a serious issue because, due to their low resource consumption, they are hard to detect. In this position paper, we propose a reference architecture that mitigates Low and Slow Distributed Denial of Service attacks by utilizing Software Defined Infrastructure capabilities. We also propose two concrete architectures based on the reference architecture: one based on a performance model and one based on off-the-shelf components. We introduce the Shark Tank concept: a cluster under detailed monitoring that has full application capabilities and to which suspicious requests are redirected for further filtering.
"Towards Mitigation of Low and Slow Application DDoS Attacks," 2014 IEEE International Conference on Cloud Engineering (IC2E), 11 March 2014. doi:10.1109/IC2E.2014.38
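The redirection idea can be sketched as a routing decision over per-connection statistics: long-lived connections that trickle bytes at a suspiciously low rate are sent to the quarantined monitoring cluster rather than production. The thresholds and the connection-record shape below are invented for illustration; the paper's detection is performance-model- or component-based, not this fixed rule.

```python
# Illustrative sketch of Shark Tank routing (thresholds are made up):
# a long-lived connection with a very low byte rate looks like a
# "low and slow" attack and is redirected for detailed monitoring.
def route(conn, min_rate=100.0, max_age=30.0):
    """Return 'production' or 'shark_tank' for a connection record.

    conn: dict with 'duration_s' (seconds alive) and 'bytes_sent'.
    """
    age = conn["duration_s"]
    rate = conn["bytes_sent"] / age if age > 0 else float("inf")
    if age > max_age and rate < min_rate:
        return "shark_tank"   # long-lived trickle: suspicious, quarantine it
    return "production"       # normal traffic stays on the production cluster

normal = {"duration_s": 2.0, "bytes_sent": 50_000}   # ~25 KB/s burst
slow = {"duration_s": 120.0, "bytes_sent": 600}      # ~5 B/s for 2 minutes
```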
Santiago Gómez Sáez, V. Andrikopoulos, F. Leymann, Steve Strauch
Nowadays different Cloud services enable enterprises to migrate applications to the Cloud. An application can be partially migrated by replacing some of its components with Cloud services, or by migrating one or multiple of its layers to the Cloud. As a result, accessing application data stored off-premise requires mechanisms to mitigate the negative impact on Quality of Service (QoS), e.g. due to network latency. In this work, we propose and realize an approach for transparently accessing data migrated to the Cloud using a multi-tenant open source Enterprise Service Bus (ESB) as the basis. Furthermore, we enhance the ESB with QoS awareness by integrating it with an open source caching solution. For evaluation purposes we generate a representative application workload using data from the TPC-H benchmark. Based on this workload, we then evaluate the optimal caching strategy among multiple eviction algorithms when accessing relational databases located at different Cloud providers.
"Evaluating Caching Strategies for Cloud Data Access Using an Enterprise Service Bus," 2014 IEEE International Conference on Cloud Engineering (IC2E), 11 March 2014. doi:10.1109/IC2E.2014.49
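The kind of eviction-algorithm comparison described above can be sketched on a toy access trace: replay the same key sequence through LRU and LFU caches of equal capacity and compare hit counts. The paper evaluates real eviction algorithms against TPC-H-derived workloads through an ESB; this self-contained toy only illustrates the methodology.

```python
# Toy comparison of two eviction policies on one access trace.
# The trace and capacity are invented; only the methodology mirrors the paper.
from collections import OrderedDict, Counter

def lru_hits(trace, capacity):
    """Count cache hits under least-recently-used eviction."""
    cache, hits = OrderedDict(), 0
    for key in trace:
        if key in cache:
            hits += 1
            cache.move_to_end(key)          # refresh recency
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)   # evict least recently used
            cache[key] = True
    return hits

def lfu_hits(trace, capacity):
    """Count cache hits under least-frequently-used eviction."""
    cache, freq, hits = set(), Counter(), 0
    for key in trace:
        freq[key] += 1
        if key in cache:
            hits += 1
        else:
            if len(cache) >= capacity:
                cache.discard(min(cache, key=lambda k: freq[k]))  # evict coldest
            cache.add(key)
    return hits

trace = ["a", "b", "a", "c", "a", "b", "d", "a", "b", "c"]
results = {"LRU": lru_hits(trace, 2), "LFU": lfu_hits(trace, 2)}
```

On this trace LFU retains the hot key "a" through the scan of one-off keys, so it scores more hits than LRU; on a recency-dominated trace the ranking can flip, which is exactly why such workload-specific evaluation is needed.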