The Internet of Things, broadly interpreted, covers everything from monitoring sensors and smartphones (each of which today contains around ten "things") to robots and surveillance systems. Smartphones provide both Internet access to social media sites, with 1.8 billion photos uploaded every day, and the content of tweets and Facebook posts, which are analyzed to capture the sentiment and thoughts of people in real time. There are many estimates for the potential size of the IoT, with at least 20 billion devices expected by 2020. Alongside the consumer IoT there is also the Industrial Internet of Things (IIoT), delivering intelligent machines and revolutionary industrial systems of every type (e.g., in manufacturing and transportation). The cloud is often viewed as the natural controller for IoT devices, and new software models ("map-streaming") such as Apache Storm are emerging. The panel will take a broad look at the future of the IoT, covering devices and their cloud support.
"Panel on Cloud and Internet-of-Things," G. Fox. 2015 IEEE International Conference on Cloud Engineering (IC2E), March 9, 2015. doi:10.1109/IC2E.2015.102
Guyue Liu, Michael Trotter, Yuxin Ren, Timothy Wood
In cloud data centers, more and more services are deployed across multiple tiers to increase flexibility and scalability. However, this makes it difficult for the cloud provider to identify which tier of an application is the bottleneck and how to resolve performance problems. Existing solutions approach this problem by constantly monitoring either end hosts or physical switches. Host-based monitoring usually requires instrumentation of application code, making it less practical, while network-hardware-based monitoring is expensive and requires special features in each physical switch. Instead, we believe network-wide monitoring should be flexible and easy to deploy in a non-intrusive way by exploiting recent advances in software-based network services. Towards this end, we are developing a distributed software-based network monitoring framework for cloud data centers. Our system leverages knowledge of topology and routing information to build relationships between the tiers of an application, and detects and locates performance bottlenecks by monitoring the network inside software switches.
"Cloud-Scale Application Performance Monitoring with SDN and NFV," Guyue Liu, Michael Trotter, Yuxin Ren, Timothy Wood. 2015 IEEE International Conference on Cloud Engineering (IC2E), March 9, 2015. doi:10.1145/2988336.2988344
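The core idea above — using the known tier chain to compare per-tier measurements and single out the slow tier — can be sketched as follows. This is an illustrative sketch only: the tier names, latency figures, and the `locate_bottleneck` heuristic are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: locating a multi-tier bottleneck from per-tier
# latency samples collected inside software switches. The tier chain is
# assumed to come from topology/routing knowledge, as in the abstract.
from statistics import mean

# Tier chain inferred from topology: client -> web -> app -> db (assumed)
TIER_CHAIN = ["web", "app", "db"]

def locate_bottleneck(samples, factor=2.0):
    """Return the tier whose mean latency exceeds `factor` times the
    mean of the other tiers, or None if no tier stands out."""
    means = {tier: mean(samples[tier]) for tier in TIER_CHAIN}
    for tier, m in means.items():
        others = [v for t, v in means.items() if t != tier]
        if m > factor * mean(others):
            return tier
    return None

samples = {
    "web": [2.0, 2.2, 1.9],    # ms, observed at the software switch
    "app": [3.1, 2.8, 3.0],
    "db":  [45.0, 50.2, 48.7], # the database tier is clearly slow here
}
print(locate_bottleneck(samples))  # db
```

A real deployment would feed this comparison continuously from switch-level flow statistics rather than static sample lists.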
RESTful APIs are widely adopted in designing components that are combined to form web information systems. The use of REST is growing with the inclusion of smart devices and the Internet of Things within the scope of web information systems, along with large-scale distributed NoSQL data stores and other web-based and cloud-hosted services. There is an important subclass of web information systems and distributed applications that would benefit from stronger transactional support, as typically found in traditional enterprise systems. In this paper, we propose REST+T (REST with Transactions), a transactional RESTful data access protocol and API that extends HTTP to provide multi-item transactional access to data and state information across heterogeneous systems. We describe a case study, Tora, in which we provide access through REST+T to an existing key-value store (WiredTiger) that was intended for embedded operation.
"REST+T: Scalable Transactions over HTTP," Akon Dey, A. Fekete, Uwe Röhm. 2015 IEEE International Conference on Cloud Engineering (IC2E), March 9, 2015. doi:10.1109/IC2E.2015.11
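The multi-item transactional access pattern that REST+T layers over HTTP can be illustrated with a toy model. The sketch below does not reproduce the paper's wire protocol; an in-memory key-value store stands in for the server, and all class and method names (`KVStore`, `Transaction`, `begin`, `commit`) are hypothetical, chosen only to show the shape of atomic multi-item updates with conflict detection.

```python
# Toy model of multi-item transactional access: buffered writes become
# visible atomically on commit, and a transaction aborts if any key it
# wrote was changed by another committed transaction in the meantime.
class Transaction:
    def __init__(self, store):
        self.store = store
        self.writes = {}                    # buffered writes
        self.snapshot = dict(store.data)    # state seen at begin

    def get(self, key):
        return self.writes.get(key, self.snapshot.get(key))

    def put(self, key, value):
        self.writes[key] = value

    def commit(self):
        # First-committer-wins: abort on any write-write conflict.
        for key in self.writes:
            if self.store.data.get(key) != self.snapshot.get(key):
                return False
        self.store.data.update(self.writes)
        return True

class KVStore:
    def __init__(self):
        self.data = {}
    def begin(self):
        return Transaction(self)

store = KVStore()
t1 = store.begin()
t1.put("alice", 100)
t1.put("bob", 50)       # both writes land atomically
assert t1.commit()

t2 = store.begin()
t3 = store.begin()
t2.put("alice", 90)
assert t2.commit()
t3.put("alice", 80)      # conflicts with t2's committed write
print(t3.commit())       # False: the conflicting transaction aborts
```

In REST+T itself, these operations ride on HTTP between heterogeneous systems; the toy model only shows why multi-item atomicity needs more than independent PUT requests.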
The rise of the Social Internet in the past decade stimulated the invention of human-centered technologies that study and serve humans as individuals and in groups. For instance, social networking services provide ways for individuals to connect and interact with their friends. Also, personalized recommender systems leverage the collaborative social intelligence of all users' opinions to recommend books, news, movies, or products in general. These social technologies have been enhancing the quality of Internet services and enriching the end-user experience. Furthermore, the Mobile Internet allows hundreds of millions of users to frequently use their mobile devices to access their healthcare information and bank accounts, interact with friends, shop online, search for interesting places to visit on the go, ask for driving directions, and more. In consequence, everything we do on the mobile social Internet leaves breadcrumbs of digital traces that, when managed and analyzed well, can be leveraged to improve life. Services that leverage mobile and/or social data have become killer applications in the cloud. Nonetheless, a major challenge that cloud service providers face is how to manage (store, index, query) mobile social data hosted in the cloud. Unfortunately, classic data management systems are not well adapted to handle data-intensive mobile social applications. The tutorial surveys state-of-the-art mobile social data management systems and research prototypes from the following perspectives: (1) geo-tagged microblog search, location-aware and mobile social news feed queries, and geo-social graph search; (2) mobile recommendation services; and (3) geo-crowdsourcing. We finally highlight the risks and threats (e.g., to privacy) that result from combining mobility and social networking. We conclude the tutorial by summarizing and presenting open research directions.
"Mobi Social (Mobile and Social) Data Management: A Tutorial," Mohamed Sarwat, M. Mokbel. 2015 IEEE International Conference on Cloud Engineering (IC2E), March 9, 2015. doi:10.1109/IC2E.2015.34
Ali Anwar, A. Sailer, Andrzej Kochut, Charles O. Schulz, A. Segal, A. Butt
As cloud services journey through their life cycle towards commodities, cloud service providers have to carefully choose metering and rating tools and scale their infrastructure to effectively process the collected metering data. In this paper, we focus on the metering and rating aspects of revenue management and their adaptability to business and operational changes. We design a framework for IT cloud service providers to scale their revenue systems in a cost-aware manner. The main idea is to dynamically use existing or newly provisioned SaaS VMs, instead of dedicated setups, for deploying the revenue management systems. At on-boarding of new customers, our framework performs offline analysis to recommend appropriate revenue tools and their scalable distribution by predicting the need for resources based on historical usage. This allows the revenue management to adapt to the ever-evolving business context. We evaluated our framework on a testbed of 20 physical machines that were used to deploy 12 VMs within an OpenStack environment. Our analysis shows that service management tasks can be offloaded to the existing VMs with at most 15% overhead in CPU utilization, 10% overhead in memory usage, and negligible overhead in I/O and network usage. By dynamically scaling the setup, we were able to reduce the metering data processing time manyfold without incurring any additional cost.
"Scalable Metering for an Affordable IT Cloud Service Management," Ali Anwar, A. Sailer, Andrzej Kochut, Charles O. Schulz, A. Segal, A. Butt. 2015 IEEE International Conference on Cloud Engineering (IC2E), March 9, 2015. doi:10.1109/IC2E.2015.18
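The cost-aware placement decision described above — reuse an existing SaaS VM for revenue-management tasks only if the projected overhead stays within bounds — can be sketched as a simple admission check. The limits below are taken from the overheads the paper reports (15% CPU, 10% memory); the VM records and the `pick_vm` helper are assumptions for illustration, not the framework's actual API.

```python
# Illustrative admission check: place a metering/rating task on an
# existing SaaS VM only if its overhead fits the reported bounds.
CPU_OVERHEAD_LIMIT = 0.15   # at most 15% extra CPU utilization
MEM_OVERHEAD_LIMIT = 0.10   # at most 10% extra memory usage

def pick_vm(vms, task_cpu, task_mem):
    """Return the first existing VM that can absorb the task within the
    overhead limits, or None (meaning a dedicated setup is needed)."""
    for vm in vms:
        if (vm["cpu_free"] >= task_cpu and task_cpu <= CPU_OVERHEAD_LIMIT
                and vm["mem_free"] >= task_mem
                and task_mem <= MEM_OVERHEAD_LIMIT):
            return vm["name"]
    return None

vms = [
    {"name": "saas-vm-1", "cpu_free": 0.05, "mem_free": 0.30},  # too little CPU headroom
    {"name": "saas-vm-2", "cpu_free": 0.40, "mem_free": 0.25},
]
print(pick_vm(vms, task_cpu=0.12, task_mem=0.08))  # saas-vm-2
```

The framework's offline analysis would derive `task_cpu` and `task_mem` from historical usage rather than fixed numbers.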
Y. Demchenko, Cosmin Dumitru, R. Koning, C. D. Laat, Taras Matselyukh, S. Filiposka, M. D. Vos, D. Arbel, Damir Regvart, Tasos Karaliotas, K. Baumann
This paper presents results of the ongoing development of the Open Cloud eXchange (OCX), which has been proposed in the framework of the GN3plus project. Its aim is to provide a cloud-aware network infrastructure to power and support modern data-intensive research at European universities and research organisations. The paper describes the OCX concept, architecture, design, and implementation options. OCX includes three major components: a distributed L0-L2 (optionally L3) network infrastructure comprising OCX points of presence (OCXPs) interconnected with the GEANT backbone; a Trusted Third Party (TTP) for building dynamic trust federations; and a marketplace enabling publishing and discovery of cloud services. OCX is intended to be neutral to actual cloud service provisioning and limits its services to Layers 0 through 2 in order to remain transparent to the current cloud services model. Recent developments include an architectural update, API definition, integration with higher-level applications and workflow control, signaling, and intercloud topology modelling and visualization. The paper reports on results and lessons learned from the OCX demonstrations at the SC14 exhibition in November 2014, which showed the benefits of an OCX-enabled intercloud infrastructure for running data-intensive real-time cloud applications on top of the advanced GEANT multi-gigabit network. The implemented OCX functionality allowed applications to control the network path for data transfer and service-delivery connectivity between multiple Cloud Service Providers (CSPs). It was used in combination with a multi-cloud workflow management and planning application (Vampire) that enables data-processing performance monitoring and migration of VMs and processes to an alternative location based on performance predictions.
"Open Cloud eXchange (OCX): A Pivot for Intercloud Services Federation in Multi-provider Cloud Market Environment," Y. Demchenko, Cosmin Dumitru, R. Koning, C. D. Laat, Taras Matselyukh, S. Filiposka, M. D. Vos, D. Arbel, Damir Regvart, Tasos Karaliotas, K. Baumann. 2015 IEEE International Conference on Cloud Engineering (IC2E), March 9, 2015. doi:10.1109/IC2E.2015.84
P. Calyam, S. Seetharam, B. Homchaudhuri, Manish Kumar
Similar to memory or disk fragmentation in personal computers, emerging "virtual desktop cloud" (VDC) services experience the problem of data center resource fragmentation, which occurs due to on-the-fly provisioning of virtual desktop (VD) resources. Irregular resource holes due to fragmentation lead to sub-optimal VD resource allocations and cause: (a) decreased user quality of experience (QoE), and (b) increased operational costs for VDC service providers. In this paper, we address this problem by developing a novel, optimal "Market-Driven Provisioning and Placement" (MDPP) scheme based upon distributed optimization principles. The MDPP scheme harnesses the inherently distributed nature of the resource allocation problem by capturing VD resource bids via a virtual market to explore soft spots in the problem space, and consequently defragments a VDC through cost-aware, utility-maximal VD re-allocations or migrations. Through extensive simulations of VD request allocations to multiple data centers for diverse VD application and user QoE profiles, we demonstrate that our MDPP scheme outperforms existing schemes that are largely based on centralized optimization principles. Moreover, the MDPP scheme can achieve high VDC performance and scalability, measurable in terms of a 'Net Utility' metric, even when VD resource location constraints are imposed to meet orthogonal security objectives.
"Resource Defragmentation Using Market-Driven Allocation in Virtual Desktop Clouds," P. Calyam, S. Seetharam, B. Homchaudhuri, Manish Kumar. 2015 IEEE International Conference on Cloud Engineering (IC2E), March 9, 2015. doi:10.1109/IC2E.2015.37
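The bidding idea behind market-driven placement can be made concrete with a toy auction. The paper's MDPP scheme is a distributed optimization; the greedy single-pass allocator below, with invented utility numbers and capacities, only illustrates how per-desktop bids steer placement toward utility-maximal allocations.

```python
# Toy market sketch: each virtual desktop (VD) bids a utility for each
# data center (DC); a greedy auction places the highest bids first,
# subject to DC capacity. Not the paper's distributed algorithm.
def allocate(bids, capacity):
    """bids: {vd: {dc: utility}}; capacity: {dc: free slots}.
    Returns (placement dict, total utility achieved)."""
    ranked = sorted(
        ((u, vd, dc) for vd, offers in bids.items() for dc, u in offers.items()),
        reverse=True)
    placed, free, total = {}, dict(capacity), 0
    for utility, vd, dc in ranked:
        if vd not in placed and free[dc] > 0:
            placed[vd] = dc
            free[dc] -= 1
            total += utility
    return placed, total

bids = {
    "vd1": {"dc_a": 9, "dc_b": 4},
    "vd2": {"dc_a": 8, "dc_b": 7},
    "vd3": {"dc_a": 6, "dc_b": 5},
}
capacity = {"dc_a": 1, "dc_b": 2}
placed, total = allocate(bids, capacity)
print(placed, total)  # {'vd1': 'dc_a', 'vd2': 'dc_b', 'vd3': 'dc_b'} 21
```

Defragmentation in MDPP corresponds to re-running such a market over already-placed desktops and migrating those whose bids are better served elsewhere.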
Virtualization of operating systems provides a common way to run different services in the cloud. Recently, lightweight virtualization technologies have claimed to offer superior performance. In this paper, we present a detailed performance comparison of traditional hypervisor-based virtualization and new lightweight solutions. In our measurements, we use several benchmark tools in order to understand the strengths, weaknesses, and anomalies introduced by these different platforms in terms of processing, storage, memory, and networking. Our results show that containers generally achieve better performance than traditional virtual machines and other recent solutions. Although containers clearly offer denser deployment of virtual machines, the performance difference relative to the other technologies is in many cases relatively small.
"Hypervisors vs. Lightweight Virtualization: A Performance Comparison," Roberto Morabito, Jimmy Kjällman, M. Komu. 2015 IEEE International Conference on Cloud Engineering (IC2E), March 9, 2015. doi:10.1109/IC2E.2015.74
Seetharami R. Seelam, P. Dettori, P. Westerink, B. Yang
Platform as a service (PaaS) is a cloud delivery model that provides software services and solution stacks to enable rapid development, deployment, and operations in many languages and run-times (polyglot). These applications require capabilities to rapidly grow and shrink the underlying resources to satisfy their workload needs. Auto scaling is a service that enables dynamic resource allocation and deallocation to match application performance needs and service level agreements. In this paper we present the architecture and implementation of a polyglot auto scaling solution for the IBM Bluemix PaaS. Our auto scaling service enables users to describe policies and set thresholds for scaling applications based on CPU, memory, and heap usage for applications developed in different languages (Java, JavaScript, Ruby, etc.). The auto scaling service consists of a set of monitoring agents, a monitoring service, a scaling service, and a persistence service. The service is developed with a shared multi-tenancy model and offered as a managed cloud service. An application attached to the auto scaling service is monitored, and its resources are adjusted based on the user's auto scaling policies and on system conditions.
"Polyglot Application Auto Scaling Service for Platform as a Service Cloud," Seetharami R. Seelam, P. Dettori, P. Westerink, B. Yang. 2015 IEEE International Conference on Cloud Engineering (IC2E), March 9, 2015. doi:10.1109/IC2E.2015.30
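Threshold-based scaling of the kind described — user-set upper and lower bounds on metrics such as CPU and heap usage driving instance counts up or down — can be sketched as a single decision function. The policy shape and field names below are illustrative assumptions, not the Bluemix service's API.

```python
# Sketch of a threshold-based auto scaling decision. Scale out when any
# metric exceeds its upper threshold; scale in only when every metric is
# below its lower threshold; otherwise hold steady.
def scale_decision(policy, metrics, instances):
    """Return the new instance count for the observed metrics."""
    if any(metrics[m] > t for m, t in policy["upper"].items()):
        return min(instances + 1, policy["max_instances"])
    if all(metrics[m] < t for m, t in policy["lower"].items()):
        return max(instances - 1, policy["min_instances"])
    return instances

policy = {
    "upper": {"cpu": 0.80, "heap": 0.85},   # scale-out triggers
    "lower": {"cpu": 0.30, "heap": 0.40},   # scale-in triggers
    "min_instances": 1,
    "max_instances": 5,
}
print(scale_decision(policy, {"cpu": 0.92, "heap": 0.60}, instances=2))  # 3
print(scale_decision(policy, {"cpu": 0.20, "heap": 0.25}, instances=2))  # 1
print(scale_decision(policy, {"cpu": 0.50, "heap": 0.60}, instances=2))  # 2
```

Keeping a gap between the upper and lower thresholds, as here, prevents the instance count from oscillating on small metric fluctuations.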
We initiate the study of the following problem: suppose Alice and Bob would like to outsource their encrypted private data sets to the cloud, and they also want to conduct the set intersection operation on their plaintext data sets. The straightforward solution is for them to download their outsourced ciphertexts, decrypt the ciphertexts locally, and then execute a commodity two-party set intersection protocol. Unfortunately, this solution is not practical. We therefore motivate and introduce the novel notion of Verifiable Delegated Set Intersection on outsourced encrypted data (VDSI). The basic idea is to delegate the set intersection operation to the cloud, while (i) not giving the decryption capability to the cloud, and (ii) being able to hold a misbehaving cloud accountable. We formalize the security properties of VDSI and present a construction. In our solution, the computational and communication costs on the users are linear in the size of the intersection set, meaning that the efficiency is optimal up to a constant factor.
"Verifiable Delegated Set Intersection Operations on Outsourced Encrypted Data," Qingji Zheng, Shouhuai Xu. 2015 IEEE International Conference on Cloud Engineering (IC2E), March 9, 2015. doi:10.1109/IC2E.2015.38
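The "straightforward solution" the abstract rules out is worth seeing concretely, because it shows where the cost goes wrong: each party must pull back and decrypt its entire set, not just the intersection. In the sketch below, a toy XOR cipher is a deliberate placeholder marking where real decryption would occur; it is not the cryptography involved in VDSI, and all names are illustrative.

```python
# The naive baseline VDSI improves on: download everything, decrypt
# locally, intersect in the clear. Toy XOR "encryption" is a stand-in
# for a real cipher; it only marks where decryption work happens.
def encrypt(items, key):
    return [bytes(b ^ key for b in s.encode()) for s in items]

def decrypt(blobs, key):
    return {bytes(b ^ key for b in blob).decode() for blob in blobs}

# Alice and Bob outsource their ciphertexts to the cloud...
alice_ct = encrypt(["x", "y", "z"], key=0x2A)
bob_ct = encrypt(["y", "z", "w"], key=0x17)

# ...and the naive protocol forces each to pull the whole set back:
alice_pt = decrypt(alice_ct, key=0x2A)   # cost linear in the FULL set,
bob_pt = decrypt(bob_ct, key=0x17)       # not just the intersection
print(sorted(alice_pt & bob_pt))  # ['y', 'z']
```

VDSI instead keeps the heavy work at the cloud, leaving the users with costs linear only in the intersection size, while the cloud never gains decryption capability.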