While infrastructure as a service (IaaS) provides benefits such as cost reduction, dynamic deployment, and high availability, it also blurs the boundary between internal and external networks, enabling security threats such as insider attacks that traditional security devices at the network boundary cannot observe. Coordinating network function virtualization (NFV) and software-defined networking (SDN) is a promising approach to this issue, and an optimal placement mechanism is necessary to minimize the computing resources spent on network security monitoring. In this work, we present a mechanism for placing virtualized network functions (VNFs) for network security monitoring in a data center, watching communications between pairs of virtual machines (VMs) or between VMs and external hosts. The placement problem is modeled as a combination of the minimum vertex cover problem and the bin packing problem, optimizing the number and positions of VNFs subject to the availability of computing resources and link capacity. We design a greedy algorithm to reduce the time complexity of solving these problems. A Mininet simulation evaluates the solution for various topology sizes and numbers of communication pairs. The experiments demonstrate that the VNF placement planned by this algorithm is close to optimal, while the execution time is reduced significantly.
Po-Ching Lin, Chia-Feng Wu, and Po-Hsien Shih, "Optimal Placement of Network Security Monitoring Functions in NFV-Enabled Data Centers," 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. doi:10.1109/SC2.2017.10
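The abstract models part of the placement task as a minimum vertex cover problem solved greedily. As an illustrative sketch only (not the authors' algorithm, which additionally handles resource and link-capacity constraints via bin packing), the classic greedy 2-approximation for vertex cover looks like:

```python
def greedy_vertex_cover(edges):
    """Classic 2-approximation for minimum vertex cover: scan the edges,
    and whenever an edge is not yet covered, add both endpoints."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

In the monitoring setting, vertices would correspond to candidate monitoring locations and edges to the VM communication pairs that must be observed; this mapping is an assumption for illustration.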
Multilayered autoscaling is receiving increasing attention in both the research and business communities. The introduction of new virtualization layers such as containers, pods, and clusters has turned the deployment and management of cloud applications into a simple routine. Each virtualization layer usually provides its own scaling solution; however, the synchronization and collaboration of these solutions across multiple virtualization layers remains an open topic. In this paper, we consider the broad research problem of autoscaling cloud applications across several layers. We introduce a novel approach to measuring the performance of multilayered autoscalers, implemented in the Autoscaling Performance Measurement Tool (APMT), whose architecture and functionality are also discussed. Results of model experiments on different request patterns are provided as well.
Anshul Jindal, Vladimir Podolskiy, and M. Gerndt, "Multilayered Cloud Applications Autoscaling Performance Estimation," 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. doi:10.1109/SC2.2017.12
This paper analyses how OpenStack Swift, a globally used middleware providing a distributed object storage service, interacts with the I/O subsystem through the operating system. This interaction, which seems organised and clean on the middleware side, becomes disordered on the device side when using mechanical disk drives, due to the way threads are used internally to request data. We show that merely modifying the Swift threading model yields an 18% mean performance improvement for objects larger than 512 KiB, and comparable performance for smaller objects. In both scenarios, unlike the original one, the performance is obtained in a fair way: the bandwidth is shared equally between concurrently accessed objects. Moreover, this threading model allows us to apply techniques for Software Defined Storage (SDS). We present an implementation of a bandwidth differentiation technique that can control each data stream while guaranteeing high device utilization.
Ramon Nou, Alberto Miranda, Marc Siquier, and Toni Cortes, "Improving OpenStack Swift Interaction with the I/O Stack to Enable Software Defined Storage," 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. doi:10.1109/SC2.2017.17
Discounts strongly influence buying behavior and purchasing habits. In this paper, we focus on the buying behavior that discount strategies can encourage, in order to devise strategies that boost business owners' sales. We introduce a new problem of mining discounted transactions and propose a mining method, the DTM algorithm, which uses a sliding window to maintain streaming transactions. With this approach, the specific time points at which frequent patterns significantly increase or decrease in frequency are effectively captured.
Wei-Yuan Lee, Chih-Hua Tai, and Yue-Shan Chang, "Sliding Window Based Discounted Transaction Mining," 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. doi:10.1109/SC2.2017.28
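The abstract describes maintaining frequent-pattern counts over a sliding window of streaming transactions. A minimal sketch of that bookkeeping (illustrative only; the DTM algorithm itself, including its discount handling, is not specified in the abstract) might be:

```python
from collections import Counter, deque
from itertools import combinations

class SlidingWindowMiner:
    """Maintain itemset counts over the most recent `window` transactions."""

    def __init__(self, window, max_len=2):
        self.window = window
        self.max_len = max_len   # longest itemset tracked
        self.buffer = deque()
        self.counts = Counter()

    def _itemsets(self, txn):
        items = sorted(txn)
        for k in range(1, self.max_len + 1):
            yield from combinations(items, k)

    def add(self, txn):
        # Count the new transaction, then expire the oldest one if needed.
        self.buffer.append(txn)
        for s in self._itemsets(txn):
            self.counts[s] += 1
        if len(self.buffer) > self.window:
            old = self.buffer.popleft()
            for s in self._itemsets(old):
                self.counts[s] -= 1

    def frequent(self, min_support):
        return {s: c for s, c in self.counts.items() if c >= min_support}
```

Detecting the significant rises and drops in frequency mentioned in the abstract would then amount to comparing `frequent()` snapshots between window positions.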
Complex applications composed of many interconnected but functionally independent services or components are widely adopted and deployed on the cloud to exploit its elasticity. This allows an application to react to load changes by varying the amount of computational resources it uses. Deciding the proper scaling settings for a complex architecture is, however, a daunting task: many possible settings exist, with large repercussions in terms of performance and cost. In this paper, we present a methodology that, by relying on modeling and automatic parameter configurators, makes it possible to determine the best way to configure the scalability of an application to be deployed on the cloud. We exemplify the approach using an existing service-oriented framework for dispatching car software updates.
Jia-Chun Lin, J. Mauro, T. Røst, and Ingrid Chieh Yu, "A Model-Based Scalability Optimization Methodology for Cloud Applications," 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. doi:10.1109/SC2.2017.32
An integrated cloud service model that uses both public and private cloud services to offer a holistic deployment of enterprise applications is the need of the hour. Enterprise systems can use this integrated model for cost-effective deployment of sensitive services, ensuring that all the services running inside the applications are seamlessly mixed. Adopting a hybrid cloud during enterprise modernization delivers a cost-effective option with secure performance. To meet the desired return on investment (ROI) and satisfy the desired service level agreements (SLAs), one must proactively assess whether the modernization is worth the effort. This requires a systematic, proactive understanding of the key challenges every enterprise might face, and a predictive SLA model should be derived before undertaking enterprise modernization. In this paper, we propose an algorithm to predict the SLA of the future application by considering all the key modernization attributes. The proposed model serves as a unified methodology for predicting SLAs during enterprise modernization in hybrid cloud environments. The evaluation results demonstrate the efficiency of the algorithm.
Satheesh Abimannan, Ravikumar Ramadoss, N. Elango, and Ching-Hsien Hsu, "EMAPM: Enterprise Modernization Autonomic Predictive Model in Hybrid Cloud Environments," 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. doi:10.1109/SC2.2017.16
Current software platforms for service composition are based on orchestration, choreography, or hierarchical orchestration. However, such approaches support only partial compositionality, thereby increasing the complexity of SOA development. In this paper, we propose DX-MAN, a platform that supports total compositionality. We describe the main concepts of DX-MAN with the help of a case study based on the popular MusicCorp.
Damian Arellanes and K. Lau, "D-XMAN: A Platform For Total Compositionality in Service-Oriented Architectures," 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. doi:10.1109/SC2.2017.55
Patients must maintain continuous and consistent links to their doctors so that their health status can be monitored at all times. A Wireless Body Sensor Network (WBSN) plays an important role in communicating a patient's vital information to a remote healthcare center. These networks consist of individual nodes that collect the patient's physiological parameters and communicate with the destination when a sensed value falls outside the normal range, allowing the patient's health to be monitored continuously. The nodes deployed with the patient form a WBSN, and the network sends data from a source node to the remote sink or base station over efficient links. It is necessary to extend the lifetime of the system by selecting optimized paths. This paper presents a cluster-based routing protocol using a new Q-learning approach (QL-CLUSTER) to find the best routes between individual nodes and the remote healthcare station. Simulations are performed with a set of mobile biomedical wireless sensor nodes in a 1000 m x 1000 m flat space over 600 seconds of simulation time. Results show that the QL-CLUSTER approach requires less time to route a packet from the source node to the destination remote station compared with other algorithms.
Farzad Kiani, "Reinforcement Learning Based Routing Protocol for Wireless Body Sensor Networks," 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. doi:10.1109/SC2.2017.18
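The abstract does not detail the Q-learning formulation. As a hedged illustration of the general technique, a tabular Q-learning update for next-hop selection on a small graph could look like the following (the reward values, learning rate, discount factor, and exploration scheme are assumptions, not the paper's parameters):

```python
import random

def update_q(Q, node, next_hop, reward, next_actions, alpha=0.5, gamma=0.9):
    """Standard Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best = max((Q.get((next_hop, n), 0.0) for n in next_actions), default=0.0)
    old = Q.get((node, next_hop), 0.0)
    Q[(node, next_hop)] = old + alpha * (reward + gamma * best - old)

def train(neighbors, source, sink, episodes=200, seed=1):
    """Explore random routes toward the sink and learn per-hop Q-values."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        node, hops = source, 0
        while node != sink and hops < 20:   # cap hops to avoid loops
            nxt = rng.choice(neighbors[node])
            reward = 10.0 if nxt == sink else -1.0  # assumed reward shaping
            update_q(Q, node, nxt, reward, neighbors.get(nxt, []))
            node, hops = nxt, hops + 1
    return Q
```

At routing time, each node would forward to the neighbor with the highest learned Q-value; the cluster-based structure of QL-CLUSTER is not reflected in this sketch.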
This paper presents our initial ideas for a new scheduling strategy integrated into the Docker Swarm scheduler. The aim is to introduce the basic concepts and implementation details of a scheduling strategy based on different Service Level Agreement (SLA) classes. This strategy addresses the problems of companies that manage a private infrastructure of machines and would like to optimize the scheduling of requests submitted online by users, where each request is a demand to create a container. Currently, Docker Swarm has three basic scheduling strategies (spread, binpack, and random), each of which executes a container with a fixed amount of resources. The novelty of our strategy is that it uses the user's SLA class to provision the container that must execute the service, based on a dynamic computation of the number of CPU cores to allocate to the container according to the SLA class and the load of the parallel machines in the infrastructure. Testing of the new strategy is conducted, by emulation, on different parts of our general framework, and it demonstrates the potential of the approach for further development.
C. Cérin, Tarek Menouer, W. Saad, and Wiem Abdallah, "A New Docker Swarm Scheduling Strategy," 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. doi:10.1109/SC2.2017.24
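The abstract's core idea, allocating CPU cores to a container as a function of the user's SLA class and the machine's current load, can be sketched as follows. The class names, shares, and one-core floor are illustrative assumptions, not the authors' exact policy:

```python
def cores_for_request(sla_class, free_cores, shares=None):
    """Map an SLA class to a share of the machine's currently free cores;
    higher classes receive a larger fraction, with a floor of one core."""
    if shares is None:
        # Hypothetical class weights for illustration.
        shares = {"premium": 0.5, "advanced": 0.25, "best_effort": 0.1}
    return max(1, int(free_cores * shares[sla_class]))
```

A scheduler loop would then place the container on a machine with at least that many free cores, so the allocation shrinks automatically as the infrastructure fills up.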
Network security is becoming an increasingly serious concern and has turned into a big data problem. However, big data processing technology is complex, with low utilization, poor availability, and difficult load balancing. In this paper, we propose a highly elastic and available big data processing prototype based on Spark and Docker. In our prototype, a microservice is the basic unit providing service for the business, and a Docker container is the carrier for each microservice. By monitoring the actual running state of the business and exploiting the lightweight, fast-start features of containers, we can quickly and dynamically add and remove containers to provide a scaling service. We then download the BGP routing table from www.routeviews.org and use our prototype for analysis. Experiments show that, for varying sizes of the BGP routing table, our prototype can scale to meet real-time processing requirements.
Hao Zeng, Baosheng Wang, Wenping Deng, and Junxing Tang, "A Prototype for Analyzing the Internet Routing System Based on Spark and Docker," 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. doi:10.1109/SC2.2017.51
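The scaling behavior this prototype describes, adding and removing containers based on observed load, can be sketched as a simple threshold policy. The per-container capacity figure and container bounds below are assumptions for illustration; the paper's actual policy is not given in the abstract:

```python
def scale_delta(active, backlog, per_container_capacity,
                min_containers=1, max_containers=10):
    """Return how many containers to add (positive) or remove (negative)
    so the fleet can absorb the current backlog of work items."""
    needed = -(-backlog // per_container_capacity)  # ceiling division
    target = max(min_containers, min(max_containers, needed))
    return target - active
```

A monitoring loop would call this periodically with the measured backlog (e.g., unprocessed BGP table entries) and start or stop containers accordingly.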