
2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2): Latest Publications

Optimal Placement of Network Security Monitoring Functions in NFV-Enabled Data Centers
Po-Ching Lin, Chia-Feng Wu, Po-Hsien Shih
While infrastructure as a service (IaaS) provides benefits such as cost reduction, dynamic deployment and high availability for users, it also blurs the boundary between the internal and external networks, causing security threats such as insider attacks which cannot be observed by traditional security devices at the network boundary. Coordination of network function virtualization (NFV) and software-defined networking (SDN) is a promising approach to address this issue, and an optimal placement mechanism is necessary to minimize the computing resources for network security monitoring. In this work, we present a mechanism for placing virtualized network functions (VNFs) for network security monitoring in a data center to watch communications between pairs of virtual machines (VMs) or between VMs and external hosts. The placement issue is modeled as the minimum vertex cover problem and the bin packing problem to optimize the number and positions of VNFs subject to the availability of computing resources and link capacity. We design a greedy algorithm to reduce the time complexity of the problems. A Mininet simulation evaluates this solution for various topology sizes and communication pairs. The experiments demonstrate that the VNF placement planned by this algorithm is close to optimal, while the execution time is reduced significantly.
{"title":"Optimal Placement of Network Security Monitoring Functions in NFV-Enabled Data Centers","authors":"Po-Ching Lin, Chia-Feng Wu, Po-Hsien Shih","doi":"10.1109/SC2.2017.10","DOIUrl":"https://doi.org/10.1109/SC2.2017.10","url":null,"abstract":"While infrastructure as a service (IaaS) provides benefits such as cost reduction, dynamic deployment and high availability for users, it also blurs the boundary between the internal and external networks, causing security threats such as insider attacks which cannot be observed by traditional security devices in the network boundary. Coordination of network function virtualization (NFV) and software-defined networking (SDN) is a promising approach to address this issue, and an optimal placement mechanism is necessary to minimize the computing resources for network security monitoring. In this work, we present a mechanism of placing virtualized network functions (VNFs) for network security monitoring in a data center to watch communications between pairs of virtual machines (VMs) or between VMs and external hosts. The placement issue is modeled as the minimum vertex cover problem and the bin packing problem to optimize the number and positions of VNFs subject to the availability of computing resources and link capacity. We design a greedy algorithm to reduce the time complexity of the problems. A Mininet simulation evaluates this solution for various topology sizes and communication pairs. The experiments demonstrate that the VNF placement planned by this algorithm is close to optimality, but the execution time can be reduced significantly.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"335 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115880653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
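As a concrete illustration of the greedy placement idea sketched in the abstract, here is a minimal Python sketch of a greedy cover of communication pairs under per-node capacity limits. It is not the authors' algorithm; the function name, the capacity model, and the example topology are assumptions made for illustration.

```python
from collections import defaultdict

def greedy_monitor_placement(comm_pairs, capacity):
    """Greedy sketch: repeatedly place a monitoring VNF at the endpoint that
    covers the largest number of still-unmonitored communication pairs, until
    every pair is watched or no node has spare capacity (hypothetical model)."""
    uncovered = set(comm_pairs)
    placement = defaultdict(list)                 # node -> pairs watched by the VNF there
    while uncovered:
        gain = defaultdict(list)                  # candidate node -> pairs it could still cover
        for src, dst in uncovered:
            for node in (src, dst):               # a pair can be monitored at either endpoint
                if len(placement[node]) < capacity.get(node, 0):
                    gain[node].append((src, dst))
        if not gain:
            raise RuntimeError("insufficient capacity to cover all communication pairs")
        best = max(gain, key=lambda n: len(gain[n]))
        take = gain[best][: capacity[best] - len(placement[best])]
        placement[best].extend(take)
        uncovered.difference_update(take)
    return {node: pairs for node, pairs in placement.items() if pairs}

if __name__ == "__main__":
    pairs = [("vm1", "vm2"), ("vm1", "ext"), ("vm2", "vm3")]
    caps = {"vm1": 2, "vm2": 3, "vm3": 1}         # how many pairs one VNF per node may watch
    print(greedy_monitor_placement(pairs, caps))
```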
Multilayered Cloud Applications Autoscaling Performance Estimation
Anshul Jindal, Vladimir Podolskiy, M. Gerndt
Multilayered autoscaling is receiving increasing attention in both the research and business communities. The introduction of new virtualization layers such as containers, pods, and clusters has turned the deployment and management of cloud applications into a simple routine. Each virtualization layer usually provides its own solution for scaling. However, the synchronization and collaboration of these solutions across multiple layers of virtualization remains an open topic. In this paper, we consider the broad research problem of autoscaling across several layers for cloud applications. A novel approach to measuring the performance of multilayered autoscalers is introduced. This approach is implemented in the Autoscaling Performance Measurement Tool (APMT), whose architecture and functionality are also discussed. Results of model experiments on different request patterns are also provided.
{"title":"Multilayered Cloud Applications Autoscaling Performance Estimation","authors":"Anshul Jindal, Vladimir Podolskiy, M. Gerndt","doi":"10.1109/SC2.2017.12","DOIUrl":"https://doi.org/10.1109/SC2.2017.12","url":null,"abstract":"A multilayered autoscaling gets an increasing attention both in research and business communities. Introduction of new virtualization layers such as containers, pods, and clusters has turned a deployment and a management of cloud applications into a simple routine. Each virtualization layer usually provides its own solution for scaling. However, synchronization and collaboration of these solutions on multiple layers of virtualization remains an open topic. In the scope of the paper, we consider a wide research problem of the autoscaling across several layers for cloud applications. A novel approach to multilayered autoscalers performance measurement is introduced in this paper. This approach is implemented in Autoscaling Performance Measurement Tool (APMT), which architecture and functionality are also discussed. Results of model experiments on different requests patterns are also provided in the paper.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115598913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
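APMT itself is not described in detail in the abstract, so the snippet below is only a guess at the kind of metric such a measurement tool might compute: compare the request pattern that was offered with the capacity the autoscaler actually provisioned at each step. The function name, the metrics, and the toy traces are hypothetical.

```python
def autoscaling_quality(offered_load, provisioned_capacity):
    """Toy autoscaler-performance metric (illustrative, not APMT's actual output):
    how often the system was under-provisioned, and by how much on average."""
    assert len(offered_load) == len(provisioned_capacity)
    under_steps, deficit, surplus = 0, 0.0, 0.0
    for demand, supply in zip(offered_load, provisioned_capacity):
        if supply < demand:
            under_steps += 1
            deficit += demand - supply            # capacity missing at this step
        else:
            surplus += supply - demand            # capacity paid for but unused
    n = len(offered_load)
    return {"underprovisioned_share": under_steps / n,
            "mean_deficit": deficit / n,
            "mean_surplus": surplus / n}

if __name__ == "__main__":
    load = [10, 10, 40, 40, 40, 10, 10]           # step request pattern (requests/s)
    prov = [10, 10, 10, 20, 40, 40, 20]           # capacity the autoscaler provided
    print(autoscaling_quality(load, prov))
```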
Improving OpenStack Swift Interaction with the I/O Stack to Enable Software Defined Storage
Ramon Nou, Alberto Miranda, Marc Siquier, Toni Cortes
This paper analyses how OpenStack Swift, a distributed object storage service within a globally used middleware, interacts with the I/O subsystem through the operating system. This interaction, which seems organised and clean on the middleware side, becomes disordered on the device side when using mechanical disk drives, due to the way threads are used internally to request data. We show that by only modifying the Swift threading model we achieve an 18% mean improvement in performance with objects larger than 512 KiB and obtain similar performance with smaller objects. Compared to the original scenario, the performance in both scenarios is obtained in a fair way: the bandwidth is shared equally between concurrently accessed objects. Moreover, this threading model allows us to apply techniques for Software Defined Storage (SDS). We show an implementation of a bandwidth differentiation technique that can control each data stream and guarantees high utilization of the device.
{"title":"Improving OpenStack Swift interaction with the I/O Stack to Enable Software Defined Storage","authors":"Ramon Nou, Alberto Miranda, Marc Siquier, Toni Cortes","doi":"10.1109/SC2.2017.17","DOIUrl":"https://doi.org/10.1109/SC2.2017.17","url":null,"abstract":"This paper analyses how OpenStack Swift, a distributed object storage service for a globally used middleware, interacts with the I/O subsystem through the Operating System. This interaction, which seems organised and clean on the middleware side, becomes disordered on the device side when using mechanical disk drives, due to the way threads are used internally to request data. We will show that only modifying the Swift threading model we achieve an 18% mean improvement in performance with objects larger than 512 KiB and obtain a similar performance with smaller objects. Compared to the original scenario, the performance obtained on both scenarios is obtained in a fair way: the bandwidth is shared equally between concurrently accessed objects. Moreover, this threading model allows us to apply techniques for Software Defined Storage (SDS). We show an implementation of a Bandwidth Differentiation technique that can control each data stream and that guarantees a high utilization of the device.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129709770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
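The bandwidth differentiation mechanism itself lives inside Swift's object server and is not reproduced in the abstract; a token bucket per data stream is one common way to realize the same per-stream control, sketched below under that assumption. Rates, burst size, and class names are placeholders.

```python
import time

class TokenBucket:
    """Per-stream token-bucket limiter: one common way to enforce the kind of
    per-stream bandwidth differentiation described above (illustrative only,
    not the paper's Swift-internal implementation)."""
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate, self.capacity = rate_bytes_per_s, burst_bytes
        self.tokens, self.last = burst_bytes, time.monotonic()

    def throttle(self, nbytes):
        """Block until `nbytes` of this stream may be sent."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

if __name__ == "__main__":
    stream = TokenBucket(rate_bytes_per_s=4 * 2**20, burst_bytes=2**20)  # ~4 MiB/s stream
    start = time.monotonic()
    for _ in range(8):
        stream.throttle(2**20)                    # pace 1 MiB chunks of an object read
    print(f"sent 8 MiB in {time.monotonic() - start:.1f}s (~4 MiB/s cap)")
```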
Sliding Window Based Discounted Transaction Mining
Wei-Yuan Lee, Chih-Hua Tai, Yue-Shan Chang
Discounts play an important role in buying behavior and purchasing habits. In this paper, we focus on the buying behavior that discount strategies can encourage, in order to devise strategies that boost business owners' sales. The paper introduces a new problem of mining from discounted transactions and proposes a mining method called the DTM algorithm, which is based on a sliding window for maintaining streaming transactions. Through this approach, the specific time points at which frequent patterns show a significant increase or decrease in frequency are effectively captured.
{"title":"Sliding Window Based Discounted Transaction Mining","authors":"Wei-Yuan Lee, Chih-Hua Tai, Yue-Shan Chang","doi":"10.1109/SC2.2017.28","DOIUrl":"https://doi.org/10.1109/SC2.2017.28","url":null,"abstract":"Discount is important in buying behavior and purchasing habits. In this paper, we focus on buying behavior which discount strategy can encourage shopping to devise strategies to boost business owners' sales. This paper introduces a new problem for mining from discounted transaction, and proposes a mining method called the DTM algorithm, which is based on sliding window for maintaining stream transactions. Through the use of this approach, the specific time points at which frequent patterns have a significant increase or decrease in frequency are effectively captured.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127596225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
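The DTM algorithm is not spelled out in the abstract; the sketch below only illustrates the underlying idea of a sliding window over a transaction stream that flags the time points where an item crosses a minimum-support threshold. Window size, threshold, and the toy data are assumptions.

```python
from collections import Counter, deque

def frequent_item_transitions(stream, window=3, minsup=2):
    """Toy sliding-window miner in the spirit of the problem above (the paper's
    DTM algorithm is not reproduced here): keep the last `window` transactions
    and emit the time points at which an item crosses the minimum-support
    threshold, i.e. becomes frequent or stops being frequent inside the window."""
    win = deque(maxlen=window)
    frequent = set()
    for t, transaction in enumerate(stream):
        win.append(set(transaction))
        support = Counter(item for tx in win for item in tx)
        now_frequent = {item for item, c in support.items() if c >= minsup}
        for item in now_frequent - frequent:
            yield t, item, "became frequent"
        for item in frequent - now_frequent:
            yield t, item, "no longer frequent"
        frequent = now_frequent

if __name__ == "__main__":
    stream = [{"milk"}, {"milk", "tea"}, {"tea"}, {"tea", "cookie"},
              {"cookie"}, {"cookie"}, {"bread"}]
    for t, item, event in frequent_item_transitions(stream, window=3, minsup=2):
        print(f"t={t}: {item!r} {event}")
```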
A Model-Based Scalability Optimization Methodology for Cloud Applications
Jia-Chun Lin, J. Mauro, T. Røst, Ingrid Chieh Yu
Complex applications composed of many interconnected but functionally independent services or components are widely adopted and deployed on the cloud to exploit its elasticity. This allows an application to react to load changes by varying the amount of computational resources used. Deciding the proper scaling settings for a complex architecture is, however, a daunting task: many possible settings exist, with big repercussions in terms of performance and cost. In this paper, we present a methodology that, by relying on modeling and automatic parameter configurators, makes it possible to understand the best way to configure the scalability of an application to be deployed on the cloud. We exemplify the approach by using an existing service-oriented framework for dispatching car software updates.
{"title":"A Model-Based Scalability Optimization Methodology for Cloud Applications","authors":"Jia-Chun Lin, J. Mauro, T. Røst, Ingrid Chieh Yu","doi":"10.1109/SC2.2017.32","DOIUrl":"https://doi.org/10.1109/SC2.2017.32","url":null,"abstract":"Complex applications composed of many interconnected but functionally independent services or components are widely adopted and deployed on the cloud to exploit its elasticity. This allows the application to react to load changes by varying the amount of computational resources used. Deciding the proper scaling settings for a complex architecture is, however, a daunting task: many possible settings exists with big repercussions in terms of performance and cost. In this paper, we present a methodology that, by relying on modeling and automatic parameter configurators, allows to understand the best way to configure the scalability of the application to be deployed on the cloud. We exemplify the approach by using an existing service-oriented framework to dispatch car software updates.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131100400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
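One very reduced reading of "modeling plus automatic parameter configurators" is a search over candidate scaling settings against a performance/cost model; the sketch below follows that assumption. The utilization model, service rates, and prices are invented for illustration and are not taken from the paper.

```python
from itertools import product

def cheapest_feasible_config(instance_options, service_rates, arrival_rate,
                             max_utilization=0.8, hourly_price=0.05):
    """Minimal sketch of searching scaling settings with a model: enumerate
    candidate instance counts per service, keep configurations whose modeled
    utilization stays below a threshold, and return the cheapest one.
    The utilization model and the price are illustrative only."""
    best = None
    for counts in product(*instance_options.values()):
        config = dict(zip(instance_options, counts))
        # Modeled utilization of each service: offered load / total service capacity.
        feasible = all(arrival_rate / (config[s] * service_rates[s]) <= max_utilization
                       for s in config)
        if not feasible:
            continue
        cost = round(sum(config.values()) * hourly_price, 2)
        if best is None or cost < best[1]:
            best = (config, cost)
    return best

if __name__ == "__main__":
    options = {"frontend": [1, 2, 3, 4], "update-service": [1, 2, 3, 4]}
    rates = {"frontend": 50.0, "update-service": 20.0}   # requests/s one instance can handle
    print(cheapest_feasible_config(options, rates, arrival_rate=60.0))
```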
EMAPM: Enterprise Modernization Autonomic Predictive Model in Hybrid Cloud Environments
Satheesh Abimannan, Ravikumar Ramadoss, N. Elango, Ching-Hsien Hsu
An integrated cloud service model that uses both public and private cloud services to offer a holistic deployment of enterprise applications is the need of the hour. Enterprise systems can use this integrated model for cost-effective and sensitive service deployment to ensure that all the services running inside the applications are seamlessly mixed. Adopting a hybrid cloud during enterprise modernization delivers a cost-effective option and secure performance. To meet the desired return on investment (ROI) and to satisfy the desired service level agreements (SLAs), a proactive assessment of whether the modernization is worth the effort has to be performed. This requires a systematic, proactive understanding of the key challenges every enterprise might face, and a predictive SLA model should be derived before undertaking enterprise modernization. In this paper, we propose an algorithm to predict the SLA of the future application by considering all the key modernization attributes. The proposed model is intended as a single methodology for the problem of predicting the SLA during enterprise modernization in hybrid cloud environments. The evaluation results for the proposed model show the efficiency of the algorithm.
{"title":"EMAPM: Enterprise Modernization Autonomic Predictive Model in Hybrid Cloud Environments","authors":"Satheesh Abimannan, Ravikumar Ramadoss, N. Elango, Ching-Hsien Hsu","doi":"10.1109/SC2.2017.16","DOIUrl":"https://doi.org/10.1109/SC2.2017.16","url":null,"abstract":"An integrated cloud service model using both public and private cloud services to offer a holistic deployment of the enterprise applications is the need of the hour. Enterprise systems can use this integrated model for cost-effective and sensitive services deployment to insure that all the services running inside the applications are seamlessly mixed. Adopting hybrid cloud during enterprise modernization delivers cost effective option and secured performance. To meet the desired return on investment (ROI) and also to satisfy the desired service level agreements (SLA), the proactive thought process towards whether the modernization is worth the effort or not has to be performed. It requires a systematic proactive understanding of the key challenges every enterprise might face and a predictive SLA model should be derived before doing the enterprise modernization. In this paper, we propose an algorithm to predict the SLA of the futuristic application by considering all the key modernization attributes. The proposed model will do as a singular methodology for the problem of prediction of SLA during enterprise modernization on hybrid cloud environments. The evaluation results on the proposed model shows the efficiency of the algorithm.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130870695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
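The abstract does not disclose EMAPM's attribute set or scoring, so the snippet below is only a hypothetical stand-in for the general shape of an attribute-weighted SLA prediction; every attribute name, weight, and bound here is invented for illustration.

```python
def predict_sla_availability(attributes, weights, baseline=0.990, ceiling=0.9999):
    """Hypothetical sketch of attribute-weighted SLA prediction (the paper's
    actual EMAPM model is not reproduced): each modernization attribute is
    scored in [0, 1]; a weighted score interpolates between a baseline
    availability and a ceiling."""
    score = sum(weights[a] * attributes[a] for a in weights) / sum(weights.values())
    return baseline + score * (ceiling - baseline)

if __name__ == "__main__":
    attrs = {"workload_sensitivity": 0.7, "burstiness": 0.4, "data_gravity": 0.6}
    wts = {"workload_sensitivity": 3.0, "burstiness": 2.0, "data_gravity": 1.0}
    print(f"predicted availability: {predict_sla_availability(attrs, wts):.4%}")
```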
D-XMAN: A Platform For Total Compositionality in Service-Oriented Architectures
Damian Arellanes, K. Lau
Current software platforms for service composition are based on orchestration, choreography or hierarchical orchestration. However, such approaches only support partial compositionality, thereby increasing the complexity of SOA development. In this paper, we propose DX-MAN, a platform that supports total compositionality. We describe the main concepts of DX-MAN with the help of a case study based on the popular MusicCorp.
{"title":"D-XMAN: A Platform For Total Compositionality in Service-Oriented Architectures","authors":"Damian Arellanes, K. Lau","doi":"10.1109/SC2.2017.55","DOIUrl":"https://doi.org/10.1109/SC2.2017.55","url":null,"abstract":"Current software platforms for service composition are based on orchestration, choreography or hierarchical orchestration. However, such approaches for service composition only support partial compositionality; thereby, increasing the complexity of SOA development. In this paper, we propose DX-MAN, a platform that supports total compositionality. We describe the main concepts of DX-MAN with the help of a case study based on the popular MusicCorp.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128709851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
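As a rough intuition for composites that remain composable, the following sketch shows exogenous composition, where a connector builds a composite that is itself a service and can be composed again at a higher level. This is a simplification for illustration, not the DX-MAN model itself, and the MusicCorp-flavoured service names are made up.

```python
from typing import Callable

# A service is anything that maps a request dict to a response dict; services
# never call each other directly, a connector invokes them from the outside.
Service = Callable[[dict], dict]

def sequencer(*services: Service) -> Service:
    """Compose services so each one's output feeds the next; the result is
    itself a Service, so the composite can be composed again."""
    def composite(request: dict) -> dict:
        for svc in services:
            request = svc(request)
        return request
    return composite

def take_order(req):   return {**req, "order_id": 42}
def charge_card(req):  return {**req, "charged": True}
def ship_album(req):   return {**req, "shipped": True}

if __name__ == "__main__":
    checkout = sequencer(take_order, charge_card)     # a composite...
    fulfilment = sequencer(checkout, ship_album)      # ...composed again, hierarchically
    print(fulfilment({"album": "Brothers in Arms"}))
```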
Reinforcement Learning Based Routing Protocol for Wireless Body Sensor Networks
Farzad Kiani
Patients need continuous and consistent links to their doctors so that their health status can be monitored continuously. A Wireless Body Sensor Network (WBSN) plays an important role in communicating a patient's vital information to a remote healthcare center. These networks consist of individual nodes that collect the patient's physiological parameters and communicate with the destination if a sensed parameter value is beyond the normal range. Therefore, they can monitor a patient's health continuously. The nodes deployed with the patient form a WBSN, and the network sends data from a source node to the remote sink or base station over efficient links. It is necessary to extend the life of the system by selecting optimized paths. This paper presents a cluster-based routing protocol using a new Q-learning approach (QL-CLUSTER) to find the best routes between individual nodes and a remote healthcare station. Simulations are run with a set of mobile biomedical wireless sensor nodes in a flat area of 1000 meters x 1000 meters for 600 seconds of simulation time. Results show that the QL-CLUSTER based approach requires less time to route a packet from the source node to the destination remote station compared with other algorithms.
{"title":"Reinforcement Learning Based Routing Protocol for Wireless Body Sensor Networks","authors":"Farzad Kiani","doi":"10.1109/SC2.2017.18","DOIUrl":"https://doi.org/10.1109/SC2.2017.18","url":null,"abstract":"Patients must be continuous and consistent way links to their doctors to control continuous health status. Wireless Body Sensor Network (WBSN) plays an important role in communicating the patient's vital information to any remote healthcare center. These networks consist of individual nodes to collect the patient's physiological parameters and communicate with the destination if the sensed parameter value is beyond normal range. Therefore, they can monitor patient's health continuously. The nodes deployed with the patient form a WBSN and so the network send data from source node to the remote sink or base station by efficient links. It is necessary to extend the life of the system by selecting optimized paths. This paper presents a cluster-based routing protocol by new Q-learning approach (QL-CLUSTER) to find best routes between individual nodes and remote healthcare station. Simulations are made with a set of mobile biomedical wireless sensor nodes with an area of 1000 meters x 1000 meters flat space operating for 600 seconds of simulation time. Results show that the QL-CLUSTER based approach requires less time to route the packet from the source node to the destination remote station compared with other algorithms.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134496618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21
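QL-CLUSTER's reward design and clustering details are not given in the abstract; the sketch below shows only the generic tabular Q-learning next-hop selection that such a protocol builds on. The topology, link costs, reward, and hyperparameters are illustrative assumptions.

```python
import random
from collections import defaultdict

class QRouting:
    """Tabular Q-learning for next-hop selection, in the spirit of the protocol
    above (reward design and parameters are illustrative, not QL-CLUSTER's).
    State: current node; action: candidate next hop; reward: negative link
    cost plus a bonus when the sink is reached."""
    def __init__(self, neighbors, alpha=0.5, gamma=0.9, epsilon=0.2):
        self.neighbors = neighbors          # node -> list of reachable nodes
        self.q = defaultdict(float)         # (node, next_hop) -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, node):
        if random.random() < self.epsilon:                      # explore
            return random.choice(self.neighbors[node])
        return max(self.neighbors[node], key=lambda n: self.q[(node, n)])

    def update(self, node, nxt, reward, done):
        future = 0.0 if done else max(self.q[(nxt, n)] for n in self.neighbors[nxt])
        target = reward + self.gamma * future
        self.q[(node, nxt)] += self.alpha * (target - self.q[(node, nxt)])

if __name__ == "__main__":
    topo = {"s": ["a", "b"], "a": ["sink"], "b": ["a", "sink"], "sink": []}
    cost = {("s", "a"): 3, ("s", "b"): 1, ("a", "sink"): 1, ("b", "a"): 1, ("b", "sink"): 1}
    agent = QRouting(topo)
    for _ in range(500):                    # train on repeated packet deliveries
        node = "s"
        while node != "sink":
            nxt = agent.choose(node)
            done = nxt == "sink"
            agent.update(node, nxt, reward=(10 if done else 0) - cost[(node, nxt)], done=done)
            node = nxt
    print("preferred first hop from 's':", max(topo["s"], key=lambda n: agent.q[("s", n)]))
```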
A New Docker Swarm Scheduling Strategy
C. Cérin, Tarek Menouer, W. Saad, Wiem Abdallah
This paper presents our initial ideas for a new scheduling strategy integrated into the Docker Swarm scheduler. The aim of this paper is to introduce the basic concepts and the implementation details of a new scheduling strategy based on different Service Level Agreement (SLA) classes. This strategy is proposed to address the problems of companies that manage a private infrastructure of machines and would like to optimize the scheduling of several requests submitted online by users. Each request is a demand to create a container. Currently, Docker Swarm has three basic scheduling strategies (spread, binpack and random), each of which executes a container with a fixed amount of resources. However, the novelty of our strategy consists in using the SLA class of the user to provision the container that must execute the service, based on a dynamic computation of the number of CPU cores to be allocated to the container according to the user's SLA class and the load of the parallel machines in the infrastructure. Testing of our new strategy is conducted, by emulation, on different parts of our general framework, and it demonstrates the potential of our approach for further development.
{"title":"A New Docker Swarm Scheduling Strategy","authors":"C. Cérin, Tarek Menouer, W. Saad, Wiem Abdallah","doi":"10.1109/SC2.2017.24","DOIUrl":"https://doi.org/10.1109/SC2.2017.24","url":null,"abstract":"This paper presents our initial ideas for a new scheduling strategy integrated in the Docker Swarm scheduler. The aim of this paper is to introduce the basic concepts and the implementation details of a new scheduling strategy based on different Service Level Agreement (SLA) classes. This strategy is proposed to answer to the problems of companies that manage a private infrastructure of machines, and would like to optimize the scheduling of several requests submitted online by users. Each request is a demand of creating a container. Currently, Docker Swarm has three basic scheduling strategies (spread, binpack and random), each one executes a container with a fixed number of resources. However, the novelty of our new strategy consists in using the SLA class of the user to provision a container that must execute the service, based on a dynamic computation of the number of CPU cores that must be allocated to the container according to the user SLA class and the load of the parallel machines in the infrastructure. Testing of our new strategy is conducted, by emulation, on different part of our general framework and it demonstrates the potential of our approach for further development.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121952564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
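Below is a minimal sketch of the idea of granting CPU cores per SLA class as a function of current cluster load, and placing the container on the node with the most free cores. The class shares, the load formula, and the data layout are assumptions, not the authors' implementation or the Docker Swarm API.

```python
CLASS_SHARE = {"premium": 0.50, "advanced": 0.25, "best_effort": 0.10}  # hypothetical SLA classes

def plan_container(sla_class, nodes):
    """Decide where to run a new container and how many cores to grant it,
    based on the requester's SLA class and the current cluster load
    (illustrative sketch, not the paper's Swarm strategy).

    nodes: dict node_name -> {"cores": total_cores, "used": cores_in_use}"""
    total = sum(n["cores"] for n in nodes.values())
    load = sum(n["used"] for n in nodes.values()) / total      # cluster-wide load in [0, 1]
    host, spec = max(nodes.items(), key=lambda kv: kv[1]["cores"] - kv[1]["used"])
    free = spec["cores"] - spec["used"]
    # Higher classes get a larger share of a node; the grant shrinks under load.
    grant = min(free, max(1, round(spec["cores"] * CLASS_SHARE[sla_class] * (1.0 - load))))
    spec["used"] += grant                                       # book the cores on that node
    return host, grant

if __name__ == "__main__":
    cluster = {"node1": {"cores": 16, "used": 4}, "node2": {"cores": 16, "used": 10}}
    print(plan_container("premium", cluster))       # -> ('node1', 4)
    print(plan_container("best_effort", cluster))   # -> ('node1', 1)
```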
A Prototype for Analyzing the Internet Routing System Based on Spark and Docker
Hao Zeng, Baosheng Wang, Wenping Deng, Junxing Tang
Network security problems are becoming more and more serious, and network security has become a big data issue. However, existing big data processing technology is complex, with low utilization, poor availability and no load balancing. In this paper, we propose a highly elastic and available big data processing prototype based on Spark and Docker. In our prototype, we use microservices as the basic unit to provide services for the business and Docker containers as the carrier for the microservices. By monitoring the actual running state of the business and exploiting the lightweight, fast-start features of containers, we can quickly and dynamically add and remove containers to provide a scaling service for the business. We then download the BGP routing table from www.routeviews.org and use our prototype to analyze it. Experiments show that, given the size of the BGP routing table, our prototype can scale to meet real-time business processing requirements.
{"title":"A Prototype for Analyzing the Internet Routing System Based on Spark and Docker","authors":"Hao Zeng, Baosheng Wang, Wenping Deng, Junxing Tang","doi":"10.1109/SC2.2017.51","DOIUrl":"https://doi.org/10.1109/SC2.2017.51","url":null,"abstract":"Now, network security is becoming more and more serious, and network security has become a big data issue. But the big data processing technology is complex with low utilization, poor availability and impossible load balancing. In this paper, we propose a highly elastic and available big data processing prototype based on spark and docker. In our prototype, we use microservice as basic unit to provide service for business and docker container as carrier for microservice. By monitoring the actual running state of the business and the lightweight and fast start features of the container, we can quickly and dynamically add and remove containers to provide scaling service for business. Then we download the BGP routing table from www.routeviews.org and use our prototype for analysis. Experiments show that, based on the size of the BGP routing table, our prototype can scale to meet real-time business processing requirements.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127630803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
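The prototype's actual scaling policy is not given in the abstract; a reactive threshold rule of the kind described (adding and removing containers by monitoring the running state) might look like the sketch below. The per-container capacity figure and the watermarks are placeholders.

```python
def scale_decision(current_replicas, msgs_per_sec, capacity_per_replica=1000,
                   high_water=0.8, low_water=0.3, min_replicas=1):
    """Illustrative reactive scaling rule (thresholds and capacity are assumed,
    not measured values from the paper): add replicas when per-container load
    exceeds a high-water mark, remove one when it drops below a low-water mark.
    Returns the new replica count for the microservice."""
    load = msgs_per_sec / (current_replicas * capacity_per_replica)
    if load > high_water:
        # Provision enough replicas to bring load back under the high-water mark.
        needed = -(-msgs_per_sec // int(high_water * capacity_per_replica))  # ceil division
        return max(current_replicas + 1, int(needed))
    if load < low_water and current_replicas > min_replicas:
        return current_replicas - 1       # scale in gently, one container at a time
    return current_replicas

if __name__ == "__main__":
    print(scale_decision(2, 3000))   # overloaded -> scale out (expect 4)
    print(scale_decision(4, 500))    # nearly idle -> scale in  (expect 3)
```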