
2022 IEEE International Conference on Joint Cloud Computing (JCC): Latest Publications

Improving scalability of multi-agent reinforcement learning with parameters sharing
Pub Date : 2022-08-01 DOI: 10.1109/JCC56315.2022.00013
Ning Yang, Bo Ding, Peichang Shi, Dawei Feng
Improving the scalability of a multi-agent system is one of the key challenges in applying reinforcement learning to learn an effective policy. Parameter sharing is a common approach that improves learning efficiency by reducing the volume of policy network parameters that need to be updated. However, sharing parameters also reduces the variance between agents’ policies, which further restricts the diversity of their behaviors. In this paper, we introduce a policy parameter sharing approach that maintains a policy network for each agent but updates only one of them. The differentiated behavior of agents is preserved by their individual policy networks, while the shared parameters are propagated to them through soft updates. Experiments in foraging scenarios demonstrate that our method effectively improves both the performance and the scalability of multi-agent systems.
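A minimal, hypothetical sketch of the mechanism as the abstract describes it (not the authors' code): one policy network is actually trained, every agent keeps its own randomly initialised network, and the trained parameters are propagated to the agents' networks through soft updates. The network sizes, agent count, and mixing rate `tau` are illustrative assumptions.

```python
# Hypothetical sketch: a single trained "shared" policy plus per-agent copies
# that track it through a soft (Polyak-style) update, so agents keep slightly
# different parameters and hence differentiated behaviour.
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))

    def forward(self, obs):
        return self.net(obs)

def soft_update(target: nn.Module, source: nn.Module, tau: float = 0.05) -> None:
    """Move the target parameters a small step toward the source parameters."""
    with torch.no_grad():
        for t, s in zip(target.parameters(), source.parameters()):
            t.data.mul_(1.0 - tau).add_(tau * s.data)

# Only `shared` receives gradient updates; each agent keeps its own network.
shared = Policy(obs_dim=8, act_dim=4)
agents = [Policy(obs_dim=8, act_dim=4) for _ in range(16)]

# ... after each gradient step on `shared`, propagate it softly to every agent:
for agent_policy in agents:
    soft_update(agent_policy, shared, tau=0.05)
```

Because each agent's network starts from its own initialisation and only drifts toward the trained network, the copies never collapse onto identical parameters in this sketch.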
Citations: 0
A Query-Level Distributed Database Tuning System with Machine Learning
Pub Date : 2022-08-01 DOI: 10.1109/JCC56315.2022.00012
Xiang Fang, Yi Zou, Yange Fang, Zhen Tang, Hui Li, Wei Wang
Knob tuning is important for improving the performance of database management systems. However, traditional manual tuning by DBAs is time-consuming and error-prone, and cannot meet the requirements of different database instances. In recent years, research on automatic knob tuning with machine learning algorithms has gradually emerged, but most work supports only workload-level knob tuning, and studies on query-level tuning are still at an early stage. Furthermore, few works focus on knob tuning for distributed databases. In this paper, we propose a query-level tuning system for distributed databases based on machine learning. The system can efficiently recommend knobs according to the features of each query. We deployed our techniques onto CockroachDB, a distributed database, and experimental results show that our system achieves higher performance under typical OLAP workloads. Across all categories of queries, our system reduces latency by 9.2% on average, and for some categories it reduces latency by more than 60%.
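The abstract does not give the model details, but a query-level recommender of this kind can be sketched as a supervised regressor from (query features, knob settings) to latency that is scored over candidate knob vectors. Everything below — the use of scikit-learn, the feature layout, and the synthetic history — is an assumption for illustration only.

```python
# Hypothetical sketch of query-level knob recommendation: learn latency as a
# function of (query features, knob values) from history, then score candidate
# knob settings for a new query and pick the lowest-latency one.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy history: 4 query features + 2 knobs -> observed latency (ms).
X_hist = rng.random((500, 6))
y_lat = 50 + 100 * X_hist[:, 4] * X_hist[:, 0] + 30 * (1 - X_hist[:, 5]) + rng.normal(0, 2, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_hist, y_lat)

def recommend_knobs(query_features: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    """Return the candidate knob vector with the lowest predicted latency."""
    rows = np.hstack([np.tile(query_features, (len(candidates), 1)), candidates])
    return candidates[np.argmin(model.predict(rows))]

new_query = rng.random(4)                       # features extracted from the query
candidate_knobs = rng.random((64, 2))           # e.g. normalised cache size, parallelism
print(recommend_knobs(new_query, candidate_knobs))
```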
Citations: 2
Threshold Based Load Balancing Algorithm in Cloud Computing
Pub Date : 2022-08-01 DOI: 10.1109/JCC56315.2022.00011
Shusmoy Chowdhury, Ajay Katangur
Cloud computing has become an emerging trend in the software industry, with its requirement for large-scale infrastructure and resources. The future success of cloud computing depends on how effectively the infrastructure is instantiated and the available resources are utilized. Load balancing helps fulfill these conditions and improves the cloud environment for users. Load balancing dynamically distributes the workload among the nodes so that no single resource is either overwhelmed with tasks or underutilized. In this paper we propose a threshold-based load balancing algorithm to ensure an even distribution of the workload among the nodes. The main objective of the algorithm is to prevent the VMs in the cloud from being overloaded with tasks, or from sitting idle for lack of allocated tasks while active tasks remain. We have simulated our proposed algorithm in the Cloudanalyst simulator with real-world data scenarios. Simulation results show that our proposed threshold-based algorithm provides better response times for tasks/requests and better data processing times for the datacenters than existing algorithms such as First Come First Serve (FCFS), Round Robin (RR), and the Equally Spread Current Execution Load Balancing algorithm (ESCELB).
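A minimal sketch of a threshold rule in this spirit — the threshold value, VM capacities, and the fallback policy are assumptions, not the paper's exact algorithm: each task goes to the least-utilised VM that stays under the threshold, with a fallback so work is never left waiting when every VM would exceed it.

```python
# Hypothetical threshold-based dispatcher for a pool of VMs.
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    capacity: float               # e.g. normalised MIPS
    load: float = 0.0             # currently assigned work
    tasks: list = field(default_factory=list)

    def utilisation(self) -> float:
        return self.load / self.capacity

def assign(task_size: float, vms: list, threshold: float = 0.8) -> VM:
    """Place a task on the least-loaded VM that stays under the threshold."""
    under = [vm for vm in vms if (vm.load + task_size) / vm.capacity <= threshold]
    target = min(under or vms, key=lambda vm: vm.utilisation())  # fallback: least loaded overall
    target.load += task_size
    target.tasks.append(task_size)
    return target

vms = [VM("vm-1", 100.0), VM("vm-2", 150.0), VM("vm-3", 100.0)]
for size in [20, 35, 50, 10, 60, 25]:
    chosen = assign(size, vms)
    print(f"task({size}) -> {chosen.name}, util={chosen.utilisation():.2f}")
```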
Citations: 2
Uncertainty Estimation based Intrinsic Reward For Efficient Reinforcement Learning
Pub Date : 2022-08-01 DOI: 10.1109/JCC56315.2022.00008
Chao Chen, Tianjiao Wan, Peichang Shi, Bo Ding, Zijian Gao, Dawei Feng
In reinforcement learning, the extrinsic reward is a core driver of the learning process, yet it can be very sparse or completely missing. In response, researchers have proposed the idea of intrinsic reward, such as encouraging the agent to visit novel states via prediction error. However, deep prediction models can produce over-confident and miscalibrated predictions. To mitigate the impact of inaccurate predictions, previous research applied deep ensembles and achieved superior results, at the cost of increased computation and storage. In this paper, inspired by uncertainty estimation, we leverage Monte Carlo Dropout to generate an intrinsic reward from the perspective of uncertainty estimation, with the goal of reducing the demand for computing resources while retaining superior performance. Using this simple yet effective approach, we conduct extensive experiments across a variety of benchmark environments. The experimental results suggest that our method achieves competitive final scores and runs faster, while requiring far fewer computing resources and less storage space.
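A hedged sketch of a Monte Carlo Dropout bonus of the kind described above, not the paper's implementation: dropout is kept active at inference, a next-state predictor is run k times, and the variance across passes serves as the intrinsic reward. The architecture, dropout rate, and k are illustrative assumptions.

```python
# Hypothetical MC-Dropout intrinsic reward: predictive variance as a bonus.
import torch
import torch.nn as nn

class Predictor(nn.Module):
    def __init__(self, state_dim: int, hidden: int = 64, p: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, s):
        return self.net(s)

def intrinsic_reward(model: Predictor, state: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Mean per-dimension variance over k stochastic forward passes."""
    model.train()                      # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([model(state) for _ in range(k)])   # (k, B, D)
    return samples.var(dim=0).mean(dim=-1)                        # (B,)

model = Predictor(state_dim=8)
batch = torch.randn(32, 8)
bonus = intrinsic_reward(model, batch)   # would be scaled and added to the extrinsic reward
print(bonus.shape)
```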
Citations: 1
FSS: A Flexible Scaling Scheme for Blockchain Based on Stale Block Rate
Pub Date : 2022-08-01 DOI: 10.1109/JCC56315.2022.00015
Ming Chen, Peichang Shi, Xiang Fu, Feng Jiang, Fei Gao, Penghui Ma, Jinzhu Kong
In blockchain, there has long been a contradiction between the limited capacity and the uncertain demand for processing transactions, which seriously restricts the practical application of blockchain. Improving the scalability of blockchain has therefore become an urgent problem. Some existing works expand the blockchain by permanently increasing the upper limit of the block size, which fixes the trade-off of the "Mundellian Trilemma" in blockchain (i.e., a blockchain system cannot be optimal in all three dimensions of scalability, security, and decentralization at the same time) and thus cannot adapt to a dynamic environment. In this paper, we propose FSS, a flexible scaling scheme for blockchain based on the stale block rate, which dynamically adjusts the upper limit of the block size according to the stale block rate, not only expanding the blockchain when allowed but also shrinking it when necessary. Experimental results indicate that FSS can reasonably improve the scalability of blockchain while maintaining the required stale block rate.
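The abstract specifies only that the block-size cap moves with the stale block rate, so the controller below is a hypothetical reading of that rule: expand while the observed rate is below a target, shrink when it exceeds a tolerated maximum. All thresholds, step sizes, and bounds are assumed values, not the paper's parameters.

```python
# Hypothetical block-size controller driven by the observed stale (orphan) rate.
def adjust_block_limit(current_limit_mb: float,
                       stale_rate: float,
                       target_rate: float = 0.02,
                       max_rate: float = 0.05,
                       step: float = 0.25,
                       min_mb: float = 1.0,
                       max_mb: float = 8.0) -> float:
    """Return the next block-size upper limit in MB."""
    if stale_rate > max_rate:                      # propagation is suffering: shrink
        return max(min_mb, current_limit_mb * (1 - step))
    if stale_rate < target_rate:                   # headroom available: expand
        return min(max_mb, current_limit_mb * (1 + step))
    return current_limit_mb                        # within tolerance: hold

limit = 2.0
for observed in [0.01, 0.015, 0.03, 0.07, 0.04, 0.01]:
    limit = adjust_block_limit(limit, observed)
    print(f"stale_rate={observed:.3f} -> block limit {limit:.2f} MB")
```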
Citations: 1
MRASS: Dynamic Task Scheduling enabled High Multi-cluster Resource Availability in JointCloud
Pub Date : 2022-08-01 DOI: 10.1109/JCC56315.2022.00014
Fei Gao, Huaimin Wang, Peichang Shi, Xiang Fu, Tao Zhong, Jinzhu Kong
As the new paradigm of JointCloud Computing matures, enterprises are trying to build multiple Kubernetes clusters on different clouds to deploy tasks, with advantages such as disaster backup, low latency, and avoidance of single-vendor lock-in. Tasks in a JointCloud environment always have highly diversified resource demands on CPU, memory, disk, and network. However, the mismatch between these tasks and heterogeneous clusters can easily cause many resource fragments, resulting in low resource availability. The task scheduling strategy is therefore the key to solving this problem. Existing task scheduling strategies for multi-cluster environments usually aim at load balancing across clusters rather than increasing resource availability. In this paper, we propose a dynamic task scheduling framework built around a multi-cluster resource high-availability schedule strategy (MRASS) based on historical task resource consumption. MRASS builds a cooperation model between multiple clusters and tasks and proposes a resource-availability indicator, which is used to keep the proportion of each cluster's remaining resources close to the proportion of resources required by future tasks, thereby executing more tasks within limited resources. Extensive numerical results confirm that the strategy delivers stable performance and performs well under different initial cluster resource settings, task resource types, and task numbers. Compared with existing algorithms, MRASS can place up to 20% more tasks, and the success rate of first-attempt task placement exceeds 98%.
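As a hypothetical illustration of the resource-availability idea (the paper's exact indicator and placement procedure are not given in the abstract), the sketch below scores each cluster by how closely its remaining [CPU, memory, disk, network] mix after placement matches an expected future-demand mix, using cosine similarity as a stand-in metric. The cluster capacities and demand profile are made-up numbers.

```python
# Hypothetical placement rule: prefer the cluster whose leftover resource mix
# after hosting the task best matches the expected mix of future task demands,
# so no single resource dimension is stranded.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def place(task: np.ndarray, clusters: dict, demand_profile: np.ndarray):
    """task, cluster capacities, and demand_profile are [cpu, mem, disk, net] vectors."""
    best, best_score = None, -1.0
    for name, free in clusters.items():
        remaining = free - task
        if np.any(remaining < 0):          # this cluster cannot host the task
            continue
        score = cosine(remaining, demand_profile)
        if score > best_score:
            best, best_score = name, score
    if best is not None:
        clusters[best] = clusters[best] - task
    return best

clusters = {"cloud-a": np.array([64.0, 256.0, 2000.0, 10.0]),
            "cloud-b": np.array([32.0, 512.0, 1000.0, 25.0])}
future_mix = np.array([4.0, 16.0, 100.0, 1.0])     # expected per-task demand ratio
print(place(np.array([8.0, 32.0, 200.0, 2.0]), clusters, future_mix))
```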
Citations: 1
An Automatic Scaling System for Online Application with Microservices Architecture
Pub Date : 2022-08-01 DOI: 10.1109/JCC56315.2022.00018
Youmei Song, Chao Li, Kuoran Zhuang, Tengyu Ma, Tianyu Wo
Auto-scaling is an efficient technique for handling fluctuations in application workloads by acquiring or releasing resources. However, performing auto-scaling in a microservice system for online applications faces critical challenges, including unpredictably massive microservice requests, the lack of fine-grained performance metrics, and complex dependencies among services. In this paper, we design a cost-efficient auto-scaling system that pinpoints the services needing scaling as quickly as possible and decides the right amount of resources to allocate to them. Specifically, we first propose a multi-level microservice monitoring mechanism to capture historical and up-to-date service-level performance metrics and to detect over-provisioned and under-provisioned services by jointly considering changes in latency and throughput. For overload anomalies, a random-walk method is further adopted to detect root causes based on the microservice dependency topology. When anomalies are detected, a threshold-based method incorporating an ARIMA model for predicting resource usage allocates or recycles the right amount of computation resources. Extensive and systematic evaluations of the different algorithm modules with real-world and simulated workload data confirm the superiority of our mechanism over multiple algorithms.
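A sketch of the threshold-plus-forecast rule under stated assumptions: recent CPU utilisation is fitted with an ARIMA(1,1,1) model from statsmodels, the next few intervals are forecast, and the replica count is adjusted so the predicted peak stays between assumed low/high thresholds. The thresholds, model order, and scaling formula are illustrative, not taken from the paper.

```python
# Hypothetical forecast-driven scaling decision for one microservice.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def decide_replicas(cpu_history, replicas, low=0.3, high=0.7, horizon=3):
    """Return the replica count suggested by the forecast peak utilisation."""
    forecast = ARIMA(np.asarray(cpu_history), order=(1, 1, 1)).fit().forecast(steps=horizon)
    peak = float(np.max(forecast))                 # worst predicted per-replica utilisation
    if peak > high:                                # scale out before the overload hits
        return replicas + max(1, int(np.ceil(replicas * (peak / high - 1))))
    if peak < low and replicas > 1:                # recycle idle capacity
        return max(1, int(np.floor(replicas * peak / low)))
    return replicas

history = [0.42, 0.45, 0.50, 0.55, 0.58, 0.61, 0.66, 0.63, 0.69, 0.74, 0.78, 0.81]
print(decide_replicas(history, replicas=4))
```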
Citations: 1
Welcome Message from the TPC Chairs of IEEE JCC 2022
Pub Date : 2022-08-01 DOI: 10.1109/jcc56315.2022.00006
{"title":"Welcome Message from the TPC Chairs of IEEE JCC 2022","authors":"","doi":"10.1109/jcc56315.2022.00006","DOIUrl":"https://doi.org/10.1109/jcc56315.2022.00006","url":null,"abstract":"","PeriodicalId":239996,"journal":{"name":"2022 IEEE International Conference on Joint Cloud Computing (JCC)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129933783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
JCC 2022 Organizers
Pub Date : 2022-08-01 DOI: 10.1109/jcc56315.2022.00007
{"title":"JCC 2022 Organizers","authors":"","doi":"10.1109/jcc56315.2022.00007","DOIUrl":"https://doi.org/10.1109/jcc56315.2022.00007","url":null,"abstract":"","PeriodicalId":239996,"journal":{"name":"2022 IEEE International Conference on Joint Cloud Computing (JCC)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126488977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Two-stage Scheduling of Stream Computing for Industrial Cloud-edge Collaboration
Pub Date : 2022-08-01 DOI: 10.1109/JCC56315.2022.00016
Tiejun Wang, Xudong Mou, Juntao Hu, Rui Wang, Tianyu Wo
As the Industrial Internet of Things (IIoT) develops, intelligent services applying stream computing, such as industrial robot health management, require higher timeliness of data processing, which may involve the scheduling of stream tasks. However, traditional scheduling methods are no longer suitable for the now widely used cloud-edge collaboration mode: they do not consider cloud-edge heterogeneity and focus on scheduling single tasks rather than optimizing the overall set of tasks. To improve the performance of cloud-edge collaboration, this paper establishes a practical task scheduling model that separately considers the different cloud-edge collaboration models. We propose a novel two-stage scheduling method for IIoT. The algorithm uses the idea of maximum flow to divide the task into cloud-edge deployment schemes and find the best partitioning scheme, and then deploys the operators in the edge domain based on the network topology using dynamic programming. Experimental results show that the proposed method reduces cloud-edge bandwidth usage by 7.27% compared with the greedy algorithm with the highest traffic difference, and reduces end-to-end latency by 24.33% and the back-pressure rate by 11.18% compared with SBON.
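Only the first stage is sketched here, and only as a hypothetical reading of the abstract: the operator dataflow is modelled as a flow network whose minimum s-t cut splits operators between cloud and edge. The operators, costs, and use of networkx are illustrative assumptions, and the second (dynamic-programming placement) stage is omitted.

```python
# Hypothetical stage-one partitioning: build a flow network with terminals
# "cloud" and "edge"; cutting cloud->op means op runs at the edge, cutting
# op->edge means op runs in the cloud, and operator-to-operator edges are cut
# when the two ends land on different sides (paying the transfer cost).
import networkx as nx

G = nx.DiGraph()
ops = ["ingest", "filter", "feature", "detect"]
cloud_cost = {"ingest": 9, "filter": 4, "feature": 2, "detect": 1}   # cost of placing op at the edge
edge_cost  = {"ingest": 1, "filter": 3, "feature": 6, "detect": 8}   # cost of placing op in the cloud
transfer   = [("ingest", "filter", 5), ("filter", "feature", 3), ("feature", "detect", 2)]

for op in ops:
    G.add_edge("cloud", op, capacity=cloud_cost[op])   # cut -> op assigned to the edge side
    G.add_edge(op, "edge", capacity=edge_cost[op])     # cut -> op assigned to the cloud side
for u, v, bw in transfer:
    G.add_edge(u, v, capacity=bw)                      # crossing the cut costs the transfer bandwidth
    G.add_edge(v, u, capacity=bw)

cut_value, (cloud_side, edge_side) = nx.minimum_cut(G, "cloud", "edge")
print("total cut cost:", cut_value)
print("cloud operators:", sorted(cloud_side - {"cloud"}))
print("edge operators:", sorted(edge_side - {"edge"}))
```

With the illustrative costs above, upstream operators with cheap edge placement and heavy transfer links tend to stay at the edge, which is the qualitative behaviour a min-cut partition is meant to capture.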
Citations: 0