
Journal of Network and Computer Applications — Latest Publications

A novel community-driven recommendation-based approach to predict and select friendships on the social IoT utilizing deep reinforcement learning
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-12-10 DOI: 10.1016/j.jnca.2024.104092
Babak Farhadi, Parvaneh Asghari, Ebrahim Mahdipour, Hamid Haj Seyyed Javadi
The study of how to integrate Complex Networks (CN) within the Internet of Things (IoT) ecosystem has advanced significantly because of the field's recent expansion. CNs can tackle the biggest IoT issues by providing a common conceptual framework that encompasses the IoT scope. To this end, the Social Internet of Things (SIoT) perspective is introduced. In this study, a dynamic community-driven recommendation-oriented connection prediction and choice strategy utilizing Deep Reinforcement Learning (DRL) is proposed to deal with the key challenges located in the SIoT friendship selection component. To increase the efficiency of exploration, we incorporate an approach motivated by curiosity to create an intrinsic bonus signal that encourages the DRL agent to efficiently interact with its surroundings. Also, a novel method for Dynamic Community Detection (DCD) on SIoT to carry out community-oriented object recommendations is introduced. Lastly, we complete the experimental verifications utilizing datasets from the real world, and the experimental findings demonstrate that, in comparison to the related baselines, the approach presented here can enhance the accuracy of the social IoT friendship selection task and the effectiveness of training.
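The curiosity-driven exploration mentioned in this abstract can be illustrated with a small sketch: a forward model predicts the embedding of the next state, and its prediction error is added to the task reward as an intrinsic bonus. This is a minimal, generic illustration of that idea rather than the authors' implementation; the embedding size, the weighting factor eta, and the linear forward model are assumptions.

```python
# Minimal sketch of a curiosity-style intrinsic reward for a DRL agent.
# All names (embed_dim, eta, the linear forward model) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
embed_dim, n_actions, eta = 8, 4, 0.1

# Toy forward model: predicts phi(s') from [phi(s); one_hot(a)].
W = rng.normal(scale=0.1, size=(embed_dim, embed_dim + n_actions))

def intrinsic_reward(phi_s, action, phi_s_next):
    """Curiosity bonus = scaled prediction error of the forward model."""
    a_onehot = np.eye(n_actions)[action]
    pred_next = W @ np.concatenate([phi_s, a_onehot])
    return eta * 0.5 * float(np.sum((pred_next - phi_s_next) ** 2))

# Total reward fed to the DRL agent = extrinsic (task) reward + curiosity bonus.
phi_s, phi_s_next = rng.normal(size=embed_dim), rng.normal(size=embed_dim)
extrinsic = 1.0  # e.g., reward for a friendship link that later proves useful
total = extrinsic + intrinsic_reward(phi_s, action=2, phi_s_next=phi_s_next)
print(f"extrinsic={extrinsic:.3f}  total with curiosity bonus={total:.3f}")
```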
{"title":"A novel community-driven recommendation-based approach to predict and select friendships on the social IoT utilizing deep reinforcement learning","authors":"Babak Farhadi ,&nbsp;Parvaneh Asghari ,&nbsp;Ebrahim Mahdipour ,&nbsp;Hamid Haj Seyyed Javadi","doi":"10.1016/j.jnca.2024.104092","DOIUrl":"10.1016/j.jnca.2024.104092","url":null,"abstract":"<div><div>The study of how to integrate Complex Networks (CN) within the Internet of Things (IoT) ecosystem has advanced significantly because of the field's recent expansion. CNs can tackle the biggest IoT issues by providing a common conceptual framework that encompasses the IoT scope. To this end, the Social Internet of Things (SIoT) perspective is introduced. In this study, a dynamic community-driven recommendation-oriented connection prediction and choice strategy utilizing Deep Reinforcement Learning (DRL) is proposed to deal with the key challenges located in the SIoT friendship selection component. To increase the efficiency of exploration, we incorporate an approach motivated by curiosity to create an intrinsic bonus signal that encourages the DRL agent to efficiently interact with its surroundings. Also, a novel method for Dynamic Community Detection (DCD) on SIoT to carry out community-oriented object recommendations is introduced. Lastly, we complete the experimental verifications utilizing datasets from the real world, and the experimental findings demonstrate that, in comparison to the related baselines, the approach presented here can enhance the accuracy of the social IoT friendship selection task and the effectiveness of training.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"235 ","pages":"Article 104092"},"PeriodicalIF":7.7,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142873860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A secure routing and malicious node detection in mobile Ad hoc network using trust value evaluation with improved XGBoost mechanism
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-12-10 DOI: 10.1016/j.jnca.2024.104093
Geetika Dhand, Meena Rao, Parul Chaudhary, Kavita Sheoran
Mobile ad hoc networks (MANETs) are beneficial in a wide range of sectors because of their rapid network creation capabilities. The network can only function properly if mobile nodes collaborate and have mutual trust. Routing becomes more difficult, and vulnerabilities are exposed more quickly, as a result of flexible network features and the frequent relationship flaws induced by node movement. This paper proposes a method for evaluating node trust using direct trust values, indirect trust values, and comprehensive trust values. Based on the evaluated trust values, the network's malicious and non-malicious nodes are identified using the Improved Extreme Gradient Boosting (XGBoost) algorithm. Once malicious nodes are detected, the cluster head is chosen to ensure effective data transmission. Finally, the optimal routes are chosen using a novel Enhanced Cat Swarm-assisted Optimized Link State Routing Protocol (ECSO OLSRP), in which the Cat Swarm Optimization (CSO) algorithm determines the ideal route path based on characteristics such as node stability degree and connection stability degree. Because the proposed technique provides secure data transmission, node path setup, and node efficiency evaluation, it can maintain network performance even in the presence of several hostile nodes. The proposed trust-based secure routing technique achieves a packet delivery ratio of 0.47, an end-to-end delay of 0.06, a network throughput of 1852.22, and a control overhead of 7.41.
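As a rough illustration of the pipeline the abstract describes, the sketch below combines direct and indirect trust into a comprehensive trust value, trains a gradient-boosted classifier to flag malicious nodes, and then picks a cluster head among the benign ones. Plain XGBoost stands in for the paper's improved variant, and the weighting factor, features, and synthetic labelling rule are illustrative assumptions.

```python
# Minimal sketch, not the paper's exact "improved XGBoost" pipeline.
import numpy as np
from xgboost import XGBClassifier  # plain XGBoost stands in for the improved variant

rng = np.random.default_rng(1)
n_nodes = 200

direct_trust = rng.uniform(0, 1, n_nodes)     # from a node's own interactions
indirect_trust = rng.uniform(0, 1, n_nodes)   # from neighbours' recommendations
alpha = 0.6                                   # assumed weighting factor
comprehensive_trust = alpha * direct_trust + (1 - alpha) * indirect_trust

# Toy ground truth: nodes with low comprehensive trust behave maliciously.
labels = (comprehensive_trust + rng.normal(0, 0.05, n_nodes) < 0.35).astype(int)

X = np.column_stack([direct_trust, indirect_trust, comprehensive_trust])
clf = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
clf.fit(X, labels)

benign = np.flatnonzero(clf.predict(X) == 0)
# Illustrative cluster-head choice: the benign node with the highest trust.
cluster_head = benign[np.argmax(comprehensive_trust[benign])]
print(f"{labels.sum()} malicious nodes flagged; cluster head = node {cluster_head}")
```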
{"title":"A secure routing and malicious node detection in mobile Ad hoc network using trust value evaluation with improved XGBoost mechanism","authors":"Geetika Dhand ,&nbsp;Meena Rao ,&nbsp;Parul Chaudhary ,&nbsp;Kavita Sheoran","doi":"10.1016/j.jnca.2024.104093","DOIUrl":"10.1016/j.jnca.2024.104093","url":null,"abstract":"<div><div>Mobile ad hoc networks (MANETs) are beneficial in a wide range of sectors because of their rapid network creation capabilities. If mobile nodes collaborate and have mutual trust, the network can function properly. Routing becomes more difficult, and vulnerabilities are exposed more quickly as a result of flexible network features and frequent relationship flaws induced by node movement. This paper proposes a method for evaluating trust nodes using direct trust values, indirect trust values, and comprehensive trust values. Then, evaluating the trust value, the network's malicious and non-malicious nodes are identified using the Improved Extreme Gradient Boosting (XGBoost) algorithm. From the detected malicious nodes, the cluster head is chosen to ensure effective data transmission. Finally, the optimal routes are chosen using a novel Enhanced Cat Swarm-assisted Optimized Link State Routing Protocol (ECSO OLSRP). Furthermore, the Cat Swarm Optimization (CSO) algorithm determines the ideal route path based on characteristics such as node stability degree and connection stability degree. Because the proposed technique provides secure data transmission, node path setup, and node efficiency evaluation, it can maintain network performance even in the presence of several hostile nodes. The performance of the proposed trust-based approach security routing technique in terms of packet delivery ratio of nodes (0.47), end-to-end delay time of nodes (0.06), network throughput of nodes (1852.22), and control overhead of nodes (7.41).</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"235 ","pages":"Article 104093"},"PeriodicalIF":7.7,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142873859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Label-aware learning to enhance unsupervised cross-domain rumor detection
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-12-09 DOI: 10.1016/j.jnca.2024.104084
Hongyan Ran, Xiaohong Li, Zhichang Zhang
Recently, a large body of research has made significant progress in improving the performance of rumor detection. However, identifying rumors in an unseen domain is still an elusive challenge. To address this issue, we propose an unsupervised cross-domain rumor detection model that enhances contrastive learning and cross-attention with label-aware learning to alleviate the domain shift. The model performs cross-domain feature alignment and enforces target samples to align with the corresponding prototypes of a given source domain. Moreover, we use a cross-attention mechanism on pairs of source data and target data with the same labels to learn domain-invariant representations, because the samples in such a domain pair tend to express similar semantic patterns, especially in people’s attitudes (e.g., supporting or denying) towards the same category of rumors. In addition, we add a label-aware learning module as an enhancement component to learn the correlations between labels and instances during training and to generate a better label distribution that replaces the original one-hot label vector in guiding model training. At the same time, we use the label representation learned by this module to guide the production of pseudo-labels for the target samples. We conduct experiments on four groups of cross-domain datasets and show that our proposed model achieves state-of-the-art performance.
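A minimal sketch of the label-aware idea: the training target is a mixture of the one-hot gold label and a distribution derived from instance-label similarity, rather than the one-hot vector alone. The embedding dimension, mixing coefficient, and randomly initialized label representations below are assumptions for illustration only, not the authors' model.

```python
# Minimal sketch: build a label-aware soft target to replace the one-hot label.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(2)
dim, n_classes = 16, 2                          # e.g., rumor vs. non-rumor
label_emb = rng.normal(size=(n_classes, dim))   # learned label representations (assumed)
instance = rng.normal(size=dim)                 # encoder output for one post
gold = 1                                        # gold label index
lam = 0.7                                       # assumed mixing coefficient

sim_dist = softmax(label_emb @ instance)        # label distribution from correlations
one_hot = np.eye(n_classes)[gold]
soft_target = lam * one_hot + (1 - lam) * sim_dist   # label-aware training target

print("similarity distribution:", np.round(sim_dist, 3))
print("soft target replacing one-hot:", np.round(soft_target, 3))
```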
{"title":"Label-aware learning to enhance unsupervised cross-domain rumor detection","authors":"Hongyan Ran,&nbsp;Xiaohong Li,&nbsp;Zhichang Zhang","doi":"10.1016/j.jnca.2024.104084","DOIUrl":"10.1016/j.jnca.2024.104084","url":null,"abstract":"<div><div>Recently, massive research has achieved significant development in improving the performance of rumor detection. However, identifying rumors in an invisible domain is still an elusive challenge. To address this issue, we propose an unsupervised cross-domain rumor detection model that enhances contrastive learning and cross-attention by label-aware learning to alleviate the domain shift. The model performs cross-domain feature alignment and enforces target samples to align with the corresponding prototypes of a given source domain. Moreover, we use a cross-attention mechanism on a pair of source data and target data with the same labels to learn domain-invariant representations. Because the samples in a domain pair tend to express similar semantic patterns, especially on the people’s attitudes (e.g., supporting or denying) towards the same category of rumors. In addition, we add a label-aware learning module as an enhancement component to learn the correlations between labels and instances during training and generate a better label distribution to replace the original one-hot label vector to guide the model training. At the same time, we use the label representation learned by the label learning module to guide the production of pseudo-label for the target samples. We conduct experiments on four groups of cross-domain datasets and show that our proposed model achieves state-of-the-art performance.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"235 ","pages":"Article 104084"},"PeriodicalIF":7.7,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142825314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A comprehensive plane-wise review of DDoS attacks in SDN: Leveraging detection and mitigation through machine learning and deep learning
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-12-09 DOI: 10.1016/j.jnca.2024.104081
Dhruv Kalambe, Divyansh Sharma, Pushkar Kadam, Shivangi Surati
The traditional network architecture in Software Defined Networking (SDN) is divided into three distinct planes to incorporate intelligence into networks. However, this structure has also introduced security threats and challenges across these planes, including the widely recognized Distributed Denial of Service (DDoS) attack. Therefore, it is essential to predict such attacks and their variants at the different planes of SDN to maintain seamless network operations. Apart from network-based and flow-analysis-based solutions for detecting the attacks, machine learning and deep learning based prediction and mitigation approaches have also been explored by researchers and applied at different planes of software defined networking. Consequently, a detailed analysis of DDoS attacks in SDN, together with a review of their learning-based prediction/mitigation strategies, needs to be studied and presented in detail. This paper primarily aims to investigate and analyze DDoS attacks on each plane of SDN and to study and compare machine learning, advanced federated learning, and deep learning approaches for predicting these attacks. Real-world case studies are also explored to complement the analysis. In addition, low-rate DDoS attacks and novel research directions are discussed that can further be utilized by SDN experts and researchers to counter the effects of DDoS attacks on SDN.
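As a concrete example of the kind of machine-learning detection this survey reviews, the sketch below trains a random-forest classifier on simple per-flow statistics and scores a new flow. The features, synthetic labelling rule, and thresholds are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of flow-based ML detection of DDoS traffic (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n_flows = 1000

pkt_rate = rng.exponential(100, n_flows)         # packets per second
avg_pkt_size = rng.normal(500, 150, n_flows)     # bytes
flow_duration = rng.exponential(2.0, n_flows)    # seconds
# Toy labelling rule: a very high packet rate with small packets looks like a flood.
labels = ((pkt_rate > 200) & (avg_pkt_size < 450)).astype(int)

X = np.column_stack([pkt_rate, avg_pkt_size, flow_duration])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

suspect = np.array([[400.0, 300.0, 0.5]])        # one new flow seen at the data plane
print("DDoS probability:", clf.predict_proba(suspect)[0, 1])
```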
{"title":"A comprehensive plane-wise review of DDoS attacks in SDN: Leveraging detection and mitigation through machine learning and deep learning","authors":"Dhruv Kalambe,&nbsp;Divyansh Sharma,&nbsp;Pushkar Kadam,&nbsp;Shivangi Surati","doi":"10.1016/j.jnca.2024.104081","DOIUrl":"10.1016/j.jnca.2024.104081","url":null,"abstract":"<div><div>The traditional architecture of networks in Software Defined Networking (SDN) is divided into three distinct planes to incorporate intelligence into networks. However, this structure has also introduced security threats and challenges across these planes, including the widely recognized Distributed Denial of Service (DDoS) attack. Therefore, it is essential to predict such attacks and their variants at different planes in SDN to maintain seamless network operations. Apart from network based and flow analysis based solutions to detect the attacks; machine learning and deep learning based prediction and mitigation approaches are also explored by the researchers and applied at different planes of software defined networking. Consequently, a detailed analysis of DDoS attacks and a review that explores DDoS attacks in SDN along with their learning based prediction/mitigation strategies are required to be studied and presented in detail. This paper primarily aims to investigate and analyze DDoS attacks on each plane of SDN and to study as well as compare machine learning, advanced federated learning and deep learning approaches to predict these attacks. The real world case studies are also explored to compare the analysis. In addition, low-rate DDoS attacks and novel research directions are discussed that can further be utilized by SDN experts and researchers to confront the effects by DDoS attacks on SDN.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"235 ","pages":"Article 104081"},"PeriodicalIF":7.7,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142825313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MDQ: A QoS-Congestion Aware Deep Reinforcement Learning Approach for Multi-Path Routing in SDN
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-12-09 DOI: 10.1016/j.jnca.2024.104082
Lizeth Patricia Aguirre Sanchez, Yao Shen, Minyi Guo
The challenge of link overutilization in networking persists, prompting the development of load-balancing methods such as multi-path strategies and flow rerouting. However, traditional rule-based heuristics struggle to adapt dynamically to network changes. This leads to complex models and lengthy convergence times, unsuitable for diverse QoS demands, particularly in time-sensitive applications. Existing routing approaches often result in specific types of traffic overloading links or general congestion, prolonged convergence delays, and scalability challenges. To tackle these issues, we propose a QoS-Congestion Aware Deep Reinforcement Learning Approach for Multi-Path Routing in Software-Defined Networking (MDQ). Leveraging Deep Reinforcement Learning, MDQ intelligently selects optimal multi-paths and allocates traffic based on flow needs. We design a multi-objective function using a combination of link and queue metrics to establish an efficient routing policy. Moreover, we integrate a congestion severity index into the learning process and incorporate a traffic classification phase to handle mice-elephant flows, ensuring that diverse class-of-service requirements are adequately addressed. Through an RYU-Docker-based Openflow framework integrating a Live QoS Monitor, DNC Classifier, and Online Routing, results demonstrate a 19%–22% reduction in delay compared to state-of-the-art algorithms, exhibiting robust reliability across diverse scenarios of network dynamics.
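The multi-objective reward idea can be sketched as a score that rewards spare bandwidth, short queues, and low delay, damped by a congestion-severity index, computed per candidate path so the best-scoring path is selected. The weights, normalization, and severity definition below are assumptions, not MDQ's actual formula.

```python
# Minimal sketch of a QoS-congestion aware, multi-objective routing reward.
def routing_reward(link_util, queue_occ, delay_ms, w=(0.4, 0.3, 0.3), max_delay_ms=100.0):
    """Higher is better; link_util and queue_occ are normalised to [0, 1]."""
    delay_norm = min(delay_ms / max_delay_ms, 1.0)
    congestion_severity = max(link_util, queue_occ)      # assumed severity index
    base = w[0] * (1 - link_util) + w[1] * (1 - queue_occ) + w[2] * (1 - delay_norm)
    return base * (1.0 - 0.5 * congestion_severity)      # dampen reward when congested

# Candidate paths reported by a QoS monitor: (utilisation, queue occupancy, delay in ms).
paths = {"p1": (0.30, 0.20, 12.0), "p2": (0.85, 0.70, 35.0), "p3": (0.50, 0.40, 20.0)}
best = max(paths, key=lambda p: routing_reward(*paths[p]))
print({p: round(routing_reward(*m), 3) for p, m in paths.items()}, "-> choose", best)
```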
{"title":"MDQ: A QoS-Congestion Aware Deep Reinforcement Learning Approach for Multi-Path Routing in SDN","authors":"Lizeth Patricia Aguirre Sanchez,&nbsp;Yao Shen,&nbsp;Minyi Guo","doi":"10.1016/j.jnca.2024.104082","DOIUrl":"10.1016/j.jnca.2024.104082","url":null,"abstract":"<div><div>The challenge of link overutilization in networking persists, prompting the development of load-balancing methods such as multi-path strategies and flow rerouting. However, traditional rule-based heuristics struggle to adapt dynamically to network changes. This leads to complex models and lengthy convergence times, unsuitable for diverse QoS demands, particularly in time-sensitive applications. Existing routing approaches often result in specific types of traffic overloading links or general congestion, prolonged convergence delays, and scalability challenges. To tackle these issues, we propose a QoS-Congestion Aware Deep Reinforcement Learning Approach for Multi-Path Routing in Software-Defined Networking (MDQ). Leveraging Deep Reinforcement Learning, MDQ intelligently selects optimal multi-paths and allocates traffic based on flow needs. We design a multi-objective function using a combination of link and queue metrics to establish an efficient routing policy. Moreover, we integrate a congestion severity index into the learning process and incorporate a traffic classification phase to handle mice-elephant flows, ensuring that diverse class-of-service requirements are adequately addressed. Through an RYU-Docker-based Openflow framework integrating a Live QoS Monitor, DNC Classifier, and Online Routing, results demonstrate a 19%–22% reduction in delay compared to state-of-the-art algorithms, exhibiting robust reliability across diverse scenarios of network dynamics.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"235 ","pages":"Article 104082"},"PeriodicalIF":7.7,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142825309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Caching or re-computing: Online cost optimization for running big data tasks in IaaS clouds
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-12-09 DOI: 10.1016/j.jnca.2024.104080
Xiankun Fu, Li Pan, Shijun Liu
High computing power and large storage capacity are necessary for running big data tasks, which leads to high infrastructure costs. Infrastructure-as-a-Service (IaaS) clouds can provide configuration environments and computing resources needed for running big data tasks, while saving users from expensive software and hardware infrastructure investments. Many studies show that the cost of computation can be reduced by caching intermediate results and reusing them instead of repeating computations. However, the storage cost incurred by caching a large number of intermediate results over a long period of time may exceed the cost of computation, ultimately leading to an increase in total cost instead. For making optimal caching decisions, future usage profiles for big data tasks are needed, but it is generally very hard to predict them precisely. In this paper, to address this problem, we propose two practical online algorithms, one deterministic and the other randomized, which can determine whether to cache intermediate results to reduce the total cost of big data tasks without requiring any future information. We prove theoretically that the competitive ratio of the proposed deterministic (randomized) algorithm is min(2 − (1 − η)/δ, 2 − η/β) (resp., e/(e − 1)). Using real-world Wikipedia data as well as synthetic datasets, we verify the effectiveness of our proposed algorithms through a large number of experiments based on the price of Alibaba’s public IaaS cloud products.
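A break-even (ski-rental-style) rule gives the flavour of the online decision: after each use, keep the intermediate result cached until the storage paid since the last request equals the cost of one re-computation, then evict. This toy rule pays at most twice the offline optimum on every request gap; the paper's algorithms and their competitive ratios min(2 − (1 − η)/δ, 2 − η/β) and e/(e − 1) are more refined, and the prices below are made up for illustration.

```python
# Minimal sketch of an online caching vs. re-computing decision (break-even rule).
def total_cost(request_times, recompute_cost, storage_rate):
    """Cache after each use; evict once storage paid since the last request
    reaches the cost of one re-computation (the break-even horizon)."""
    horizon = recompute_cost / storage_rate        # break-even caching window (hours)
    cost = recompute_cost                          # first request: must compute
    for prev, cur in zip(request_times, request_times[1:]):
        gap = cur - prev
        if gap <= horizon:
            cost += storage_rate * gap             # still cached: pay storage only
        else:
            cost += storage_rate * horizon + recompute_cost  # evicted, re-compute
    return cost

# Example: a result requested at these hours; re-computation costs $5.00 and
# keeping it cached costs $0.50/hour (illustrative prices).
times = [0, 2, 3, 20, 21, 60]
print(f"online cost: ${total_cost(times, 5.0, 0.5):.2f}")
```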
{"title":"Caching or re-computing: Online cost optimization for running big data tasks in IaaS clouds","authors":"Xiankun Fu,&nbsp;Li Pan,&nbsp;Shijun Liu","doi":"10.1016/j.jnca.2024.104080","DOIUrl":"10.1016/j.jnca.2024.104080","url":null,"abstract":"<div><div>High computing power and large storage capacity are necessary for running big data tasks, which leads to high infrastructure costs. Infrastructure-as-a-Service (IaaS) clouds can provide configuration environments and computing resources needed for running big data tasks, while saving users from expensive software and hardware infrastructure investments. Many studies show that the cost of computation can be reduced by caching intermediate results and reusing them instead of repeating computations. However, the storage cost incurred by caching a large number of intermediate results over a long period of time may exceed the cost of computation, ultimately leading to an increase in total cost instead. For making optimal caching decisions, future usage profiles for big data tasks are needed, but it is generally very hard to predict them precisely. In this paper, to address this problem, we propose two practical online algorithms, one deterministic and the other randomized, which can determine whether to cache intermediate results to reduce the total cost of big data tasks without requiring any future information. We prove theoretically that the competitive ratio of the proposed deterministic (randomized) algorithm is <span><math><mrow><mi>m</mi><mi>i</mi><mi>n</mi><mrow><mo>(</mo><mn>2</mn><mo>−</mo><mfrac><mrow><mn>1</mn><mo>−</mo><mi>η</mi></mrow><mrow><mi>δ</mi></mrow></mfrac><mo>,</mo><mn>2</mn><mo>−</mo><mfrac><mrow><mi>η</mi></mrow><mrow><mi>β</mi></mrow></mfrac><mo>)</mo></mrow></mrow></math></span> (resp., <span><math><mfrac><mrow><mi>e</mi></mrow><mrow><mi>e</mi><mo>−</mo><mn>1</mn></mrow></mfrac></math></span>). Using real-world Wikipedia data as well as synthetic datasets, we verify the effectiveness of our proposed algorithms through a large number of experiments based on the price of Alibaba’s public IaaS cloud products.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"235 ","pages":"Article 104080"},"PeriodicalIF":7.7,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142825311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Intelligent energy management with IoT framework in smart cities using intelligent analysis: An application of machine learning methods for complex networks and systems
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-12-07 DOI: 10.1016/j.jnca.2024.104089
Maryam Nikpour, Parisa Behvand Yousefi, Hadi Jafarzadeh, Kasra Danesh, Roya Shomali, Saeed Asadi, Ahmad Gholizadeh Lonbar, Mohsen Ahmadi
This study addresses the growing challenges of energy consumption and the depletion of energy resources, particularly in the context of smart buildings. As the demand for energy increases alongside the need for efficient building maintenance, it becomes imperative to explore innovative energy management solutions. We present a review of Internet of Things (IoT)-based frameworks aimed at managing smart-city energy consumption, highlighting the pivotal role of IoT devices in addressing these issues thanks to their compactness and their sensing, measurement, and computing capabilities. Our review methodology involves a thorough analysis of existing literature on IoT architectures and frameworks for intelligent energy management applications. We focus on systems that not only collect and store data but also support intelligent analysis for monitoring, controlling, and enhancing system efficiency. Additionally, we examine the potential for these frameworks to serve as platforms for the development of third-party applications, thereby extending their utility and adaptability. The findings from our review indicate that IoT-based frameworks offer potential to reduce energy consumption and environmental impact in smart buildings. By adopting intelligent mechanisms and solutions, these frameworks facilitate effective energy management, leading to improved system efficiency and sustainability. Considering these findings, we recommend further exploration and adoption of IoT-based wireless sensing systems in smart buildings as a strategic approach to energy management. Our review highlights the importance of incorporating intelligent analysis and enabling the development of third-party applications within the IoT framework to efficiently meet evolving energy demands and maintenance challenges.
{"title":"Intelligent energy management with IoT framework in smart cities using intelligent analysis: An application of machine learning methods for complex networks and systems","authors":"Maryam Nikpour ,&nbsp;Parisa Behvand Yousefi ,&nbsp;Hadi Jafarzadeh ,&nbsp;Kasra Danesh ,&nbsp;Roya Shomali ,&nbsp;Saeed Asadi ,&nbsp;Ahmad Gholizadeh Lonbar ,&nbsp;Mohsen Ahmadi","doi":"10.1016/j.jnca.2024.104089","DOIUrl":"10.1016/j.jnca.2024.104089","url":null,"abstract":"<div><div>This study addresses the growing challenges of energy consumption and the depletion of energy resources, particularly in the context of smart buildings. As the demand for energy increases alongside the need for efficient building maintenance, it becomes imperative to explore innovative energy management solutions. We present a review of Internet of Things (IoT)-based frameworks aimed at managing smart city energy consumption, the pivotal role of IoT devices in addressing these issues due to their compactness, sensing, measurement, and computing capabilities. Our review methodology involves a thorough analysis of existing literature on IoT architectures and frameworks for intelligent energy management applications. We focus on systems that not only collect and store data but also support intelligent analysis for monitoring, controlling, and enhancing system efficiency. Additionally, we examine the potential for these frameworks to serve as platforms for the development of third-party applications, thereby extending their utility and adaptability. The findings from our review indicate that IoT-based frameworks offer potential to reduce energy consumption and environmental impact in smart buildings. By adopting intelligent mechanisms and solutions, these frameworks facilitate effective energy management, leading to improved system efficiency and sustainability. Considering these findings, we recommend further exploration and adoption of IoT-based wireless sensing systems in smart buildings as a strategic approach to energy management. Our review highlights the importance of incorporating intelligent analysis and enabling the development of third-party applications within the IoT framework to efficiently meet evolving energy demands and maintenance challenges.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"235 ","pages":"Article 104089"},"PeriodicalIF":7.7,"publicationDate":"2024-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143135743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A survey on energy efficient medium access control for acoustic wireless communication networks in underwater environments
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-12-04 DOI: 10.1016/j.jnca.2024.104079
Walid K. Hasan, Iftekhar Ahmad, Daryoush Habibi, Quoc Viet Phung, Mohammad Al-Fawa'reh, Kazi Yasin Islam, Ruba Zaheer, Haitham Khaled
Underwater communication plays a crucial role in monitoring the aquatic environment on Earth. Due to their unique characteristics, underwater acoustic channels present unique challenges including lengthy signal transmission delays, limited data transfer bandwidth, variable signal quality, and fluctuating channel conditions. Furthermore, the reliance on battery power for most Underwater Wireless Acoustic Networks (UWAN) devices, coupled with the challenges associated with battery replacement or recharging, intensifies these difficulties. Underwater acoustic communications are heavily constrained by available resources (e.g., very limited bandwidth and limited energy storage). Consequently, the role of the medium access control (MAC) protocol, which distributes available resources among nodes, is critical in maintaining a reliable underwater communication system. This study presents an extensive review of current research in MAC for UWAN. The paper explores the unique challenges and characteristics of UWAN, which are critical for MAC protocol design. Subsequently, a diverse range of energy-efficient MAC techniques are categorized and reviewed. Potential future research avenues in energy-efficient MAC protocols are discussed, with a particular emphasis on the challenges to enable the broader implementation of the Green Internet of Underwater Things (GIoUT).
{"title":"A survey on energy efficient medium access control for acoustic wireless communication networks in underwater environments","authors":"Walid K. Hasan ,&nbsp;Iftekhar Ahmad ,&nbsp;Daryoush Habibi ,&nbsp;Quoc Viet Phung ,&nbsp;Mohammad Al-Fawa'reh ,&nbsp;Kazi Yasin Islam ,&nbsp;Ruba Zaheer ,&nbsp;Haitham Khaled","doi":"10.1016/j.jnca.2024.104079","DOIUrl":"10.1016/j.jnca.2024.104079","url":null,"abstract":"<div><div>Underwater communication plays a crucial role in monitoring the aquatic environment on Earth. Due to their unique characteristics, underwater acoustic channels present unique challenges including lengthy signal transmission delays, limited data transfer bandwidth, variable signal quality, and fluctuating channel conditions. Furthermore, the reliance on battery power for most Underwater Wireless Acoustic Networks (UWAN) devices, coupled with the challenges associated with battery replacement or recharging, intensifies the challenges. Underwater acoustic communications are heavily constrained by available resources (e.g., very limited bandwidth, and limited energy storage). Consequently, the role of medium access control (MAC) protocol which distributes available resources among nodes is critical in maintaining a reliable underwater communication system. This study presents an extensive review of current research in MAC for UWAN. This study presents an extensive review of current research in MAC for UWAN. The paper explores the unique challenges and characteristics of UWAN, which are critical for the MAC protocol design. Subsequently, a diverse range of energy-efficient MAC techniques are categorized and reviewed. Potential future research avenues in energy-efficient MAC protocols are discussed, with a particular emphasis on the challenges to enable the broader implementation of the Green Internet of Underwater Things (GIoUT).</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"235 ","pages":"Article 104079"},"PeriodicalIF":7.7,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142825325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing 5G network slicing with DRL: Balancing eMBB, URLLC, and mMTC with OMA, NOMA, and RSMA
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-11-28 DOI: 10.1016/j.jnca.2024.104068
Silvestre Malta, Pedro Pinto, Manuel Fernández-Veiga
The advent of 5th Generation (5G) networks has introduced the strategy of network slicing as a paradigm shift, enabling the provision of services with distinct Quality of Service (QoS) requirements. The 5th Generation New Radio (5G NR) standard complies with the use cases Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC), and Massive Machine Type Communications (mMTC), which demand a dynamic adaptation of network slicing to meet the diverse traffic needs. This dynamic adaptation presents both a critical challenge and a significant opportunity to improve 5G network efficiency. This paper proposes a Deep Reinforcement Learning (DRL) agent that performs dynamic resource allocation in 5G wireless network slicing according to traffic requirements of the 5G use cases within two scenarios: eMBB with URLLC and eMBB with mMTC. The DRL agent evaluates the performance of different decoding schemes such as Orthogonal Multiple Access (OMA), Non-Orthogonal Multiple Access (NOMA), and Rate Splitting Multiple Access (RSMA) and applies the best decoding scheme in these scenarios under different network conditions. The DRL agent has been tested to maximize the sum rate in scenario eMBB with URLLC and to maximize the number of successfully decoded devices in scenario eMBB with mMTC, both with different combinations of number of devices, power gains and number of allocated frequencies. The results show that the DRL agent dynamically chooses the best decoding scheme and presents an efficiency in maximizing the sum rate and the decoded devices between 84% and 100% for both scenarios evaluated.
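The scheme-selection part of the problem can be sketched as a simple epsilon-greedy learner that tries OMA, NOMA, and RSMA, observes the achieved sum rate, and converges on the best-performing decoding scheme. The per-scheme mean rates below are toy numbers chosen purely for illustration, not results from the paper.

```python
# Minimal epsilon-greedy sketch of learning which decoding scheme to apply.
import numpy as np

rng = np.random.default_rng(4)
schemes = ["OMA", "NOMA", "RSMA"]
toy_mean_rate = {"OMA": 3.0, "NOMA": 3.6, "RSMA": 3.9}   # bits/s/Hz, assumed values

q = np.zeros(len(schemes))        # running estimate of each scheme's sum rate
counts = np.zeros(len(schemes))
epsilon = 0.1

for step in range(2000):
    a = rng.integers(len(schemes)) if rng.random() < epsilon else int(np.argmax(q))
    # Observed sum rate for the chosen scheme under the current channel conditions.
    reward = rng.normal(toy_mean_rate[schemes[a]], 0.5)
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]   # incremental mean update

print({s: round(float(v), 2) for s, v in zip(schemes, q)},
      "-> pick", schemes[int(np.argmax(q))])
```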
{"title":"Optimizing 5G network slicing with DRL: Balancing eMBB, URLLC, and mMTC with OMA, NOMA, and RSMA","authors":"Silvestre Malta ,&nbsp;Pedro Pinto ,&nbsp;Manuel Fernández-Veiga","doi":"10.1016/j.jnca.2024.104068","DOIUrl":"10.1016/j.jnca.2024.104068","url":null,"abstract":"<div><div>The advent of 5th Generation (5G) networks has introduced the strategy of network slicing as a paradigm shift, enabling the provision of services with distinct Quality of Service (QoS) requirements. The 5th Generation New Radio (5G NR) standard complies with the use cases Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC), and Massive Machine Type Communications (mMTC), which demand a dynamic adaptation of network slicing to meet the diverse traffic needs. This dynamic adaptation presents both a critical challenge and a significant opportunity to improve 5G network efficiency. This paper proposes a Deep Reinforcement Learning (DRL) agent that performs dynamic resource allocation in 5G wireless network slicing according to traffic requirements of the 5G use cases within two scenarios: eMBB with URLLC and eMBB with mMTC. The DRL agent evaluates the performance of different decoding schemes such as Orthogonal Multiple Access (OMA), Non-Orthogonal Multiple Access (NOMA), and Rate Splitting Multiple Access (RSMA) and applies the best decoding scheme in these scenarios under different network conditions. The DRL agent has been tested to maximize the sum rate in scenario eMBB with URLLC and to maximize the number of successfully decoded devices in scenario eMBB with mMTC, both with different combinations of number of devices, power gains and number of allocated frequencies. The results show that the DRL agent dynamically chooses the best decoding scheme and presents an efficiency in maximizing the sum rate and the decoded devices between 84% and 100% for both scenarios evaluated.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"234 ","pages":"Article 104068"},"PeriodicalIF":7.7,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142745343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Gwydion: Efficient auto-scaling for complex containerized applications in Kubernetes through Reinforcement Learning
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-11-26 DOI: 10.1016/j.jnca.2024.104067
José Santos, Efstratios Reppas, Tim Wauters, Bruno Volckaert, Filip De Turck
Containers have reshaped application deployment and life-cycle management in recent cloud platforms. The paradigm shift from large monolithic applications to complex graphs of loosely-coupled microservices aims to increase deployment flexibility and operational efficiency. However, efficient allocation and scaling of microservice applications is challenging due to their intricate inter-dependencies. Existing works do not consider microservice dependencies, which could lead to the application’s performance degradation when service demand increases. As dependencies increase, communication between microservices becomes more complex and frequent, leading to slower response times and higher resource consumption, especially during high demand. In addition, performance issues in one microservice can also trigger a ripple effect across dependent services, exacerbating the performance degradation across the entire application. This paper studies the impact of microservice inter-dependencies in auto-scaling by proposing Gwydion, a novel framework that enables different auto-scaling goals through Reinforcement Learning (RL) algorithms. Gwydion has been developed based on the OpenAI Gym library and customized for the popular Kubernetes (K8s) platform to bridge the gap between RL and auto-scaling research by training RL algorithms on real cloud environments for two opposing reward strategies: cost-aware and latency-aware. Gwydion focuses on improving resource usage and reducing the application’s response time by considering microservice inter-dependencies when scaling horizontally. Experiments with microservice benchmark applications, such as Redis Cluster (RC) and Online Boutique (OB), show that RL agents can reduce deployment costs and the application’s response time compared to default scaling mechanisms, achieving up to 50% lower latency while avoiding performance degradation. For RC, cost-aware algorithms can reduce the number of deployed pods (2 to 4), resulting in slightly higher latency (300μs to 6 ms) but lower resource consumption. For OB, all RL algorithms exhibit a notable response time improvement by considering all microservices in the observation space, enabling the sequential triggering of actions across different deployments. This leads to nearly 30% cost savings while maintaining consistently lower latency throughout the experiment. Gwydion aims to advance auto-scaling research in a rapidly evolving dynamic cloud environment.
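The two opposing reward strategies can be sketched as simple functions of the current replica count and the measured tail latency: the cost-aware reward favours fewer pods unless the SLA is violated, while the latency-aware reward favours low latency with a small per-replica penalty. The weights, SLA target, and pod cap are assumptions, not Gwydion's actual configuration.

```python
# Minimal sketch of cost-aware vs. latency-aware rewards for a scaling agent.
def cost_aware_reward(pods, p95_latency_ms, sla_ms=200.0, max_pods=10):
    """Favour fewer replicas, but void the saving if the SLA is violated."""
    saving = 1.0 - pods / max_pods
    return saving if p95_latency_ms <= sla_ms else saving - 1.0

def latency_aware_reward(pods, p95_latency_ms, sla_ms=200.0, max_pods=10):
    """Favour low latency, with a small penalty per extra replica."""
    return (1.0 - min(p95_latency_ms / sla_ms, 2.0)) - 0.05 * pods / max_pods

# Observations from the cluster: (replica count, measured p95 latency in ms).
for pods, lat in [(2, 180.0), (4, 90.0), (8, 60.0)]:
    print(pods, "pods:",
          "cost-aware", round(cost_aware_reward(pods, lat), 3),
          "| latency-aware", round(latency_aware_reward(pods, lat), 3))
```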
{"title":"Gwydion: Efficient auto-scaling for complex containerized applications in Kubernetes through Reinforcement Learning","authors":"José Santos ,&nbsp;Efstratios Reppas ,&nbsp;Tim Wauters ,&nbsp;Bruno Volckaert ,&nbsp;Filip De Turck","doi":"10.1016/j.jnca.2024.104067","DOIUrl":"10.1016/j.jnca.2024.104067","url":null,"abstract":"<div><div>Containers have reshaped application deployment and life-cycle management in recent cloud platforms. The paradigm shift from large monolithic applications to complex graphs of loosely-coupled microservices aims to increase deployment flexibility and operational efficiency. However, efficient allocation and scaling of microservice applications is challenging due to their intricate inter-dependencies. Existing works do not consider microservice dependencies, which could lead to the application’s performance degradation when service demand increases. As dependencies increase, communication between microservices becomes more complex and frequent, leading to slower response times and higher resource consumption, especially during high demand. In addition, performance issues in one microservice can also trigger a ripple effect across dependent services, exacerbating the performance degradation across the entire application. This paper studies the impact of microservice inter-dependencies in auto-scaling by proposing <em>Gwydion</em>, a novel framework that enables different auto-scaling goals through Reinforcement Learning (RL) algorithms. <em>Gwydion</em> has been developed based on the OpenAI Gym library and customized for the popular Kubernetes (K8s) platform to bridge the gap between RL and auto-scaling research by training RL algorithms on real cloud environments for two opposing reward strategies: cost-aware and latency-aware. <em>Gwydion</em> focuses on improving resource usage and reducing the application’s response time by considering microservice inter-dependencies when scaling horizontally. Experiments with microservice benchmark applications, such as Redis Cluster (RC) and Online Boutique (OB), show that RL agents can reduce deployment costs and the application’s response time compared to default scaling mechanisms, achieving up to 50% lower latency while avoiding performance degradation. For RC, cost-aware algorithms can reduce the number of deployed pods (2 to 4), resulting in slightly higher latency (<span><math><mrow><mn>300</mn><mspace></mspace><mi>μ</mi><mi>s</mi></mrow></math></span> to 6 ms) but lower resource consumption. For OB, all RL algorithms exhibit a notable response time improvement by considering all microservices in the observation space, enabling the sequential triggering of actions across different deployments. This leads to nearly 30% cost savings while maintaining consistently lower latency throughout the experiment. 
Gwydion aims to advance auto-scaling research in a rapidly evolving dynamic cloud environment.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"234 ","pages":"Article 104067"},"PeriodicalIF":7.7,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142745345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0