The study of how to integrate Complex Networks (CN) within the Internet of Things (IoT) ecosystem has advanced significantly because of the field's recent expansion. CNs can tackle the biggest IoT issues by providing a common conceptual framework that encompasses the IoT scope. To this end, the Social Internet of Things (SIoT) perspective is introduced. In this study, a dynamic community-driven, recommendation-oriented friendship prediction and selection strategy utilizing Deep Reinforcement Learning (DRL) is proposed to deal with the key challenges in the SIoT friendship selection component. To increase the efficiency of exploration, we incorporate a curiosity-motivated approach that creates an intrinsic reward signal encouraging the DRL agent to interact efficiently with its surroundings. We also introduce a novel method for Dynamic Community Detection (DCD) on the SIoT to carry out community-oriented object recommendations. Lastly, we complete experimental verification using real-world datasets, and the findings demonstrate that, compared with the related baselines, the proposed approach improves both the accuracy of the SIoT friendship selection task and the effectiveness of training.
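The curiosity-driven exploration described above can be illustrated with a small forward-model sketch: the agent learns to predict the next state, and its prediction error becomes an intrinsic bonus added to the extrinsic reward, so unfamiliar transitions are rewarded. This is a minimal NumPy illustration of the general technique, not the authors' implementation; the linear model, class name, and scaling constants are assumptions.

```python
import numpy as np

class ForwardModelCuriosity:
    """Hypothetical curiosity module: intrinsic reward = forward-model error."""

    def __init__(self, state_dim, action_dim, lr=1e-2, eta=0.1):
        rng = np.random.default_rng(0)
        self.W = rng.normal(scale=0.1, size=(state_dim, state_dim + action_dim))
        self.lr = lr    # learning rate for the forward model
        self.eta = eta  # scale of the intrinsic bonus

    def intrinsic_reward(self, state, action, next_state):
        x = np.concatenate([state, action])
        pred = self.W @ x                 # predicted next state
        err = next_state - pred
        # online gradient step: the model improves where it is often wrong,
        # so familiar transitions yield a shrinking bonus over time
        self.W += self.lr * np.outer(err, x)
        return self.eta * 0.5 * float(err @ err)

curiosity = ForwardModelCuriosity(state_dim=8, action_dim=3)
s, a, s_next = np.ones(8), np.array([1.0, 0.0, 0.0]), np.ones(8) * 1.1
total_reward = 0.5 + curiosity.intrinsic_reward(s, a, s_next)  # extrinsic + intrinsic
```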
{"title":"A novel community-driven recommendation-based approach to predict and select friendships on the social IoT utilizing deep reinforcement learning","authors":"Babak Farhadi , Parvaneh Asghari , Ebrahim Mahdipour , Hamid Haj Seyyed Javadi","doi":"10.1016/j.jnca.2024.104092","DOIUrl":"10.1016/j.jnca.2024.104092","url":null,"abstract":"<div><div>The study of how to integrate Complex Networks (CN) within the Internet of Things (IoT) ecosystem has advanced significantly because of the field's recent expansion. CNs can tackle the biggest IoT issues by providing a common conceptual framework that encompasses the IoT scope. To this end, the Social Internet of Things (SIoT) perspective is introduced. In this study, a dynamic community-driven recommendation-oriented connection prediction and choice strategy utilizing Deep Reinforcement Learning (DRL) is proposed to deal with the key challenges located in the SIoT friendship selection component. To increase the efficiency of exploration, we incorporate an approach motivated by curiosity to create an intrinsic bonus signal that encourages the DRL agent to efficiently interact with its surroundings. Also, a novel method for Dynamic Community Detection (DCD) on SIoT to carry out community-oriented object recommendations is introduced. Lastly, we complete the experimental verifications utilizing datasets from the real world, and the experimental findings demonstrate that, in comparison to the related baselines, the approach presented here can enhance the accuracy of the social IoT friendship selection task and the effectiveness of training.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"235 ","pages":"Article 104092"},"PeriodicalIF":7.7,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142873860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile ad hoc networks (MANETs) are beneficial in a wide range of sectors because of their rapid network creation capabilities. The network can function properly only if mobile nodes collaborate and have mutual trust. Routing becomes more difficult, and vulnerabilities are exposed more quickly, because of flexible network features and the frequent link failures induced by node movement. This paper proposes a method for evaluating nodes using direct trust values, indirect trust values, and comprehensive trust values. Based on the evaluated trust values, the network's malicious and non-malicious nodes are identified using the Improved Extreme Gradient Boosting (XGBoost) algorithm. After the malicious nodes are detected, the cluster head is chosen from the remaining nodes to ensure effective data transmission. Finally, the optimal routes are chosen using a novel Enhanced Cat Swarm-assisted Optimized Link State Routing Protocol (ECSO OLSRP), in which the Cat Swarm Optimization (CSO) algorithm determines the ideal route based on characteristics such as node stability degree and connection stability degree. Because the proposed technique provides secure data transmission, node path setup, and node efficiency evaluation, it can maintain network performance even in the presence of several hostile nodes. The proposed trust-based secure routing technique achieves a packet delivery ratio of 0.47, an end-to-end delay of 0.06, a network throughput of 1852.22, and a control overhead of 7.41.
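The trust-evaluation step lends itself to a short sketch. The code below, with assumed formulas and weights (a beta-reputation style direct trust, recommendation-weighted indirect trust, and a weighted blend for the comprehensive value), illustrates how the three trust values could be combined before the XGBoost classification stage; it is not the paper's exact model.

```python
import numpy as np

def direct_trust(successes, failures):
    # beta-reputation style estimate from a node's observed forwarding behaviour
    return (successes + 1.0) / (successes + failures + 2.0)

def indirect_trust(neighbor_opinions, neighbor_trust):
    # neighbor recommendations, weighted by how much we trust each recommender
    w = np.asarray(neighbor_trust)
    return float(np.dot(neighbor_opinions, w) / (w.sum() + 1e-9))

def comprehensive_trust(direct, indirect, alpha=0.7):
    # assumed weighting between own observations and recommendations
    return alpha * direct + (1.0 - alpha) * indirect

d = direct_trust(successes=18, failures=2)
i = indirect_trust([0.8, 0.6, 0.9], [0.9, 0.5, 0.7])
t = comprehensive_trust(d, i)
# per-node trust features like these would then be fed to the improved
# XGBoost classifier to label nodes malicious / non-malicious
print(f"direct={d:.2f} indirect={i:.2f} comprehensive={t:.2f}")
```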
{"title":"A secure routing and malicious node detection in mobile Ad hoc network using trust value evaluation with improved XGBoost mechanism","authors":"Geetika Dhand , Meena Rao , Parul Chaudhary , Kavita Sheoran","doi":"10.1016/j.jnca.2024.104093","DOIUrl":"10.1016/j.jnca.2024.104093","url":null,"abstract":"<div><div>Mobile ad hoc networks (MANETs) are beneficial in a wide range of sectors because of their rapid network creation capabilities. If mobile nodes collaborate and have mutual trust, the network can function properly. Routing becomes more difficult, and vulnerabilities are exposed more quickly as a result of flexible network features and frequent relationship flaws induced by node movement. This paper proposes a method for evaluating trust nodes using direct trust values, indirect trust values, and comprehensive trust values. Then, evaluating the trust value, the network's malicious and non-malicious nodes are identified using the Improved Extreme Gradient Boosting (XGBoost) algorithm. From the detected malicious nodes, the cluster head is chosen to ensure effective data transmission. Finally, the optimal routes are chosen using a novel Enhanced Cat Swarm-assisted Optimized Link State Routing Protocol (ECSO OLSRP). Furthermore, the Cat Swarm Optimization (CSO) algorithm determines the ideal route path based on characteristics such as node stability degree and connection stability degree. Because the proposed technique provides secure data transmission, node path setup, and node efficiency evaluation, it can maintain network performance even in the presence of several hostile nodes. The performance of the proposed trust-based approach security routing technique in terms of packet delivery ratio of nodes (0.47), end-to-end delay time of nodes (0.06), network throughput of nodes (1852.22), and control overhead of nodes (7.41).</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"235 ","pages":"Article 104093"},"PeriodicalIF":7.7,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142873859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-09 | DOI: 10.1016/j.jnca.2024.104084
Hongyan Ran, Xiaohong Li, Zhichang Zhang
Recently, a large body of research has made significant progress in improving the performance of rumor detection. However, identifying rumors in an unseen domain remains an elusive challenge. To address this issue, we propose an unsupervised cross-domain rumor detection model that enhances contrastive learning and cross-attention with label-aware learning to alleviate domain shift. The model performs cross-domain feature alignment and enforces target samples to align with the corresponding prototypes of a given source domain. Moreover, we use a cross-attention mechanism on pairs of source and target data with the same labels to learn domain-invariant representations, because the samples in a domain pair tend to express similar semantic patterns, especially in people's attitudes (e.g., supporting or denying) towards the same category of rumors. In addition, we add a label-aware learning module as an enhancement component that learns the correlations between labels and instances during training and generates a better label distribution to replace the original one-hot label vector in guiding model training. At the same time, we use the label representation learned by this module to guide the production of pseudo-labels for the target samples. We conduct experiments on four groups of cross-domain datasets and show that our proposed model achieves state-of-the-art performance.
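A compact sketch can make the label-aware idea concrete: the one-hot label is replaced by a distribution that reserves some probability mass for semantically similar labels, and the same label representations drive pseudo-labelling of target samples. Everything below (embeddings, temperature, smoothing weight) is an illustrative assumption, not the authors' code.

```python
import numpy as np

def label_distribution(instance_emb, label_embs, gold, tau=0.5, eps=0.2):
    # similarity of the instance to each learned label representation
    sims = label_embs @ instance_emb
    soft = np.exp(sims / tau) / np.exp(sims / tau).sum()
    # keep most mass on the gold label, move eps of it to similar labels;
    # this softened distribution replaces the one-hot training target
    return (1.0 - eps) * np.eye(len(label_embs))[gold] + eps * soft

def pseudo_label(instance_emb, label_embs):
    # unlabeled target samples take the nearest label representation
    return int(np.argmax(label_embs @ instance_emb))

labels = np.array([[1.0, 0.0], [0.2, 0.9]])   # e.g. non-rumor / rumor embeddings
x = np.array([0.3, 0.8])                      # a sample's representation
print(label_distribution(x, labels, gold=1), pseudo_label(x, labels))
```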
{"title":"Label-aware learning to enhance unsupervised cross-domain rumor detection","authors":"Hongyan Ran, Xiaohong Li, Zhichang Zhang","doi":"10.1016/j.jnca.2024.104084","DOIUrl":"10.1016/j.jnca.2024.104084","url":null,"abstract":"<div><div>Recently, massive research has achieved significant development in improving the performance of rumor detection. However, identifying rumors in an invisible domain is still an elusive challenge. To address this issue, we propose an unsupervised cross-domain rumor detection model that enhances contrastive learning and cross-attention by label-aware learning to alleviate the domain shift. The model performs cross-domain feature alignment and enforces target samples to align with the corresponding prototypes of a given source domain. Moreover, we use a cross-attention mechanism on a pair of source data and target data with the same labels to learn domain-invariant representations. Because the samples in a domain pair tend to express similar semantic patterns, especially on the people’s attitudes (e.g., supporting or denying) towards the same category of rumors. In addition, we add a label-aware learning module as an enhancement component to learn the correlations between labels and instances during training and generate a better label distribution to replace the original one-hot label vector to guide the model training. At the same time, we use the label representation learned by the label learning module to guide the production of pseudo-label for the target samples. We conduct experiments on four groups of cross-domain datasets and show that our proposed model achieves state-of-the-art performance.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"235 ","pages":"Article 104084"},"PeriodicalIF":7.7,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142825314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The traditional architecture of Software Defined Networking (SDN) divides the network into three distinct planes to incorporate intelligence into networks. However, this structure has also introduced security threats and challenges across these planes, including the widely recognized Distributed Denial of Service (DDoS) attack. It is therefore essential to predict such attacks and their variants at the different planes of SDN to maintain seamless network operations. Apart from network-based and flow-analysis-based solutions for detecting the attacks, machine learning and deep learning based prediction and mitigation approaches have also been explored by researchers and applied at the different planes of software defined networking. Consequently, a detailed analysis of DDoS attacks in SDN, together with their learning-based prediction/mitigation strategies, needs to be studied and presented in detail. This paper primarily aims to investigate and analyze DDoS attacks on each plane of SDN and to study and compare machine learning, advanced federated learning, and deep learning approaches for predicting these attacks. Real-world case studies are also explored to support the analysis. In addition, low-rate DDoS attacks and novel research directions are discussed that can be utilized by SDN experts and researchers to confront the effects of DDoS attacks on SDN.
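As a concrete instance of the ML-based detection pattern such reviews cover, the sketch below trains a classifier on per-flow statistics of the kind an SDN controller can export. The feature set, the synthetic data, and the choice of a random forest are illustrative assumptions only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# assumed per-flow features: [packets/s, bytes/s, flow duration, distinct dst ports]
benign = rng.normal([50, 4e4, 10, 3], [20, 1e4, 5, 2], size=(500, 4))
ddos   = rng.normal([900, 6e5, 1, 40], [200, 1e5, 0.5, 10], size=(500, 4))
X = np.vstack([benign, ddos])
y = np.array([0] * 500 + [1] * 500)   # 0 = benign, 1 = DDoS

# a flow-statistics classifier such as the controller-side detectors surveyed
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
new_flow = [[850, 5.5e5, 0.8, 35]]    # stats of an incoming flow
print("DDoS" if clf.predict(new_flow)[0] else "benign")
```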
{"title":"A comprehensive plane-wise review of DDoS attacks in SDN: Leveraging detection and mitigation through machine learning and deep learning","authors":"Dhruv Kalambe, Divyansh Sharma, Pushkar Kadam, Shivangi Surati","doi":"10.1016/j.jnca.2024.104081","DOIUrl":"10.1016/j.jnca.2024.104081","url":null,"abstract":"<div><div>The traditional architecture of networks in Software Defined Networking (SDN) is divided into three distinct planes to incorporate intelligence into networks. However, this structure has also introduced security threats and challenges across these planes, including the widely recognized Distributed Denial of Service (DDoS) attack. Therefore, it is essential to predict such attacks and their variants at different planes in SDN to maintain seamless network operations. Apart from network based and flow analysis based solutions to detect the attacks; machine learning and deep learning based prediction and mitigation approaches are also explored by the researchers and applied at different planes of software defined networking. Consequently, a detailed analysis of DDoS attacks and a review that explores DDoS attacks in SDN along with their learning based prediction/mitigation strategies are required to be studied and presented in detail. This paper primarily aims to investigate and analyze DDoS attacks on each plane of SDN and to study as well as compare machine learning, advanced federated learning and deep learning approaches to predict these attacks. The real world case studies are also explored to compare the analysis. In addition, low-rate DDoS attacks and novel research directions are discussed that can further be utilized by SDN experts and researchers to confront the effects by DDoS attacks on SDN.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"235 ","pages":"Article 104081"},"PeriodicalIF":7.7,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142825313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-09 | DOI: 10.1016/j.jnca.2024.104082
Lizeth Patricia Aguirre Sanchez, Yao Shen, Minyi Guo
The challenge of link overutilization in networking persists, prompting the development of load-balancing methods such as multi-path strategies and flow rerouting. However, traditional rule-based heuristics struggle to adapt dynamically to network changes, which leads to complex models and lengthy convergence times that are unsuitable for diverse QoS demands, particularly in time-sensitive applications. Existing routing approaches often result in specific types of traffic overloading links or in general congestion, prolonged convergence delays, and scalability challenges. To tackle these issues, we propose MDQ, a QoS-Congestion Aware Deep Reinforcement Learning Approach for Multi-Path Routing in Software-Defined Networking. Leveraging Deep Reinforcement Learning, MDQ intelligently selects optimal multi-paths and allocates traffic based on flow needs. We design a multi-objective function using a combination of link and queue metrics to establish an efficient routing policy. Moreover, we integrate a congestion severity index into the learning process and incorporate a traffic classification phase to handle mice-elephant flows, ensuring that diverse class-of-service requirements are adequately addressed. Through a RYU-Docker-based OpenFlow framework integrating a Live QoS Monitor, DNC Classifier, and Online Routing, results demonstrate a 19%–22% reduction in delay compared to state-of-the-art algorithms, exhibiting robust reliability across diverse network-dynamics scenarios.
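The multi-objective function can be sketched as a weighted penalty over normalized link and queue metrics that the agent turns into a per-step reward for each candidate path. The weights and the exact metric set below are assumptions for illustration; the paper's formulation may differ.

```python
def reward(link_util, queue_occupancy, delay, loss, w=(0.3, 0.2, 0.35, 0.15)):
    # all inputs normalized to [0, 1]; lower is better for each term,
    # so the reward is 1 minus the weighted congestion/QoS penalty
    penalty = (w[0] * link_util + w[1] * queue_occupancy
               + w[2] * delay + w[3] * loss)
    return 1.0 - penalty

# candidate paths scored per step; the DRL agent favors the less congested one
paths = {
    "p1": reward(link_util=0.9, queue_occupancy=0.7, delay=0.6, loss=0.1),
    "p2": reward(link_util=0.4, queue_occupancy=0.2, delay=0.3, loss=0.0),
}
best = max(paths, key=paths.get)   # -> "p2"
```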
{"title":"MDQ: A QoS-Congestion Aware Deep Reinforcement Learning Approach for Multi-Path Routing in SDN","authors":"Lizeth Patricia Aguirre Sanchez, Yao Shen, Minyi Guo","doi":"10.1016/j.jnca.2024.104082","DOIUrl":"10.1016/j.jnca.2024.104082","url":null,"abstract":"<div><div>The challenge of link overutilization in networking persists, prompting the development of load-balancing methods such as multi-path strategies and flow rerouting. However, traditional rule-based heuristics struggle to adapt dynamically to network changes. This leads to complex models and lengthy convergence times, unsuitable for diverse QoS demands, particularly in time-sensitive applications. Existing routing approaches often result in specific types of traffic overloading links or general congestion, prolonged convergence delays, and scalability challenges. To tackle these issues, we propose a QoS-Congestion Aware Deep Reinforcement Learning Approach for Multi-Path Routing in Software-Defined Networking (MDQ). Leveraging Deep Reinforcement Learning, MDQ intelligently selects optimal multi-paths and allocates traffic based on flow needs. We design a multi-objective function using a combination of link and queue metrics to establish an efficient routing policy. Moreover, we integrate a congestion severity index into the learning process and incorporate a traffic classification phase to handle mice-elephant flows, ensuring that diverse class-of-service requirements are adequately addressed. Through an RYU-Docker-based Openflow framework integrating a Live QoS Monitor, DNC Classifier, and Online Routing, results demonstrate a 19%–22% reduction in delay compared to state-of-the-art algorithms, exhibiting robust reliability across diverse scenarios of network dynamics.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"235 ","pages":"Article 104082"},"PeriodicalIF":7.7,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142825309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-09 | DOI: 10.1016/j.jnca.2024.104080
Xiankun Fu, Li Pan, Shijun Liu
High computing power and large storage capacity are necessary for running big data tasks, which leads to high infrastructure costs. Infrastructure-as-a-Service (IaaS) clouds can provide the configuration environments and computing resources needed for running big data tasks, while saving users from expensive software and hardware infrastructure investments. Many studies show that the cost of computation can be reduced by caching intermediate results and reusing them instead of repeating computations. However, the storage cost incurred by caching a large number of intermediate results over a long period of time may exceed the cost of computation, ultimately increasing the total cost instead. Making optimal caching decisions requires future usage profiles for big data tasks, which are generally very hard to predict precisely. In this paper, to address this problem, we propose two practical online algorithms, one deterministic and the other randomized, which can determine whether to cache intermediate results to reduce the total cost of big data tasks without requiring any future information. We prove theoretically that the competitive ratio of the proposed deterministic (randomized) algorithm is min(2 − (1−η)/δ, 2 − η/β) (resp., e/(e−1)). Using real-world Wikipedia data as well as synthetic datasets, we verify the effectiveness of our proposed algorithms through a large number of experiments based on the price of Alibaba's public IaaS cloud products.
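The intuition behind such online algorithms is the classic break-even (ski-rental) rule: keep recomputing until the accumulated recomputation cost would have paid for caching, then cache. The sketch below shows that plain rule, which is 2-competitive; the paper's algorithms refine this bound (hence the η, δ, β terms above), and the cost model here is an assumption.

```python
def total_cost(requests, recompute_cost=5.0, cache_cost=20.0):
    """Serve `requests` accesses to one intermediate result online.

    recompute_cost: cost of re-deriving the result on demand.
    cache_cost: total cost of storing the result for the task's lifetime.
    """
    spent, cached = 0.0, False
    for _ in range(requests):
        if cached:
            continue                   # cached results are free to reuse
        spent += recompute_cost        # recompute on demand
        if spent >= cache_cost:        # break-even point reached: cache now
            spent += cache_cost
            cached = True
    return spent

# 10 requests -> 4 recomputations (20.0) + caching (20.0) = 40.0, at most
# ~2x what caching from the start would have cost: the classic 2-competitive
# bound that the paper's deterministic algorithm improves on.
print(total_cost(10))
```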
{"title":"Caching or re-computing: Online cost optimization for running big data tasks in IaaS clouds","authors":"Xiankun Fu, Li Pan, Shijun Liu","doi":"10.1016/j.jnca.2024.104080","DOIUrl":"10.1016/j.jnca.2024.104080","url":null,"abstract":"<div><div>High computing power and large storage capacity are necessary for running big data tasks, which leads to high infrastructure costs. Infrastructure-as-a-Service (IaaS) clouds can provide configuration environments and computing resources needed for running big data tasks, while saving users from expensive software and hardware infrastructure investments. Many studies show that the cost of computation can be reduced by caching intermediate results and reusing them instead of repeating computations. However, the storage cost incurred by caching a large number of intermediate results over a long period of time may exceed the cost of computation, ultimately leading to an increase in total cost instead. For making optimal caching decisions, future usage profiles for big data tasks are needed, but it is generally very hard to predict them precisely. In this paper, to address this problem, we propose two practical online algorithms, one deterministic and the other randomized, which can determine whether to cache intermediate results to reduce the total cost of big data tasks without requiring any future information. We prove theoretically that the competitive ratio of the proposed deterministic (randomized) algorithm is <span><math><mrow><mi>m</mi><mi>i</mi><mi>n</mi><mrow><mo>(</mo><mn>2</mn><mo>−</mo><mfrac><mrow><mn>1</mn><mo>−</mo><mi>η</mi></mrow><mrow><mi>δ</mi></mrow></mfrac><mo>,</mo><mn>2</mn><mo>−</mo><mfrac><mrow><mi>η</mi></mrow><mrow><mi>β</mi></mrow></mfrac><mo>)</mo></mrow></mrow></math></span> (resp., <span><math><mfrac><mrow><mi>e</mi></mrow><mrow><mi>e</mi><mo>−</mo><mn>1</mn></mrow></mfrac></math></span>). Using real-world Wikipedia data as well as synthetic datasets, we verify the effectiveness of our proposed algorithms through a large number of experiments based on the price of Alibaba’s public IaaS cloud products.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"235 ","pages":"Article 104080"},"PeriodicalIF":7.7,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142825311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study addresses the growing challenges of energy consumption and the depletion of energy resources, particularly in the context of smart buildings. As the demand for energy increases alongside the need for efficient building maintenance, it becomes imperative to explore innovative energy management solutions. We present a review of Internet of Things (IoT)-based frameworks aimed at managing smart city energy consumption, highlighting the pivotal role IoT devices play in addressing these issues thanks to their compactness and their sensing, measurement, and computing capabilities. Our review methodology involves a thorough analysis of existing literature on IoT architectures and frameworks for intelligent energy management applications. We focus on systems that not only collect and store data but also support intelligent analysis for monitoring, controlling, and enhancing system efficiency. Additionally, we examine the potential for these frameworks to serve as platforms for the development of third-party applications, thereby extending their utility and adaptability. The findings from our review indicate that IoT-based frameworks offer the potential to reduce energy consumption and environmental impact in smart buildings. By adopting intelligent mechanisms and solutions, these frameworks facilitate effective energy management, leading to improved system efficiency and sustainability. Considering these findings, we recommend further exploration and adoption of IoT-based wireless sensing systems in smart buildings as a strategic approach to energy management. Our review highlights the importance of incorporating intelligent analysis and enabling the development of third-party applications within the IoT framework to efficiently meet evolving energy demands and maintenance challenges.
{"title":"Intelligent energy management with IoT framework in smart cities using intelligent analysis: An application of machine learning methods for complex networks and systems","authors":"Maryam Nikpour , Parisa Behvand Yousefi , Hadi Jafarzadeh , Kasra Danesh , Roya Shomali , Saeed Asadi , Ahmad Gholizadeh Lonbar , Mohsen Ahmadi","doi":"10.1016/j.jnca.2024.104089","DOIUrl":"10.1016/j.jnca.2024.104089","url":null,"abstract":"<div><div>This study addresses the growing challenges of energy consumption and the depletion of energy resources, particularly in the context of smart buildings. As the demand for energy increases alongside the need for efficient building maintenance, it becomes imperative to explore innovative energy management solutions. We present a review of Internet of Things (IoT)-based frameworks aimed at managing smart city energy consumption, the pivotal role of IoT devices in addressing these issues due to their compactness, sensing, measurement, and computing capabilities. Our review methodology involves a thorough analysis of existing literature on IoT architectures and frameworks for intelligent energy management applications. We focus on systems that not only collect and store data but also support intelligent analysis for monitoring, controlling, and enhancing system efficiency. Additionally, we examine the potential for these frameworks to serve as platforms for the development of third-party applications, thereby extending their utility and adaptability. The findings from our review indicate that IoT-based frameworks offer potential to reduce energy consumption and environmental impact in smart buildings. By adopting intelligent mechanisms and solutions, these frameworks facilitate effective energy management, leading to improved system efficiency and sustainability. Considering these findings, we recommend further exploration and adoption of IoT-based wireless sensing systems in smart buildings as a strategic approach to energy management. Our review highlights the importance of incorporating intelligent analysis and enabling the development of third-party applications within the IoT framework to efficiently meet evolving energy demands and maintenance challenges.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"235 ","pages":"Article 104089"},"PeriodicalIF":7.7,"publicationDate":"2024-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143135743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-04 | DOI: 10.1016/j.jnca.2024.104079
Walid K. Hasan , Iftekhar Ahmad , Daryoush Habibi , Quoc Viet Phung , Mohammad Al-Fawa'reh , Kazi Yasin Islam , Ruba Zaheer , Haitham Khaled
Underwater communication plays a crucial role in monitoring the aquatic environment on Earth. Underwater acoustic channels present unique challenges, including lengthy signal transmission delays, limited data transfer bandwidth, variable signal quality, and fluctuating channel conditions. Furthermore, the reliance on battery power for most Underwater Wireless Acoustic Network (UWAN) devices, coupled with the difficulty of battery replacement or recharging, intensifies these challenges. Underwater acoustic communications are heavily constrained by available resources (e.g., very limited bandwidth and limited energy storage). Consequently, the medium access control (MAC) protocol, which distributes the available resources among nodes, is critical to maintaining a reliable underwater communication system. This study presents an extensive review of current research in MAC for UWAN. The paper explores the unique challenges and characteristics of UWAN that are critical for MAC protocol design. Subsequently, a diverse range of energy-efficient MAC techniques is categorized and reviewed. Potential future research avenues in energy-efficient MAC protocols are discussed, with a particular emphasis on the challenges of enabling the broader implementation of the Green Internet of Underwater Things (GIoUT).
{"title":"A survey on energy efficient medium access control for acoustic wireless communication networks in underwater environments","authors":"Walid K. Hasan , Iftekhar Ahmad , Daryoush Habibi , Quoc Viet Phung , Mohammad Al-Fawa'reh , Kazi Yasin Islam , Ruba Zaheer , Haitham Khaled","doi":"10.1016/j.jnca.2024.104079","DOIUrl":"10.1016/j.jnca.2024.104079","url":null,"abstract":"<div><div>Underwater communication plays a crucial role in monitoring the aquatic environment on Earth. Due to their unique characteristics, underwater acoustic channels present unique challenges including lengthy signal transmission delays, limited data transfer bandwidth, variable signal quality, and fluctuating channel conditions. Furthermore, the reliance on battery power for most Underwater Wireless Acoustic Networks (UWAN) devices, coupled with the challenges associated with battery replacement or recharging, intensifies the challenges. Underwater acoustic communications are heavily constrained by available resources (e.g., very limited bandwidth, and limited energy storage). Consequently, the role of medium access control (MAC) protocol which distributes available resources among nodes is critical in maintaining a reliable underwater communication system. This study presents an extensive review of current research in MAC for UWAN. This study presents an extensive review of current research in MAC for UWAN. The paper explores the unique challenges and characteristics of UWAN, which are critical for the MAC protocol design. Subsequently, a diverse range of energy-efficient MAC techniques are categorized and reviewed. Potential future research avenues in energy-efficient MAC protocols are discussed, with a particular emphasis on the challenges to enable the broader implementation of the Green Internet of Underwater Things (GIoUT).</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"235 ","pages":"Article 104079"},"PeriodicalIF":7.7,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142825325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-28 | DOI: 10.1016/j.jnca.2024.104068
Silvestre Malta , Pedro Pinto , Manuel Fernández-Veiga
The advent of 5th Generation (5G) networks has introduced network slicing as a paradigm shift, enabling the provision of services with distinct Quality of Service (QoS) requirements. The 5th Generation New Radio (5G NR) standard supports the use cases Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC), and Massive Machine Type Communications (mMTC), which demand a dynamic adaptation of network slicing to meet diverse traffic needs. This dynamic adaptation presents both a critical challenge and a significant opportunity to improve 5G network efficiency. This paper proposes a Deep Reinforcement Learning (DRL) agent that performs dynamic resource allocation in 5G wireless network slicing according to the traffic requirements of the 5G use cases within two scenarios: eMBB with URLLC and eMBB with mMTC. The DRL agent evaluates the performance of different decoding schemes, such as Orthogonal Multiple Access (OMA), Non-Orthogonal Multiple Access (NOMA), and Rate Splitting Multiple Access (RSMA), and applies the best decoding scheme in these scenarios under different network conditions. The DRL agent has been tested to maximize the sum rate in the eMBB-with-URLLC scenario and to maximize the number of successfully decoded devices in the eMBB-with-mMTC scenario, both with different combinations of numbers of devices, power gains, and numbers of allocated frequencies. The results show that the DRL agent dynamically chooses the best decoding scheme and achieves an efficiency between 84% and 100% in maximizing the sum rate and the number of decoded devices in the two scenarios evaluated.
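A toy two-user downlink example shows why the choice of decoding scheme changes the sum rate the agent optimizes: under NOMA, the strong user cancels the weak user's signal via successive interference cancellation (SIC) instead of splitting the spectrum as in OMA. The power split, channel gains, and noise level below are illustrative assumptions.

```python
import numpy as np

def oma_sum_rate(p, g1, g2, n0=1.0):
    # each user gets half the bandwidth with full power on its share
    return 0.5 * np.log2(1 + p * g1 / n0) + 0.5 * np.log2(1 + p * g2 / n0)

def noma_sum_rate(p, g1, g2, a=0.8, n0=1.0):
    # g2 > g1: the weak user decodes while treating the strong user's signal
    # as noise; the strong user removes the weak user's signal via SIC first
    r_weak = np.log2(1 + a * p * g1 / ((1 - a) * p * g1 + n0))
    r_strong = np.log2(1 + (1 - a) * p * g2 / n0)
    return r_weak + r_strong

p, g_weak, g_strong = 10.0, 0.3, 2.0
print(f"OMA : {oma_sum_rate(p, g_weak, g_strong):.2f} bit/s/Hz")   # ~3.20
print(f"NOMA: {noma_sum_rate(p, g_weak, g_strong):.2f} bit/s/Hz")  # ~3.64
```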
{"title":"Optimizing 5G network slicing with DRL: Balancing eMBB, URLLC, and mMTC with OMA, NOMA, and RSMA","authors":"Silvestre Malta , Pedro Pinto , Manuel Fernández-Veiga","doi":"10.1016/j.jnca.2024.104068","DOIUrl":"10.1016/j.jnca.2024.104068","url":null,"abstract":"<div><div>The advent of 5th Generation (5G) networks has introduced the strategy of network slicing as a paradigm shift, enabling the provision of services with distinct Quality of Service (QoS) requirements. The 5th Generation New Radio (5G NR) standard complies with the use cases Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC), and Massive Machine Type Communications (mMTC), which demand a dynamic adaptation of network slicing to meet the diverse traffic needs. This dynamic adaptation presents both a critical challenge and a significant opportunity to improve 5G network efficiency. This paper proposes a Deep Reinforcement Learning (DRL) agent that performs dynamic resource allocation in 5G wireless network slicing according to traffic requirements of the 5G use cases within two scenarios: eMBB with URLLC and eMBB with mMTC. The DRL agent evaluates the performance of different decoding schemes such as Orthogonal Multiple Access (OMA), Non-Orthogonal Multiple Access (NOMA), and Rate Splitting Multiple Access (RSMA) and applies the best decoding scheme in these scenarios under different network conditions. The DRL agent has been tested to maximize the sum rate in scenario eMBB with URLLC and to maximize the number of successfully decoded devices in scenario eMBB with mMTC, both with different combinations of number of devices, power gains and number of allocated frequencies. The results show that the DRL agent dynamically chooses the best decoding scheme and presents an efficiency in maximizing the sum rate and the decoded devices between 84% and 100% for both scenarios evaluated.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"234 ","pages":"Article 104068"},"PeriodicalIF":7.7,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142745343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-26 | DOI: 10.1016/j.jnca.2024.104067
José Santos , Efstratios Reppas , Tim Wauters , Bruno Volckaert , Filip De Turck
Containers have reshaped application deployment and life-cycle management in recent cloud platforms. The paradigm shift from large monolithic applications to complex graphs of loosely-coupled microservices aims to increase deployment flexibility and operational efficiency. However, efficient allocation and scaling of microservice applications is challenging due to their intricate inter-dependencies. Existing works do not consider microservice dependencies, which could lead to the application's performance degradation when service demand increases. As dependencies increase, communication between microservices becomes more complex and frequent, leading to slower response times and higher resource consumption, especially during high demand. In addition, performance issues in one microservice can also trigger a ripple effect across dependent services, exacerbating the performance degradation across the entire application. This paper studies the impact of microservice inter-dependencies in auto-scaling by proposing Gwydion, a novel framework that enables different auto-scaling goals through Reinforcement Learning (RL) algorithms. Gwydion has been developed based on the OpenAI Gym library and customized for the popular Kubernetes (K8s) platform to bridge the gap between RL and auto-scaling research by training RL algorithms on real cloud environments for two opposing reward strategies: cost-aware and latency-aware. Gwydion focuses on improving resource usage and reducing the application's response time by considering microservice inter-dependencies when scaling horizontally. Experiments with microservice benchmark applications, such as Redis Cluster (RC) and Online Boutique (OB), show that RL agents can reduce deployment costs and the application's response time compared to default scaling mechanisms, achieving up to 50% lower latency while avoiding performance degradation. For RC, cost-aware algorithms can reduce the number of deployed pods (2 to 4), resulting in slightly higher latency (300 μs to 6 ms) but lower resource consumption. For OB, all RL algorithms exhibit a notable response time improvement by considering all microservices in the observation space, enabling the sequential triggering of actions across different deployments. This leads to nearly 30% cost savings while maintaining consistently lower latency throughout the experiment. Gwydion aims to advance auto-scaling research in a rapidly evolving dynamic cloud environment.
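The Gym-based formulation can be sketched as a tiny autoscaling environment: the state carries pod count, load, and latency; actions add or remove a pod; and a cost-aware reward penalizes both latency and the number of deployed pods. The dynamics and reward below are assumptions for illustration, not Gwydion's actual environment.

```python
import numpy as np
import gym
from gym import spaces

class AutoscaleEnv(gym.Env):
    """Hypothetical horizontal-pod-autoscaling environment (classic Gym API)."""

    def __init__(self, max_pods=10, cost_weight=0.5):
        self.action_space = spaces.Discrete(3)   # 0: -1 pod, 1: keep, 2: +1 pod
        self.observation_space = spaces.Box(0.0, np.inf, shape=(3,), dtype=np.float32)
        self.max_pods, self.cost_weight = max_pods, cost_weight

    def reset(self):
        self.pods, self.load = 1, 50.0            # pods, requests/s
        return self._obs()

    def _obs(self):
        latency = self.load / (self.pods * 25.0)  # toy latency model
        return np.array([self.pods, self.load, latency], dtype=np.float32)

    def step(self, action):
        self.pods = int(np.clip(self.pods + action - 1, 1, self.max_pods))
        self.load = max(10.0, self.load + np.random.normal(0, 5))
        latency = self.load / (self.pods * 25.0)
        # cost-aware reward: penalize both slow responses and deployed pods
        reward = -(latency + self.cost_weight * self.pods)
        return self._obs(), reward, False, {}

env = AutoscaleEnv()
obs = env.reset()
obs, r, done, info = env.step(env.action_space.sample())
```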
{"title":"Gwydion: Efficient auto-scaling for complex containerized applications in Kubernetes through Reinforcement Learning","authors":"José Santos , Efstratios Reppas , Tim Wauters , Bruno Volckaert , Filip De Turck","doi":"10.1016/j.jnca.2024.104067","DOIUrl":"10.1016/j.jnca.2024.104067","url":null,"abstract":"<div><div>Containers have reshaped application deployment and life-cycle management in recent cloud platforms. The paradigm shift from large monolithic applications to complex graphs of loosely-coupled microservices aims to increase deployment flexibility and operational efficiency. However, efficient allocation and scaling of microservice applications is challenging due to their intricate inter-dependencies. Existing works do not consider microservice dependencies, which could lead to the application’s performance degradation when service demand increases. As dependencies increase, communication between microservices becomes more complex and frequent, leading to slower response times and higher resource consumption, especially during high demand. In addition, performance issues in one microservice can also trigger a ripple effect across dependent services, exacerbating the performance degradation across the entire application. This paper studies the impact of microservice inter-dependencies in auto-scaling by proposing <em>Gwydion</em>, a novel framework that enables different auto-scaling goals through Reinforcement Learning (RL) algorithms. <em>Gwydion</em> has been developed based on the OpenAI Gym library and customized for the popular Kubernetes (K8s) platform to bridge the gap between RL and auto-scaling research by training RL algorithms on real cloud environments for two opposing reward strategies: cost-aware and latency-aware. <em>Gwydion</em> focuses on improving resource usage and reducing the application’s response time by considering microservice inter-dependencies when scaling horizontally. Experiments with microservice benchmark applications, such as Redis Cluster (RC) and Online Boutique (OB), show that RL agents can reduce deployment costs and the application’s response time compared to default scaling mechanisms, achieving up to 50% lower latency while avoiding performance degradation. For RC, cost-aware algorithms can reduce the number of deployed pods (2 to 4), resulting in slightly higher latency (<span><math><mrow><mn>300</mn><mspace></mspace><mi>μ</mi><mi>s</mi></mrow></math></span> to 6 ms) but lower resource consumption. For OB, all RL algorithms exhibit a notable response time improvement by considering all microservices in the observation space, enabling the sequential triggering of actions across different deployments. This leads to nearly 30% cost savings while maintaining consistently lower latency throughout the experiment. Gwydion aims to advance auto-scaling research in a rapidly evolving dynamic cloud environment.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"234 ","pages":"Article 104067"},"PeriodicalIF":7.7,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142745345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}