Pub Date: 2024-08-16 | DOI: 10.1109/tnsm.2024.3444909
Xu Yu, Yan Lu, Feng Jiang, Qiang Hu, Junwei Du, Dunwei Gong
{"title":"A Cross-Domain Intrusion Detection Method Based on Nonlinear Augmented Explicit Features","authors":"Xu Yu, Yan Lu, Feng Jiang, Qiang Hu, Junwei Du, Dunwei Gong","doi":"10.1109/tnsm.2024.3444909","DOIUrl":"https://doi.org/10.1109/tnsm.2024.3444909","url":null,"abstract":"","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"7 1","pages":""},"PeriodicalIF":5.3,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-16 | DOI: 10.1109/TNSM.2024.3445123
Idayat O. Sanusi;Karim M. Nasr
Distributed Radio Resource Management (RRM) solutions have been gaining increasing interest, especially when a large number of devices is present, as in a wireless industrial network. Self-organisation relying on distributed RRM schemes is envisioned to be one of the key pillars of 5G and beyond Ultra Reliable Low Latency Communication (URLLC) networks. Reinforcement learning is emerging as a powerful distributed technique to facilitate self-organisation. In this paper, spectrum sharing in a Device-to-Device (D2D)-enabled wireless network is investigated, targeting URLLC applications. A distributed scheme denoted Reinforcement Learning Based Matching (RLBM), which combines reinforcement learning and matching theory, is presented with the aim of achieving autonomous device-based resource allocation. A distributed local Q-table is used to avoid global information gathering, and a stateless Q-learning approach is adopted, thereby reducing the requirements for a large state-action mapping. Simulation case studies are used to verify the performance of the presented approach in comparison with other RRM techniques. The presented RLBM approach achieves a good tradeoff among throughput, complexity, and signalling overhead while maintaining the target Quality of Service/Experience (QoS/QoE) requirements of the different users in the network.
{"title":"A Reinforcement Learning Approach for D2D Spectrum Sharing in Wireless Industrial URLLC Networks","authors":"Idayat O. Sanusi;Karim M. Nasr","doi":"10.1109/TNSM.2024.3445123","DOIUrl":"10.1109/TNSM.2024.3445123","url":null,"abstract":"Distributed Radio Resource Management (RRM) solutions are gaining an increasing interest recently, especially when a large number of devices are present as in the case of a wireless industrial network. Self-organisation relying on distributed RRM schemes is envisioned to be one of the key pillars of 5G and beyond Ultra Reliable Low Latency Communication (URLLC) networks. Reinforcement learning is emerging as a powerful distributed technique to facilitate self-organisation. In this paper, spectrum sharing in a Device-to-Device (D2D)-enabled wireless network is investigated, targeting URLLC applications. A distributed scheme denoted as Reinforcement Learning Based Matching (RLBM) which combines reinforcement learning and matching theory, is presented with the aim of achieving an autonomous device-based resource allocation. A distributed local Q-table is used to avoid global information gathering and a stateless Q-learning approach is adopted, therefore reducing requirements for a large state-action mapping. Simulation case studies are used to verify the performance of the presented approach in comparison with other RRM techniques. The presented RLBM approach results in a good tradeoff of throughput, complexity and signalling overheads while maintaining the target Quality of Service/Experience (QoS/QoE) requirements of the different users in the network.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 5","pages":"5410-5419"},"PeriodicalIF":4.7,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142224452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-14 | DOI: 10.1109/TNSM.2024.3443285
Kaushik Mishra;Santosh Kumar Majhi;Kshira Sagar Sahoo;Sourav Kumar Bhoi;Monowar Bhuyan;Amir H. Gandomi
In Cloud-based computing, job scheduling and load balancing are vital to ensure on-demand dynamic resource provisioning. However, reducing the scheduling parameters may degrade datacenter performance under fluctuating on-demand requests. To deal with these challenges, this research proposes a job scheduling algorithm that is an improved version of a swarm intelligence algorithm. Two approaches, namely linear weight JAYA (LWJAYA) and chaotic JAYA (CJAYA), are implemented to improve the convergence speed towards optimal results. In addition, a load-balancing technique is incorporated alongside job scheduling. Independent, non-pre-emptive jobs arriving dynamically were considered for the simulations, which were run on two disparate test cases with homogeneous and heterogeneous VMs. The efficiency of the proposed technique was validated on a synthetic dataset and a real-world dataset from NASA, and evaluated against several state-of-the-art intelligent optimization techniques using Holm's test and the Friedman test. The experimental findings show that the suggested approach performs better than the alternative approaches.
{"title":"Collaborative Cloud Resource Management and Task Consolidation Using JAYA Variants","authors":"Kaushik Mishra;Santosh Kumar Majhi;Kshira Sagar Sahoo;Sourav Kumar Bhoi;Monowar Bhuyan;Amir H. Gandomi","doi":"10.1109/TNSM.2024.3443285","DOIUrl":"10.1109/TNSM.2024.3443285","url":null,"abstract":"In Cloud-based computing, job scheduling and load balancing are vital to ensure on-demand dynamic resource provisioning. However, reducing the scheduling parameters may affect datacenter performance due to the fluctuating on-demand requests. To deal with the aforementioned challenges, this research proposes a job scheduling algorithm, which is an improved version of a swarm intelligence algorithm. Two approaches, namely linear weight JAYA (LWJAYA) and chaotic JAYA (CJAYA), are implemented to improve the convergence speed for optimal results. Besides, a load-balancing technique is incorporated in line with job scheduling. Dynamically independent and non-pre-emptive jobs were considered for the simulations, which were simulated on two disparate test cases with homogeneous and heterogeneous VMs. The efficiency of the proposed technique was validated against a synthetic and real-world dataset from NASA, and evaluated against several top-of-the-line intelligent optimization techniques, based on the Holm’s test and Friedman test. Findings of the experiment show that the suggested approach performs better than the alternative approaches.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6248-6259"},"PeriodicalIF":4.7,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10636847","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-14 | DOI: 10.1109/tnsm.2024.3443644
Yibo Zhang, Xiangwang Hou, Guoyu Du, Qi Li, Mian Ahmad Jan, Alireza Jolfaei, Muhammad Usman
{"title":"Machine Learning-Based Reliable Transmission for UAV Networks With Hybrid Multiple Access","authors":"Yibo Zhang, Xiangwang Hou, Guoyu Du, Qi Li, Mian Ahmad Jan, Alireza Jolfaei, Muhammad Usman","doi":"10.1109/tnsm.2024.3443644","DOIUrl":"https://doi.org/10.1109/tnsm.2024.3443644","url":null,"abstract":"","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"60 1","pages":""},"PeriodicalIF":5.3,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142224454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-13 | DOI: 10.1109/TNSM.2024.3442826
Run-Hua Shi;Xia-Qin Fang
Range query in cloud-based outsourcing applications is an important data search service, but it can suffer from privacy disclosure. In this paper, to enhance the security and privacy of sensitive data, we introduce quantum cryptographic technologies and present a feasible quantum approach to an important range query, i.e., the privacy-preserving range MAX/MIN query. First, we define a primitive protocol of secure multiparty computation, called Oblivious Set Inclusion Decision (OSID), in which two parties jointly decide whether one private set includes another private set in an oblivious way, and we present an efficient OSID quantum protocol. In particular, to implement the OSID quantum protocol efficiently, we design a single-photon-based quantum protocol for computing the XOR of two private bits, which achieves information-theoretic security with the help of a non-colluding quantum cloud. Finally, we propose a novel quantum scheme for privacy-preserving range MAX/MIN queries in the edge-based Internet of Things using OSID quantum protocols. Compared with related classical schemes, our proposed quantum scheme offers higher security (i.e., quantum security), because the security of our protocols rests on the basic physical principles of quantum mechanics rather than on unproven computational hardness assumptions.
{"title":"Quantum Scheme for Privacy-Preserving Range MAX/MIN Query in Edge-Based Internet of Things","authors":"Run-Hua Shi;Xia-Qin Fang","doi":"10.1109/TNSM.2024.3442826","DOIUrl":"10.1109/TNSM.2024.3442826","url":null,"abstract":"Range query in cloud-based outsourcing applications is an important data search service, but it can suffer from privacy disclosure. In this paper, to enhance the security and privacy of sensitive data, we introduce quantum cryptographic technologies and present a feasible quantum approach to address an important range query, i.e., privacy-preserving range MAX/MIN query. First, we define a primitive protocol of secure multiparty computations, called Oblivious Set Inclusion Decision (OSID), in which two parties jointly decide whether a private set includes another private set in an oblivious way, and present an efficient OSID quantum protocol. Especially, in order to efficiently implement OSID quantum protocol, we design a single-photon-based quantum protocol for computing XOR of two private bits, which can achieve the information-theoretical security with the help of a non-colluding quantum cloud. Finally, we propose a novel quantum scheme for privacy-preserving range MAX/MIN query in edge-based Internet of Things by using OSID quantum protocols. Compared with the classical related schemes, our proposed quantum scheme has higher security (i.e., quantum security), because the security of our proposed protocols is based on the basic physical principles of quantum mechanics, instead of unproven computational difficulty assumptions.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6827-6838"},"PeriodicalIF":4.7,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-12 | DOI: 10.1109/TNSM.2024.3442688
Fekri Saleh;Saleem Karmoshi;Abraham O. Fapojuwo;Hong Zhong
Multi-Tenant Data Centers (MTDCs) allocate resources to tenants in terms of processors, memory, and storage. However, equal allocation of network resources is often overlooked, leading to unpredictable application performance. To address this issue, we propose Tenant-Aware Resource Allocation (TARA), a virtual resource allocation mechanism for MTDCs. TARA allocates tenants’ virtual network resources as virtual ports on the substrate physical network, enabling control and management by dedicated controllers. In this paper, we introduce a classification method for virtual nodes within Virtual Data Centers (VDCs) aimed at ensuring optimal network performance based on tenant demands. Furthermore, we present a source routing mechanism that utilizes path tables to minimize traffic forwarding delays and enhance network workload efficiency. The TARA model optimizes virtual resource allocation, enhances network performance, and simplifies virtual network resource management. Experimental evaluations demonstrate the effectiveness of the TARA system in improving network performance and meeting tenants’ quality of service requirements.
{"title":"TARA: Tenant-Aware Resource Allocation in Multi-Tenant Data Centers","authors":"Fekri Saleh;Saleem Karmoshi;Abraham O. Fapojuwo;Hong Zhong","doi":"10.1109/TNSM.2024.3442688","DOIUrl":"10.1109/TNSM.2024.3442688","url":null,"abstract":"Multi-Tenant Data Centers (MTDCs) allocate resources to tenants in terms of processors, memory, and storage. However, equal allocation of network resources is often overlooked, leading to unpredictable application performance. To address this issue, we propose Tenant-Aware Resource Allocation (TARA), a virtual resource allocation mechanism for MTDCs. TARA allocates tenants’ virtual network resources as virtual ports on the substrate physical network, enabling control and management by dedicated controllers. In this paper, we introduce a classification method for virtual nodes within Virtual Data Centers (VDCs) aimed at ensuring optimal network performance based on tenant demands. Furthermore, we present a source routing mechanism that utilizes path tables to minimize traffic forwarding delays and enhance network workload efficiency. The TARA model optimizes virtual resource allocation, enhances network performance, and simplifies virtual network resource management. Experimental evaluations demonstrate the effectiveness of the TARA system in improving network performance and meeting tenants’ quality of service requirements.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6349-6363"},"PeriodicalIF":4.7,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-12 | DOI: 10.1109/TNSM.2024.3442298
Yuqi Dai;Hua Zhang;Jingyu Wang;Jianxin Liao
Modern networks are susceptible to configuration errors, such as misconfigurations and policy conflicts, arising from the complex interactions of diverse devices through various protocols. Control plane verification offers an effective way to prevent these errors. However, existing tools face several challenges: (i) prolonged verification times, (ii) support for verifying only specific policies, and (iii) poor robustness against node and link failures. To address these issues, we propose a control plane verification framework based on a multimodal multitask learning model. The framework enables simultaneous verification of multiple policies directly from various network configuration files. The learning model uses modality fusion techniques to capture both topology-related and traffic-related network features, and it is trained on datasets augmented with the failure model to enhance robustness against failures. We compare our framework with three state-of-the-art verification tools: Minesweeper, Hoyan, and Tiramisu. Our evaluation shows that our framework is 2600 times faster than Minesweeper, twice as fast as Hoyan, and 19 times faster than Tiramisu, while maintaining 100% verification accuracy. Furthermore, our framework excels at verifying traffic-related network policies and remains effective even under node and link failures.
{"title":"Multimodal Multitask Control Plane Verification Framework","authors":"Yuqi Dai;Hua Zhang;Jingyu Wang;Jianxin Liao","doi":"10.1109/TNSM.2024.3442298","DOIUrl":"10.1109/TNSM.2024.3442298","url":null,"abstract":"Modern networks are susceptible to configuration errors, such as misconfigurations and policy conflicts due to the complex interactions of diverse devices through various protocols. Control plane verification offers an effective solution to prevent these errors. However, existing tools face several challenges: (i) prolonged verification times, (ii) the verification of only specific policies, and (iii) poor robustness against node and link failures. To address these issues, we propose a control plane verification framework based on a multimodal multitask learning model. This framework enables simultaneous verification of multiple policies directly from various network configuration files. The learning model utilizes modality fusion techniques to capture both topology-related and traffic-related network features. It is trained on datasets augmented with the failure model to enhance robustness against failures. We compare our framework with three state-of-the-art verification tools: Minesweeper, Hoyan, and Tiramisu. Our evaluation shows that our framework is 2600 times faster than Minesweeper, twice as fast as Hoyan, and 19 times faster than Tiramisu, while maintaining 100% verification accuracy. Furthermore, our framework excels in verifying traffic-related network policies and remains effective even under node and link failures.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6684-6702"},"PeriodicalIF":4.7,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142187232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-09 | DOI: 10.1109/TNSM.2024.3441390
Juan Lucas Vieira;Daniel Mosse;Diego Passos
The popularization of mobile devices has brought several new applications to communication networks. As deployments become increasingly dense, problems such as collisions between transmissions and unbalanced load become more pronounced. Moreover, while station-based handoff is inefficient at mitigating these issues, network-wide handover decisions can provide better network resource management. This paper proposes LEAF, an access point virtualization solution based on Software Defined Networking that enables station (STA) handover conducted by the network from a global viewpoint. Unlike other solutions in the literature, our proposal fully supports multichannel migrations through the IEEE 802.11h Channel Switch Announcement without restricting the channels used by the access points. To demonstrate the feasibility of this approach, we present experimental data on the behavior of several different devices in the face of this mechanism. We also evaluate our complete virtualization solution, which reveals that the handoff of STAs did not lead to significant packet losses or delays in STAs' connections, while providing a foundation to improve the network's self-management and flexibility, allowing association control and load-balancing tasks to be executed on top of our solution.
{"title":"LEAF: Improving Handoff Flexibility of IEEE 802.11 Networks With an SDN-Based Virtual Access Point Framework","authors":"Juan Lucas Vieira;Daniel Mosse;Diego Passos","doi":"10.1109/TNSM.2024.3441390","DOIUrl":"10.1109/TNSM.2024.3441390","url":null,"abstract":"Mobile devices’ popularization has brought several new applications to communication networks. As we move into an increasingly denser scenario, problems such as collisions between transmissions and unbalanced load become more pronounced. Moreover, while station-based handoff is inefficient to reduce these issues, network-wide handover decisions might provide better network resource management. This paper proposes LEAF, an access point virtualization solution based on Software Defined Networking to enable station (STA) handover conducted by the network, based on a global scope. Unlike other solutions in the literature, our proposal fully supports multichannel migrations through the IEEE 802.11h Channel Switch Announcement without restricting the channel utilization by the access points. To demonstrate the feasibility of such an approach, we present experimental data regarding the behavior of several different devices in face of this mechanism. We also evaluate our complete virtualization solution, which reveals that the handoff of STAs did not lead to significant packet losses or delays in STAs’ connections, while providing a foundation to improve network’s self-management and flexibility, allowing association control and load balancing tasks to be executed on top of our solution.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6630-6642"},"PeriodicalIF":4.7,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141946471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-08 | DOI: 10.1109/TNSM.2024.3440574
Yunwu Wang;Lingxing Kong;Min Zhu;Jiahua Gu;Yuancheng Cai;Jiao Zhang
To ensure reliable network services, link protection is widely employed for light-path provisioning. However, it inevitably increases propagation delay because of the different transmission distances of the active and backup light-paths, leading to a longer transport delay. Consequently, a crucial challenge is how to coordinate link protection and transport delay so as to maximize service availability while satisfying the delay requirements of each service. In this paper, we investigate the availability-aware and delay-sensitive (AADS) radio access network (RAN) slicing mapping problem with link protection in metro-access/aggregation elastic optical networks (EONs). We first provide a mathematical model of availability and propagation delay for both unprotected and protected RAN slicing requests. Subsequently, we propose a mixed-integer linear programming (MILP) model and a deep reinforcement learning (DRL)-based algorithm to maximize the availability of RAN requests while satisfying the specified delay requirements of each slice. Finally, we analyze the availability of various 5G services (i.e., enhanced Mobile Broadband, ultra-Reliable Low-Latency Communication, and massive Machine Type Communication) from a delay perspective in both small-scale and large-scale networks. Simulation results demonstrate that our proposed DRL-based method achieves up to a 14.1% increase in availability compared to the benchmarks.
{"title":"Availability-Aware and Delay-Sensitive RAN Slicing Mapping Based on Deep Reinforcement Learning in Elastic Optical Networks","authors":"Yunwu Wang;Lingxing Kong;Min Zhu;Jiahua Gu;Yuancheng Cai;Jiao Zhang","doi":"10.1109/TNSM.2024.3440574","DOIUrl":"10.1109/TNSM.2024.3440574","url":null,"abstract":"To ensure reliable network services, the link protection method is widely employed for light-path provision. However, it inevitably increases propagation delay due to different transmission distances between active and backup light-paths, leading to a longer transport delay. Consequently, a crucial challenge is how to coordinate link protection and transport delay to maximize service availability while satisfying the delay requirements of each service. In this paper, we investigate the availability-aware and delay-sensitive (AADS) radio access network (RAN) slicing mapping problem with link protection in metro-access/aggregation elastic optical networks (EONs). We initially provide the mathematical model of availability and propagation delay for both unprotected and protected RAN slicing requests. Subsequently, we propose a mixed-integer linear programming (MILP) model and a deep reinforcement learning (DRL)-based algorithm to maximize the availability of RAN requests while satisfying the specified delay requirements of each slice. Finally, we analyze the availability under various 5G services (i.e., enhanced Mobile Broadband, ultra-Reliable Low-Latency Communication, and massive Machine Type Communication) from a delay perspective in both small-scale and large-scale networks. Simulation results demonstrate that our proposed DRL-based method can achieve up to a 14.1% increase in availability compared to the benchmarks.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6026-6040"},"PeriodicalIF":4.7,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141969523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-08 | DOI: 10.1109/TNSM.2024.3440395
Youjia Chen;Yuyang Zheng;Jian Xu;Hanyu Lin;Peng Cheng;Ming Ding;Xi Wang;Jinsong Hu;Haifeng Zheng
Relying on a data-driven methodology, deep learning has emerged as a new approach for dynamic resource allocation in large-scale cellular networks. This paper proposes a knowledge-assisted domain adversarial network that reduces the number of poorly performing base stations (BSs) by dynamically allocating radio resources to meet real-time mobile traffic needs. First, we calculate theoretical inter-cell interference and BS capacity using Voronoi tessellation and stochastic geometry, which are then incorporated into a neural network as key parameters. Second, following the practical assessment, a performance classifier labels each BS as poor or good based on its traffic-resource pairs. Most importantly, we use well-performing BSs as source-domain data to reallocate the resources of poorly performing ones through the domain adversarial neural network. Our experimental results demonstrate that the proposed knowledge-assisted domain adversarial resource allocation (KDARA) strategy effectively decreases the number of poorly performing BSs in the cellular network and, in turn, outperforms other benchmark algorithms in terms of both the ratio of poor BSs and radio resource consumption.
{"title":"Knowledge-Assisted Resource Allocation With Domain Adversarial Neural Networks","authors":"Youjia Chen;Yuyang Zheng;Jian Xu;Hanyu Lin;Peng Cheng;Ming Ding;Xi Wang;Jinsong Hu;Haifeng Zheng","doi":"10.1109/TNSM.2024.3440395","DOIUrl":"10.1109/TNSM.2024.3440395","url":null,"abstract":"Relying on a data-driven methodology, deep learning has emerged as a new approach for dynamic resource allocation in large-scale cellular networks. This paper proposes a knowledge-assisted domain adversarial network to reduce the number of poorly performing base stations (BSs) by dynamically allocating radio resources to meet real-time mobile traffic needs. Firstly, we calculate theoretical inter-cell interference and BS capacity using Voronoi tessellation and stochastic geometry, which are then incorporated into a neural network as key parameters. Secondly, following the practical assessment, a performance classifier evaluates BS performance based on given traffic-resource pairs as either poor or good. Most importantly, we use well-performing BSs as source domain data to reallocate the resources of poorly performing ones through the domain adversarial neural network. Our experimental results demonstrate that the proposed knowledge-assisted domain adversarial resource allocation (KDARA) strategy effectively decreases the number of poorly performing BSs in the cellular network, and in turn, outperforms other benchmark algorithms in terms of both the ratio of poor BSs and radio resource consumption.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6493-6504"},"PeriodicalIF":4.7,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141946467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}