Pub Date: 2024-10-01 | DOI: 10.1016/j.dcan.2023.08.002
Ruiyu Wang , Yao Sun , Chao Zhang , Bowen Yang , Muhammad Imran , Lei Zhang
Millimeter-Wave (mmWave) communication, with its advantages of abundant bandwidth and immunity to interference, has been deemed a promising technology to greatly improve network capacity. However, characteristics of mmWave such as short transmission distance, high sensitivity to blockage, and large propagation path loss make handover issues (including trigger conditions and target beam selection) much more complicated. In this paper, we design a novel handover scheme to optimize the overall system throughput as well as the total system delay while guaranteeing the Quality of Service (QoS) of each User Equipment (UE). Specifically, the proposed handover scheme, called O-MAPPO, integrates a Reinforcement Learning (RL) algorithm with optimization theory. The RL algorithm, Multi-Agent Proximal Policy Optimization (MAPPO), determines the handover trigger conditions. Further, we formulate an optimization problem, solved in conjunction with MAPPO, to select the target base station, evaluating and optimizing the total throughput and delay while guaranteeing the QoS of each UE once the handover decision is made. The numerical results show that the overall system throughput and delay achieved by our method are slightly worse than those of the exhaustive search method but much better than those of another typical RL algorithm, Deep Deterministic Policy Gradient (DDPG).
Title: A novel handover scheme for millimeter wave network: An approach of integrating reinforcement learning and optimization (Digital Communications and Networks, vol. 10, no. 5, pp. 1493-1502)
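As an aside on the mechanics behind MAPPO: the trigger-condition agents are trained with PPO's clipped surrogate objective. A minimal sketch of that objective (illustrative only, not the authors' implementation; `ppo_clip_objective` is a name chosen here):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective used by PPO/MAPPO-style policy updates.

    ratio:     pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage: estimated advantage of each action
    eps:       clipping parameter (0.2 is the common default)
    """
    ratio = np.asarray(ratio, dtype=float)
    advantage = np.asarray(advantage, dtype=float)
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # PPO maximizes the elementwise minimum, which bounds the policy update
    return np.minimum(unclipped, clipped).mean()
```

Clipping keeps each policy update close to the behavior policy, which is what makes on-policy multi-agent training stable enough for per-UE handover agents.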
Pub Date: 2024-10-01 | DOI: 10.1016/j.dcan.2022.10.020
Yuhan Liu , He Yan , Qilie Liu , Wei Zhang , Junbin Huang
Efficient Convolution Operator (ECO) algorithms have achieved impressive performance in visual tracking. However, the feature extraction network of ECO is not well suited to capturing the correlation features of occluded and blurred targets across long-range complex scene frames. Moreover, its fixed-weight fusion strategy does not exploit the complementary properties of deep and shallow features. In this paper, we propose a new target tracking method, ECO++, based on adaptive deep feature fusion in complex scenes, with two contributions: First, we construct a new temporal convolution mode and use it to replace the underlying convolution layer in the Conformer network, obtaining an improved Conformer network. Second, we adaptively fuse the deep features output by the improved Conformer network by combining the Peak-to-Sidelobe Ratio (PSR), frame smoothness scores, and an adaptive adjustment weight. Extensive experiments on the OTB-2013, OTB-2015, UAV123, and VOT2019 benchmarks demonstrate that the proposed approach outperforms state-of-the-art algorithms in tracking accuracy and robustness in complex scenes with occluded, blurred, and fast-moving targets.
Title: ECO++: Adaptive deep feature fusion target tracking method in complex scene (Digital Communications and Networks, vol. 10, no. 5, pp. 1352-1364)
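The PSR used in the fusion weights is a standard confidence measure on a correlation response map. A minimal sketch, assuming the usual definition (peak minus sidelobe mean, over sidelobe standard deviation) rather than the paper's exact variant:

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=2):
    """Peak-to-Sidelobe Ratio (PSR) of a 2-D correlation response map.

    PSR = (peak - mean(sidelobe)) / std(sidelobe), where the sidelobe is
    the response with a small window around the peak excluded.  A high PSR
    indicates a sharp, confident peak; a drop suggests occlusion or blur.
    """
    response = np.asarray(response, dtype=float)
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    peak = response[peak_idx]
    # mask out a (2*exclude+1)^2 window around the peak
    mask = np.ones_like(response, dtype=bool)
    r0 = max(peak_idx[0] - exclude, 0)
    c0 = max(peak_idx[1] - exclude, 0)
    mask[r0:peak_idx[0] + exclude + 1, c0:peak_idx[1] + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)
```

A tracker can lower the fusion weight of a feature channel whose PSR collapses between frames, which is the intuition behind combining PSR with frame smoothness scores.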
Pub Date: 2024-10-01 | DOI: 10.1016/j.dcan.2022.10.018
Peng Shixin, Chen Kai, Tian Tian, Chen Jingying
Although speech emotion recognition is challenging, it has broad application prospects in human-computer interaction. A system that can accurately and stably recognize emotions from human language can provide a better user experience. However, current unimodal emotion feature representations are not distinctive enough for reliable recognition, and they do not effectively model the inter-modality dynamics of speech emotion recognition tasks. This paper proposes a multimodal method that utilizes both audio and semantic content for speech emotion recognition. The proposed method consists of three parts: two high-level feature extractors for the text and audio modalities, and an autoencoder-based feature fusion. For the audio modality, we propose a structure called the Temporal Global Feature Extractor (TGFE) to extract high-level features of the time-frequency domain relationship from the original speech signal. Because text lacks frequency information, we use only a Bidirectional Long Short-Term Memory network (BLSTM) and an attention mechanism to model its intra-modal dynamics. Once these steps are complete, the high-level text and audio features are fed to the autoencoder in parallel to learn their shared representation for final emotion classification. We conducted extensive experiments on three public benchmark datasets to evaluate our method. The results on Interactive Emotional Dyadic Motion Capture (IEMOCAP) and the Multimodal EmotionLines Dataset (MELD) outperform existing methods, and the results on CMU Multimodal Opinion-level Sentiment Intensity (CMU-MOSI) are competitive. Furthermore, the experiments show that, compared to unimodal information alone, joint multimodal information (audio and text) improves overall performance, and the autoencoder-based feature-level fusion achieves greater accuracy than simple feature concatenation.
Title: An autoencoder-based feature level fusion for speech emotion recognition (Digital Communications and Networks, vol. 10, no. 5, pp. 1341-1351)
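The fusion step concatenates the two high-level feature vectors and passes them through a shared bottleneck. A toy sketch of the data flow (random, untrained weights; purely illustrative of the shapes, not the paper's network):

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_features(audio_feat, text_feat, bottleneck=8):
    """Toy autoencoder-style fusion: concatenate modality features and
    project them through a shared bottleneck (the 'shared representation').
    Weights are random here purely to illustrate shapes and data flow."""
    x = np.concatenate([audio_feat, text_feat], axis=-1)
    d = x.shape[-1]
    w_enc = rng.standard_normal((d, bottleneck)) * 0.1
    w_dec = rng.standard_normal((bottleneck, d)) * 0.1
    shared = np.tanh(x @ w_enc)   # encoder output: shared representation
    recon = shared @ w_dec        # decoder output: reconstruction target
    return shared, recon
```

In training, the reconstruction loss on `recon` forces the bottleneck to retain information from both modalities; the classifier then reads `shared` instead of the raw concatenation.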
Pub Date: 2024-10-01 | DOI: 10.1016/j.dcan.2023.01.021
Fei Tang , Chunliang Ma , Kefei Cheng
Zero trust architecture is an end-to-end approach to securing server resources and data that encompasses identity authentication, access control, dynamic evaluation, and so on. This work focuses on authentication technology in zero trust networks. In this paper, a Traceable Universal Designated Verifier Signature (TUDVS) is used to construct a privacy-preserving authentication scheme for zero trust architecture. Specifically, when a client requests access to server resources, we want to protect the client's access privacy, meaning that the server administrator cannot disclose the client's access behavior to any third party. In addition, the security of the proposed scheme is proved and its efficiency is analyzed. Finally, TUDVS is applied to the single packet authorization scenario of zero trust architecture to demonstrate the practicability of the proposed scheme.
Title: Privacy-preserving authentication scheme based on zero trust architecture (Digital Communications and Networks, vol. 10, no. 5, pp. 1211-1220)
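To make the single packet authorization scenario concrete: the client attaches a proof of authorization to its very first packet, and the gateway drops anything it cannot verify. The sketch below substitutes an HMAC tag for the paper's TUDVS signature (TUDVS requires pairing-based cryptography and is far beyond a few lines); all names are illustrative:

```python
import hmac
import hashlib

def make_spa_packet(key, client_id, counter):
    """Build a Single-Packet-Authorization request: the client proves
    knowledge of a shared key without revealing it.  (The paper uses a
    TUDVS signature here; HMAC is a simplified stand-in.)  The counter
    prevents replay of a captured packet."""
    msg = f"{client_id}:{counter}".encode()
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return msg, tag

def verify_spa_packet(key, msg, tag):
    """Gateway-side check; constant-time comparison avoids timing leaks."""
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

What HMAC cannot provide, and TUDVS does, is designated verification with traceability: only the intended verifier is convinced by the proof, yet misbehavior remains traceable.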
Pub Date: 2024-10-01 | DOI: 10.1016/j.dcan.2023.11.009
Sampa Sahoo , Kshira Sagar Sahoo , Bibhudatta Sahoo , Amir H. Gandomi
The development of Internet of Things (IoT) technology is leading to a new era of smart applications such as smart transportation, smart buildings, and smart homes, which act as the building blocks of IoT-enabled smart cities. The high-volume, high-velocity data generated by various smart city applications are sent to flexible and efficient cloud computing resources for processing. However, the remote cloud server introduces high computation latency. Edge computing, which brings computation close to the data source, is introduced to overcome this problem. In an IoT-enabled smart city environment, a main concern is to consume the least energy while executing tasks that satisfy their delay constraints, and efficient resource allocation at the edge helps address this issue. In this paper, energy and delay minimization in a smart city environment is formulated as a bi-objective edge resource allocation problem. First, we present a three-layer network architecture for IoT-enabled smart cities. Then, we design a learning automata-based edge resource allocation approach on top of this architecture to solve the bi-objective minimization problem. Learning Automata (LA) are reinforcement-based adaptive decision-makers that help find the best task-to-edge-resource mapping. An extensive set of simulations demonstrates the applicability and effectiveness of the LA-based approach in the IoT-enabled smart city environment.
Title: A learning automata based edge resource allocation approach for IoT-enabled smart cities (Digital Communications and Networks, vol. 10, no. 5, pp. 1258-1266)
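The core of a finite-action learning automaton is a probability-vector update driven by environment feedback. A sketch of the classic linear reward-penalty rule (the paper may use a different reinforcement scheme):

```python
def la_update(probs, chosen, reward, lr=0.1):
    """Linear reward-penalty update for a finite-action learning automaton.
    On reward, shift probability mass toward the chosen action; on penalty,
    shift it away toward the others.  probs must sum to 1."""
    n = len(probs)
    new = list(probs)
    if reward:
        for i in range(n):
            if i == chosen:
                new[i] = probs[i] + lr * (1.0 - probs[i])
            else:
                new[i] = (1.0 - lr) * probs[i]
    else:
        for i in range(n):
            if i == chosen:
                new[i] = (1.0 - lr) * probs[i]
            else:
                new[i] = lr / (n - 1) + (1.0 - lr) * probs[i]
    return new
```

In the edge-allocation setting, each action is a candidate task-to-resource mapping, and "reward" means the mapping met the delay constraint at low energy cost.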
Pub Date: 2024-10-01 | DOI: 10.1016/j.dcan.2023.06.008
Sai Zou , Junrui Wu , Haisheng Yu , Wenyong Wang , Lisheng Huang , Wei Ni , Yan Liu
Future Sixth-Generation (6G) wireless systems are expected to encounter emerging services with diverse requirements. In this paper, 6G network resource orchestration is optimized to support customized network slicing of services and to place network functions generated by heterogeneous devices into available resources. This combinatorial optimization problem is solved by developing a Particle Swarm Optimization (PSO) based scheduling strategy with an enhanced inertia weight, particle variation, and a nonlinear learning factor, thereby balancing local and global search and improving the convergence speed toward globally near-optimal solutions. Simulations show that the method improves the convergence speed and the utilization of network resources compared with other variants of PSO.
Title: Efficiency-optimized 6G: A virtual network resource orchestration strategy by enhanced particle swarm optimization (Digital Communications and Networks, vol. 10, no. 5, pp. 1221-1233)
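One way to realize an "enhanced inertia weight" is a nonlinear decay schedule that keeps exploration high early and shrinks it late. The abstract does not give the paper's formula; the exponential form below is one common choice, sketched for illustration:

```python
import math

def nonlinear_inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Nonlinearly decreasing PSO inertia weight: large early in the run
    (global search), small late (local refinement).  The exponential
    shape decays slowly at first, then drops -- unlike the classic
    linear schedule, it holds exploration longer."""
    return w_min + (w_max - w_min) * math.exp(-4.0 * (t / t_max) ** 2)
```

At each iteration the velocity update scales the previous velocity by this weight, so the swarm gradually shifts from exploring the slicing/placement search space to refining the best mappings found.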
Pub Date: 2024-10-01 | DOI: 10.1016/j.dcan.2023.05.009
Alicja Olejniczak, Olga Blaszkiewicz, Krzysztof K. Cwalina, Piotr Rajchowski, Jaroslaw Sadowski
Visibility conditions between antennas, i.e., Line-of-Sight (LOS) and Non-Line-of-Sight (NLOS), can be crucial in the context of indoor localization: detecting the NLOS condition and then correcting constant position estimation errors or reallocating resources can reduce the negative influence of multipath propagation on wireless communication and positioning. In this paper, a Deep Learning (DL) model is proposed that classifies the LOS/NLOS condition from two Channel Impulse Response (CIR) parameters: Total Power (TP) [dBm] and First Path Power (FP) [dBm]. The experiments were conducted with the DecaWave DWM1000 radio module on measurements collected in a real indoor environment, and the proposed architecture identifies LOS/NLOS with an accuracy of nearly 100% and 95% in static and dynamic scenarios, respectively. The proposed model improves the classification rate by 2-5% compared to other Machine Learning (ML) methods proposed in the literature.
Title: LOS and NLOS identification in real indoor environment using deep learning approach (Digital Communications and Networks, vol. 10, no. 5, pp. 1305-1312)
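The two CIR features feed the classifier because their gap is physically meaningful: under NLOS, the first path is weak relative to the total received energy arriving via reflections. A simple threshold baseline over the same two features (the thresholds are commonly cited rules of thumb for DW1000-class radios, not the paper's learned decision boundary):

```python
def nlos_likelihood(total_power_dbm, first_path_power_dbm,
                    los_th=6.0, nlos_th=10.0):
    """Threshold baseline on the two CIR features used by the paper.
    A small TP-FP gap means most energy arrived on the direct path (LOS);
    a large gap means the first path is weak relative to reflections
    (NLOS).  Thresholds here are illustrative rules of thumb."""
    diff = total_power_dbm - first_path_power_dbm
    if diff < los_th:
        return "LOS"
    if diff > nlos_th:
        return "NLOS"
    return "uncertain"
```

The DL model in the paper effectively learns a softer, data-driven version of this boundary, which is where the reported 2-5% gain over simpler ML methods comes from.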
Pub Date: 2024-10-01 | DOI: 10.1016/j.dcan.2023.09.002
Haleema Sadia , Ahmad Kamal Hassan , Ziaul Haq Abbas , Ghulam Abbas , Muhammad Waqas , Zhu Han
Non-Orthogonal Multiple Access (NOMA) has already proven to be an effective multiple access scheme for 5th Generation (5G) wireless networks, providing improved performance in terms of system throughput, spectral efficiency, fairness, and Energy Efficiency (EE). However, in conventional NOMA networks, performance degradation still exists because of the stochastic behavior of wireless channels. To combat this challenge, the concept of the Intelligent Reflecting Surface (IRS) has risen to prominence as a low-cost intelligent solution for Beyond 5G (B5G) networks. This paper presents a modeling primer on the integration of these two cutting-edge technologies, IRS and NOMA, for B5G wireless networks, with an in-depth comparative analysis of IRS-assisted Power Domain (PD)-NOMA networks through a three-fold investigation. First, a primer is presented on the system architecture of IRS-enabled multiple-configuration PD-NOMA systems, and parallels are drawn with conventional network configurations, i.e., conventional NOMA, Orthogonal Multiple Access (OMA), and IRS-assisted OMA networks. Following this, a comparative analysis of these network configurations is presented in terms of significant performance metrics, namely, individual users' achievable rate, sum rate, ergodic rate, EE, and outage probability. Moreover, for multi-antenna IRS-enabled NOMA networks, we exploit active Beamforming (BF) by employing a greedy algorithm using a state-of-the-art Branch-Reduce-and-Bound (BRB) method. The optimality of the BRB algorithm is shown by comparing it with benchmark BF techniques, i.e., minimum mean square error, zero-forcing BF, and maximum ratio transmission. Furthermore, we present an outlook on future IRS-aided NOMA networks, with a variety of potential applications for 6G wireless networks. This work provides a generic performance assessment toolkit for wireless networks, focusing on IRS-assisted NOMA networks, and the comparative analysis offers a solid foundation for the development of future IRS-enabled, energy-efficient wireless communication systems.
Title: IRS-enabled NOMA communication systems: A network architecture primer with future trends and challenges (Digital Communications and Networks, vol. 10, no. 5, pp. 1503-1528)
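For intuition on why the PD-NOMA configuration choices matter, the two-user downlink rates with superposition coding and Successive Interference Cancellation (SIC) can be sketched as follows (a textbook single-antenna model with illustrative parameter names, not the paper's full multi-antenna IRS setup):

```python
import math

def pd_noma_rates(p_total, alpha_weak, g_weak, g_strong, noise=1.0):
    """Achievable rates (bits/s/Hz) for a 2-user downlink PD-NOMA pair.
    The weak user gets power fraction alpha_weak > 0.5 and decodes its
    own signal treating the strong user's as interference; the strong
    user performs SIC, removing the weak user's signal first."""
    p_w = alpha_weak * p_total
    p_s = (1.0 - alpha_weak) * p_total
    # weak user: the strong user's superposed signal is interference
    sinr_weak = (p_w * g_weak) / (p_s * g_weak + noise)
    # strong user: after SIC, only noise remains
    snr_strong = (p_s * g_strong) / noise
    return (math.log2(1.0 + sinr_weak), math.log2(1.0 + snr_strong))
```

An IRS enters this picture by reshaping the effective channel gains `g_weak` and `g_strong`, which is why IRS-assisted PD-NOMA can outperform the conventional configurations compared in the paper.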
Pub Date: 2024-10-01 | DOI: 10.1016/j.dcan.2022.11.012
Chunlong He , Xinjie Li , Yin Huang , Jianzhen Lin , Gongbin Qian , Xingquan Li
An Unmanned Aerial Vehicle (UAV) can serve as an aerial base station featuring flexible deployment and mobility, and it can significantly improve the communication quality of the system thanks to its line-of-sight channel connections with ground devices. However, due to the openness of UAV-to-ground channels, the communication between ground users' devices and the UAV is easily eavesdropped upon. In this paper, we aim to improve the security of the communication system by using a full-duplex UAV as a mobile aerial base station: the UAV sends interference signals to eavesdroppers while receiving signals from ground devices. We jointly optimize the scheduling between the UAV and ground devices, the transmission power of the UAV and ground devices, and the trajectory of the UAV to maximize the minimum average secure communication data rate. This optimization problem mixes integer variables with non-convex expressions; it is therefore not a standard convex optimization problem and cannot be solved with standard methods. With this in mind, we propose an effective algorithm that solves the problem iteratively by applying Successive Convex Approximation (SCA), variable relaxation, and substitution. Finally, numerical results demonstrate the effectiveness of the proposed algorithm.
{"title":"Secure data rate maximization for full-duplex UAV-enabled base station","authors":"Chunlong He , Xinjie Li , Yin Huang , Jianzhen Lin , Gongbin Qian , Xingquan Li","doi":"10.1016/j.dcan.2022.11.012","DOIUrl":"10.1016/j.dcan.2022.11.012","url":null,"abstract":"<div><div>An Unmanned Aerial Vehicle (UAV) can serve as an aerial base station featuring flexible deployment and mobility. It can significantly improve the communication quality of the system thanks to its line-of-sight channel connections with ground devices. However, because UAV-to-ground channels are open, communication between ground devices and the UAV is easily eavesdropped upon. In this paper, we aim to improve the security of the communication system by using a full-duplex UAV as a mobile aerial base station. The UAV sends interference signals to eavesdroppers while receiving signals from ground devices. We jointly optimize the scheduling between the UAV and ground devices, the transmission power of the UAV and ground devices, and the trajectory of the UAV to maximize the minimum average secure communication data rate. The resulting optimization problem involves both integer variables and non-convex expressions; it is therefore not a standard convex optimization problem and cannot be solved by standard methods. With this in mind, we propose an effective algorithm that solves the problem iteratively by applying Successive Convex Approximation (SCA), variable relaxation, and variable substitution. Finally, numerical results demonstrate the effectiveness of the proposed algorithm.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 5","pages":"Pages 1387-1393"},"PeriodicalIF":7.5,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46434399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
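The abstract above applies Successive Convex Approximation (SCA): a non-convex problem is solved iteratively by replacing the non-convex part with a convex (here, linear) surrogate around the current iterate. The paper's joint scheduling/power/trajectory problem is not reproduced here; as a minimal sketch of the SCA idea alone, the toy below maximizes a single-link secrecy rate log2(1+gp) − log2(1+hp) over transmit power p, linearizing the subtracted concave term at each iterate so the surrogate becomes concave. The channel gains and power budget are illustrative values, not from the paper.

```python
import math

def secrecy_rate(p, g, h):
    # Secrecy rate for legitimate-link gain g and eavesdropper-link gain h
    return math.log2(1 + g * p) - math.log2(1 + h * p)

def sca_max_secrecy(g, h, p_max, iters=30, grid=2000):
    """SCA sketch: at iterate p_k, replace the concave term log2(1 + h*p)
    by its first-order Taylor expansion, which yields a concave surrogate;
    each surrogate is maximized over [0, p_max] by a simple grid search."""
    p_k = 0.0
    history = []
    for _ in range(iters):
        # Gradient of log2(1 + h*p) at p_k (slope of the linearization)
        c = h / ((1 + h * p_k) * math.log(2))
        best_p, best_val = p_k, -float("inf")
        for i in range(grid + 1):
            p = p_max * i / grid
            # Surrogate: exact first term, linearized second term
            val = math.log2(1 + g * p) - (math.log2(1 + h * p_k) + c * (p - p_k))
            if val > best_val:
                best_p, best_val = p, val
        p_k = best_p
        history.append(secrecy_rate(p_k, g, h))
    return p_k, history
```

Because the surrogate lower-bounds the true objective and is tight at p_k, the achieved secrecy rate is non-decreasing across iterations, which is the convergence property SCA relies on.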
Pub Date : 2024-10-01DOI: 10.1016/j.dcan.2022.12.017
Xuyang Jing , Jingjing Zhao , Zheng Yan , Witold Pedrycz , Xian Li
Accurate classification of encrypted traffic plays an important role in network management. However, current methods confront several problems: an inability to characterize traffic that exhibits great dispersion, an inability to classify traffic with multi-level features, and performance degradation when the training traffic is limited in size. To address these problems, this paper proposes a traffic granularity-based encrypted traffic classification method called the Granular Classifier (GC). First, a novel Cardinality-based Constrained Fuzzy C-Means (CCFCM) clustering algorithm is proposed to address the problem caused by limited training traffic; it considers the ratio of cardinality that must be linked between flows to achieve good traffic partitioning. Then, an original representation format for traffic, named Traffic Granules (TG), is presented based on granular computing to accurately describe traffic structure by capturing the dispersion of different traffic features. Each granule is a compact set of similar data with a refined boundary obtained by excluding outliers. Based on TG, the GC is constructed to perform traffic classification using multi-level features. The performance of the GC is evaluated on real-world encrypted network traffic data. Experimental results show that the GC achieves outstanding performance for encrypted traffic classification with a limited size of training traffic and maintains accurate classification under dynamic network conditions.
{"title":"Granular classifier: Building traffic granules for encrypted traffic classification based on granular computing","authors":"Xuyang Jing , Jingjing Zhao , Zheng Yan , Witold Pedrycz , Xian Li","doi":"10.1016/j.dcan.2022.12.017","DOIUrl":"10.1016/j.dcan.2022.12.017","url":null,"abstract":"<div><div>Accurate classification of encrypted traffic plays an important role in network management. However, current methods confront several problems: an inability to characterize traffic that exhibits great dispersion, an inability to classify traffic with multi-level features, and performance degradation when the training traffic is limited in size. To address these problems, this paper proposes a traffic granularity-based encrypted traffic classification method called the Granular Classifier (GC). First, a novel Cardinality-based Constrained Fuzzy C-Means (CCFCM) clustering algorithm is proposed to address the problem caused by limited training traffic; it considers the ratio of cardinality that must be linked between flows to achieve good traffic partitioning. Then, an original representation format for traffic, named Traffic Granules (TG), is presented based on granular computing to accurately describe traffic structure by capturing the dispersion of different traffic features. Each granule is a compact set of similar data with a refined boundary obtained by excluding outliers. Based on TG, the GC is constructed to perform traffic classification using multi-level features. The performance of the GC is evaluated on real-world encrypted network traffic data. Experimental results show that the GC achieves outstanding performance for encrypted traffic classification with a limited size of training traffic and maintains accurate classification under dynamic network conditions.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 5","pages":"Pages 1428-1438"},"PeriodicalIF":7.5,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44647460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
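The CCFCM algorithm above extends fuzzy c-means with a cardinality constraint linking flows; that constraint is the paper's contribution and is not reproduced here. As a rough illustration of the underlying clustering family only, the sketch below is standard (unconstrained) fuzzy c-means on 1-D data: each point gets a soft membership degree in every cluster rather than a hard label, which is what lets a granule keep a refined boundary. The data values and parameters (fuzzifier m, iteration count) are illustrative assumptions.

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=50, eps=1e-9):
    """Standard Fuzzy C-Means on 1-D data (not the paper's CCFCM).
    Returns cluster centers and the soft membership matrix u, where
    u[j][i] is the degree to which point j belongs to cluster i."""
    # Deterministic initialization: spread centers across the data range
    centers = [min(points), max(points)] if c == 2 else points[:c]
    u = []
    for _ in range(iters):
        # Membership update: u[j][i] = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        u = []
        for x in points:
            dists = [abs(x - v) + eps for v in centers]  # eps avoids div-by-zero
            row = []
            for i in range(c):
                s = sum((dists[i] / dists[k]) ** (2.0 / (m - 1)) for k in range(c))
                row.append(1.0 / s)
            u.append(row)
        # Center update: membership-weighted mean of all points
        centers = [
            sum((u[j][i] ** m) * points[j] for j in range(len(points)))
            / sum(u[j][i] ** m for j in range(len(points)))
            for i in range(c)
        ]
    return centers, u
```

Points near a center receive memberships close to 1 for that cluster and close to 0 elsewhere; points between clusters get intermediate degrees, which a cardinality constraint such as CCFCM's can then further regularize.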