Pub Date: 2025-08-01 | DOI: 10.1016/j.dcan.2024.10.020
Mohamed S. Sayed, Hatem M. Zakaria, Abdelhady M. Abdelhady
A Mixed Numerology OFDM (MN-OFDM) system is essential in 6G and beyond. However, it encounters challenges due to Inter-Numerology Interference (INI). The upcoming 6G technology aims to support innovative applications with high data rates, low latency, and reliability. Therefore, effective handling of INI is crucial to meet the diverse requirements of these applications. To address INI in MN-OFDM systems, this paper proposes a User-Based Numerology and Waveform (UBNW) approach that uses various OFDM-based waveforms and their parameters to mitigate INI. By assigning a specific waveform and numerology to each user, UBNW mitigates INI, optimizes service characteristics, and addresses user demands efficiently. The required Guard Bands (GB), expressed as a ratio of user bandwidth, vary significantly across different waveforms at an SIR of 25 dB. For instance, OFDM-FOFDM needs only 2.5%, while OFDM-UFMC, OFDM-WOLA, and conventional OFDM require 7.5%, 24%, and 40%, respectively. The time-frequency efficiency also varies between the waveforms. FOFDM achieves 85.6%, UFMC achieves 81.6%, WOLA achieves 70.7%, and conventional OFDM achieves 66.8%. The simulation results demonstrate that the UBNW approach not only effectively mitigates INI but also enhances system flexibility and time-frequency efficiency while simultaneously reducing the required GB.
Title: Enhancing flexibility and system performance in 6G and beyond: A user-based numerology and waveform approach. Digital Communications and Networks, vol. 11, no. 4, pp. 975–991.
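The guard-band and efficiency figures quoted in the abstract can be turned into a toy selection rule in the spirit of UBNW: assign each user the waveform with the smallest guard-band overhead that meets the 25 dB SIR target. The function names and selection logic below are illustrative assumptions, not the paper's algorithm; only the numeric ratios come from the abstract.

```python
# GB required at SIR = 25 dB, as a fraction of user bandwidth (from the abstract)
GB_RATIO = {"FOFDM": 0.025, "UFMC": 0.075, "WOLA": 0.24, "OFDM": 0.40}

# Time-frequency efficiency reported for each waveform
TF_EFFICIENCY = {"FOFDM": 0.856, "UFMC": 0.816, "WOLA": 0.707, "OFDM": 0.668}

def select_waveform(candidates):
    """Pick the candidate waveform with the lowest guard-band overhead."""
    return min(candidates, key=lambda w: GB_RATIO[w])

def effective_bandwidth(user_bw_hz, waveform):
    """User bandwidth left for data after reserving the guard band."""
    return user_bw_hz * (1.0 - GB_RATIO[waveform])

best = select_waveform(["OFDM", "WOLA", "UFMC", "FOFDM"])
print(best)                                         # FOFDM
print(round(effective_bandwidth(1_000_000, best)))  # 975000 Hz usable
```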
Pub Date: 2025-08-01 | DOI: 10.1016/j.dcan.2024.11.007
Lijun Wang, Huajie Hao, Chun Wang, Xianzhou Han
Efficient and safe information exchange between vehicles can reduce the probability of road accidents, thereby improving the driving experience in Vehicular Ad Hoc Networks (VANETs). This paper proposes a group management algorithm with trust and mobility evaluation to address the enormous pressure placed on the VANET topology by high-speed vehicle movement and dynamic changes in travel direction. The algorithm mines the fusion trust between vehicles from historical interaction data and, combined with fusion mobility, selects center members and maintains group member information. Furthermore, an encryption protocol based on bilinear pairing is designed to solve the problem of key management and update when the group structure changes rapidly, ensuring the safe forwarding of messages within and between groups. Numerical analysis shows that the proposed algorithm ensures group stability and improves performance metrics such as average message delivery rate and interaction delay.
Title: VANETs group message secure forwarding with trust evaluation. Digital Communications and Networks, vol. 11, no. 4, pp. 1150–1157.
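As a rough illustration of how a "fusion trust" score might be mined from historical interaction data, the sketch below blends a direct success ratio with neighbor recommendations. The weighting factor, the neutral prior, and the function names are assumptions for the example; the paper's actual trust model is not reproduced here.

```python
def direct_trust(history):
    """history: list of booleans, True = successful past message exchange."""
    if not history:
        return 0.5  # neutral prior when no interactions exist yet
    return sum(history) / len(history)

def fusion_trust(history, recommendations, alpha=0.7):
    """Blend direct experience with neighbor recommendations (weight alpha)."""
    rec = sum(recommendations) / len(recommendations) if recommendations else 0.5
    return alpha * direct_trust(history) + (1 - alpha) * rec

# Three successful exchanges out of four, plus two neighbor opinions
t = fusion_trust([True, True, True, False], [0.8, 0.6])
print(round(t, 3))  # 0.735
```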
Pub Date: 2025-08-01 | DOI: 10.1016/j.dcan.2024.12.007
Somia Sahraoui, Abdelmalik Bachir
The Internet of Things (IoT) has gained substantial attention in both academic research and real-world applications. The proliferation of interconnected devices across various domains promises to deliver intelligent and advanced services. However, this rapid expansion also heightens the vulnerability of the IoT ecosystem to security threats. Consequently, innovative solutions capable of effectively mitigating risks while accommodating the unique constraints of IoT environments are urgently needed. Recently, the convergence of Blockchain technology and IoT has introduced a decentralized and robust framework for securing data and interactions, commonly referred to as the Internet of Blockchained Things (IoBT). Extensive research efforts have been devoted to adapting Blockchain technology to meet the specific requirements of IoT deployments. Within this context, consensus algorithms play a critical role in assessing the feasibility of integrating Blockchain into IoT ecosystems. The adoption of efficient and lightweight consensus mechanisms for block validation has become increasingly essential. This paper presents a comprehensive examination of lightweight, constraint-aware consensus algorithms tailored for IoBT. The study categorizes these consensus mechanisms based on their core operations, the security of the block validation process, the incorporation of AI techniques, and the specific applications they are designed to support.
Title: Lightweight consensus mechanisms in the Internet of Blockchained Things: Thorough analysis and research directions. Digital Communications and Networks, vol. 11, no. 4, pp. 1246–1261.
Pub Date: 2025-08-01 | DOI: 10.1016/j.dcan.2024.10.011
Pengzhan Jiang, Long Shi, Bin Cao, Taotao Wang, Baofeng Ji, Jun Li
Traditional Internet of Things (IoT) architectures that rely on centralized servers for data management and decision-making are vulnerable to security threats and privacy leakage. To address this issue, blockchain has been advocated for decentralized data management in a tamper-resistant, traceable, and transparent manner. However, a major issue hindering the integration of blockchain and IoT is that resource-constrained IoT devices can hardly perform computation-intensive blockchain consensuses such as Proof-of-Work (PoW). Furthermore, the incentive mechanism of PoW pushes lightweight IoT nodes to aggregate their computing power to increase the probability of successful block generation. This eventually leads to the formation of computing power alliances and significantly compromises the decentralization and security of BlockChain-aided IoT (BC-IoT) networks. To cope with these issues, we propose a lightweight consensus protocol for BC-IoT, called Proof-of-Trusted-Work (PoTW). The goal of the proposed consensus is to disincentivize the centralization of computing power and encourage the independent participation of lightweight IoT nodes in blockchain consensus. First, we put forth an on-chain reputation evaluation rule and a reputation chain for PoTW to enable the verifiability and traceability of nodes' reputations based on their contributions of computing power to the blockchain consensus, and we incorporate multi-level block generation difficulty as a reward for nodes to accumulate reputations. Second, we model the block generation process of PoTW and analyze the block throughput using a continuous-time Markov chain. Additionally, we define and optimize the relative throughput gain to quantify and maximize the capability of PoTW to suppress computing power centralization (i.e., centralization suppression). Furthermore, we investigate the impact of the alliance's computing power and the levels of block generation difficulty on the centralization suppression capability of PoTW. Finally, simulation results confirm the analytical block-throughput results. In particular, the results show that PoTW effectively reduces the block generation proportion of the computing power alliance compared with PoW, while improving that of individual lightweight nodes. This indicates that PoTW can suppress the centralization of computing power to a certain degree. Moreover, as the levels of block generation difficulty in PoTW increase, its centralization suppression capability strengthens.
Title: Proof-of-trusted-work: A lightweight blockchain consensus for decentralized IoT networks. Digital Communications and Networks, vol. 11, no. 4, pp. 1055–1066.
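The reputation-dependent, multi-level difficulty idea can be sketched as follows: higher on-chain reputation earns an easier block-generation target, which rewards independent lightweight participation. The difficulty levels, thresholds, and decay factor are illustrative assumptions; PoTW's exact evaluation rule is defined in the paper.

```python
# Hypothetical difficulty ladder: target number of leading-zero bits per level
DIFFICULTY_LEVELS = [28, 24, 20, 16]

def difficulty_for(reputation):
    """Map a reputation score in [0, 1] to a block-generation difficulty."""
    if reputation < 0.25:
        return DIFFICULTY_LEVELS[0]   # hardest for low-reputation nodes
    if reputation < 0.5:
        return DIFFICULTY_LEVELS[1]
    if reputation < 0.75:
        return DIFFICULTY_LEVELS[2]
    return DIFFICULTY_LEVELS[3]       # easiest for high-reputation nodes

def update_reputation(rep, contributed_work, decay=0.95):
    """Accumulate reputation from contributed computing power, with decay
    so that stale reputation fades; capped at 1.0."""
    return min(1.0, rep * decay + contributed_work)

print(difficulty_for(0.1))   # 28
print(difficulty_for(0.9))   # 16
```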
Pub Date: 2025-08-01 | DOI: 10.1016/j.dcan.2024.10.016
Xiangdong Huang, Yimin Wang, Yanping Li, Xiaolei Wang
Because they neglect the retrieval of communication parameters (the symbol rate, the symbol timing offset, and the carrier frequency), existing non-cooperative communication mode recognizers suffer from degraded generality and severe difficulty in distinguishing a large number of modulation modes. To overcome these drawbacks, this paper proposes an efficient communication mode recognizer consisting of communication parameter estimation, constellation diagram retrieval, and a classification network. In particular, we define a 2-D symbol synchronization metric to retrieve both the symbol rate and the symbol timing offset, and devise a constellation dispersity annealing procedure to correct the carrier frequency accurately. Owing to the accurate estimation of these crucial parameters, high-regularity constellation maps can be retrieved, which simplifies the subsequent classification work. Numerical results show that the proposed recognizer achieves higher classification accuracy, stronger anti-noise robustness, and broader applicability in distinguishing multiple modulation types, giving the proposed scheme considerable potential in non-cooperative scenarios.
Title: Efficient modulation mode recognition based on joint communication parameter estimation in non-cooperative scenarios. Digital Communications and Networks, vol. 11, no. 4, pp. 1080–1090.
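The constellation-dispersity idea can be illustrated with a toy QPSK example: trial carrier-frequency offsets are removed, and the one that makes the received symbols cluster most tightly around ideal constellation points is kept. The plain grid search below stands in for the paper's annealing procedure, and all names and parameters are illustrative.

```python
import cmath

# Ideal QPSK reference points on the unit circle
QPSK = [cmath.exp(1j * (cmath.pi / 4 + k * cmath.pi / 2)) for k in range(4)]

def dispersity(symbols):
    """Mean distance from each symbol to its nearest ideal QPSK point."""
    return sum(min(abs(s - p) for p in QPSK) for s in symbols) / len(symbols)

def correct_cfo(symbols, candidates):
    """Pick the frequency-offset candidate giving the tightest constellation."""
    def derotated(f):
        return [s * cmath.exp(-2j * cmath.pi * f * n) for n, s in enumerate(symbols)]
    return min(candidates, key=lambda f: dispersity(derotated(f)))

# Noise-free symbols with a residual CFO of 0.01 cycles/symbol
true_cfo = 0.01
rx = [QPSK[n % 4] * cmath.exp(2j * cmath.pi * true_cfo * n) for n in range(64)]
est = correct_cfo(rx, [k / 1000 for k in range(-20, 21)])
print(est)  # 0.01
```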
Pub Date: 2025-08-01 | DOI: 10.1016/j.dcan.2024.11.001
Wenjian Hu, Yao Yu, Xin Hao, Phee Lep Yeoh, Lei Guo, Yonghui Li
We propose a Cross-Chain Mapping Blockchain (CCMB) for scalable data management in massive Internet of Things (IoT) networks. Specifically, CCMB aims to improve the scalability of securely storing, tracing, and transmitting IoT behavior and reputation data based on our proposed cross-mapped Behavior Chain (BChain) and Reputation Chain (RChain). To improve off-chain IoT data storage scalability, we show that our lightweight CCMB architecture efficiently utilizes available fog-cloud resources. The scalability of on-chain IoT data tracing is enhanced using our Mapping Smart Contract (MSC) and cross-chain mapping design to perform rapid Reputation-to-Behavior (R2B) traceability queries between BChain and RChain blocks. To maximize off-chain to on-chain throughput, we optimize the CCMB block settings and producers based on a general Poisson Point Process (PPP) network model. The constrained optimization problem is formulated as a Markov Decision Process (MDP), and solved using a dual-network Deep Reinforcement Learning (DRL) algorithm. Simulation results validate CCMB's scalability advantages in storage, traceability, and throughput. In specific massive IoT scenarios, CCMB can reduce the storage footprint by 50% and traceability query time by 90%, while improving system throughput by 55% compared to existing benchmarks.
Title: Cross-chain mapping blockchain: Scalable data management in massive IoT networks. Digital Communications and Networks, vol. 11, no. 4, pp. 1125–1140.
Pub Date: 2025-08-01 | DOI: 10.1016/j.dcan.2024.10.017
Ling Xia Liao, Changqing Zhao, Jian Wang, Roy Xiaorong Lai, Steve Drew
Accurate early classification of elephant flows (elephants) is important for network management and resource optimization. Elephant models, mainly based on the byte count of flows, can always achieve high accuracy, but not in a time-efficient manner. The time efficiency becomes even worse when the flows to be classified are sampled by flow entry timeout over Software-Defined Networks (SDNs) to achieve a better resource efficiency. This paper addresses this situation by combining co-training and Reinforcement Learning (RL) to enable a closed-loop classification approach that divides the entire classification process into episodes, each involving two elephant models. One predicts elephants and is retrained by a selection of flows automatically labeled online by the other. RL is used to formulate a reward function that estimates the values of the possible actions based on the current states of both models and further adjusts the ratio of flows to be labeled in each phase. Extensive evaluation based on real traffic traces shows that the proposed approach can stably predict elephants using the packets received in the first 10% of their lifetime with an accuracy of over 80%, and using only about 10% more control channel bandwidth than the baseline over the evolved SDNs.
Title: Accurate and efficient elephant-flow classification based on co-trained models in evolved software-defined networks. Digital Communications and Networks, vol. 11, no. 4, pp. 1091–1102.
Pub Date: 2025-08-01 | DOI: 10.1016/j.dcan.2024.10.015
Yan Zhen, Litianyi Tao, Dapeng Wu, Tong Tang, Ruyan Wang
To address the surge in mobile data traffic in 5G networks, this paper combines massive multiple-input multiple-output techniques with Ultra-Dense Networks (UDNs) and focuses on the resulting challenge of increased energy consumption. A base station control algorithm based on Multi-Agent Proximal Policy Optimization (MAPPO) is designed. In the constructed 5G UDN model, each base station is treated as an agent, and the MAPPO algorithm enables inter-base-station collaboration and interference management to optimize network performance. To reduce the extra power consumption caused by frequent sleep-mode switching of base stations, a sleep-mode switching decision algorithm is proposed. The algorithm avoids unnecessary power consumption by evaluating network state similarity and intelligently adjusting the agents' action strategies. Simulation results show that, while guaranteeing users' quality of service, the proposed algorithm reduces power consumption by 24.61% compared to the no-sleep strategy and by a further 5.36% compared to the traditional MAPPO algorithm.
Title: Energy-saving control strategy for ultra-dense network base stations based on multi-agent reinforcement learning. Digital Communications and Networks, vol. 11, no. 4, pp. 1007–1017.
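The state-similarity test that gates sleep-mode switching can be sketched as follows: a base station only re-evaluates its sleep action when the current network state differs enough from the state at the last decision, avoiding power wasted on frequent mode switches. The cosine measure and the threshold are assumptions for illustration, not the paper's exact criterion.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length state vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def should_reevaluate(prev_state, curr_state, threshold=0.98):
    """Skip a new sleep decision when states are nearly identical."""
    return cosine_similarity(prev_state, curr_state) < threshold

prev = [0.6, 0.3, 0.1]  # e.g. per-cell load features at the last decision
print(should_reevaluate(prev, [0.6, 0.3, 0.1]))  # False: reuse last action
print(should_reevaluate(prev, [0.1, 0.3, 0.6]))  # True: state changed
```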
Pub Date: 2025-08-01 | DOI: 10.1016/j.dcan.2024.10.010
Jiantao Xin, Wei Xu, Bin Cao, Taotao Wang, Shengli Zhang
With increasing density and heterogeneity in unlicensed wireless networks, traditional MAC protocols, such as Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) in Wi-Fi networks, are experiencing performance degradation. This is manifested in increased collisions and extended backoff times, leading to diminished spectrum efficiency and protocol coordination. Addressing these issues, this paper proposes a deep-learning-based MAC paradigm, dubbed DL-MAC, which leverages spectrum data readily available from energy detection modules in wireless devices to achieve the MAC functionalities of channel access, rate adaptation, and channel switch. First, we utilize DL-MAC to realize a joint design of channel access and rate adaptation. Subsequently, we integrate the capability of channel switching into DL-MAC, enhancing its functionality from single-channel to multi-channel operations. Specifically, the DL-MAC protocol incorporates a Deep Neural Network (DNN) for channel selection and a Recurrent Neural Network (RNN) for the joint design of channel access and rate adaptation. We conducted real-world data collection within the 2.4 GHz frequency band to validate the effectiveness of DL-MAC. Experimental results demonstrate that DL-MAC exhibits significantly superior performance compared to traditional algorithms in both single and multi-channel environments, and also outperforms single-function designs. Additionally, the performance of DL-MAC remains robust, unaffected by channel switch overheads within the evaluation range.
{"title":"A deep-learning-based MAC for integrating channel access, rate adaptation, and channel switch","authors":"Jiantao Xin , Wei Xu , Bin Cao , Taotao Wang , Shengli Zhang","doi":"10.1016/j.dcan.2024.10.010","DOIUrl":"10.1016/j.dcan.2024.10.010","url":null,"abstract":"<div><div>With increasing density and heterogeneity in unlicensed wireless networks, traditional MAC protocols, such as Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) in Wi-Fi networks, are experiencing performance degradation. This is manifested in increased collisions and extended backoff times, leading to diminished spectrum efficiency and protocol coordination. Addressing these issues, this paper proposes a deep-learning-based MAC paradigm, dubbed DL-MAC, which leverages spectrum data readily available from energy detection modules in wireless devices to achieve the MAC functionalities of channel access, rate adaptation, and channel switch. First, we utilize DL-MAC to realize a joint design of channel access and rate adaptation. Subsequently, we integrate the capability of channel switching into DL-MAC, enhancing its functionality from single-channel to multi-channel operations. Specifically, the DL-MAC protocol incorporates a Deep Neural Network (DNN) for channel selection and a Recurrent Neural Network (RNN) for the joint design of channel access and rate adaptation. We conducted real-world data collection within the 2.4 GHz frequency band to validate the effectiveness of DL-MAC. Experimental results demonstrate that DL-MAC exhibits significantly superior performance compared to traditional algorithms in both single and multi-channel environments, and also outperforms single-function designs. 
Additionally, the performance of DL-MAC remains robust, unaffected by channel switch overheads within the evaluation range.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"11 4","pages":"Pages 1042-1054"},"PeriodicalIF":7.5,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144926841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-08-01DOI: 10.1016/j.dcan.2024.11.015
Chiya Zhang , Qinggeng Huang , Chunlong He , Gaojie Chen , Xingquan Li
Reconfigurable Intelligent Surface (RIS) is regarded as a cutting-edge technology for future wireless communication networks, offering improved spectral efficiency and reduced energy consumption. This paper proposes an architecture that combines RIS with Generalized Spatial Modulation (GSM) and then presents a Multi-Residual Deep Neural Network (MR-DNN) scheme, in which the active antennas and their transmitted constellation symbols are detected by sub-DNNs in the detection block. Simulation results demonstrate that the proposed MR-DNN detection algorithm achieves a considerably lower Bit Error Rate (BER) than the traditional Zero-Forcing (ZF) and Minimum Mean Squared Error (MMSE) detection algorithms. Moreover, the MR-DNN detection algorithm has lower time complexity than the traditional detection algorithms.
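The ZF and MMSE baselines the abstract compares against can be sketched in a few lines of NumPy. This is an illustrative toy only: the 4x4 channel, QPSK alphabet, and noise level are assumptions for demonstration, not the paper's RIS-assisted GSM setup or its MR-DNN detector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x4 MIMO channel with QPSK symbols (illustrative only).
Nt, Nr = 4, 4
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = qpsk[rng.integers(0, 4, Nt)]

sigma2 = 0.01  # noise variance
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))
y = H @ x + n

# Zero-Forcing: invert the channel via the pseudo-inverse (amplifies noise
# when H is ill-conditioned).
x_zf = np.linalg.pinv(H) @ y

# MMSE: regularized inverse that balances interference against noise.
W_mmse = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(Nt)) @ H.conj().T
x_mmse = W_mmse @ y

def demap(x_hat):
    """Hard decision: map each estimate to the nearest QPSK point."""
    return qpsk[np.argmin(np.abs(x_hat[:, None] - qpsk[None, :]), axis=1)]
```

As sigma2 goes to zero the MMSE equalizer collapses to the ZF pseudo-inverse; learned detectors such as the proposed MR-DNN aim to beat both without an explicit matrix inversion per channel realization.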
{"title":"Generalized spatial modulation detector assisted by reconfigurable intelligent surface based on deep learning","authors":"Chiya Zhang , Qinggeng Huang , Chunlong He , Gaojie Chen , Xingquan Li","doi":"10.1016/j.dcan.2024.11.015","DOIUrl":"10.1016/j.dcan.2024.11.015","url":null,"abstract":"<div><div>Reconfigurable Intelligent Surface (RIS) is regarded as a cutting-edge technology for the development of future wireless communication networks with improved frequency efficiency and reduced energy consumption. This paper proposes an architecture by combining RIS with Generalized Spatial Modulation (GSM) and then presents a Multi-Residual Deep Neural Network (MR-DNN) scheme, where the active antennas and their transmitted constellation symbols are detected by sub-DNNs in the detection block. Simulation results demonstrate that the proposed MR-DNN detection algorithm performs considerably better than the traditional Zero-Forcing (ZF) and the Minimum Mean Squared Error (MMSE) detection algorithms in terms of Bit Error Rate (BER). Moreover, the MR-DNN detection algorithm has less time complexity than the traditional detection algorithms.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"11 4","pages":"Pages 1173-1180"},"PeriodicalIF":7.5,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144926849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}