Pub Date: 2026-01-01 | DOI: 10.1016/j.dcan.2025.05.009
Guoqiang Zhang, Qiwei Hu, Yu Zhang, Tao Jiang
The developing Sixth-Generation (6G) network aims to establish seamless global connectivity for billions of humans, machines, and devices. However, the rich digital services and explosive heterogeneous connections between entities in 6G networks not only increase the complexity of digital identity management but also raise serious concerns about the security and privacy of user identities. In this paper, we design a user-centric identity management scheme that returns sole control to users themselves and achieves identity sovereignty for 6G networks. Specifically, we propose a blockchain-based Identity Management (IDM) architecture for 6G networks, which provides a practical method for securing digital identity management. Subsequently, we develop a fully privacy-preserving identity attribute management scheme that uses zero-knowledge proofs to protect privacy-sensitive identity attributes. In particular, the scheme provides an identity attribute hiding and verification protocol that supports users in obtaining and applying their identity attributes without revealing the concrete data. Finally, we analyze the security of the proposed architecture and implement a prototype system to evaluate its performance. The results demonstrate that our architecture ensures effective user digital identity management in 6G networks.
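The paper's zero-knowledge protocol is not reproduced here; as a minimal sketch of the commit-then-verify pattern such schemes build on (a plain hash commitment, which hides the attribute but is not by itself zero-knowledge; all names are illustrative):

```python
import hashlib
import secrets

def commit(attribute: str) -> tuple[str, str]:
    """Bind to an attribute value without revealing it: publish the digest,
    keep the salt private."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + attribute).encode()).hexdigest()
    return digest, salt

def verify(digest: str, salt: str, attribute: str) -> bool:
    """A verifier checks a later opening of the commitment."""
    return hashlib.sha256((salt + attribute).encode()).hexdigest() == digest

digest, salt = commit("age>=18")
assert verify(digest, salt, "age>=18")      # correct opening accepted
assert not verify(digest, salt, "age>=21")  # any other value rejected
```

A real deployment would replace the opening step with a zero-knowledge proof so the attribute value never has to be revealed at all, which is the property the paper's protocol targets.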
"A blockchain-based user-centric identity management toward 6G networks," Digital Communications and Networks, vol. 12, no. 1, pp. 1-10.
Pub Date: 2025-12-01 | DOI: 10.1016/j.dcan.2024.03.006
Xiaoqin Song, Quan Chen, Shumo Wang, Tiecheng Song
Due to the dynamic nature of service requests and the uneven distribution of services in the Internet of Vehicles (IoV), Multi-access Edge Computing (MEC) networks with pre-installed servers are often susceptible to insufficient computing power at certain times or in certain areas. In addition, Vehicular Users (VUs) need to share their observations for centralized neural network training, resulting in additional communication overhead. In this paper, we present a hybrid MEC server architecture, where fixed RoadSide Units (RSUs) and Mobile Edge Servers (MESs) cooperate to provide computation offloading services to VUs. We propose a distributed federated learning and Deep Reinforcement Learning (DRL) based algorithm, namely Federated Dueling Double Deep Q-Network (FD3QN), with the objective of minimizing the weighted sum of service latency and energy consumption. Horizontal federated learning is incorporated into the Dueling Double Deep Q-Network (D3QN) to allocate cross-domain resources after the offload decision process. A client-server framework with federated aggregation is used to maintain the global model. The proposed FD3QN algorithm can jointly optimize power, sub-band, and computational resources. Simulation results show that the proposed algorithm outperforms baselines in terms of system cost and exhibits better robustness in uncertain IoV environments.
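FD3QN's full training loop is beyond the scope of an abstract, but the federated aggregation step it relies on (FedAvg-style averaging weighted by local data size; a sketch with hypothetical shapes, not the authors' implementation) can be written as:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg step: aggregate per-client weight tensors, weighted by each
    client's local data size, layer by layer."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Two vehicular clients, each holding two weight tensors (e.g., D3QN layers).
w_a = [np.ones((2, 2)), np.zeros(3)]
w_b = [np.zeros((2, 2)), np.ones(3)]
global_w = federated_average([w_a, w_b], client_sizes=[1, 3])
```

In the paper's setting the averaged model is the global D3QN maintained by the server, so clients share only weights, not their raw observations.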
"Cross-domain resources optimization for hybrid edge computing networks: Federated DRL approach," Digital Communications and Networks, vol. 11, no. 6, pp. 1797-1808.
Pub Date: 2025-12-01 | DOI: 10.1016/j.dcan.2025.05.011
Kaiyue Luo, Yumei Wang, Yu Liu, Jiake Li, Jishiyu Ding, Kewu Sun
Metaverse, envisioned as the next evolution of the Internet, is expected to evolve into an innovative medium advancing information civilization. Its core characteristics, including ubiquity, seamlessness, immersion, interoperability and metaspatiotemporality, are catalyzing the development of multiple technologies and fostering a convergence between the physical and virtual worlds. Despite its potential, the critical concept of symbiosis, which involves the synchronous generation and management of virtuality from reality and serves as the cornerstone of this convergence, is often overlooked. Additionally, cumbersome service designs, stemming from the intricate interplay of various technologies and inefficient resource utilization, are impeding an ideal Metaverse ecosystem. To address these challenges, we propose a bi-model Parallel Symbiotic Metaverse (PSM) system, engineered with a Cybertwin-enabled 6G framework where Cybertwins mirror Sensing Devices (SDs) and serve as autonomous bridging agents. Based on this framework, the system is structured into two models. In the queue model, SDs capture environmental data that Cybertwins then coordinate and schedule. In the service model, Cybertwins manage service requests and collaborate with SDs to make responsive decisions. We incorporate two algorithms to address resource scheduling and virtual service responses, showcasing the synergistic role of Cybertwins. Moreover, our PSM system advocates for the participation of SDs from collaborators, enhancing performance while reducing operational costs for the Virtual Service Operator (VSO). Finally, we comparatively analyze the efficiency and complexity of the proposed algorithms, and demonstrate the efficacy of the PSM system across multiple performance indicators. The results indicate our system can be deployed cost-effectively with Cybertwin-enabled 6G.
"Towards parallel Metaverse: Symbiosis of physical and virtual worlds based on Cybertwin-enabled 6G," Digital Communications and Networks, vol. 11, no. 6, pp. 1843-1863.
Pub Date: 2025-12-01 | DOI: 10.1016/j.dcan.2025.06.013
Yasheng Jin, Hong Ren, Cunhua Pan, Zhiyuan Yu, Ruisong Weng, Boshi Wang, Gui Zhou, Yongchao He, Maged Elkashlan
In this paper, we investigate a reconfigurable intelligent surface-aided Integrated Sensing And Communication (ISAC) system. Our objective is to maximize the achievable sum rate of the multi-antenna communication users through joint active and passive beamforming. Specifically, the weighted minimum mean-square error method is first used to reformulate the original problem into an equivalent one. Then, we utilize an alternating optimization algorithm to decouple the optimization variables and decompose this challenging problem into two subproblems. Given the reflecting coefficients, a penalty-based algorithm is utilized to deal with the non-convex radar Signal-to-Noise Ratio (SNR) constraints. For a given beamforming matrix of the base station, we apply majorization-minimization to transform the problem into a Quadratically Constrained Quadratic Programming (QCQP) problem, which is ultimately solved using a Semi-Definite Relaxation (SDR) based algorithm. Simulation results illustrate the advantage of deploying a reconfigurable intelligent surface in the considered multi-user Multiple-Input Multiple-Output (MIMO) ISAC system.
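The abstract's pipeline starts from the sum-rate objective; in a standard MU-MIMO form (notation assumed here, not taken from the paper), the per-user rate and the WMMSE surrogate it is reformulated into are:

```latex
% Achievable rate of user k, with J_k the interference-plus-noise covariance:
R_k \;=\; \log_2 \det\!\Big( \mathbf{I} \;+\; \mathbf{H}_k \mathbf{W}_k \mathbf{W}_k^{\mathsf H} \mathbf{H}_k^{\mathsf H} \mathbf{J}_k^{-1} \Big)

% WMMSE reformulation: maximizing \sum_k R_k shares its optimal precoders
% with the weighted MSE minimization
\min_{\{\mathbf{U}_k,\mathbf{V}_k\},\,\mathbf{W}} \;\sum_k \Big( \operatorname{Tr}\big(\mathbf{U}_k \mathbf{E}_k(\mathbf{V}_k,\mathbf{W})\big) \;-\; \log\det \mathbf{U}_k \Big)
```

Here \(\mathbf{E}_k\) is the MSE matrix under receive filter \(\mathbf{V}_k\) and weight matrix \(\mathbf{U}_k\); alternating over \(\mathbf{U}_k\), \(\mathbf{V}_k\), and the precoders \(\mathbf{W}\) yields the block structure that the abstract's alternating optimization exploits.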
"Reconfigurable intelligent surface-aided dual-function radar and communication systems with MU-MIMO communication," Digital Communications and Networks, vol. 11, no. 6, pp. 1831-1842.
Low Earth Orbit (LEO) satellites have gained significant attention for their low-latency communication and computing capabilities but face challenges due to high mobility and limited resources. Existing studies integrate edge computing with LEO satellite networks to optimize task offloading; however, they often overlook the impact of frequent topology changes, unstable transmission links, and intermittent satellite visibility, leading to task execution failures and increased latency. To address these issues, this paper proposes a dynamic integrated space-ground computing framework that optimizes task offloading under LEO satellite mobility constraints. We design an adaptive task migration strategy through inter-satellite links when target satellites become inaccessible. To enhance data transmission reliability, we introduce a communication stability constraint based on transmission bit error rate (BER). Additionally, we develop a genetic algorithm (GA)-based task scheduling method that dynamically allocates computing resources while minimizing latency and energy consumption. Our approach jointly considers satellite computing capacity, link stability, and task execution reliability to achieve efficient task offloading. Experimental results demonstrate that the proposed method significantly improves task execution success rates, reduces system overhead, and enhances overall computational efficiency in LEO satellite networks.
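The GA-based scheduler is specified in the paper; a minimal genetic algorithm for the core assignment problem (tasks to satellites, minimizing a combined latency/energy cost; all parameters illustrative) can be sketched as:

```python
import random

def ga_schedule(costs, pop_size=30, generations=60, mut_rate=0.1, seed=0):
    """Minimal GA: assign each task to a satellite, minimizing the total
    cost. costs[t][s] = latency/energy cost of running task t on satellite s."""
    rng = random.Random(seed)
    n_tasks, n_sats = len(costs), len(costs[0])

    def fitness(assign):
        return sum(costs[t][assign[t]] for t in range(n_tasks))

    pop = [[rng.randrange(n_sats) for _ in range(n_tasks)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # elitist selection: keep best half
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)     # one-point crossover
            cut = rng.randrange(1, n_tasks)
            child = p1[:cut] + p2[cut:]
            if rng.random() < mut_rate:       # random reassignment mutation
                child[rng.randrange(n_tasks)] = rng.randrange(n_sats)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Three tasks, two satellites; the cheapest plan is [0, 1, 0] with total cost 3.
plan = ga_schedule([[1, 5], [5, 1], [1, 5]])
```

The paper additionally enforces a BER-based link-stability constraint and migration via inter-satellite links; those would enter this sketch as penalty terms in the fitness function.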
Kongyang Chen, Guomin Liang, Hongfa Zhang, Waixi Liu, Jiaxing Shen, "Resilient task offloading in integrated satellite-terrestrial networks with mobility-induced variability," Digital Communications and Networks, vol. 11, no. 6, pp. 1961-1972. Pub Date: 2025-12-01 | DOI: 10.1016/j.dcan.2025.07.004
In Federated Learning (FL), the distribution of data across different clients degrades the global model's performance during training. Personalized Federated Learning (pFL) can address this problem through global model personalization. Research over the past few years has calibrated differences in weights across the entire model or optimized only individual layers, without considering that different layers of a neural network have different utilities, resulting in lagged model convergence and inadequate personalization on non-IID data. In this paper, we propose model layered optimization for feature extractor and classifier (pFedEC), a novel pFL training framework that personalizes different layers of the model. Our study divides the model layers into the feature extractor and the classifier. We initialize the model's classifiers during model training, while making the local models' feature extractors learn the representation of the global model's feature extractor to correct each client's local training, integrating the utilities of the different layers of the entire model. Our extensive experiments show that pFedEC achieves 92.95% accuracy on CIFAR-10, outperforming existing pFL methods by approximately 1.8%. On CIFAR-100 and Tiny-ImageNet, pFedEC improves accuracy by at least 4.2%, reaching 73.02% and 28.39%, respectively.
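pFedEC's exact procedure is in the paper; the layer-split idea it rests on (share the feature-extractor layers globally, keep the classifier layers personal) can be sketched as, with hypothetical layer keys:

```python
import numpy as np

def aggregate_extractors(clients, extractor_keys):
    """Average only the feature-extractor layers across clients; each client
    keeps its own personalized classifier layers untouched."""
    shared = {k: np.mean([c[k] for c in clients], axis=0) for k in extractor_keys}
    return [{**c, **shared} for c in clients]

# Two clients: "conv" plays the feature extractor, "fc" the personal classifier.
c1 = {"conv": np.array([1.0, 3.0]), "fc": np.array([0.0])}
c2 = {"conv": np.array([3.0, 1.0]), "fc": np.array([9.0])}
new1, new2 = aggregate_extractors([c1, c2], extractor_keys=["conv"])
```

After aggregation both clients hold the same extractor weights but retain their own classifiers, which is the personalization boundary the framework exploits.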
Dawei Xu, Chentao Lu, TianXin Chen, Baokun Zheng, Chuan Zhang, Liehuang Zhu, Jian Zhao, "Model layered optimization with contrastive learning for personalized federated learning," Digital Communications and Networks, vol. 11, no. 6, pp. 1973-1982. Pub Date: 2025-12-01 | DOI: 10.1016/j.dcan.2025.08.011
Pub Date: 2025-12-01 | DOI: 10.1016/j.dcan.2025.08.001
Yilin Ma, Chiya Zhang, Chunlong He, Xingquan Li
As the 6G era approaches, wireless communication faces challenges such as massive user numbers, high mobility, and spectrum resource sharing. Radio maps are crucial for network design, optimization, and management, providing essential channel information. In this paper, we propose an innovative learning framework for Radio Map Estimation (RME) based on cycle-consistent generative adversarial networks. Traditional RME methods are often constrained by model complexity and interpolation accuracy, while learning-based methods require strictly paired datasets, making their practical application difficult. Our method overcomes these limitations by enabling training with unpaired data, efficiently converting local features into radio maps. Our experimental results demonstrate the effectiveness of the proposed method in two scenarios: accurate map data and map data with dynamic errors. To address dynamic interference, we designed a two-stage learning process that uses sparse observations to correct local details in the radio map, improving the model's accuracy and practicality.
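The property that lets CycleGAN-style frameworks train on unpaired data is the cycle-consistency loss; a toy numeric sketch (scalar "generators" stand in for the networks, which are assumptions of this illustration, not the paper's models):

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """L1 cycle loss: mapping to the radio-map domain and back, F(G(x)),
    should reconstruct x, so unpaired samples can supervise each other."""
    return float(np.mean(np.abs(F(G(x)) - x)))

G = lambda x: 2.0 * x   # toy generator: local features -> radio map
F = lambda y: 0.5 * y   # toy inverse generator: radio map -> features
x = np.array([1.0, -2.0, 3.0])
loss = cycle_consistency_loss(x, G, F)  # exact inverse pair -> loss 0.0
```

Training drives this loss toward zero in both directions, which removes the need for strictly paired feature/radio-map samples.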
"Radio map estimation using a CycleGAN-based learning framework for 6G wireless communication," Digital Communications and Networks, vol. 11, no. 6, pp. 1822-1830.
The Sixth-Generation (6G) communication system promises unprecedented data density and transformative applications across different industries. However, managing heterogeneous data with different distributions in 6G-enabled multi-access edge cloud networks presents challenges for efficient Machine Learning (ML) training and aggregation, often leading to increased energy consumption and reduced model generalization. To solve this problem, this research proposes a Weighted Proximal Policy-based Federated Learning approach integrated with ResNet50 and the Scaled Exponential Linear Unit activation function (WPPFL-RS). The proposed method optimizes the allocation of resources such as CPU and memory by enhancing Cyber-twin technology to estimate the computing capacities of edge clouds. The proposed WPPFL-RS approach significantly minimizes latency and energy consumption, solving complex challenges in 6G-enabled edge computing and ensuring efficient resource utilization and enhanced performance in heterogeneous edge networks. The proposed WPPFL-RS achieves a minimum latency of 8.20 s on 100 tasks, a significant improvement over the baseline Deep Reinforcement Learning (DRL), which recorded 11.39 s. This approach highlights its potential to enhance resource utilization and performance in 6G edge networks.
Sowmya Madhavan, M.G. Aruna, G.P. Ramesh, Abdul Lateef Haroon Phulara Shaik, Dhulipalla Ramya Krishna, "Cybertwin driven resource allocation using optimized proximal policy based federated learning in 6G enabled edge environment," Digital Communications and Networks, vol. 11, no. 6, pp. 1809-1821. Pub Date: 2025-12-01 | DOI: 10.1016/j.dcan.2025.05.015
Pub Date: 2025-12-01 | DOI: 10.1016/j.dcan.2025.05.003
Shilei Tan, Xuesong Wang, Haoquan Zhou, Wei Gong
Internet of Things (IoT) technology provides data acquisition, transmission, and analysis to control rehabilitation robots, encompassing sensor data from the robots as well as lidar signals for trajectory planning (the desired trajectory). In IoT rehabilitation robot systems, managing nonvanishing uncertainties and input quantization is crucial for precise and reliable control performance. These challenges can cause instability and reduced effectiveness, particularly in adaptive networked control. This paper investigates networked control with guaranteed performance for IoT rehabilitation robots under nonvanishing uncertainties and input quantization. First, input quantization is managed via a quantization-aware control design, ensuring stability and minimizing tracking errors, even with discrete control inputs, to avoid chattering. Second, the method handles nonvanishing uncertainties by adjusting control parameters via real-time neural network adaptation, maintaining consistent performance despite persistent disturbances. Third, the control scheme guarantees the desired tracking performance within a specified time, with all signals in the closed-loop system remaining uniformly bounded, offering a robust, reliable solution for IoT rehabilitation robot control.
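As a toy illustration of the input quantization the controller must tolerate (a uniform quantizer with an illustrative step size; not the paper's quantizer design):

```python
def quantize(u: float, step: float = 0.5) -> float:
    """Uniform input quantizer: the continuous control command u is mapped
    to the nearest discrete actuation level."""
    return step * round(u / step)

# A sweep of commands shows the staircase nonlinearity seen by the plant.
levels = [quantize(u) for u in (0.2, 0.26, 0.8, -0.6)]
```

The staircase means small changes in the commanded input can produce no change (or a jump) at the actuator, which is exactly what the quantization-aware design must compensate for without chattering.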
{"title":"Networked control with guaranteed performance for IoT rehabilitation robot under nonvanishing uncertainties and input quantization","authors":"Shilei Tan , Xuesong Wang , Haoquan Zhou, Wei Gong","doi":"10.1016/j.dcan.2025.05.003","DOIUrl":"10.1016/j.dcan.2025.05.003","url":null,"abstract":"<div><div>The Internet of Things (IoT) technology provides data acquisition, transmission, and analysis to control rehabilitation robots, encompassing sensor data from the robots as well as lidar signals for trajectory planning (desired trajectory). In IoT rehabilitation robot systems, managing nonvanishing uncertainties and input quantization is crucial for precise and reliable control performance. These challenges can cause instability and reduced effectiveness, particularly in adaptive networked control. This paper investigates networked control with guaranteed performance for IoT rehabilitation robots under nonvanishing uncertainties and input quantization. First, input quantization is managed via a quantization-aware control design, ensur stability and minimizing tracking errors, even with discrete control inputs, to avoid chattering. Second, the method handles nonvanishing uncertainties by adjusting control parameters via real-time neural network adaptation, maintaining consistent performance despite persistent disturbances. Third, the control scheme guarantees the desired tracking performance within a specified time, with all signals in the closed-loop system remaining uniformly bounded, offering a robust, reliable solution for IoT rehabilitation robot control. 
The simulation verifies the benefits and efficacy of the proposed control strategy.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"11 6","pages":"Pages 1774-1782"},"PeriodicalIF":7.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145842663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
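The abstract above names input quantization as a core difficulty: only discrete control levels reach the plant, yet the tracking error must stay bounded. The sketch below is a minimal illustration of that idea, not the paper's method: the `quantize` step, the integrator plant, the gain `k = 8.0`, and the quantizer step `0.05` are all assumed for the example.

```python
# Hedged sketch of quantization-aware control: a uniform input quantizer
# feeding a proportional tracking controller on an integrator plant.
# The plant model, gain, and quantizer step are illustrative assumptions,
# not the paper's design; the point is that the tracking error stays
# bounded by a quantization-dependent constant instead of diverging.

def quantize(u: float, step: float = 0.05) -> float:
    """Uniform quantizer: only discrete control levels reach the plant."""
    return step * round(u / step)

def simulate(steps: int = 500, dt: float = 0.01) -> float:
    """Track a constant reference x_ref = 1 using only quantized inputs."""
    x, x_ref, k = 0.0, 1.0, 8.0          # state, reference, gain (assumed)
    for _ in range(steps):
        u = quantize(k * (x_ref - x))    # quantization-aware control law
        x += dt * u                      # integrator plant: x' = u
    return abs(x_ref - x)

final_error = simulate()
# final_error is bounded by step / (2 * k): once the commanded input falls
# inside the quantizer dead-zone, the state freezes near the reference.
```

The residual error here is the quantizer dead-zone divided by the gain; the paper's scheme additionally handles nonvanishing uncertainties via neural network adaptation, which this toy loop omits.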
Pub Date : 2025-12-01DOI: 10.1016/j.dcan.2025.06.009
Ying Ouyang, Chungang Yang, Rongqian Fan, Tangyi Li
The Space-Terrestrial Network (STN) aims to deliver comprehensive on-demand network services, addressing the broad and varied needs of Internet of Things (IoT) applications. However, the STN faces new challenges such as service multiplicity, topology dynamicity, and conventional management complexity. This necessitates a flexible and autonomous approach to network resource management to effectively align network services with available resources. Thus, we incorporate the Intent-Driven Network (IDN) into the STN, enabling the execution of multiple missions through automated resource allocation and dynamic network policy optimization. This approach enhances programmability and flexibility, facilitating intelligent network management for real-time control and adaptable service deployment in both traditional and IoT-focused scenarios. Building on previous mechanisms, we develop the intent-driven CoX resource management model, which includes components for coordination intent decomposition, collaboration intent management, and cooperation resource management. We propose an advanced intent verification mechanism and create an intent-driven CoX resource management algorithm leveraging a two-stage deep reinforcement learning method to minimize resource usage and delay costs in cross-domain communications within the STN. Ultimately, we establish an intent-driven CoX prototype to validate the efficacy of this proposed mechanism, which demonstrates improved performance in intent refinement and resource management efficiency.
{"title":"Enabling intent-driven CoX mechanism in space-terrestrial network for multiple mission impossible","authors":"Ying Ouyang, Chungang Yang, Rongqian Fan, Tangyi Li","doi":"10.1016/j.dcan.2025.06.009","DOIUrl":"10.1016/j.dcan.2025.06.009","url":null,"abstract":"<div><div>The Space-Terrestrial Network (STN) aims to deliver comprehensive on-demand network services, addressing the broad and varied needs of Internet of Things (IoT) applications. However, the STN faces new challenges such as service multiplicity, topology dynamicity, and conventional management complexity. This necessitates a flexible and autonomous approach to network resource management to effectively align network services with available resources. Thus, we incorporate the Intent-Driven Network (IDN) into the STN, enabling the execution of multiple missions through automated resource allocation and dynamic network policy optimization. This approach enhances programmability and flexibility, facilitating intelligent network management for real-time control and adaptable service deployment in both traditional and IoT-focused scenarios. Building on previous mechanisms, we develop the intent-driven CoX resource management model, which includes components for coordination intent decomposition, collaboration intent management, and cooperation resource management. We propose an advanced intent verification mechanism and create an intent-driven CoX resource management algorithm leveraging a two-stage deep reinforcement learning method to minimize resource usage and delay costs in cross-domain communications within the STN. 
Ultimately, we establish an intent-driven CoX prototype to validate the efficacy of this proposed mechanism, which demonstrates improved performance in intent refinement and resource management efficiency.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"11 6","pages":"Pages 1762-1773"},"PeriodicalIF":7.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145842711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
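The second abstract describes allocating resources to cross-domain requests so as to minimize combined resource-usage and delay costs, learned by a two-stage deep reinforcement learning method. As a toy stand-in (not the paper's algorithm), the sketch below uses single-step tabular Q-learning over a handful of discrete allocation levels; the `LEVELS` set, the `cost` weights, and the state encoding are all illustrative assumptions.

```python
# Hedged sketch: tabular Q-learning for picking one of a few discrete
# resource levels per request, trading off usage cost against delay cost.
# The environment, cost weights, and single-step episodes are assumptions
# for illustration; the paper uses a two-stage deep RL method instead.
import random

LEVELS = [1, 2, 3, 4]                    # candidate allocations (assumed)

def cost(demand: int, alloc: int) -> float:
    usage = float(alloc)                          # resource-usage cost
    delay = max(demand - alloc, 0) * 2.0          # delay when under-provisioned
    return usage + delay

def train(episodes: int = 5000, seed: int = 0) -> dict:
    rng = random.Random(seed)
    q = {(d, a): 0.0 for d in LEVELS for a in LEVELS}  # Q-table: (demand, action)
    alpha, eps = 0.2, 0.1
    for _ in range(episodes):
        d = rng.choice(LEVELS)                         # random demand state
        if rng.random() < eps:
            a = rng.choice(LEVELS)                     # explore
        else:
            a = min(LEVELS, key=lambda x: q[(d, x)])   # exploit lowest cost
        # single-step episode: the Q target is just the immediate cost
        q[(d, a)] += alpha * (cost(d, a) - q[(d, a)])
    return q

q = train()
policy = {d: min(LEVELS, key=lambda a: q[(d, a)]) for d in LEVELS}
# with these assumed costs, matching allocation to demand is optimal
```

Because over-provisioning costs 1 per unit while under-provisioning costs 2 per unit of unmet demand, the learned policy converges to allocating exactly the demanded level; the real mechanism additionally decomposes intents and coordinates across space and terrestrial domains, which this single-state toy ignores.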