Pub Date: 2026-02-01 | Epub Date: 2025-08-18 | DOI: 10.1016/j.dcan.2025.08.003
ChaoYue Wang, Xian Zhao, Qingyuan Liu, Ting Chen, Tao Liu
Driven by globalization and digitization, the Mobile Industrial Supply Chain Internet of Things (IoT) has gradually developed, utilizing mobile devices and IoT technologies to enable real-time monitoring and efficient responses across various stages. However, with the growing demand for high-frequency data exchange, the Mobile Industrial Supply Chain IoT faces significant challenges in data security, authentication, and privacy protection. This paper proposes a security authentication scheme based on blockchain and group key management, leveraging the decentralized and tamper-resistant features of blockchain, the privacy-preserving authentication method of Zero-Knowledge Proofs (ZKP), and a hierarchical key management mechanism based on binary key trees. This approach aims to enhance the security and scalability of Mobile Industrial Supply Chain IoT. The experimental section simulates scenarios such as dynamic node addition and key updates, evaluating the performance in terms of encryption, decryption, and key management efficiency, thus demonstrating its superiority in multi-party collaborative environments.
Title: A security authentication scheme for mobile industrial IoT supply chains based on blockchain and group key management. Digital Communications and Networks, vol. 12, issue 2, pp. 283-293.
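The hierarchical key management the abstract describes rests on a binary key tree, in which each member holds the keys on its leaf-to-root path and the root key serves as the group key. The following is a minimal LKH-style sketch of that idea, not the paper's exact scheme; the array layout, SHA-256 derivation step, and tree size are illustrative assumptions.

```python
import hashlib
import os

def kdf(data: bytes) -> bytes:
    """Key-derivation step (illustrative; a real scheme would use HKDF or similar)."""
    return hashlib.sha256(data).digest()

class KeyTree:
    """Minimal binary key tree for group key management.

    Members sit at leaves 0..n-1 of a complete binary tree stored in an
    array; node i has children 2i+1 and 2i+2. The root key is the group
    key. When a member joins or leaves, only the keys on its leaf-to-root
    path are refreshed, so an update touches O(log n) keys.
    """
    def __init__(self, n_leaves: int):
        self.n = n_leaves
        self.keys = [os.urandom(32) for _ in range(2 * n_leaves - 1)]

    def _path_to_root(self, leaf_index: int):
        i = self.n - 1 + leaf_index       # array position of the leaf
        while True:
            yield i
            if i == 0:
                return
            i = (i - 1) // 2              # parent in the array layout

    def refresh_path(self, leaf_index: int):
        """Re-key the leaf-to-root path after a join/leave at this leaf."""
        for node in self._path_to_root(leaf_index):
            self.keys[node] = kdf(self.keys[node] + os.urandom(16))

    @property
    def group_key(self) -> bytes:
        return self.keys[0]

tree = KeyTree(n_leaves=8)
old = tree.group_key
tree.refresh_path(3)          # member 3 leaves: re-key its path
assert tree.group_key != old  # the group key has rotated
```

The logarithmic update cost is what makes dynamic node addition scale in the multi-party setting the experiments target: re-keying after a membership change costs 4 key updates for 8 members, and only 21 for a million.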
This paper investigates a downlink millimeter-Wave (mmWave) communication system equipped with multiple cooperative Intelligent Reflecting Surfaces (IRSs), aiming to extend mmWave signal coverage and maximize system throughput. To fully exploit the potential of IRSs within a user-centric framework, this study delves into the joint optimization problem of user multiple association, transmit beamforming, and cooperative passive beamforming. Meanwhile, the impact of IRS locations on user association is analyzed. Given the non-convexity and complexity of the joint optimization problem, a low-complexity optimization algorithm is designed. The algorithm integrates iterative optimization, Lagrangian dual decomposition, and Fractional Programming (FP) techniques. Specifically, the user association problem is optimized using the Lagrangian dual decomposition method, while the joint beamforming is solved via the FP method. Simulation results demonstrate that, compared to traditional methods, the proposed algorithm significantly improves the system sum rate, validating its effectiveness and superiority.
Title: Joint user association and cooperative beamforming for multi-IRSs aided mmWave communication systems. Authors: Qing Xue, Jiajun Mu, Fengsheng Wei, Meng Hua, Qianbin Chen. DOI: 10.1016/j.dcan.2025.11.007. Digital Communications and Networks, vol. 12, issue 2, pp. 252-261. Pub Date: 2026-02-01.
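To make the Lagrangian dual decomposition step concrete: once the association constraints are priced by dual variables, the problem separates per user, and the duals are driven by a projected subgradient. The toy below (my own illustrative numbers, not the paper's system model) shows only that decomposition pattern, with random "rates" standing in for the beamformed link qualities.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_bs = 12, 3
rate = rng.uniform(1.0, 5.0, size=(n_users, n_bs))  # toy achievable rates
cap = np.array([4, 4, 4])        # serving capacity of each BS/IRS cluster
lam = np.zeros(n_bs)             # dual prices on the load constraints
step = 0.1

for _ in range(200):
    # Primal step: with prices fixed, the problem decomposes and each
    # user independently picks the node maximizing (rate - price).
    choice = np.argmax(rate - lam, axis=1)
    load = np.bincount(choice, minlength=n_bs)
    # Dual step: projected subgradient ascent raises the price of any
    # overloaded node and relaxes underloaded ones.
    lam = np.maximum(0.0, lam + step * (load - cap))

print("final loads:", np.bincount(choice, minlength=n_bs), "prices:", lam.round(2))
```

In the paper this primal step would itself contain the FP-based joint beamforming; here it is collapsed to a table lookup purely to expose the dual-decomposition skeleton.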
Pub Date: 2026-02-01 | Epub Date: 2025-11-10 | DOI: 10.1016/j.dcan.2025.11.002
Bo Yang, Zhanglin Zhou, Nanqi Fan, Feng Ke, Jie Tang, Xiu Yin Zhang
Driven by the increasing demand for efficient data transmission, massive Multiple-Input Multiple-Output (MIMO) systems have emerged as a key technology for future communication systems. However, effective utilization of MIMO relies heavily on accurate Channel State Information (CSI) that is fed back to the base station, which poses significant challenges due to the overhead associated with CSI feedback, especially with the increasing number of antennas. To overcome these drawbacks, this paper proposes a Deep Learning (DL) scheme to improve the CSI feedback, presenting a network named CsiDNet, which compresses CSI at the user end and decompresses it at the base station side. In addition, an auxiliary module is designed to restore CSI information under error-prone scenarios, enhancing the robustness of the system. Extensive performance analysis and simulations demonstrate that CsiDNet achieves an improvement of 2.7 dB and 0.1 dB in terms of Normalized Mean Square Error (NMSE) and Square Generalized Cosine Similarity (SGCS) respectively compared to other models, while significantly reducing computational complexity. The auxiliary module further improves the NMSE and SGCS performance by 4 dB and 0.1 dB respectively, reflecting its effectiveness in recovering error-prone CSI components. Overall, our research improves the accuracy and efficiency of CSI feedback while enhancing the system's robustness against real-world transmission challenges.
Title: Deep learning aided CSI feedback optimization with robust error recovery. Digital Communications and Networks, vol. 12, issue 2, pp. 354-363.
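The compress-at-user / decompress-at-base-station pipeline, and the NMSE and cosine-similarity metrics quoted above, can be illustrated with a deliberately simple linear stand-in for CsiDNet: a PCA projection plays the encoder/decoder pair. All dimensions and the synthetic low-rank channel are assumptions for the sketch, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ant, n_samples, r = 32, 200, 8   # antennas, CSI snapshots, feedback length

# Toy correlated CSI: a low-rank structure plus small noise.
basis = rng.standard_normal((n_ant, 6))
H = basis @ rng.standard_normal((6, n_samples)) \
    + 0.05 * rng.standard_normal((n_ant, n_samples))

# Linear "autoencoder" learned from data: top-r principal directions.
U, _, _ = np.linalg.svd(H, full_matrices=False)
encode = U[:, :r].T            # user side: n_ant values -> r feedback values
decode = U[:, :r]              # base-station side: r values -> n_ant values

H_hat = decode @ (encode @ H)

# NMSE in dB, as reported in the abstract.
nmse = np.sum((H - H_hat) ** 2) / np.sum(H ** 2)
nmse_db = 10 * np.log10(nmse)

# Per-snapshot generalized cosine similarity, averaged (SGCS-like).
cos = np.abs(np.sum(H * H_hat, axis=0)) / (
    np.linalg.norm(H, axis=0) * np.linalg.norm(H_hat, axis=0))
print(f"NMSE = {nmse_db:.1f} dB, mean cosine similarity = {cos.mean():.3f}")
```

The point of the sketch is the interface, not the model: CsiDNet replaces the two linear maps with learned nonlinear networks, and its auxiliary module additionally repairs feedback corrupted in transit.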
Pub Date: 2026-02-01 | Epub Date: 2025-12-03 | DOI: 10.1016/j.dcan.2025.11.006
Hao Liu, Xiaonyu Hu, Ran Wang, Jie Hao, Qiang Wu, Hongke Zhang
The explosive proliferation of Large Language Models (LLMs) imposes significant energy and operational burdens on Geographically Distributed Data Centers (GDDCs), demanding an efficient mechanism for LLM task scheduling. While prior geo-distributed scheduling methods reduce cost and carbon emissions by exploiting regional heterogeneity, they largely overlook model and data reuse opportunities and the uncertainty of LLM execution times. In this paper, we introduce GCOS, to the best of our knowledge, the first green scheduling framework that incorporates a dual-cache system for both data and models while jointly optimizing task assignment and cache migration. We first propose a dual-cache mechanism that decouples model and data caching to enable fine-grained reuse and minimize redundant transmissions. We then propose the Multi-Agent Cache-aware Cooperative Scheduling (MACCS) algorithm, which leverages reinforcement learning to optimize task placement with a focus on minimizing both carbon emissions and cost. Additionally, we design a lightweight execution time predictor, DiPTree, to address the high variability in task execution times. Extensive experiments on real-world datasets demonstrate that GCOS reduces overall cost by up to 92.6% and carbon emissions by 90.3%, significantly outperforming existing baselines.
Title: Green scheduling for LLM workloads with model and data reuse across geo-distributed data centers. Digital Communications and Networks, vol. 12, issue 2, pp. 236-251.
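The cache-aware placement trade-off at the heart of GCOS can be shown with a greedy stand-in for MACCS: a task goes to the data center with the lowest weighted carbon + price + model-transfer cost, where the transfer term vanishes on a cache hit. The data-center names, cost weights, and numbers below are all hypothetical; the real algorithm learns this policy with multi-agent reinforcement learning rather than computing it greedily.

```python
# Hypothetical regional data centers with carbon intensity, energy price,
# and a set of already-cached models (illustrative values).
dcs = {
    "dc-a": {"carbon": 0.9, "price": 1.0, "cached": {"llama"}},
    "dc-b": {"carbon": 0.3, "price": 1.4, "cached": set()},
    "dc-c": {"carbon": 0.5, "price": 0.8, "cached": {"llama", "mistral"}},
}
MODEL_XFER = 5.0          # one-off cost of pulling an uncached model
W_CARBON, W_PRICE = 1.0, 1.0

def place(task_model: str, energy: float) -> str:
    """Greedily pick the data center with the lowest combined cost."""
    def cost(name: str) -> float:
        dc = dcs[name]
        xfer = 0.0 if task_model in dc["cached"] else MODEL_XFER
        return (W_CARBON * dc["carbon"] + W_PRICE * dc["price"]) * energy + xfer
    best = min(dcs, key=cost)
    dcs[best]["cached"].add(task_model)   # the model is now cached for reuse
    return best

first = place("mistral", energy=2.0)
second = place("mistral", energy=2.0)     # cache hit: no transfer cost this time
print(first, second)
```

Even this toy shows why decoupled caching matters: the second identical request is steered by the now-warm cache, which is exactly the redundant-transmission saving the dual-cache mechanism targets.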
Pub Date: 2026-02-01 | Epub Date: 2025-12-09 | DOI: 10.1016/j.dcan.2025.12.003
Manlin Fang, Min Deng, Jun Yang, Saifullah Adnan, Zhen Chen
The emerging sixth-generation networks demand ultra-high-speed wideband transmissions. In this context, this study proposes a novel Reconfigurable Intelligent Surface (RIS)-aided Incremental Relaying (IR) scheme that combines the complementary benefits of RISs and relay systems to enhance the achievable rate. In the proposed system, a relay is exploited to retransmit the source signal when the destination fails to decode the RIS-aided signal correctly. To assess the system performance, we analytically derive closed-form expressions for the outage probability and throughput of the RIS-aided IR scheme, using the central limit theorem. Simulation results validate the analytical findings and reveal that the proposed RIS-aided IR scheme significantly outperforms the conventional pure RIS and hybrid RIS-relay schemes in terms of both outage probability and throughput, highlighting its potential for improving communication-system performance.
Title: Performance analysis of RIS-aided incremental-relaying wireless communication systems. Digital Communications and Networks, vol. 12, issue 2, pp. 388-395.
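The central-limit-theorem step the abstract mentions works because the coherently combined RIS gain is a sum of many i.i.d. cascaded-fading terms. The Monte Carlo check below compares the empirical outage probability against the Gaussian approximation for a double-Rayleigh cascade; the element count, unit-power fading, and threshold are my assumptions, not the paper's parameters.

```python
import numpy as np
from math import pi, sqrt, erf

rng = np.random.default_rng(2)
N, trials = 64, 100_000

# Rayleigh amplitudes with E[|h|^2] = 1 on each hop of every RIS element.
h = rng.rayleigh(sqrt(0.5), (trials, N))
g = rng.rayleigh(sqrt(0.5), (trials, N))
A = (h * g).sum(axis=1)            # phase-aligned (coherent) cascaded gain

# CLT: A ~ Normal(N*mu, N*var) from the per-element moments of |h||g|.
mu = pi / 4                        # E[|h|] E[|g|] = (sqrt(pi)/2)^2
var = 1.0 - (pi / 4) ** 2          # E[|h|^2] E[|g|^2] - mu^2
thr = N * mu - 2.0 * sqrt(N * var) # a threshold 2 std below the mean

p_sim = (A < thr).mean()
p_clt = 0.5 * (1 + erf((thr - N * mu) / sqrt(2 * N * var)))
print(f"outage: simulated {p_sim:.4f} vs CLT approximation {p_clt:.4f}")
```

With 64 elements the two probabilities agree to within a fraction of a percent, which is why closed-form outage and throughput expressions built on the CLT remain accurate for realistic RIS sizes.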
Amplitude Phase Shift Keying (APSK) is more suitable for the nonlinear channels of Low Earth Orbit (LEO) satellite communication systems compared to Quadrature Amplitude Modulation (QAM). To tackle challenges posed by Direct Current (DC) interference and high demodulation complexity, we propose an APSK demodulation algorithm based on K-means clustering. Initially, static DC components are calculated and removed from the received APSK signals. Subsequently, the estimated APSK constellation points serve as initial centers for K-means clustering. These centers are refined through the K-means process and act as theoretical APSK constellation points for the Max-Log-MAP demodulation algorithm, effectively eliminating residual DC. We then introduce a low-complexity APSK demodulation algorithm that utilizes the symmetry of constellation points along with the Euclidean distance between DC-eliminated signals and these constellation points to minimize the set of constellation points. Simulation results indicate that for 32-APSK, our proposed demodulation submodule reduces computational complexity to approximately one-third that of the Max-Log-MAP algorithm while improving Bit Error Rate (BER) performance by about 0.23 dB. Furthermore, end-to-end simulation experiments conducted within LEO satellite communication systems demonstrate that our approach not only maintains this complexity advantage but also enhances BER performance by approximately 1.1 dB.
Title: Low-complexity APSK demodulation algorithm based on K-means clustering in LEO satellite communication systems. Authors: Guangfu Wu, Xiangrui Meng, Changlin Chen, Biqun Xiang. DOI: 10.1016/j.dcan.2025.08.002. Digital Communications and Networks, vol. 12, issue 2, pp. 343-353. Pub Date: 2026-02-01.
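The DC-removal and K-means-refinement steps are simple enough to sketch end to end. The snippet below uses a 16-point two-ring constellation and toy noise/DC levels of my choosing (the paper evaluates 32-APSK), and stops at hard decisions rather than the Max-Log-MAP soft outputs.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative 16-APSK-like constellation: 4 inner + 12 outer ring points.
inner = 1.0 * np.exp(1j * (2 * np.pi * np.arange(4) / 4 + np.pi / 4))
outer = 2.6 * np.exp(1j * (2 * np.pi * np.arange(12) / 12))
const = np.concatenate([inner, outer])

# Received symbols: constellation points + a static DC offset + AWGN.
n = 5000
tx_idx = rng.integers(0, 16, n)
rx = const[tx_idx] + (0.15 + 0.1j) \
    + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Step 1: estimate and remove the static DC component (sample mean,
# since the symmetric constellation averages to zero).
rx = rx - rx.mean()

# Step 2: K-means with the ideal constellation as initial centers;
# the refined centers absorb any residual DC and gain mismatch.
centers = const.copy()
for _ in range(10):
    assign = np.argmin(np.abs(rx[:, None] - centers[None, :]), axis=1)
    for k in range(16):
        if np.any(assign == k):
            centers[k] = rx[assign == k].mean()

# Step 3: hard decision against the refined centers.
decisions = np.argmin(np.abs(rx[:, None] - centers[None, :]), axis=1)
ser = np.mean(decisions != tx_idx)
print(f"symbol error rate: {ser:.4f}")
```

Because the initial centers start at the ideal points, each cluster index keeps its symbol meaning after refinement; the paper's further complexity reduction then shrinks the candidate-center set per symbol using constellation symmetry.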
Pub Date: 2026-02-01 | Epub Date: 2025-08-18 | DOI: 10.1016/j.dcan.2025.08.005
Maochuan Wu, Juan Li, Jing Xu, Bing Chen, Kun Zhu
Semantic Communication (SemCom) is a promising paradigm for future 6G networks, where communication performance hinges on the effectiveness of SemCom models, particularly the source-channel encoder and decoder. However, training these models faces significant challenges. First, the privacy-sensitive nature of communication data discourages users from uploading data to centralized servers. Second, heterogeneous local data distributions and diverse communication counterparts necessitate personalized SemCom models. Specifically, a user's encoder must align with its receivers' decoders and the transmitted data distribution, while its decoder must adapt to the user's transmitters and received data distribution. To address these challenges, we propose FineFed, a personalized federated learning method with collaborative fine-tuning. Initially, a unified global model is trained in a distributed manner via federated learning, eliminating data uploads. Subsequently, users iteratively fine-tune encoders and decoders collaboratively, achieving SemCom model personalization. For encoder fine-tuning, decoders are fixed and shared with transmitters to address distributed loss calculation issues. Each encoder is fine-tuned using multi-task learning, treating communication with each receiver as a separate task. Then, encoders are fixed, and a user shares its decoder with its own transmitters, which collaboratively fine-tune it via federated multi-task learning. Experimental results demonstrate that FineFed improves the average performance of federated SemCom models by 1%-7%, bringing it closer to the performance of centrally-trained models.
Title: Personalized federated learning for semantic communication with collaborative fine-tuning. Digital Communications and Networks, vol. 12, issue 2, pp. 306-318.
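The "global model first, personalized fine-tuning second" pattern underlying FineFed can be shown with plain federated averaging on a toy linear model; the client data, learning rate, and round counts below are invented for illustration, and FineFed's actual encoder/decoder coordination is far richer than this skeleton.

```python
import numpy as np

rng = np.random.default_rng(4)

def local_update(weights, x, y, lr=0.1, epochs=20):
    """One client's local least-squares fine-tuning (toy linear model)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = x.T @ (x @ w - y) / len(y)
        w -= lr * grad
    return w

# Three clients with heterogeneous local data (different true slopes).
clients = []
for slope in (1.0, 1.5, 2.0):
    x = rng.standard_normal((50, 1))
    y = slope * x[:, 0] + 0.05 * rng.standard_normal(50)
    clients.append((x, y))

# Stage 1: federated training of a shared global model (FedAvg).
w_global = np.zeros(1)
for _ in range(10):
    local_models = [local_update(w_global, x, y) for x, y in clients]
    w_global = np.mean(local_models, axis=0)   # server-side aggregation

# Stage 2: personalization, fine-tuning the shared model on local data.
personal = [local_update(w_global, x, y) for x, y in clients]
print("global:", w_global, "personal:", [float(p[0]) for p in personal])
```

Raw data never leaves a client in either stage, only model parameters travel, which is the privacy property motivating the whole design.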
Pub Date: 2026-02-01 | Epub Date: 2025-10-17 | DOI: 10.1016/j.dcan.2025.10.002
Yizhuo Ma, Rongzheng Wang, Shuang Liang, Guangchun Luo, Ke Qin
Communication infrastructure is often among the first casualties in natural or human-induced disasters, severely impairing the coordination and efficiency of rescue operations. Rapid deployment of Unmanned Aerial Vehicles (UAVs) and satellite systems has thus become essential for establishing robust communication links to support rescue-critical tasks. However, existing emergency communication networks rely heavily on domain expertise for topology design and consequently suffer from issues such as inefficient resource allocation and network congestion. To address these challenges, we present TopoLLM, a framework that leverages Large Language Models (LLMs) for tool-driven optimization of emergency network topologies. The framework combines the reasoning capabilities of the LLM with TopoTool, a domain-specific optimization toolkit engineered for high-precision and load-balanced network planning in disaster scenarios. Guided by an adaptive tool-selection mechanism, TopoLLM autonomously generates resilient topologies and allocates resources intelligently, reducing the need for extensive human intervention.
Title: TopoLLM: LLM-driven adaptive tool learning for real-time emergency network topology planning. Digital Communications and Networks, vol. 12, issue 2, pp. 273-282.
Pub Date: 2026-02-01 | Epub Date: 2025-11-27 | DOI: 10.1016/j.dcan.2025.11.004
Chen Zhu, Jianrong Bao, Zhouxiang Zhao, Zhaohui Yang, Chongwen Huang, Jiawen Kang, Hao Xu, Zhaoyang Zhang
This paper develops a quadruped robot virtual-real interactive control system based on digital twin technology. The system is designed to address key challenges in robotics, including real-time performance, low-latency control, high-precision multi-sensor data fusion, stable network transmission, data security, user-friendly interaction, system scalability, and maintainability. It comprises several functional modules: a 3D modeling module, a positioning perception module, a virtual interaction module, a wise sensing-transmission module, and a cloud server. The 3D modeling module constructs the virtual quadruped robot and motion-space scenarios. The positioning perception module integrates LiDAR and Inertial Measurement Unit (IMU) data, utilizing the Point-LIO and HDL-localization algorithms for high-precision environmental perception and positioning. The virtual interaction module provides a user-friendly control interface through computer software and the HoloLens headset. The wise sensing-transmission module employs WiFi and 5G links to ensure low-latency, high-bandwidth data transmission, and uses the libhv asynchronous I/O library and the libssl cryptographic library to guarantee data security. The system runs on the Ubuntu 20.04 platform, offering excellent scalability and maintainability. It has broad application prospects in industrial manufacturing, construction, disaster rescue, military applications, and educational training, enhancing the performance and reliability of quadruped robot systems and laying a solid foundation for the future development of the industrial metaverse.
"A digital twin-based quadruped robot system with scene perception, fast communication, and holographic interaction" — Chen Zhu, Jianrong Bao, Zhouxiang Zhao, Zhaohui Yang, Chongwen Huang, Jiawen Kang, Hao Xu, Zhaoyang Zhang. Digital Communications and Networks, vol. 12, no. 2, pp. 262-272 (2026). DOI: 10.1016/j.dcan.2025.11.004
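The positioning module above fuses IMU dead reckoning with absolute LiDAR fixes via Point-LIO and HDL-localization. Those algorithms are far more involved, but the core idea — periodic drift-free LiDAR fixes correcting an IMU estimate that drifts — can be sketched with a minimal complementary filter. All function names, the blend weight, and the toy bias are illustrative assumptions, not from the paper:

```python
import math

def imu_predict(pose, vel, dt):
    """Dead-reckon the 2D pose forward using IMU velocity (drifts over time)."""
    x, y = pose
    vx, vy = vel
    return (x + vx * dt, y + vy * dt)

def lidar_correct(pred, fix, alpha=0.7):
    """Blend the IMU prediction with an absolute LiDAR position fix.

    alpha close to 1 trusts the (drift-free) LiDAR measurement more.
    """
    px, py = pred
    fx, fy = fix
    return (alpha * fx + (1 - alpha) * px,
            alpha * fy + (1 - alpha) * py)

# Illustrative run: constant 1 m/s motion, an IMU that over-reads velocity
# by 5%, and a LiDAR fix arriving every 5 steps.
pose = (0.0, 0.0)
biased_vel = (1.05, 0.0)
true_pose = (0.0, 0.0)
for step in range(1, 11):
    true_pose = (float(step), 0.0)
    pose = imu_predict(pose, biased_vel, dt=1.0)
    if step % 5 == 0:
        pose = lidar_correct(pose, true_pose)

# Residual error stays well below the ~0.5 m of pure IMU drift.
err = math.hypot(pose[0] - true_pose[0], pose[1] - true_pose[1])
```

The same predict/correct split is what the real system does at much higher fidelity, with full 6-DoF state and LiDAR scan matching instead of an oracle fix.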
Pub Date: 2026-02-01 | Epub Date: 2025-11-10 | DOI: 10.1016/j.dcan.2025.11.001
Yu Dai , Jie Tian , Tiantian Li , Jing Wang , Chuanfen Feng
Multi-access Edge Computing (MEC) enhances computational efficiency by enabling resource-constrained User Devices (UDs) to offload tasks to edge servers. Compared with traditional edge servers fixed at Small Cellular Base Stations (SBSs), mobile vehicles with idle resources can act as mobile edge servers and reduce UDs' task latency thanks to their closer proximity to the UDs. However, because vehicles have limited computation resources and competition among UDs is intense, the computation resources that vehicles can provide to UDs are uncertain, which makes task offloading decisions challenging for UDs. In this paper, we establish a risk-aware task offloading framework for vehicle-assisted MEC networks with computation resource uncertainty, in which UDs make offloading decisions that account for their risk attitudes. We first characterize and model the UDs' risk-aware behavior based on Prospect Theory (PT) and then formulate a user satisfaction maximization problem that optimizes the UDs' offloading strategies. To solve it, we reformulate the multi-user problem as a non-cooperative game and prove the uniqueness of its Pure Nash Equilibrium (PNE). We also propose a low-complexity distributed iterative optimization algorithm to obtain the optimal offloading strategy. The simulation results demonstrate that, compared with benchmark methods, the proposed scheme significantly improves the satisfaction utility of UDs and reduces the failure probability of vehicles.
"Risk-aware user satisfaction maximization in vehicle-assisted multi-access edge computing offloading: a game-theoretic approach" — Yu Dai, Jie Tian, Tiantian Li, Jing Wang, Chuanfen Feng. Digital Communications and Networks, vol. 12, no. 2, pp. 294-305 (2026). DOI: 10.1016/j.dcan.2025.11.001
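The risk-aware behavior this abstract models with Prospect Theory rests on a value function in which losses relative to a reference point loom larger than equal gains. A minimal sketch follows; the parameter values are the classic Tversky-Kahneman (1992) calibration, not the ones fitted in this paper, and the latency framing is an illustrative assumption:

```python
def pt_value(outcome, reference=0.0, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theoretic value of an outcome relative to a reference point.

    Gains are valued concavely (risk aversion over gains); losses are valued
    convexly and scaled by lam > 1 (loss aversion), so a loss of a given size
    hurts more than an equal gain helps.
    """
    x = outcome - reference
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# Loss aversion in an offloading setting: a latency overrun of 10 (below the
# reference target) is weighted 2.25x more heavily than a saving of 10.
gain = pt_value(10.0)
loss = pt_value(-10.0)
```

In the paper's framework, each UD would evaluate candidate offloading strategies through a function of this shape and play a best response in the resulting non-cooperative game until the Pure Nash Equilibrium is reached.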