RML: A Robust Multi-hop Localization algorithm for irregular networks
Pub Date: 2025-11-17 | DOI: 10.1016/j.comcom.2025.108365 | Computer Communications 245, Article 108365
Xiaoyong Yan , Yulu Wen , Lei Mo , Chenhuang Wu , Chuntao Ding , Shigeng Zhang
Node localization is a prerequisite for multi-hop network applications. Traditional localization algorithms often assume nodes are uniformly distributed within regular, obstacle-free networks. However, this assumption rarely aligns with real-world network conditions. To address this, we propose a Robust Multi-hop Localization (RML) algorithm designed for irregular networks. First, a similarity metric is applied to compute distances between node pairs. Next, topological information from anchor nodes is used to infer a hop-count threshold, filtering out inaccurate distance measurements. Finally, depending on whether collinearity issues arise, either trilateration or an improved Black-winged Kite optimization algorithm is employed to determine node locations. Simulation results show that RML surpasses existing algorithms in efficiency, accuracy, and stability across diverse irregular networks. Specifically, RML achieves at least a 59.40% improvement in localization accuracy.
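As a hedged illustration of the final positioning step, the sketch below solves the standard linearized least-squares form of trilateration from three or more anchors; the anchor coordinates and distances are hypothetical, and RML's similarity-based ranging, hop-count filtering, and Black-winged Kite fallback are not reproduced here.

```python
# Standard linearized trilateration sketch (not RML's full pipeline).
import numpy as np

def trilaterate(anchors, dists):
    """Estimate a 2-D position from >=3 anchors and estimated distances.

    Subtracting the first anchor's range equation from the others yields a
    linear system A x = b, solved in the least-squares sense.
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    x0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical anchors and exact ranges for a node at (3, 4)
anchors = [(0, 0), (10, 0), (0, 10)]
true_pos = np.array([3.0, 4.0])
dists = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
print(trilaterate(anchors, dists))  # ~ [3. 4.]
```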
{"title":"RML: A Robust Multi-hop Localization algorithm for irregular networks","authors":"Xiaoyong Yan , Yulu Wen , Lei Mo , Chenhuang Wu , Chuntao Ding , Shigeng Zhang","doi":"10.1016/j.comcom.2025.108365","DOIUrl":"10.1016/j.comcom.2025.108365","url":null,"abstract":"<div><div>Node localization is prerequisite for multi-hop network applications. Traditional localization algorithms often assume nodes are uniformly distributed within regular, obstacle-free networks. However, this assumption rarely aligns with real-world network conditions. To address this, we propose a <u>R</u>obust <u>M</u>ulti-hop <u>L</u>ocalization (RML) algorithm designed for irregular networks. First, a similarity metric is applied to compute distances between node pairs. Next, topological information from anchor nodes is used to infer a hop count threshold, filtering inaccurate distance measurements. Finally, depending on collinearity issues, either trilateration or an improved Black-winged Kite optimization algorithm is employed to determine node locations. Simulation results show that RML surpasses existing algorithms in efficiency, accuracy, and stability across diverse irregular networks. Specifically, RML achieves at least a 59.40% improvement in localization accuracy.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"245 ","pages":"Article 108365"},"PeriodicalIF":4.3,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145580323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Resource allocation for efficient AI inference in wireless sensing edge networks
Pub Date: 2025-11-13 | DOI: 10.1016/j.comcom.2025.108363 | Computer Communications 245, Article 108363
Tanveer Ahmad , Asma Abbas Hassan Elnour , Muhammad Usman Hadi , Kiran Khurshid , Xue Jun Li , Weiwei Jiang
Integrating AI inference into wireless sensing edge networks presents notable challenges due to limited resources, changing environments, and diverse devices. In this study, we propose a novel resource allocation framework that enhances energy efficiency, reduces latency, and ensures fairness across distributed edge nodes for AI inference. The framework models a multi-objective optimization problem that reflects the interdependence of computation, communication, and energy at each device. We also develop a decentralized algorithm based on dual decomposition and projected gradient ascent that relies only on local data. Extensive simulations demonstrate that our proposed method reduces average inference latency by 31.4% and energy consumption by 27.8% compared to greedy and round-robin techniques. System utility improves by up to 59.2%, and fairness, measured using Jain’s index, remains within 8% of the ideal. Additionally, throughput analysis confirms that our approach sustains up to 49 tasks/s, outperforming existing strategies by more than 40%. These findings show that the resource-aware AI inference approach is scalable, energy-efficient, and appropriate for real-time use in multi-user wireless edge networks.
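A minimal sketch of the two named ingredients, assuming a toy log-utility model and a single shared capacity C (neither is from the paper): each device solves its local problem given a dual price, the price is updated by projected gradient ascent, and Jain's index measures the fairness of the resulting allocation.

```python
# Toy dual decomposition with projected gradient ascent, plus Jain's index.
# Utility U_i(x) = log(x) and capacity C are illustrative assumptions.
import numpy as np

def jain_index(x):
    x = np.asarray(x, dtype=float)
    return x.sum()**2 / (len(x) * (x**2).sum())

def dual_allocate(n=5, C=10.0, steps=500, lr=0.05):
    lam = 1.0  # dual price on the shared capacity constraint
    for _ in range(steps):
        # Each "device" solves max_x log(x) - lam*x locally => x = 1/lam
        x = np.full(n, 1.0 / lam)
        # Projected gradient ascent on the dual: move price by violation,
        # projecting back onto lam > 0
        lam = max(1e-6, lam + lr * (x.sum() - C))
    return x

x = dual_allocate()
print(x, jain_index(x))  # converges to equal shares C/n = 2.0, Jain ~ 1.0
```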
{"title":"Resource allocation for efficient AI inference in wireless sensing edge networks","authors":"Tanveer Ahmad , Asma Abbas Hassan Elnour , Muhammad Usman Hadi , Kiran Khurshid , Xue Jun Li , Weiwei Jiang","doi":"10.1016/j.comcom.2025.108363","DOIUrl":"10.1016/j.comcom.2025.108363","url":null,"abstract":"<div><div>Integrating AI inference into wireless sensing edge networks presents notable challenges due to limited resources, changing environments, and diverse devices. In this study, we proposed a novel resource allocation framework that enhances energy efficiency, reduces latency, and ensures fairness across distributed edge nodes for AI inference. The framework models a multi-objective optimization problem that reflects the interdependence of computation, communication, and energy at each device. We also develop a decentralized algorithm based on dual decomposition and projected gradient ascent, by using local data. The extensive simulations demonstrate that our proposed method reduces the average inference latency by 31.4% and energy consumption by 27.8% compared to the greedy and round-robin techniques. The system utility is improved by up to 59.2%, and fairness, measured using Jain’s index, remains within 8% of the ideal. Additionally, throughput analysis further confirms that our approach gains up to 49 tasks/sec, outperforming existing strategies by more than 40%. These findings show that the resource-aware AI inference approach is scalable, energy-efficient, and appropriate for real-time use in multi-user wireless edge networks.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"245 ","pages":"Article 108363"},"PeriodicalIF":4.3,"publicationDate":"2025-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145500201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analytical-based resource allocation framework for NOMA-assisted Semi-ISAC systems
Pub Date: 2025-11-11 | DOI: 10.1016/j.comcom.2025.108354 | Computer Communications 245, Article 108354
Dinh Van Tung , Thai-Hoc Vu , Nguyen Tien Hoa
Emerging 6G applications, such as autonomous systems and immersive extended reality, require joint communication and sensing to meet stringent performance demands. Integrated sensing and communication (ISAC) has thus emerged as a promising paradigm for supporting such dual functionality in future wireless networks. This paper proposes a novel optimization framework for joint spectrum and power allocation in semi-ISAC systems assisted by non-orthogonal multiple access (NOMA). The objective is to maximize the minimum ergodic achievable rate under statistical channel state information (CSI), thereby ensuring fairness across heterogeneous communication and sensing services. The non-convex problem is reformulated using successive convex approximation (SCA) for efficient and tractable optimization. Closed-form expressions for ergodic rates are derived under two NOMA configurations: single layer per sub-band and multiple layers per sub-band, highlighting the trade-off between decoding complexity and spectral efficiency. Numerical results highlight four key performance benefits: (i) a guaranteed minimum rate of 2 Gbps per user at 20 dBm transmit power, (ii) improved fairness based on Jain’s index, (iii) higher ergodic sum rate compared to benchmark schemes, and (iv) robustness to channel fading and target variations such as Nakagami-m parameters, sensing distance, and radar cross-section. These findings confirm the adaptability and efficiency of the proposed framework for dense deployment scenarios in semi-ISAC networks.
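As a hedged numerical companion, the snippet below Monte Carlo-estimates an ergodic achievable rate E[log2(1 + SNR)] under Nakagami-m fading, the kind of quantity the paper derives in closed form; the SNR points, m, and Omega are illustrative assumptions, and the NOMA layering and SCA procedure are omitted.

```python
# Monte Carlo estimate of an ergodic rate under Nakagami-m fading.
# The power gain of a Nakagami-m amplitude with spread Omega follows
# Gamma(shape=m, scale=Omega/m).
import numpy as np

rng = np.random.default_rng(0)

def ergodic_rate(snr_db, m=2.0, omega=1.0, n=200_000):
    snr = 10 ** (snr_db / 10)
    g = rng.gamma(shape=m, scale=omega / m, size=n)  # |h|^2 samples
    return np.mean(np.log2(1.0 + snr * g))

for snr_db in (0, 10, 20):  # illustrative operating points
    print(snr_db, "dB ->", round(ergodic_rate(snr_db), 3), "bit/s/Hz")
```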
{"title":"Analytical-based resource allocation framework for NOMA-assisted Semi-ISAC systems","authors":"Dinh Van Tung , Thai-Hoc Vu , Nguyen Tien Hoa","doi":"10.1016/j.comcom.2025.108354","DOIUrl":"10.1016/j.comcom.2025.108354","url":null,"abstract":"<div><div>Emerging 6G applications, such as autonomous systems and immersive extended reality, require joint communication and sensing to meet stringent performance demands. Integrated sensing and communication (ISAC) has thus emerged as a promising paradigm for supporting such dual functionality in future wireless networks. This paper proposes a novel optimization framework for joint spectrum and power allocation in semi-ISAC systems assisted by non-orthogonal multiple access (NOMA). The objective is to maximize the minimum ergodic achievable rate under statistical channel state information (CSI), thereby ensuring fairness across heterogeneous communication and sensing services. The non-convex problem is reformulated using successive convex approximation (SCA) for efficient and tractable optimization. Closed-form expressions for ergodic rates are derived under two NOMA configurations: single layer per sub-band and multiple layers per sub-band, highlighting the trade-off between decoding complexity and spectral efficiency. Numerical results highlight four key performance benefits: (i) a guaranteed minimum rate of 2 Gbps per user at 20 dBm transmit power, (ii) improved fairness based on Jain’s index, (iii) higher ergodic sum rate compared to benchmark schemes, and (iv) robustness to channel fading and target variations such as Nakagami-m parameters, sensing distance, and radar cross-section. These findings confirm the adaptability and efficiency of the proposed framework for dense deployment scenarios in semi-ISAC networks.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"245 ","pages":"Article 108354"},"PeriodicalIF":4.3,"publicationDate":"2025-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145528997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial immune system-based congestion control routing for Satellite networks
Pub Date: 2025-11-07 | DOI: 10.1016/j.comcom.2025.108353 | Computer Communications 246, Article 108353
Zhihan Yu, Li Zhang, Haoru Su, Wanting Zhu
The characteristics of Low Earth Orbit (LEO) satellite networks, including high-speed node mobility, dynamic topology changes, and limited resources, significantly complicate rapid resolution of network congestion. To address this challenge, an Artificial Immune System-based Congestion Control Routing (AIS-CCR) algorithm is proposed. AIS-CCR emulates the operational mechanisms of biological immune systems, employing immune memory and learning mechanisms to store and reuse historically effective control strategies and thereby speed up congestion response. The algorithm combines virtual grid mapping with geographic routing to simplify route computation, achieving self-learning, self-adaptive, and distributed congestion control in satellite networks. Simulation experiments demonstrate that AIS-CCR outperforms comparable algorithms across key performance metrics, including response time, queue load rate, packet loss rate, and end-to-end delay. The algorithm exhibits particularly pronounced advantages when handling complex multi-link congestion scenarios.
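The sketch below is a loose, hypothetical illustration of the immune-memory idea only: congestion-state signatures are stored with the strategy that worked, and the closest stored state (by Euclidean affinity) is recalled for reuse. The state features, threshold, and strategy names are invented; AIS-CCR's grid mapping and learning dynamics are not modeled.

```python
# Toy immune memory: store (state signature -> strategy), recall by affinity.
import numpy as np

class ImmuneMemory:
    def __init__(self):
        self.states, self.strategies = [], []

    def remember(self, state, strategy):
        self.states.append(np.asarray(state, dtype=float))
        self.strategies.append(strategy)

    def recall(self, state, max_dist=0.5):
        """Return the stored strategy with highest affinity, or None."""
        if not self.states:
            return None
        d = [np.linalg.norm(np.asarray(state) - s) for s in self.states]
        i = int(np.argmin(d))
        return self.strategies[i] if d[i] <= max_dist else None

mem = ImmuneMemory()
mem.remember([0.9, 0.7], "reroute-via-neighbor")  # (queue load, link util)
print(mem.recall([0.85, 0.72]))  # close enough -> reuse stored strategy
```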
{"title":"Artificial immune system-based congestion control routing for Satellite networks","authors":"Zhihan Yu, Li Zhang, Haoru Su, Wanting Zhu","doi":"10.1016/j.comcom.2025.108353","DOIUrl":"10.1016/j.comcom.2025.108353","url":null,"abstract":"<div><div>The characteristics of Low Earth Orbit (LEO) satellite networks, including high-speed node mobility, dynamic topology changes, and limited resources, significantly complicate rapid network congestion resolution. To address this challenge, an Artificial Immune System-based Congestion Control Routing (AIS-CCR) algorithm is proposed. AIS-CCR emulates the operational mechanisms of biological immune systems by employing immune memory and learning mechanisms to store and reuse historical effective control strategies, thereby enhancing congestion response speed. The algorithm adopts virtual grid mapping combined with geographic routing to simplify the routing calculation process, achieving self-learning, self-adaptive, and distributed congestion control capabilities in satellite networks. Simulation experiments demonstrate that AIS-CCR outperforms comparable algorithms across key performance metrics, including response time, queue load rate, packet loss rate, and end-to-end delay. The algorithm exhibits particularly pronounced advantages when handling complex multi-link congestion scenarios.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"246 ","pages":"Article 108353"},"PeriodicalIF":4.3,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145555381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cache-assisted task offloading in Vehicular Edge Computing: A spatio-temporal deep reinforcement learning approach
Pub Date: 2025-11-05 | DOI: 10.1016/j.comcom.2025.108351 | Computer Communications 246, Article 108351
Xiguang Li , Junlong Li , Yunhe Sun , Ammar Muthanna , Ammar Hawbani , Liang Zhao
Vehicular Edge Computing (VEC) faces significant challenges in jointly managing caching and task offloading due to dynamic network conditions and resource constraints. This paper proposes a novel framework that addresses these challenges through a synergistic three-stage process. The innovation lies in the tight integration of our modules: first, a Spatio-Temporal Fast Graph Convolutional Network (ST-FGCN) accurately forecasts task demands by capturing complex spatio-temporal correlations. Second, these predictions guide a Prediction-Informed Edge Collaborative Caching (PIECC) algorithm to proactively optimize resource placement across edge servers. Finally, a Genetic Asynchronous Advantage Actor–Critic (GA3C) strategy performs robust task offloading within this optimized environment. Unlike traditional reinforcement learning methods that often struggle with the large state–action spaces in VEC and converge to local optima, our framework simplifies the decision process via predictive caching and enhances exploration with the GA-infused GA3C algorithm. Simulation results demonstrate that our proposed framework significantly reduces long-term system cost, outperforms baseline methods in both latency and energy efficiency, and offers a more adaptive solution for dynamic VEC systems.
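To make the prediction-to-caching handoff concrete, here is a hedged greedy sketch: given per-item demand forecasts (standing in for ST-FGCN output), a server caches the items with the highest predicted demand per unit size that fit its capacity. Item names, sizes, and the greedy density rule are illustrative, not PIECC itself.

```python
# Prediction-informed greedy cache placement sketch.
def place_cache(predicted_demand, sizes, capacity):
    """predicted_demand, sizes: dicts item -> value; returns cached items."""
    # Rank by predicted demand per unit of cache space consumed
    ranked = sorted(predicted_demand,
                    key=lambda i: predicted_demand[i] / sizes[i],
                    reverse=True)
    cached, used = set(), 0.0
    for item in ranked:
        if used + sizes[item] <= capacity:
            cached.add(item)
            used += sizes[item]
    return cached

# Hypothetical forecasts (tasks/slot) and item sizes (GB)
demand = {"mapA": 40.0, "modelB": 25.0, "videoC": 30.0}
sizes = {"mapA": 2.0, "modelB": 1.0, "videoC": 3.0}
print(place_cache(demand, sizes, capacity=3.0))  # {'modelB', 'mapA'}
```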
{"title":"Cache-assisted task offloading in Vehicular Edge Computing: A spatio-temporal deep reinforcement learning approach","authors":"Xiguang Li , Junlong Li , Yunhe Sun , Ammar Muthanna , Ammar Hawbani , Liang Zhao","doi":"10.1016/j.comcom.2025.108351","DOIUrl":"10.1016/j.comcom.2025.108351","url":null,"abstract":"<div><div>Vehicular Edge Computing (VEC) faces significant challenges in jointly managing caching and task offloading due to dynamic network conditions and resource constraints. This paper proposes a novel framework that addresses these challenges through a synergistic three-stage process. The innovation lies in the tight integration of our modules: first, a Spatio-Temporal Fast Graph Convolutional Network (ST-FGCN) accurately forecasts task demands by capturing complex spatio-temporal correlations. Second, these predictions guide a Prediction-Informed Edge Collaborative Caching (PIECC) algorithm to proactively optimize resource placement across edge servers. Finally, a Genetic Asynchronous Advantage Actor–Critic (GA3C) strategy performs robust task offloading within this optimized environment. Unlike traditional reinforcement learning methods that often struggle with the large state–action spaces in VEC and converge to local optima, our framework simplifies the decision process via predictive caching and enhances exploration with the GA-infused GA3C algorithm. Simulation results demonstrate that our proposed framework significantly reduces long-term system cost, outperforms baseline methods in both latency and energy efficiency, and offers a more adaptive solution for dynamic VEC systems.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"246 ","pages":"Article 108351"},"PeriodicalIF":4.3,"publicationDate":"2025-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145618643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Macroscopic diffusion prediction in social networks based on spatio-temporal and trend features
Pub Date: 2025-11-04 | DOI: 10.1016/j.comcom.2025.108352 | Computer Communications 244, Article 108352
Xueqin Zhang , Yisong Lu , Gang Liu , Xiaowei Chen
Predicting the scale of information diffusion in social networks makes it possible to anticipate future propagation in advance, which plays a crucial role in controlling the diffusion of harmful information. We propose STTFP (Spatio-Temporal and Trend Features for Prediction), a deep learning framework that integrates temporal, spatial, and trend features to improve the accuracy of macroscopic diffusion prediction. The framework first utilizes graph attention networks to extract node interaction features from cascade graphs, captures node position features from diffusion sequences, and uses sparse matrix factorization to extract node features from social network graphs. It then adopts bi-directional gated recurrent units and self-attention mechanisms to deeply mine spatio-temporal features. Additionally, we design an attention-based convolutional neural network to capture short-term fluctuations in the information propagation process, while long short-term memory networks are used to uncover historical variation in diffusion scale. By fusing these features, the framework achieves incremental predictions of information diffusion. Experiments on three public datasets show that our method effectively enhances the accuracy of macroscopic diffusion predictions.
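As a small illustration of the self-attention ingredient, the NumPy sketch below computes scaled dot-product attention over a toy diffusion sequence, using the input directly as queries, keys, and values (no learned projections, unlike a full model); the shapes are arbitrary.

```python
# Scaled dot-product self-attention over a toy sequence (Q = K = V = X).
import numpy as np

def self_attention(X):
    """X: (T, d) sequence; returns (T, d) output and (T, T) weights."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                # pairwise step similarity
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    W = np.exp(scores)
    W /= W.sum(axis=1, keepdims=True)            # softmax over time steps
    return W @ X, W

X = np.random.default_rng(1).normal(size=(4, 8))  # 4 steps, 8 features
out, weights = self_attention(X)
print(out.shape, weights.sum(axis=1))  # (4, 8); each row of weights sums to 1
```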
{"title":"Macroscopic diffusion prediction in social networks based on spatio-temporal and trend features","authors":"Xueqin Zhang , Yisong Lu , Gang Liu , Xiaowei Chen","doi":"10.1016/j.comcom.2025.108352","DOIUrl":"10.1016/j.comcom.2025.108352","url":null,"abstract":"<div><div>Predicting the scale of information diffusion in social networks can sense the future propagation of information in advance, which plays a crucial role in controlling the diffusion of harmful information. We propose STTFP (Spatio-Temporal and Trend Features for Prediction), a deep learning framework that integrates temporal, spatial, and trend features to improve macroscopic diffusion prediction accuracy. This framework first utilizes graph attention networks to extract node interaction features from cascaded graphs. It captures node position features from diffusion sequences, and uses sparse matrix factorization to extract node features from social network graphs. Then it adopts bi-directional gated recurrent units and self-attention mechanisms to deeply mine spatio-temporal features. Additionally, we design an attention-based convolutional neural network to capture the short-term fluctuations in the information propagation process, while long short-term memory networks are used to uncover historical forwarding variation in diffusion scales. By fusing these features, the framework achieves incremental predictions of information diffusion. Experiments on three public datasets show that our method effectively enhances the accuracy of macroscopic diffusion predictions.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"244 ","pages":"Article 108352"},"PeriodicalIF":4.3,"publicationDate":"2025-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145467569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic distance-based load balancing in mobile edge computing with deep reinforcement learning
Pub Date: 2025-10-27 | DOI: 10.1016/j.comcom.2025.108337 | Computer Communications 244, Article 108337
Mohammad Esmaeil Esmaeili , Ahmad Khonsari , Mahdi Dolati
Edge computing reduces latency by bringing computation closer to end devices, but the growing scale and heterogeneity of edge networks make resource management increasingly complex. Load balancing is essential for efficient resource use and low response times, yet static approaches struggle in dynamic environments. This calls for adaptable, data-driven load balancing methods that can continuously respond to changing conditions and optimize performance. This paper addresses the problem of load balancing in edge computing, where the distance between servers plays a critical role in performance. We propose two deep reinforcement learning (DRL)-based algorithms – Deep Q-Learning (DQL) and Long Short-Term Memory (LSTM) – that dynamically adjust the neighbor radius for load distribution in response to environmental changes. Unlike static approaches, our methods learn the radius online in a data-driven manner without requiring global coordination. Simulation results demonstrate that both algorithms adapt effectively to dynamic conditions. In scenarios with 80–100 edge servers and 500–1000 requests per second, DQL achieves up to 18% higher throughput, 21% lower average response time, and 23% lower blocking rate compared to recent methods, while LSTM remains competitive under stable workloads.
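A hedged tabular toy of the core loop: an agent observing a discretized load level adjusts its neighbor radius by Q-learning. The reward shape, state discretization, and load dynamics are invented for illustration; the paper's DQL and LSTM agents operate on a richer state with function approximation.

```python
# Tabular Q-learning toy: state = (load level, radius), action adjusts radius.
import numpy as np

rng = np.random.default_rng(0)
loads, radii, actions = 3, 3, (-1, 0, +1)   # load level; radius in {1,2,3}
Q = np.zeros((loads, radii, len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.1

load, radius = 1, 2
for _ in range(20_000):
    s = (load, radius - 1)
    a = int(rng.integers(3)) if rng.random() < eps else int(Q[s].argmax())
    radius = int(np.clip(radius + actions[a], 1, 3))
    # Hypothetical reward: the ideal radius grows with load (load 0 -> 1, 2 -> 3)
    reward = -abs((load + 1) - radius)
    load = int(rng.integers(loads))          # load drifts randomly
    Q[s + (a,)] += alpha * (reward + gamma * Q[load, radius - 1].max()
                            - Q[s + (a,)])

print(Q.argmax(axis=2))  # learned best action per (load, radius) state
```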
{"title":"Dynamic distance-based load balancing in mobile edge computing with deep reinforcement learning","authors":"Mohammad Esmaeil Esmaeili , Ahmad Khonsari , Mahdi Dolati","doi":"10.1016/j.comcom.2025.108337","DOIUrl":"10.1016/j.comcom.2025.108337","url":null,"abstract":"<div><div>Edge computing reduces latency by bringing computation closer to end devices, but the growing scale and heterogeneity of edge networks make resource management increasingly complex. Load balancing is essential for efficient resource use and low response times, yet static approaches struggle in dynamic environments. This calls for adaptable, data-driven load balancing methods that can continuously respond to changing conditions and optimize performance. This paper addresses the problem of load balancing in edge computing, where the distance between servers plays a critical role in performance. We propose two deep reinforcement learning (DRL)-based algorithms – Deep Q-Learning (DQL) and Long Short-Term Memory (LSTM) – that dynamically adjust the neighbor radius for load distribution in response to environmental changes. Unlike static approaches, our methods learn the radius online in a data-driven manner without requiring global coordination. Simulation results demonstrate that both algorithms adapt effectively to dynamic conditions. In scenarios with 80–100 edge servers and 500–1000 requests per second, DQL achieves up to 18% higher throughput, 21% lower average response time, and 23% lower blocking rate compared to recent methods, while LSTM remains competitive under stable workloads.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"244 ","pages":"Article 108337"},"PeriodicalIF":4.3,"publicationDate":"2025-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145419909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-antenna mobile charger scheduling optimization scheme for wireless rechargeable sensor networks
Pub Date: 2025-10-21 | DOI: 10.1016/j.comcom.2025.108343 | Computer Communications 244, Article 108343
Jinyi Li , Yong Feng , Nianbo Liu , Ming Liu , Yingna Li
Multi-antenna mobile chargers (MC) featuring directional multi-beam functionality present a promising solution for energy replenishment in wireless rechargeable sensor networks. However, existing multi-antenna scheduling schemes encounter challenges in jointly optimizing the coupled problem of Antenna Configuration and Path Planning (ACPP) while balancing MC’s coverage efficiency with energy consumption. To address this gap, this paper investigates the complex interdependencies and stringent constraints inherent in ACPP, and proposes a phased hybrid optimization scheme, PHMS-ACPP, integrating multi-objective optimization and deep reinforcement learning to compute approximate solutions. We first employ a modified Gaussian mixture model incorporating physical coverage constraints via the Expectation–Maximization algorithm to partition clusters, thereby reducing problem complexity. Within each cluster, the subproblem of determining optimal antenna count and orientation is solved using the Multi-objective Grey Wolf Optimizer to simultaneously optimize MC’s coverage efficiency and energy consumption. Then, we utilize Double Deep Q-Network to plan MC’s charging path across clusters, which captures long-term temporal dependencies between the evolution of nodes’ energy states and the spatial allocation of charging resources, enhancing both global scheduling efficacy and long-term charging efficiency. Extensive simulations demonstrate that PHMS-ACPP significantly outperforms state-of-the-art baselines in reducing node failure rate and minimizing average charging delay, with reductions of approximately 21.6% and 14.4%, respectively.
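As a rough sketch of the coverage-constrained clustering stage, the code below wraps scikit-learn's EM-fitted Gaussian mixture in an outer loop that grows the component count until every node lies within a charger coverage radius R of its cluster mean; this outer-loop heuristic and all numbers are assumptions, not the paper's modified GMM.

```python
# Coverage-constrained clustering sketch: grow k until all clusters fit in R.
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_with_coverage(X, R, k_max=20, seed=0):
    for k in range(1, k_max + 1):
        gm = GaussianMixture(n_components=k, random_state=seed).fit(X)
        labels = gm.predict(X)
        worst = max(
            np.linalg.norm(X[labels == c] - gm.means_[c], axis=1).max()
            for c in range(k) if np.any(labels == c)
        )
        if worst <= R:               # every cluster is physically coverable
            return labels, gm.means_
    return labels, gm.means_         # fall back to the largest k tried

X = np.random.default_rng(2).uniform(0, 100, size=(60, 2))  # node positions
labels, centers = cluster_with_coverage(X, R=30.0)
print(len(np.unique(labels)), "clusters")
```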
{"title":"Multi-antenna mobile charger scheduling optimization scheme for wireless rechargeable sensor networks","authors":"Jinyi Li , Yong Feng , Nianbo Liu , Ming Liu , Yingna Li","doi":"10.1016/j.comcom.2025.108343","DOIUrl":"10.1016/j.comcom.2025.108343","url":null,"abstract":"<div><div>Multi-antenna mobile chargers (MC) featuring directional multi-beam functionality present a promising solution for energy replenishment in wireless rechargeable sensor networks. However, existing multi-antenna scheduling schemes encounter challenges in jointly optimizing the coupled problem of Antenna Configuration and Path Planning (ACPP) while balancing MC’s coverage efficiency with energy consumption. To address this gap, this paper investigates the complex interdependencies and stringent constraints inherent in ACPP, and proposes a phased hybrid optimization scheme, PHMS-ACPP, integrating multi-objective optimization and deep reinforcement learning to compute approximate solutions. We first employ a modified Gaussian mixture model incorporating physical coverage constraints via the Expectation–Maximization algorithm to partition clusters, thereby reducing problem complexity. Within each cluster, the subproblem of determining optimal antenna count and orientation is solved using the Multi-objective Grey Wolf Optimizer to simultaneously optimize MC’s coverage efficiency and energy consumption. Then, we utilize Double Deep Q-Network to plan MC’s charging path across clusters, which captures long-term temporal dependencies between the evolution of nodes’ energy states and the spatial allocation of charging resources, enhancing both global scheduling efficacy and long-term charging efficiency. Extensive simulations demonstrate that PHMS-ACPP significantly outperforms state-of-the-art baselines in reducing node failure rate and minimizing average charging delay, with reductions of approximately 21.6% and 14.4%, respectively.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"244 ","pages":"Article 108343"},"PeriodicalIF":4.3,"publicationDate":"2025-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145366114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-agent deep reinforcement learning for service function chain deployment in software defined LEO satellite networks
Pub Date: 2025-10-21 | DOI: 10.1016/j.comcom.2025.108342 | Computer Communications 244, Article 108342
Pingduo Xu , Debin Wei , Jinglong Wen , Li Yang
Large-scale low earth orbit (LEO) satellite networks constitute a core component of future sixth-generation (6G) communication systems. To address the challenges of resource scarcity and highly dynamic topologies, the integration of software-defined networking (SDN) and network function virtualization (NFV) technologies into LEO satellite networks has become imperative. We propose a hybrid centralized-distributed software-defined LEO satellite network architecture. Within this framework, this study focuses on the service function chain (SFC) deployment problem in LEO space-ground integrated networks. Time-expanded graphs (TEGs) are employed to model satellite networks with dynamic topological variations, aiming to satisfy diverse user requirements while jointly optimizing resource consumption costs and service latency. The problem is formulated as a weighted-sum minimization of resource consumption costs and service latency and is proven to be NP-complete. Subsequently, we integrate the twin delayed deep deterministic policy gradient method with multi-agent techniques to design a multi-agent deep reinforcement learning SFC deployment (MADRL-D) framework for optimizing our objectives. Experimental results demonstrate that the proposed MADRL-D framework outperforms existing alternatives in terms of resource utilization efficiency, resource consumption costs, and service latency.
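A minimal sketch of TEG construction with networkx, under a hypothetical three-satellite contact plan: each vertex is a (satellite, time-slot) pair, storage edges let a satellite hold data across slots, and link edges appear only in slots where a contact exists; SFC placement itself is not modeled.

```python
# Time-expanded graph (TEG) sketch for a dynamic satellite topology.
import networkx as nx

sats = ["S1", "S2", "S3"]
slots = range(3)
# (u, v, t): inter-satellite link u<->v available during slot t (hypothetical)
contacts = {("S1", "S2", 0), ("S2", "S3", 1), ("S1", "S3", 2)}

G = nx.DiGraph()
for t in slots:
    for s in sats:
        if t + 1 in slots:
            G.add_edge((s, t), (s, t + 1), kind="storage")  # hold data onboard
    for u, v, tc in contacts:
        if tc == t:
            G.add_edge((u, t), (v, t), kind="link")
            G.add_edge((v, t), (u, t), kind="link")

# Fewest-hop route from S1 at slot 0 to S3 at slot 2 across time
print(nx.shortest_path(G, ("S1", 0), ("S3", 2)))
```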
{"title":"Multi-agent deep reinforcement learning for service function chain deployment in software defined LEO satellite networks","authors":"Pingduo Xu , Debin Wei , Jinglong Wen , Li Yang","doi":"10.1016/j.comcom.2025.108342","DOIUrl":"10.1016/j.comcom.2025.108342","url":null,"abstract":"<div><div>Large-scale low earth orbit (LEO) satellite networks constitute a core component of future sixth-generation (6G) communication systems. To address the challenges of resource scarcity and highly dynamic topologies, the integration of software-defined networking (SDN) and network function virtualization (NFV) technologies into LEO satellite networks has become imperative. We proposes a hybrid centralized-distributed software-defined LEO satellite network architecture. Within this framework, This study focuses on the service function chain (SFC) deployment problem in LEO space-ground integrated networks. time-expanded graphs (TEGs) are employed to model satellite networks with dynamic topological variations, aiming to satisfy diverse user requirements while jointly optimizing resource consumption costs and service latency. The problem is formulated as a weighted sum minimization of resource consumption costs and service latency, and this problem is proven to be NP-complete. Subsequently, we integrate the twin delayed deep deterministic policy gradient method with multi-agent techniques to design a multi-agent deep reinforcement learning SFC deployment (MADRL-D) framework for optimizing our objectives. Experimental results demonstrate that the proposed MADRL-D framework outperforms existing alternatives in terms of resource utilization efficiency, resource consumption costs, and service latency.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"244 ","pages":"Article 108342"},"PeriodicalIF":4.3,"publicationDate":"2025-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145340548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Operator coexistence in IRS-assisted mmWave networks: A wideband approach
Pub Date: 2025-10-20 | DOI: 10.1016/j.comcom.2025.108341 | Computer Communications 244, Article 108341
Joana Angjo, Anatolij Zubow, Falko Dressler
In sixth generation (6G) mobile networks, the push towards high-frequency bands for ultra-fast data rates intensifies the challenges of signal attenuation and reduced coverage range. Intelligent reconfigurable surfaces (IRSs) present a promising solution to these challenges by enhancing signal coverage and directing reflections, which also helps minimize losses. However, multiple challenges associated with IRSs must be addressed before this technology can be fully incorporated into existing networks. A key issue arises from the inability of an IRS to filter out non-target signals from other frequency bands due to its lack of bandpass filtering. In areas where multiple wireless operators are spatially nearby, even if they use different frequency bands, this may cause unwanted reflections that degrade their communication performance. To address this challenge, we previously proposed a solution that relies on partitioning an IRS into sub-surfaces (sub_IRS) and dynamically assigning operators to these sub_IRS. Results have shown that a proper assignment of wireless operators to sub_IRS can improve the overall performance compared to a random assignment. In this paper, we introduce a wideband approach, demonstrating that the impact of unwanted reflections can be mitigated by using wideband channels, as the average signal to noise ratio (SNR) across subcarriers is less adversely affected. This approach leverages frequency diversity to reduce SNR variance: some subcarriers may be negatively affected while others benefit, so the system maintains a more consistent and robust performance in the presence of IRS-induced unwanted reflections. Simulations and real-world measurements confirm that the deployment of wideband IRS provides a robust strategy for combating inter-operator reflections in next-generation IRS-assisted networks. Additionally, the wideband approach requires no additional centralized resource control in future multi-operator networks. According to simulations, the SNR variance for a 1.28 GHz channel is approximately 20 dB lower than that of a 10 MHz channel when coexistence is considered. Similarly, measurements confirm a threefold reduction in SNR variation when transitioning from narrowband (10 MHz) to wideband (320 MHz) transmission. Overall, the use of wideband channels in this context makes the system more stable and predictable.
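The frequency-diversity argument can be checked with a hedged Monte Carlo sketch: averaging per-subcarrier SNR over more subcarriers shrinks the variance of the resulting wideband SNR. Independent Rayleigh fading per subcarrier and the subcarrier counts are illustrative assumptions, not the paper's measured 10 MHz / 320 MHz setup.

```python
# Variance of subcarrier-averaged SNR vs. number of subcarriers.
import numpy as np

rng = np.random.default_rng(3)

def avg_snr_db(n_subcarriers, trials=10_000, mean_snr=100.0):
    # Independent Rayleigh fading per subcarrier -> exponential SNR samples
    snr = rng.exponential(mean_snr, size=(trials, n_subcarriers))
    return 10 * np.log10(snr.mean(axis=1))

for n in (1, 8, 128):
    s = avg_snr_db(n)
    print(f"{n:4d} subcarriers: var(SNR) = {s.var():6.2f} dB^2")
```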
{"title":"Operator coexistence in IRS-assisted mmWave networks: A wideband approach","authors":"Joana Angjo, Anatolij Zubow, Falko Dressler","doi":"10.1016/j.comcom.2025.108341","DOIUrl":"10.1016/j.comcom.2025.108341","url":null,"abstract":"<div><div>In sixth generation (6G) mobile networks, the push towards high-frequency bands for ultra-fast data rates intensifies the challenges of signal attenuation and reduced coverage range. Intelligent reconfigurable surfaces (IRSs) present a promising solution to these challenges by enhancing signal coverage and directing reflections, which also contribute to minimize loss. However, there are multiple challenges associated with IRS, which have to be addressed before full incorporation of this technology into existing networks. A key issue arises from the inability of IRS to filter out non-target signals from other frequency bands due to lack of bandpass filtering. In areas where multiple wireless operators are spatially nearby, even if they use different frequency bands, this may cause unwanted reflections that may degrade their communication performances. To address this challenge, we previously proposed a solution, which relied on partitioning an IRS into sub-surfaces (<span>sub_IRS</span>) and dynamically assigning operators to these <span>sub_IRS</span>. Results have shown that a proper assignment of wireless operators to <span>sub_IRS</span> can improve the overall performance compared to a random assignment. In this paper, we introduce a wideband approach, demonstrating that the impact from unwanted reflections can be mitigated by using wideband channels, as the average signal to noise ratio (SNR) across subcarriers is less adversely affected. This approach leverages frequency diversity to reduce SNR variance, as some of the subcarriers may be negatively affected while others benefit, resulting in maintaining a more consistent and robust system performance in the presence of IRS-induced unwanted reflections. Simulations and real-world measurements confirm that the deployment of wideband IRS provides a robust strategy for combating inter-operator reflections in next generation IRS-assisted networks. Additionally, the wideband approach comes at no additional necessity for centralized resource control in future multi-operator networks. According to simulations, the SNR variance for a 1.28<!--> <!-->GHz channel is approximately 20 dB lower than that of a 10<!--> <!-->MHz channel when coexistence is considered. Similarly, measurements confirm a threefold reduction in SNR variation when transitioning from narrowband (10<!--> <!-->MHz) to wideband (320<!--> <!-->MHz) transmission. In overall, the usage of wideband channels in this context allows the system to be more stable and predictable.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"244 ","pages":"Article 108341"},"PeriodicalIF":4.3,"publicationDate":"2025-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145366115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}