Dynamic behaviors of complex interacting systems, ubiquitously found in physical, biological, engineering, and social phenomena, are associated with underlying interactions between components of the system. A fundamental challenge in network science is to uncover interaction relationships between network components solely from observational data on their dynamics. Recently, generative models in machine learning, such as the variational autoencoder, have been used to identify the network structure through relational inference on multivariate time series data. However, most existing approaches are based on time series prediction, which remains challenging in the presence of missing data. In this study, we propose a novel approach, Diffusion model for Relational Inference (DiffRI), inspired by a self-supervised method for probabilistic time series imputation. DiffRI learns to infer the existence probability of interactions between network components through conditional diffusion modeling. Numerical experiments on both synthetic and quasi-real datasets show that DiffRI is highly competitive with other well-known methods in discovering ground-truth interactions. Furthermore, we demonstrate that our imputation-based approach is more tolerant of missing data than prediction-based approaches.
{"title":"Diffusion Model for Relational Inference in Interacting Systems","authors":"Shuhan Zheng;Ziqiang Li;Kantaro Fujiwara;Gouhei Tanaka","doi":"10.1109/TNSE.2025.3607563","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3607563","url":null,"abstract":"Dynamic behaviors of complex interacting systems, ubiquitously found in physical, biological, engineering, and social phenomena, are associated with underlying interactions between components of the system. A fundamental challenge in network science is to uncover interaction relationships between network components solely from observational data on their dynamics. Recently, generative models in machine learning, such as the variational autoencoder, have been used to identify the network structure through relational inference in multivariate time series data. However, most existing approaches are based on time series predictions, which are still challenging in the presence of missing data. In this study, we propose a novel approach, <bold>Diff</b>usion model for <bold>R</b>elational <bold>I</b>nference (DiffRI), inspired by a self-supervised method for probabilistic time series imputation. DiffRI learns to infer the existence probability of interactions between network components through conditional diffusion modeling. Numerical experiments on both synthetic and quasi-real datasets show that DiffRI is highly competent with other well-known methods in discovering ground truth interactions. Furthermore, we demonstrate that our imputation-based approach is more tolerant of missing data than prediction-based approaches.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"1990-2003"},"PeriodicalIF":7.9,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11164166","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed Energy Resources (DERs) offer significant potential for reducing power losses, improving voltage stability, and enhancing resilience in distribution networks. To effectively address network-specific operational constraints and maximize DER performance, it is crucial to strategically optimize both their siting and sizing. Existing works primarily adopt analytical or search-based approaches for DER placement aimed at minimizing power losses. However, analytical methods, while computationally efficient, frequently yield suboptimal solutions at higher DER penetration levels, whereas search-based methods, despite their robustness, become computationally impractical for large-scale networks due to exponential complexity growth. To overcome these limitations, this paper proposes a novel analytical framework that establishes an exact quadratic relationship between power losses and DER injections, enabling precise analytical estimation and optimization. The proposed approach explicitly relates nodal power demands to their respective contributions to system line losses, providing detailed theoretical insight into the root causes of power losses. Practically, the proposed framework facilitates real-time, large-scale DER allocation optimization while maintaining high accuracy. Furthermore, our theoretical analysis quantifies the impact of the DER power factor on optimal placement for loss reduction. This insight provides a direct, simplified method for integrating power loss considerations into complex, multi-objective optimization models. We validate our method on 33-, 69-, 123-, and 533-bus distribution networks. It significantly outperforms feature-based analytical approaches and matches or exceeds traditional search-based methods. On the largest 533-bus system, our algorithm completes the allocation in just 0.5 s, confirming its effectiveness and practicality for real-world applications.
{"title":"QPADL: Quadratic Programming for Allocation of Distributed Energy Resources to Minimize Power Loss in Distribution Networks","authors":"Hongshen Zhang;Shibo He;Yongtao Zhang;Wenchao Meng","doi":"10.1109/TNSE.2025.3608895","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3608895","url":null,"abstract":"Distributed Energy Resources (DERs) offer significant potential for reducing power losses, improving voltage stability, and enhancing resilience in distribution networks. To effectively address network-specific operational constraints and maximize DER performance, it is crucial to strategically optimize both their siting and sizing. Existing works primarily adopt analytical or search-based approaches for DER placement aimed at minimizing power losses. However, analytical methods, while computationally efficient, frequently yield suboptimal solutions at higher DER penetration levels, whereas search-based methods, despite their robustness, become computationally impractical for large-scale networks due to exponential complexity growth. To overcome the limitations, this paper proposes a novel analytical framework that establishes an exact quadratic relationship between power losses and DER injections, enabling a precise analytical estimation and optimization. The proposed approach explicitly relates nodal power demands to their respective contributions to system line losses, providing detailed theoretical insights into the root causes of power losses. Practically, the proposed framework facilitates real-time, large-scale DER allocation optimization while maintaining high accuracy. Furthermore, our theoretical analysis quantifies the impact of the DER power factor on optimal placement for loss reduction. This insight provides a direct, simplified method for integrating power loss considerations into complex, multi-objective optimization models. We validate our method on 33, 69, 123 and 533-bus distribution networks. It significantly outperforms feature-based analytical approaches and matches or exceeds traditional search-based methods. On the largest 533-bus system, our algorithm completes the allocation in just 0.5 s, confirming its effectiveness and practicality for real-world applications.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"2038-2052"},"PeriodicalIF":7.9,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-11. DOI: 10.1109/TNSE.2025.3609198
Bhaskara Narottama;Abrar Ul Haq;James Adu Ansere;Nidhi Simmons;Berk Canberk;Simon L. Cotton;Hyundong Shin;Trung Q. Duong
Recently, quantum deep reinforcement learning (Q-DRL) has started to gain attention as a potential approach for tackling complex challenges in wireless communication systems. In particular, Q-DRL, which integrates quantum operations into deep learning models, can effectively handle dynamic environments and process large-scale optimizations. As future wireless networks continue to evolve, greater emphasis is being placed on context and meaning rather than raw data. New paradigms, such as semantic communications (SemComs), are essential to effectively convey meaning between transmitters and receivers. By linking SemComs with Q-DRL, future wireless networks will be capable of large-scale extraction and decoding of meaning, thereby minimizing reliance on complete context sharing between communicating parties. Together with SemComs, digital twins (DTs) have been considered key enablers for future wireless networks. As virtual replicas of physical networks, they serve an important role in network operation, optimization, and control. In this regard, Q-DRL will also be highly beneficial for DTs in enhancing critical functions such as data management and security. This study offers fresh outlooks on how to leverage Q-DRL for SemComs in future wireless networks, augmented by the use of DTs.
{"title":"Quantum Deep Reinforcement Learning for Digital Twin-Enabled 6G Networks and Semantic Communications: Considerations for Adoption and Security","authors":"Bhaskara Narottama;Abrar Ul Haq;James Adu Ansere;Nidhi Simmons;Berk Canberk;Simon L. Cotton;Hyundong Shin;Trung Q. Duong","doi":"10.1109/TNSE.2025.3609198","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3609198","url":null,"abstract":"Recently, quantum deep reinforcement learning (Q-DRL) has started to gain attention as a potential approach for tackling complex challenges in wireless communication systems. In particular, Q-DRL, integrating quantum operations into deep learning models, can effectively handle dynamic environments and process large-scale optimizations. As future wireless networks continue to evolve, greater emphasis is being placed on context and meaning rather than raw data. New paradigms, such as semantic communications (SemComs) are essential to effectively convey meaning between transmitters and receivers. By linking SemComs with Q-DRL, future wireless networks will be capable of large-scale extractions and decoding of meaning, thereby minimizing reliance on complete context sharing between communicating parties. Together with SemComs, digital twins (DTs) have been considered as key enablers for future wireless networks. As virtual replicas of physical networks, they serve an important role in network operation, optimization, and control. In this regard, Q-DRL will also be highly beneficial for DTs in enhancing critical functions such as data management and security. This study offers fresh outlooks on how to leverage Q-DRL for SemComs in future wireless networks, augmented by the use of DTs.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"2053-2076"},"PeriodicalIF":7.9,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-10. DOI: 10.1109/TNSE.2025.3608483
Siqi Ma;Yang Fang;Weidong Xiao;Xiang Zhao
Node importance estimation involves assigning a global importance score to each node in a graph and is pivotal to various subsequent tasks, including recommendation, network dismantling, etc. Prior research pre-trains classification tasks using node labels and structural information and then computes node importance scores as a downstream regression task. However, a gap arises from the inconsistency between the pre-training and downstream tasks, which tends to cause negative transfer. This paper proposes to narrow this gap for node importance estimation by implementing a multi-view technique, comprising a node-view for context and a graph-view for structure. Specifically, in the node-view, we devise soft prompts by encoding node information, which enables the model to capture structural features within a semantic context; afterward, the downstream node regression task is aligned with pre-training by inserting prompt patterns. In the graph-view, we introduce virtual nodes, which are learnably inserted based on node importance, to create a prompt graph. High-importance nodes in the original graph are linked to more virtual nodes, improving their embeddings in subsequent propagation steps. This enhancement increases their importance scores in downstream tasks, improving the model's ability to distinguish significant nodes effectively. Additionally, the prompts from different views are fused through multi-view contrastive learning to further enhance the expressiveness of the node embeddings. We empirically evaluate our model on four public datasets, where it significantly and consistently outperforms other state-of-the-art alternatives.
{"title":"Node Importance Estimation via Multi-View Graph Prompting","authors":"Siqi Ma;Yang Fang;Weidong Xiao;Xiang Zhao","doi":"10.1109/TNSE.2025.3608483","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3608483","url":null,"abstract":"Node importance estimation involves assigning a global importance score to each node in a graph, pivotal to various subsequent tasks, including recommendation, network dismantling, etc. Prior research involves pre-training classification tasks using node labels and structural information, followed by computing node importance scores as a downstream regression task. However, a gap exists caused by the inconsistency between the pre-training and downstream tasks, which tends to exert negative transfer. This paper proposes to narrow down the gap for node importance estimation by implementing a multi-view technique, including node-view for context and graph-view for structure. Specifically, in node-view, we devise soft prompts by encoding node information, which enables the model to capture structural features within a semantic context; afterward, the downstream node regression task is aligned with pre-training by inserting prompt patterns. In graph-view, we introduce virtual nodes, which are learnably inserted based on node importance, to create a prompt graph. High-importance nodes in the original graph are linked to more virtual nodes, improving their embeddings in subsequent propagation steps. Such enhancement increases their importance scores in downstream tasks, improving the model's ability to distinguish significant nodes effectively. Additionally, the prompts from different views are fused through multi-view contrastive learning to further enhance the expressiveness of the node embeddings. We empirically evaluate our model on four public datasets, which are shown to outperform other state-of-the-art alternatives significantly and consistently.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"2022-2037"},"PeriodicalIF":7.9,"publicationDate":"2025-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-09. DOI: 10.1109/TNSE.2025.3607916
Dingwen Pan;Weilong Chen;Jian Shi;Chenye Wu;Dan Wang;Choong Seon Hong;Zhu Han
The rapid expansion of large language models (LLMs) has led to increasingly frequent interactions between LLM agents and human users, motivating new questions about their capacity to form and maintain cooperative relationships. To this end, game theory, as an effective tool for studying strategic interactions, has attracted attention and been employed in LLM research, particularly in exploring interactions between LLMs and users. However, most previous studies focused on the performance of LLMs in static games or finitely repeated games; these settings are relatively stylized and cannot fully capture the complex, evolving nature of User–LLM interactions. In this paper, we modeled User–LLM interactions as a dynamic network of repeated strategic exchanges and proposed an infinitely repeated game framework to analyze the behavioral traits of LLMs in such settings. To enable adaptive decision-making under uncertainty, we further incorporated Bayesian inference using a beta distribution as both the prior and posterior. We conducted a case study on trending and state-of-the-art LLMs: GPT-3, GPT-4, DeepSeek-V3, Qwen2.5-72B, Qwen2.5-7B, and Llama-3-70B. Experimental results demonstrate that LLMs show decent performance in infinitely repeated games, indicating their capability for decision-making and cooperation during repeated interactions within dynamic networks. The integration of Bayesian inference further reveals that LLMs can effectively process probabilistic information, leading to improved performance. Our findings suggest that LLM agents tend to weigh future payoffs rather than caring only about single-stage rewards, and that they are able to build and maintain long-term cooperative relationships with users in dynamic network settings.
{"title":"Bayesian Inference-Aided Large Language Model Agents in Infinitely Repeated Games: A Dynamic Network View","authors":"Dingwen Pan;Weilong Chen;Jian Shi;Chenye Wu;Dan Wang;Choong Seon Hong;Zhu Han","doi":"10.1109/TNSE.2025.3607916","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3607916","url":null,"abstract":"The rapid expansion of large language models (LLMs) has led to increasingly frequent interactions between LLM agents and human users, motivating new questions about their capacity to form and maintain cooperative relationships. To this end, game theory, as an effective tool in the study of strategic interactions, has gathered attention and has been employed in the research field of LLMs, particularly in exploring their interactions with users. However, most previous studies focused on the performance of LLMs in static games or finitely repeated games, and these studies are relatively stylized and cannot fully capture the complex, evolving nature of User–LLM interactions. In this paper, we modeled User–LLM interactions as a dynamic network of repeated strategic exchanges and proposed an infinitely repeated game framework to analyze the behavioral traits of LLMs in such settings. To enable adaptive decision-making under uncertainty, we further incorporated Bayesian inference using a beta distribution as both the prior and posterior. We conducted a case study over the trending and state-of-the-art LLMs: GPT-3, GPT-4, DeepSeek-V3, Qwen2.5-72B, Qwen2.5-7B and Llama-3-70B. Experimental results demonstrate that LLMs show decent performance in infinitely repeated games, indicating their capability in decision-making and cooperation during repeated interactions within dynamic networks. The integration of Bayesian inference further reveals that LLMs can effectively process probabilistic information, leading to improved performance. Our findings suggest that LLM agents prefer to consider future payoffs rather than only caring about single-stage rewards, as well as the ability to build and maintain long-term cooperative relationships with users in dynamic network settings.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"2004-2021"},"PeriodicalIF":7.9,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-08. DOI: 10.1109/TNSE.2025.3607331
Hao Li;Hao Jiang;Jing Wu;Guiao Yang;Jian Zhang
As the size of large language models (LLMs) increases, the limitations of a single data center, such as constrained computational resources and storage capacity, have made distributed training across multiple data centers the preferred solution. However, a primary challenge in this context is reducing the impact of gradient synchronization on the training efficiency across multiple data centers. In this work, we propose a distributed training scheme for LLMs, named parallel gradient computation and synchronization (PGCS). Specifically, while one expert model is being trained to compute gradients, another expert model performs gradient synchronization in parallel. In addition, a gradient synchronization algorithm named BLP is developed to find the optimal gradient synchronization strategy under arbitrary network connectivity and limited bandwidth across multiple data centers. Ultimately, the effectiveness of PGCS and BLP in enhancing the efficiency of distributed training is demonstrated through comprehensive simulations and physical experiments.
{"title":"Parallel Gradient Computation and Synchronization: Enhancing the Efficiency of Distributed Training for LLMs","authors":"Hao Li;Hao Jiang;Jing Wu;Guiao Yang;Jian Zhang","doi":"10.1109/TNSE.2025.3607331","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3607331","url":null,"abstract":"As the size of large language models (LLMs) increases, the limitations of a single data center, such as constrained computational resources and storage capacity, have made distributed training across multiple data centers the preferred solution. However, a primary challenge in this context is reducing the impact of gradient synchronization on the training efficiency across multiple data centers. In this work, we propose a distributed training scheme for LLMs, named parallel gradient computation and synchronization (PGCS). Specifically, while one expert model is being trained to compute gradients, another expert model performs gradient synchronization in parallel. In addition, a gradient synchronization algorithm named BLP is developed to find the optimal gradient synchronization strategy under arbitrary network connectivity and limited bandwidth across multiple data centers. Ultimately, the effectiveness of PGCS and BLP in enhancing the efficiency of distributed training is demonstrated through comprehensive simulations and physical experiments.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"1962-1976"},"PeriodicalIF":7.9,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-08. DOI: 10.1109/TNSE.2025.3607494
Xinchao Wang;Wei Wang;Cheng Huang;Ping Cao
The open and zero-trust nature of the heterogeneous low-altitude intelligence network requires more stringent secure authentication than conventional schemes can provide, owing to static authorization misalignment, the infiltration risk of long-validity tokens, and single-factor credential ossification. To address these challenges, this study proposes a blockchain-based cross-domain authentication scheme. We first develop a blockchain-enabled secure cross-domain registration and information management architecture incorporating a dual-index data structure for efficient historical query operations. Unmanned aerial vehicles (UAVs) achieve cross-domain registration through blockchain-based secure interactions with target-domain trusted authorities (TAs). A cross-domain authentication protocol integrating a physical unclonable function (PUF) and a hash-based signature technique is designed for mutual authentication. The TA generates time-limited cross-domain tokens with restricted communication attempts for UAVs, which subsequently establish negotiated session keys with base stations for secure resource sharing. To enhance security dynamics, both parties update temporary identity information and prepare fresh authentication keys during each token request cycle. The TA delegates token-updating random factors to base stations to ensure secure token renewal. Additionally, because the blockchain records the hash value of each token round, the TA can detect whether internal attackers have tampered with the token state. The security analysis and experiments demonstrate the advantages of our scheme.
{"title":"Zero-Trust Enabled Anonymous Continuous Cross-Domain Authentication for UAVs: A Blockchain-Based Approach","authors":"Xinchao Wang;Wei Wang;Cheng Huang;Ping Cao","doi":"10.1109/TNSE.2025.3607494","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3607494","url":null,"abstract":"The open and zero-trust nature of the heterogeneous low-altitude intelligence network requires more stringent secure authentication that cannot be meet with conventional schemes, due to the static authorization misalignment, long-validity token infiltration risk, and single-factor credential ossification. To address these challenges, this study proposes a blockchain-based cross-domain authentication scheme. We first develop a blockchain-enabled secure cross-domain registration and information management architecture incorporating a dual-index data structure for efficient historical query operations. Unmanned aerial vehicles (UAVs) achieve cross-domain registration through blockchain-based secure interactions with target domain trusted authorities (TAs). A cross-domain authentication protocol integrating physical unclonable function (PUF) and hash-based signature technique is designed, for mutual authentication. The TA generates time-limited cross-domain tokens with restricted communication attempts for UAVs, which subsequently establish negotiated session keys with base stations for secure resource sharing. To enhance security dynamics, both parties update temporary identity information and prepare fresh authentication keys during each token request cycle. The TA delegates token-updating random factors to base stations to ensure secure token renewal. Additionally, as the blockchain records the hash values of each token round, TA can detect if internal attackers have tampered with the token state. The security analysis and experiments demonstrate the advantages of our scheme.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"1977-1989"},"PeriodicalIF":7.9,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-22. DOI: 10.1109/TNSE.2025.3561751
Qingyan Ren;Yanjing Sun;Sizhen Bian;Michele Magno
Underwater Wireless Sensor Networks (UWSNs) offer promising exploration capabilities in challenging underwater environments, necessitating a focus on reducing energy consumption while guaranteeing monitoring coverage. Underwater magnetic induction (MI)-assisted acoustic cooperative multiple-input–multiple-output (MIMO) WSNs have shown advantages over traditional UWSNs in various aspects due to the seamless integration of sensor networks and communication technology. However, as an emerging topic, a critical gap remains: existing studies often overlook the vital considerations of monitoring coverage requirements and the dynamic nature of the unknown underwater environment. Moreover, these advantages can be further enhanced by harnessing the collaborative potential of multiple independent underwater nodes. This paper introduces a significant advancement to the field of MI-assisted acoustic cooperative MIMO WSNs by leveraging Confident Information Coverage (CIC) and a reinforcement learning paradigm known as Learning Automata (LA). The paper presents the LA-based Cooperative MIMO Formation (LACMF) algorithm, designed to minimize the communication energy consumption of sensors while concurrently maximizing coverage performance. Experimental results demonstrate that LACMF considerably outperforms other schemes in terms of energy consumption and network coverage while satisfying the imposed constraints: the CIC can be improved by up to an additional 52%, alongside an 11% reduction in energy consumption.
{"title":"Optimizing Energy Consumption and Coverage in Underwater Magnetic Induction-Assisted Acoustic WSNs Using Learning Automata-Based Cooperative MIMO Formation","authors":"Qingyan Ren;Yanjing Sun;Sizhen Bian;Michele Magno","doi":"10.1109/TNSE.2025.3561751","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3561751","url":null,"abstract":"Underwater Wireless Sensor Networks (UWSNs) offer promising exploration capabilities in challenging underwater environments, necessitating a focus on reducing energy consumption while guaranteeing monitoring coverage. Underwater magnetic induction (MI)-assisted acoustic cooperative multiple-input–multiple-output (MIMO) WSNs have shown advantages over traditional UWSNs in various aspects due to the seamless integration of sensor networks and communication technology. However, as an emerging topic, a critical gap exists, as they often overlook the vital considerations of monitoring coverage requirements and the dynamic nature of the unknown underwater environment. Moreover, these advantages can be further enhanced by harnessing the collaborative potential of multiple independent underwater nodes. This paper introduces a significant advancement to the field of MI-assisted Acoustic Cooperative MIMO WSNs leveraging the innovative Confident Information Coverage (CIC) and a reinforcement learning paradigm known as Learning Automata (LA). The paper presents the LA-based Cooperative MIMO Formation (LACMF) algorithm designed to minimize communication energy consumption in sensors while concurrently maximizing coverage performance. Experimental results demonstrate the LACMF considerably outperforms other schemes in terms of energy consumption, and network coverage to satisfy the imposed constraints, the CIC can be improved up to by an additional 52%, 11% reduction in energy consumption.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 5","pages":"3527-3540"},"PeriodicalIF":7.9,"publicationDate":"2025-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144891284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-20. DOI: 10.1109/TNSE.2025.3600924
Fenghui Zhang;Yuhang Jiang;Xuecai Bao;Xiancun Zhou;Yu Zong;Xiaohu Liang;Kun Yang
Introducing edge computing into smart manufacturing can enhance factory efficiency and productivity. By leveraging a central scheduler to connect Edge Servers (ESs) in these factories, resource sharing can be achieved. However, the unpredictable nature of task offloading from factory IoT devices results in varying task loads at each ES, expanding the action space and complicating task scheduling coordination, thus impeding effective load balancing. To address this challenge, we propose an AxTD3-Deep Reinforcement Learning (DRL) method that balances the system while reducing system latency. First, we consider that each ES hosts multiple virtual machines and propose a workload balancing algorithm to ensure more balanced computation among the virtual machines of each ES. Next, we formulate this system as a reinforcement learning model and analyze its state and action spaces. Based on this analysis, we modify the system's states and actions to reduce its complexity without compromising utility. We then design the AxTD3-DRL variants, namely A2TD3 and A3TD3, which divide a neural network into several parallel sub-networks to further reduce the action and state spaces, thereby accelerating convergence. Finally, we compare the designed method with classic DRL algorithms (e.g., SAC, TD3) and heuristic approaches (e.g., PSO). The results show that our proposed AxTD3 algorithm not only balances the load across ESs but also reduces the average system latency.
{"title":"Adaptive Load Balancing for Industrial Edge Computing Systems: An AxTD3-Deep Reinforcement Learning Approach","authors":"Fenghui Zhang;Yuhang Jiang;Xuecai Bao;Xiancun Zhou;Yu Zong;Xiaohu Liang;Kun Yang","doi":"10.1109/TNSE.2025.3600924","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3600924","url":null,"abstract":"Introducing edge computing into smart manufacturing can enhance factory efficiency and productivity. By leveraging a central scheduler to connect Edge Servers (ESs) in these factories, resource sharing can be achieved. However, the unpredictable nature of task offloading from factory IoT devices results in varying task loads at each ES, expanding the action space and complicating task scheduling coordination, thus impeding effective load balancing. To address this challenge, we propose an AxTD3-Deep Reinforcement Learning (DRL) method to balance the system while reducing system latency. Firstly, we consider that each ES has multiple virtual machines and propose a workload balancing algorithm to ensure more balanced computation among the virtual machines of each ES. Next, we construct this system as a reinforcement learning model and analyze its state and action spaces. Based on this analysis, we modify the system's states and actions to reduce its complexity without compromising utility. We then design the AxTD3-DRL to balance the system, i.e., A2TD3 and A3TD3, dividing a neural network into several parallel sub-networks to further reduce the action space and state space, thereby accelerating convergence. Finally, we compare the designed method with classic DRL algorithms (e.g., SAC, TD3) and heuristic approaches (e.g., PSO). The results show that our proposed AxTD3 algorithm not only balances the load across ESs but also reduces the average system latency.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"4743-4759"},"PeriodicalIF":7.9,"publicationDate":"2025-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145886564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-19. DOI: 10.1109/TNSE.2025.3598991
Xinchen Xu;Francesca Parise
We study a family of random graph models - termed subgraph generated models (SUGMs) - initially developed by Chandrasekhar and Jackson (2025) in which higher-order structures are explicitly included in the network formation process. We use matrix concentration inequalities to bound the difference between the adjacency matrix of networks realized from such SUGMs and the expected adjacency matrix as a function of the network size. We apply this result to derive high-probability bounds on the difference between centrality measures (such as degree, eigenvector, and Katz centrality) in sampled versus expected normalized networks.
{"title":"On the Asymptotic Convergence of Subgraph Generated Models","authors":"Xinchen Xu;Francesca Parise","doi":"10.1109/TNSE.2025.3598991","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3598991","url":null,"abstract":"We study a family of random graph models - termed subgraph generated models (SUGMs) - initially developed by Chandrasekhar and Jackson (2025) in which higher-order structures are explicitly included in the network formation process. We use matrix concentration inequalities to bound the difference between the adjacency matrix of networks realized from such SUGMs and the expected adjacency matrix as a function of the network size. We apply this result to derive high-probability bounds on the difference between centrality measures (such as degree, eigenvector, and Katz centrality) in sampled versus expected normalized networks.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"5654-5662"},"PeriodicalIF":7.9,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}