A Spatial-Temporal Graph Convolutional Network With Self-Attention for City-Level Cellular Network Traffic Prediction
Pengfei Zhang;Junhuai Li;Dong Ding;Huaijun Wang;Kan Wang;Xiaofan Wang
Pub Date : 2025-11-05 DOI: 10.1109/TNSE.2025.3629133
IEEE Transactions on Network Science and Engineering, vol. 13, pp. 3285-3300
Accurate and efficient cellular traffic prediction is crucial for enhancing the user quality of experience in mobile networks. However, this task faces significant challenges due to the dynamic complexity of spatial-temporal connections. Existing studies primarily focus on global spatial features while neglecting geographical relationships between base stations and overlooking local spatial-temporal dependencies during feature fusion. To address these limitations, we propose SA-GCN, a novel multi-dimensional feature fusion self-attention graph convolutional network that leverages base station topology, dynamic spatial-temporal characteristics, and traffic aggregation effects. SA-GCN enhances prediction accuracy by synergistically fusing static geographical features with dynamic spatial-temporal patterns driven by user mobility and holiday events. The model comprises two key components: 1) spatial transformers with graph convolution and enhanced self-attention that capture static and dynamic spatial features through gated fusion, and 2) temporal transformers modeling non-stationary dependencies via self-attention. Multiple spatial-temporal blocks are connected via skip connections for deep feature fusion, while a densely connected convolutional module extracts local dependencies. Extensive experiments on real-world datasets demonstrate SA-GCN's superior performance over state-of-the-art methods.
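The gated fusion of static and dynamic spatial features described above can be sketched as an elementwise convex blend controlled by a gate. The plain-Python toy below uses made-up scalar weights standing in for learned parameters; it illustrates the mechanism only, not the authors' implementation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(static_feat, dynamic_feat, w_s, w_d, b):
    """Elementwise gated fusion: per dimension, a gate in (0, 1) decides how
    much of the static (geographical) versus dynamic (spatial-temporal)
    signal to keep. Weights here are illustrative, not learned values."""
    fused = []
    for s, d, ws, wd, bi in zip(static_feat, dynamic_feat, w_s, w_d, b):
        g = sigmoid(ws * s + wd * d + bi)       # gate value
        fused.append(g * s + (1.0 - g) * d)     # convex combination
    return fused

# Toy 3-dimensional feature vectors for a single base station.
static = [0.2, 0.8, -0.5]
dynamic = [1.0, -0.3, 0.4]
out = gated_fusion(static, dynamic, [1.0] * 3, [1.0] * 3, [0.0] * 3)
```

Because the gate yields a convex combination, each fused coordinate always lies between its static and dynamic inputs.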
Adaptive Graph Filtering Neural Network for Graph Anomaly Detection
Zhizhe Liu;Shuai Zheng;Yeyu Yan;Zhenfeng Zhu;Yao Zhao
Pub Date : 2025-11-05 DOI: 10.1109/TNSE.2025.3629084
IEEE Transactions on Network Science and Engineering, vol. 13, pp. 3274-3284
Graph anomaly detection (GAD) plays an important role in improving public safety and product quality and has attracted a great deal of interest in recent years. Although considerable progress has been achieved recently, the following challenges remain: (1) abnormal nodes mixed into subgraphs of predominantly normal nodes, and (2) globally consistent filtering applied uniformly to features with different characteristics. To overcome these challenges, we propose AGFNN, a novel adaptive graph filtering neural network designed to handle diverse mixed local patterns and feature variations, thereby improving model fitting from both the node and feature perspectives. Specifically, to enhance the discriminative capacity of node representations, channel-wise adaptive feature filtering is proposed to learn a specific filter for each feature in a progressive way: it first performs multi-frequency filtering and then adaptively captures the importance of different frequency components for each feature. Meanwhile, to better fit complex local subgraphs, each node's preference for multi-frequency information is self-adjusted by learning a node-aware bias, which is equivalent to learning a specific filter for each node. Extensive experiments on real-world graph datasets demonstrate that AGFNN outperforms state-of-the-art methods.
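The channel-wise filtering idea, deciding per feature how much low-frequency (smoothing) versus high-frequency (contrast) signal to keep, can be illustrated with a toy blend on a tiny graph. The per-channel weights `alphas` are hypothetical stand-ins for the learned filters; this sketches the concept, not AGFNN itself.

```python
def neighbor_mean(adj, x):
    """Mean of each node's neighbor values for one feature channel."""
    return [sum(x[j] for j in nbrs) / len(nbrs) if nbrs else x[i]
            for i, nbrs in enumerate(adj)]

def channel_adaptive_filter(adj, features, alphas):
    """Per-channel blend of a low-pass component (neighbor smoothing) and a
    high-pass component (deviation from neighbors). alphas[c] near 1 keeps
    the smooth, homophilous signal; near 0 keeps the anomalous contrast."""
    filtered = []
    for c, alpha in enumerate(alphas):
        x = [row[c] for row in features]
        low = neighbor_mean(adj, x)                   # low-frequency part
        high = [xi - li for xi, li in zip(x, low)]    # high-frequency part
        filtered.append([alpha * l + (1 - alpha) * h
                         for l, h in zip(low, high)])
    # transpose back to node-major layout
    return [list(col) for col in zip(*filtered)]

# Triangle graph, 2 channels: channel 0 smoothed, channel 1 kept as contrast.
adj = [[1, 2], [0, 2], [0, 1]]
feats = [[1.0, 1.0], [1.0, 1.0], [4.0, 4.0]]
out = channel_adaptive_filter(adj, feats, alphas=[1.0, 0.0])
```

Node 2 stands out in the high-pass channel (value 3.0) while being smoothed toward its neighbors in the low-pass channel.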
Fine-Grained Behavioral Modeling With Graph Neural Networks for Financial Identity Theft Detection
Min Gao;Qiongzan Ye;Yangbo Gao;Zhenhua Zhang;Yu Chen;Yupeng Li;Shutong Chen;Qingyuan Gong;Xin Wang;Yang Chen
Pub Date : 2025-10-31 DOI: 10.1109/TNSE.2025.3627451
IEEE Transactions on Network Science and Engineering, vol. 13, pp. 5586-5602
Online-to-Offline (O2O) e-commerce services and their users confront a spectrum of fraud risks, among which financial identity theft is prevalent and severe. However, current approaches are inadequate for covering such fraud. To address this problem, we consider both environmental entity interactions and activity sequences to model user behaviors at a finer granularity. In our preliminary study, we discovered that fraudulent users exhibit high aggregation around various environmental entities, and that fraudulent individuals reusing the same personal ID interact with a diverse range of environmental entities. We further investigate the abnormal behaviors of individual fraudsters. Motivated by these discoveries, we propose a deep learning-based behavior modeling framework named EnvIT to capture the above behavior patterns. EnvIT is sufficiently general to learn user representations for various e-commerce fraud scenarios. Extensive experiments are conducted on two real-world datasets provided by Meituan and Vesta. The results demonstrate the superiority of our method, with a 0.17%-13.50% improvement in AUC and 1.13%-22.57% in R@90%P on the Meituan dataset, and a 0.71%-11.94% improvement in AUC and 2.99%-21.19% in R@90%P on the Vesta dataset.
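One of the raw signals behind the observation above, a single personal ID interacting with many distinct environmental entities, can be computed as a simple aggregate. The event schema (`person_id`, `entity`) is invented for illustration; this shows only the counting step, not EnvIT's learned representations.

```python
from collections import defaultdict

def entity_diversity_by_id(events):
    """Count distinct environmental entities (device, IP, Wi-Fi, etc.)
    observed per personal ID. A high count is one crude hint of the
    ID-reuse pattern described in the abstract; field names are
    hypothetical."""
    seen = defaultdict(set)
    for event in events:
        seen[event["person_id"]].add(event["entity"])
    return {pid: len(entities) for pid, entities in seen.items()}

events = [
    {"person_id": "A", "entity": "dev-1"},
    {"person_id": "A", "entity": "dev-1"},
    {"person_id": "B", "entity": "dev-2"},
    {"person_id": "B", "entity": "dev-3"},
    {"person_id": "B", "entity": "ip-9"},
]
diversity = entity_diversity_by_id(events)
```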
2025 Index IEEE Transactions on Network Science and Engineering
Pub Date : 2025-10-31 DOI: 10.1109/TNSE.2025.3627823
IEEE Transactions on Network Science and Engineering, vol. 12, no. 6, pp. 5086-5177 (open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11223187)
Graph-Aware Diffusion Policy for Fault-Tolerant Agentic AI Service Migration in Edge Computing Power Networks
Honglin Fang;Peng Yu;Xinxiu Liu;Jice Liu;Zhaowei Qu;Ying Wang;Wenjing Li;Shaoyong Guo;Celimuge Wu
Pub Date : 2025-10-31 DOI: 10.1109/TNSE.2025.3627391
IEEE Transactions on Network Science and Engineering, vol. 13, pp. 5992-6009
In edge computing power network environments, there is a growing demand to support compute-intensive Agentic AI services, which are composed of interdependent functions represented as Directed Acyclic Graphs (DAGs). However, dynamic resource volatility and potential node failures significantly impact reliable task execution. Existing solutions (often reactive heuristics or GAN-based models) struggle to anticipate risks and overlook DAG dependencies. This paper introduces GADP, a Graph-Aware Diffusion Policy framework for proactive, fault-tolerant DAG workload migration in large-scale edge computing systems. GADP integrates three key modules: a Transformer-GAT fault predictor for failure probability and type estimation; a DAG encoder that learns structure-preserving task embeddings via multi-round attention; and a diffusion policy generator that refines placement strategies through conditional denoising. Experiments on dynamic simulations with real workload traces show that GADP achieves 99.6% fault detection accuracy, a 95.4% diagnosis F1 score, and over 60% fewer SLO violations, while consuming the least energy among baselines. These results demonstrate GADP's robustness and effectiveness in anticipatory migration under volatile edge conditions.
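Any dependency-respecting migration of a service DAG presupposes an ordering consistent with its edges; a standard topological sort (Kahn's algorithm) sketches that one ingredient. Task names are invented, and GADP's fault predictor and diffusion policy generator are not reproduced here.

```python
from collections import deque

def topological_order(tasks, deps):
    """Kahn's algorithm over a service DAG: returns an execution or
    migration order that respects dependencies. Edge (u, v) means v
    depends on u."""
    indeg = {t: 0 for t in tasks}
    out_edges = {t: [] for t in tasks}
    for u, v in deps:
        out_edges[u].append(v)
        indeg[v] += 1
    ready = deque(t for t in tasks if indeg[t] == 0)
    order = []
    while ready:
        u = ready.popleft()
        order.append(u)
        for v in out_edges[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    if len(order) != len(tasks):
        raise ValueError("dependency cycle: not a DAG")
    return order

order = topological_order(
    ["ingest", "plan", "act", "report"],
    [("ingest", "plan"), ("plan", "act"), ("plan", "report")])
```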
COMEDY: Continuous-Time Anomalous Edge Detection in Dynamic Networks
Jie Liu;Jiamou Liu;Kaiqi Zhao;Wu Chen
Pub Date : 2025-09-16 DOI: 10.1109/TNSE.2025.3610091
IEEE Transactions on Network Science and Engineering, vol. 13, pp. 2093-2110
Anomaly detection in dynamic networks is a critical task with broad applications in fields such as recommendation systems, social networks, and financial transaction networks. Most existing anomaly detection approaches rely on discrete-time models that approximate dynamic networks as a sequence of static snapshots. However, real-world data is typically represented as dynamic networks characterized by continuous edge streams. As a result, these methods often fail to capture fine-grained temporal dynamics, leading to significant information loss and suboptimal detection performance. Addressing this gap, this paper tackles the detection of anomalous edges in continuous-time dynamic networks, a crucial task for ensuring the security and integrity of networks in graph-based data analytics. We introduce COMEDY, a novel Continuous-time anOMalous Edge detection framework for DYnamic networks. COMEDY centers on a Continuous Dynamic Graph Neural Network that filters outdated information, encodes nodes' spatial-temporal properties, and refines negative sampling strategies, with the aim of improving the accuracy of anomalous edge detection. Notably, COMEDY is deliberately designed so that all necessary operations can respond to each new edge in the input stream in constant time with respect to graph size. Experimental results on six real datasets demonstrate that COMEDY outperforms state-of-the-art anomaly detection methods, with a maximum gain of 8.20% in AUC.
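The constant-time-per-edge requirement can be met by keeping only a bounded window of state per node, so processing one incoming edge touches O(window) memory regardless of graph size. The novelty-based score below is an illustrative stand-in for COMEDY's learned model, shown only to make the complexity argument concrete.

```python
from collections import deque

class StreamingEdgeScorer:
    """Toy constant-time-per-edge anomaly scorer: each node remembers only
    a fixed-size window of its most recent neighbors. The scoring rule
    (previously unseen neighbor => higher score) is a hypothetical
    placeholder, not COMEDY's model."""

    def __init__(self, window=4):
        self.window = window
        self.recent = {}  # node -> deque of recent neighbors

    def _novel(self, node, other):
        dq = self.recent.setdefault(node, deque(maxlen=self.window))
        novel = other not in dq      # O(window) membership check
        dq.append(other)             # O(1) append, old entries evicted
        return novel

    def score(self, u, v):
        # More suspicious if the edge is novel from both endpoints' views.
        return (self._novel(u, v) + self._novel(v, u)) / 2.0

scorer = StreamingEdgeScorer(window=2)
first = scorer.score("a", "b")    # never seen before
repeat = scorer.score("a", "b")   # both endpoints now remember each other
```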
SLDP-LoRA: A Privacy-Preserving Split Learning Framework With Low-Rank Adaptation
Yirui Huang;Jia-Li Yin;Zhou Tan;Qiuxiang Wang;Ximeng Liu
Pub Date : 2025-09-15 DOI: 10.1109/TNSE.2025.3610125
IEEE Transactions on Network Science and Engineering, vol. 13, pp. 2111-2127
The advancement of Large Language Models (LLMs) has revolutionized natural language understanding. However, deploying LLMs in privacy-sensitive environments, such as Internet of Things (IoT) systems, presents substantial challenges to ensuring user data confidentiality. This paper introduces SLDP-LoRA, a novel privacy-preserving split learning framework that combines Rényi Differential Privacy with Low-Rank Adaptation (LoRA) for efficient and secure fine-tuning. SLDP-LoRA minimizes computational overhead on user devices by performing token representation and privacy perturbation locally, while employing LoRA-based fine-tuning and denoising techniques on the server side. The framework ensures strong privacy protection through dynamic noise injection tailored to token representations, while maintaining high utility. Experimental results on multiple datasets and models demonstrate SLDP-LoRA's superior performance, with an average improvement of 25.55% in accuracy and a 74.9% reduction in privacy leakage compared to state-of-the-art methods. Ablation studies further validate the effectiveness of its components in achieving a robust utility-privacy trade-off. SLDP-LoRA offers a scalable, efficient, and practical solution for privacy-preserving applications in distributed and resource-constrained environments.
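Local privacy perturbation of a token representation typically follows the clip-then-add-Gaussian-noise pattern. The sketch below shows only that generic mechanism; SLDP-LoRA's Rényi-DP calibration and dynamic noise schedule are not reproduced, and `sigma` is set to zero here purely to keep the example deterministic.

```python
import math
import random

def clip_and_noise(vec, clip_norm, sigma, rng):
    """Gaussian-mechanism-style perturbation: rescale the vector so its
    L2 norm is at most clip_norm, then add N(0, sigma^2) noise per
    coordinate. A generic DP sketch, not the paper's calibrated scheme."""
    norm = math.sqrt(sum(x * x for x in vec))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in vec]
    return [x + rng.gauss(0.0, sigma) for x in clipped]

rng = random.Random(0)
# [3, 4] has L2 norm 5, so clipping to norm 1 rescales it by 0.2.
noisy = clip_and_noise([3.0, 4.0], clip_norm=1.0, sigma=0.0, rng=rng)
```

Clipping bounds each user's contribution (the sensitivity), which is what lets the added noise be calibrated to a formal privacy budget.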
Enhancing Recommendation Performance Using Attribute-Aware Message-Passing and Augmentation GCN
Yan Wang;Yifan Ren;Jinting Nie;Keqin Li
Pub Date : 2025-09-15 DOI: 10.1109/TNSE.2025.3609971
IEEE Transactions on Network Science and Engineering, vol. 13, pp. 2077-2092
Graph Convolutional Networks (GCNs) have shown great promise in recommender systems due to their ability to capture complex relationships and generate high-quality representations, especially under sparse data conditions. However, stacking multiple GCN layers often leads to oversmoothing, where node embeddings become indistinguishably similar. This problem is exacerbated when target items gather noisy or irrelevant information from high-order neighbors during message propagation. To address this, we propose AMP-GCN, an Attribute-aware Message-Passing GCN that mitigates oversmoothing by clustering items with similar attributes into subgraphs. High-order propagation is then performed within each subgraph, effectively filtering out irrelevant signals and preserving semantic consistency. To further enhance embedding learning, we introduce AMPA-GCN, which integrates item-item correlation signals into the AMP-GCN framework by modifying the adjacency matrix. This design strengthens direct and indirect item relations, leading to more robust representations. Extensive experiments on four public benchmark datasets demonstrate that our proposed models consistently outperform state-of-the-art baselines.
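Restricting propagation to attribute-defined subgraphs can be sketched as averaging embeddings only among items that share an attribute, so signals from unrelated items never mix in. A minimal illustration under an invented attribute scheme, not the paper's model.

```python
from collections import defaultdict

def propagate_within_attribute_groups(attrs, emb):
    """One smoothing step restricted to attribute clusters: each item's
    embedding becomes the mean of embeddings in its own cluster only,
    which is the subgraph-restricted propagation idea in miniature."""
    groups = defaultdict(list)
    for idx, a in enumerate(attrs):
        groups[a].append(idx)
    out = [None] * len(emb)
    for members in groups.values():
        dim = len(emb[members[0]])
        mean = [sum(emb[m][d] for m in members) / len(members)
                for d in range(dim)]
        for m in members:
            out[m] = list(mean)
    return out

# Two "shoe" items smooth toward each other; the "book" item is untouched.
attrs = ["shoe", "shoe", "book"]
emb = [[1.0, 0.0], [3.0, 2.0], [5.0, 5.0]]
smoothed = propagate_within_attribute_groups(attrs, emb)
```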
Diffusion Model for Relational Inference in Interacting Systems
Shuhan Zheng;Ziqiang Li;Kantaro Fujiwara;Gouhei Tanaka
Pub Date : 2025-09-15 DOI: 10.1109/TNSE.2025.3607563
IEEE Transactions on Network Science and Engineering, vol. 13, pp. 1990-2003 (open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11164166)
Dynamic behaviors of complex interacting systems, ubiquitously found in physical, biological, engineering, and social phenomena, are associated with underlying interactions between components of the system. A fundamental challenge in network science is to uncover interaction relationships between network components solely from observational data on their dynamics. Recently, generative models in machine learning, such as the variational autoencoder, have been used to identify network structure through relational inference on multivariate time series data. However, most existing approaches are based on time series prediction, which remains challenging in the presence of missing data. In this study, we propose a novel approach, Diffusion model for Relational Inference (DiffRI), inspired by a self-supervised method for probabilistic time series imputation. DiffRI learns to infer the existence probability of interactions between network components through conditional diffusion modeling. Numerical experiments on both synthetic and quasi-real datasets show that DiffRI is highly competitive with other well-known methods in discovering ground-truth interactions. Furthermore, we demonstrate that our imputation-based approach is more tolerant of missing data than prediction-based approaches.
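The contrast between imputation-based and prediction-based objectives is easiest to see with the crudest possible imputer: fill gaps from observed statistics instead of requiring a complete next-step target. Mean filling below is a deliberately trivial stand-in for DiffRI's diffusion-based imputation, shown only to make the idea concrete.

```python
def impute_mean(series):
    """Fill missing entries (None) with the mean of the observed values.
    Unlike a step-ahead prediction loss, this objective is defined even
    when arbitrary timesteps are missing, which is why imputation-style
    training tolerates gaps."""
    observed = [x for x in series if x is not None]
    mean = sum(observed) / len(observed)
    return [mean if x is None else x for x in series]

filled = impute_mean([1.0, None, 3.0, None])
```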
QPADL: Quadratic Programming for Allocation of Distributed Energy Resources to Minimize Power Loss in Distribution Networks
Hongshen Zhang;Shibo He;Yongtao Zhang;Wenchao Meng
Pub Date : 2025-09-11 DOI: 10.1109/TNSE.2025.3608895
IEEE Transactions on Network Science and Engineering, vol. 13, pp. 2038-2052
Distributed Energy Resources (DERs) offer significant potential for reducing power losses, improving voltage stability, and enhancing resilience in distribution networks. To effectively address network-specific operational constraints and maximize DER performance, it is crucial to strategically optimize both their siting and sizing. Existing works primarily adopt analytical or search-based approaches for DER placement aimed at minimizing power losses. However, analytical methods, while computationally efficient, frequently yield suboptimal solutions at higher DER penetration levels, whereas search-based methods, despite their robustness, become computationally impractical for large-scale networks due to exponential complexity growth. To overcome these limitations, this paper proposes a novel analytical framework that establishes an exact quadratic relationship between power losses and DER injections, enabling precise analytical estimation and optimization. The proposed approach explicitly relates nodal power demands to their respective contributions to system line losses, providing detailed theoretical insights into the root causes of power losses. Practically, the proposed framework facilitates real-time, large-scale DER allocation optimization while maintaining high accuracy. Furthermore, our theoretical analysis quantifies the impact of the DER power factor on optimal placement for loss reduction. This insight provides a direct, simplified method for integrating power loss considerations into complex, multi-objective optimization models. We validate our method on 33-, 69-, 123-, and 533-bus distribution networks. It significantly outperforms feature-based analytical approaches and matches or exceeds traditional search-based methods. On the largest 533-bus system, our algorithm completes the allocation in just 0.5 s, confirming its effectiveness and practicality for real-world applications.
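An exact quadratic relationship between loss and injection is what makes sizing analytical: for a loss model L(p) = a*p^2 + b*p + c with a > 0, the unconstrained minimizer is p* = -b/(2a), then clipped to the DER's capacity. The coefficients below are toy values, not derived from any real feeder, and this single-DER sketch does not reproduce QPADL's full multi-node formulation.

```python
def optimal_injection(a, b, p_max):
    """Closed-form minimizer of a quadratic loss model L(p) = a*p^2 + b*p + c
    (a > 0), clipped to the feasible capacity range [0, p_max]. Illustrates
    why an exact quadratic loss-injection relationship admits analytical
    sizing instead of exhaustive search."""
    p_star = -b / (2.0 * a)           # stationary point of the quadratic
    return min(max(p_star, 0.0), p_max)

# Toy coefficients: losses fall as injection offsets local demand, then rise.
p = optimal_injection(a=0.02, b=-1.6, p_max=50.0)
```

The same closed form also explains the speed claim: the optimum is one division per candidate site rather than a combinatorial search.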