Robust Time-Varying Graph Signal Recovery for Dynamic Physical Sensor Network Data
Pub Date: 2025-01-06 | DOI: 10.1109/TSIPN.2025.3525978
Eisuke Yamagata;Kazuki Naganuma;Shunsuke Ono
We propose a time-varying graph signal recovery method for estimating the true time-varying graph signal from corrupted observations by leveraging dynamic graphs. Most conventional methods for time-varying graph signal recovery assume that the underlying graph that houses the signals is static. However, in light of rapid advances in sensor technology, the setting in which the sensor network itself varies over time, like the signals it carries, is becoming highly practical. In this paper, we focus on such cases and formulate dynamic graph signal recovery as a constrained convex optimization problem that simultaneously estimates both the time-varying graph signals and sparsely modeled outliers. Our formulation uses two types of regularization, one based on the time-varying graph Laplacian and one based on temporal differences, and separately models missing values at known positions and outliers at unknown positions to achieve robust estimation from highly degraded data. In addition, we develop an algorithm based on a primal-dual splitting method that efficiently solves the optimization problem. Extensive experiments on simulated drone remote sensing data and real-world sea surface temperature data demonstrate the advantages of the proposed method over existing methods.
{"title":"Robust Time-Varying Graph Signal Recovery for Dynamic Physical Sensor Network Data","authors":"Eisuke Yamagata;Kazuki Naganuma;Shunsuke Ono","doi":"10.1109/TSIPN.2025.3525978","DOIUrl":"https://doi.org/10.1109/TSIPN.2025.3525978","url":null,"abstract":"We propose a time-varying graph signal recovery method for estimating the true time-varying graph signal from corrupted observations by leveraging dynamic graphs. Most of the conventional methods for time-varying graph signal recovery have been proposed under the assumption that the underlying graph that houses the signals is static. However, in light of rapid advances in sensor technology, the assumption that sensor networks are time-varying like the signals is becoming a very practical problem setting. In this paper, we focus on such cases and formulate dynamic graph signal recovery as a constrained convex optimization problem that simultaneously estimates both time-varying graph signals and sparsely modeled outliers. In our formulation, we use two types of regularizations, time-varying graph Laplacian-based and temporal difference-based, and also separately modeled missing values with known positions and unknown outliers to achieve robust estimations from highly degraded data. In addition, an algorithm is developed to efficiently solve the optimization problem based on a primal-dual splitting method. Extensive experiments on simulated drone remote sensing data and real-world sea surface temperature data demonstrate the advantages of the proposed method over existing methods.","PeriodicalId":56268,"journal":{"name":"IEEE Transactions on Signal and Information Processing over Networks","volume":"11 ","pages":"59-70"},"PeriodicalIF":3.0,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10824961","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Label Guided Graph Optimized Convolutional Network for Semi-Supervised Learning
Pub Date: 2025-01-06 | DOI: 10.1109/TSIPN.2025.3525961
Ziyan Zhang;Bo Jiang;Jin Tang;Bin Luo
Graph Convolutional Networks (GCNs) have been widely studied for semi-supervised learning tasks. The graph convolution operation in most existing GCNs is composed of two parts: feature propagation (FP) on a neighborhood graph and feature transformation (FT) with a fully connected network. For semi-supervised learning, existing GCNs generally use the label information only to train the parameters of the FT part by optimizing the loss function; they do not exploit the label information in neighborhood feature propagation. Moreover, because FP relies on a fixed graph topology, existing GCNs are vulnerable to structural noise and attacks. To address these issues, we propose a novel and robust Label Guided Graph Optimized Convolutional Network (LabelGOCN) model that fully exploits the label information in GCN feature propagation via pairwise constraint propagation. In LabelGOCN, the pairwise constraints provide a form of 'weak' supervision that refines the graph topology and thus guides the graph convolution operations for robust semi-supervised learning tasks. In particular, LabelGOCN jointly refines the pairwise constraints and the GCN via a unified regularization model, which boosts their respective performance. Experiments on several benchmark datasets show the effectiveness and robustness of the proposed LabelGOCN on semi-supervised learning tasks.
{"title":"Label Guided Graph Optimized Convolutional Network for Semi-Supervised Learning","authors":"Ziyan Zhang;Bo Jiang;Jin Tang;Bin Luo","doi":"10.1109/TSIPN.2025.3525961","DOIUrl":"https://doi.org/10.1109/TSIPN.2025.3525961","url":null,"abstract":"Graph Convolutional Networks (GCNs) have been widely studied for semi-supervised learning tasks. It is known that the graph convolution operations in most of existing GCNs are composed of two parts, i.e., feature propagation (FP) on a neighborhood graph and feature transformation (FT) with a fully connected network. For semi-supervised learning, existing GCNs generally utilize the label information only to train the parameters of the FT part via optimizing the loss function. However, they lack exploiting the label information in neighborhood feature propagation. Besides, due to the fixed graph topology used in FP, existing GCNs are vulnerable w.r.t. structural noises/attacks. To address these issues, we propose a novel and robust Label Guided Graph Optimized Convolutional Network (LabelGOCN) model which aims to fully exploit the label information in feature propagation of GCN via pairwise constraints propagation. In LabelGOCN, the pairwise constraints can provide a kind of ‘weakly’ supervised information to refine graph topology structure and thus to guide graph convolution operations for robust semi-supervised learning tasks. In particular, LabelGOCN jointly refines the pairwise constraints and GCN via a unified regularization model which can boost their respective performance. The experiments on several benchmark datasets show the effectiveness and robustness of the proposed LabelGOCN on semi-supervised learning tasks.","PeriodicalId":56268,"journal":{"name":"IEEE Transactions on Signal and Information Processing over Networks","volume":"11 ","pages":"71-84"},"PeriodicalIF":3.0,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Event-Triggered Data-Driven Distributed LFC Using Controller-Dynamic-Linearization Method
Pub Date: 2025-01-06 | DOI: 10.1109/TSIPN.2025.3525950
Xuhui Bu;Yan Zhang;Yiming Zeng;Zhongsheng Hou
This paper is concerned with an event-triggered distributed load frequency control (LFC) method for multi-area interconnected power systems. First, because of the high dimensionality, nonlinearity, and uncertainty of the power system, the relevant model information cannot be fully obtained. To design the LFC algorithm when the model information is unknown, an equivalent functional relationship between the control signal and the area-control-error signal is established using a dynamic linearization technique. Second, a novel distributed LFC algorithm is proposed based on the controller dynamic-linearization method, and the controller parameters are tuned online by a radial basis function neural network. In addition, to reduce the computation and communication burden on the system, an event-triggered mechanism is designed in which whether data are transmitted at the current instant is determined entirely by a triggering condition. Rigorous analysis shows that the proposed method drives the frequency deviation of the power system to converge to a bounded value. Finally, simulation results on a four-area power system verify the effectiveness of the proposed algorithm.
{"title":"Event-Triggered Data-Driven Distributed LFC Using Controller-Dynamic-Linearization Method","authors":"Xuhui Bu;Yan Zhang;Yiming Zeng;Zhongsheng Hou","doi":"10.1109/TSIPN.2025.3525950","DOIUrl":"https://doi.org/10.1109/TSIPN.2025.3525950","url":null,"abstract":"This paper is concerned with an event-triggered distributed load frequency control method for multi-area interconnected power systems. Firstly, because of high dimension, nonlinearity and uncertainty of the power system, the relevant model information cannot be fully obtained. To realize the design of LFC algorithm under the condition that the model information is unknown, the equivalent functional relationship between the control signal and the area-control-error signal is established by using a dynamic linearization technique. Secondly, a novel distributed load frequency control algorithm is proposed based on controller dynamic-linearization method and the controller parameters are tuned online by constructing a radial basis function neural network. In addition, to reduce the computation and communication burden on the system, an event-triggered mechanism is also designed, in which whether the data is transmitted at the current instant is completely determined by a triggering condition. Rigorous analysis shows that the proposed method can render the frequency deviation of the power system to converge to a bounded value. Finally, simulation results in a four-area power system verify the effectiveness of the proposed algorithm.","PeriodicalId":56268,"journal":{"name":"IEEE Transactions on Signal and Information Processing over Networks","volume":"11 ","pages":"85-96"},"PeriodicalIF":3.0,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Fixed-Time Convergent Distributed Algorithm for Time-Varying Optimal Resource Allocation Problem
Pub Date: 2024-12-18 | DOI: 10.1109/TSIPN.2024.3511258
Zeng-Di Zhou;Ge Guo;Renyongkang Zhang
This article proposes a distributed time-varying optimization approach to the dynamic resource allocation problem, leveraging a sliding mode technique. The algorithm integrates a fixed-time sliding mode component that ensures the global equality constraints are met, coupled with a fixed-time distributed control mechanism based on a nonsmooth consensus idea for attaining the system's optimal state. It is designed to operate with minimal communication overhead, requiring only a single variable exchange between neighboring agents. The algorithm achieves optimal resource allocation for time-varying cost functions with both identical and nonidentical Hessians, where the latter may be non-quadratic. The practicality and superiority of the algorithm are validated by case studies.
{"title":"A Fixed-Time Convergent Distributed Algorithm for Time-Varying Optimal Resource Allocation Problem","authors":"Zeng-Di Zhou;Ge Guo;Renyongkang Zhang","doi":"10.1109/TSIPN.2024.3511258","DOIUrl":"https://doi.org/10.1109/TSIPN.2024.3511258","url":null,"abstract":"This article proposes a distributed time-varying optimization approach to address the dynamic resource allocation problem, leveraging a sliding mode technique. The algorithm integrates a fixed-time sliding mode component to ensure that the global equality constraints are met, and is coupled with a fixed-time distributed control mechanism involving the nonsmooth consensus idea for attaining the system's optimal state. It is designed to operate with minimal communication overhead, requiring only a single variable exchange between neighboring agents. This algorithm can effectuate the optimal resource allocation in both scenarios with time-varying cost functions of identical and nonidentical Hessians, where the latter can be non-quadratic. The practicality and superiority of our algorithm are validated by case studies.","PeriodicalId":56268,"journal":{"name":"IEEE Transactions on Signal and Information Processing over Networks","volume":"11 ","pages":"48-58"},"PeriodicalIF":3.0,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142890167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Memory-Enhanced Distributed Accelerated Algorithms for Coordinated Linear Computation
Pub Date: 2024-12-12 | DOI: 10.1109/TSIPN.2024.3511265
Shufen Ding;Deyuan Meng;Mingjun Du;Kaiquan Cai
In this paper, a memory-enhanced distributed accelerated algorithm is proposed for solving large-scale systems of linear equations in the context of multi-agent systems. By employing a local predictor consisting of a linear combination of each node's current and previous values, two memory taps are incorporated so as to accelerate the convergence of the distributed solution algorithm for coordinated computation. Moreover, consensus-based convergence results are established by analyzing the spectral radius of an augmented iterative matrix associated with the error system that arises from solving the equation. In addition, the connection between the convergence rate and the tunable parameters is developed through an examination of the spectral radius of the iterative matrix, and the optimal mixing parameter is derived systematically to achieve the fastest convergence rate. It is shown that regardless of whether the linear equation of interest possesses a unique solution or multiple solutions, the proposed distributed algorithm converges exponentially to a solution, independent of the initial conditions. Both the theoretical analysis and simulation examples demonstrate that the proposed algorithm achieves a faster convergence rate than conventional distributed algorithms for coordinated linear computation, provided the adjustable parameters are appropriately selected.
{"title":"Memory-Enhanced Distributed Accelerated Algorithms for Coordinated Linear Computation","authors":"Shufen Ding;Deyuan Meng;Mingjun Du;Kaiquan Cai","doi":"10.1109/TSIPN.2024.3511265","DOIUrl":"https://doi.org/10.1109/TSIPN.2024.3511265","url":null,"abstract":"In this paper, a memory-enhanced distributed accelerated algorithm is proposed for solving large-scale systems of linear equations within the context of multi-agent systems. By employing a local predictor consisting of a linear combination of the nodes' current and previous values, the inclusion of two memory taps can be characterized such that the convergence of the distributed solution algorithm for coordinated computation is accelerated. Moreover, consensus-based convergence results are established by leveraging an analysis of the spectral radius of an augmented iterative matrix associated with the error system that arises from solving the equation. In addition, the connection between the convergence rate and the tunable parameters is developed through an examination of the spectral radius of the iterative matrix, and the optimal mixing parameter is systematically derived to achieve the fastest convergence rate. It is shown that despite whether the linear equation of interest possesses a unique solution or multiple solutions, the proposed distributed algorithm exhibits exponential convergence to the solution, without dependence on the initial conditions. In particular, both the theoretical analysis and simulation examples demonstrate that the proposed distributed algorithm can achieve a faster convergence rate than conventional distributed algorithms for the coordinated linear computation, provided that adjustable parameters are appropriately selected.","PeriodicalId":56268,"journal":{"name":"IEEE Transactions on Signal and Information Processing over Networks","volume":"11 ","pages":"35-47"},"PeriodicalIF":3.0,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142890382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Auto-Weighted Multi-View Deep Non-Negative Matrix Factorization With Multi-Kernel Learning
Pub Date: 2024-12-04 | DOI: 10.1109/TSIPN.2024.3511262
Xuanhao Yang;Hangjun Che;Man-Fai Leung;Cheng Liu;Shiping Wen
Deep matrix factorization (DMF) can discover hierarchical structures within raw data by factorizing matrices layer by layer, allowing it to exploit latent information for superior clustering performance. However, DMF-based approaches face limitations when dealing with complex and nonlinear raw data. To address this issue, Auto-weighted Multi-view Deep Nonnegative Matrix Factorization with Multi-kernel Learning (MvMKDNMF) is proposed, incorporating multi-kernel learning into deep nonnegative matrix factorization. Specifically, samples are mapped into a kernel space formed as a convex combination of several predefined kernels, avoiding manual kernel selection. Furthermore, to preserve the local manifold structure of the samples, a graph regularization is embedded in each view, and the view weights are assigned adaptively. An alternating iteration algorithm is designed to solve the proposed model, and its convergence and computational complexity are analyzed. Comparative experiments on nine multi-view datasets against seven state-of-the-art clustering methods show the superior performance of the proposed MvMKDNMF.
{"title":"Auto-Weighted Multi-View Deep Non-Negative Matrix Factorization With Multi-Kernel Learning","authors":"Xuanhao Yang;Hangjun Che;Man-Fai Leung;Cheng Liu;Shiping Wen","doi":"10.1109/TSIPN.2024.3511262","DOIUrl":"https://doi.org/10.1109/TSIPN.2024.3511262","url":null,"abstract":"Deep matrix factorization (DMF) has the capability to discover hierarchical structures within raw data by factorizing matrices layer by layer, allowing it to utilize latent information for superior clustering performance. However, DMF-based approaches face limitations when dealing with complex and nonlinear raw data. To address this issue, Auto-weighted Multi-view Deep Nonnegative Matrix Factorization with Multi-kernel Learning (MvMKDNMF) is proposed by incorporating multi-kernel learning into deep nonnegative matrix factorization. Specifically, samples are mapped into the kernel space which is a convex combination of several predefined kernels, free from selecting kernels manually. Furthermore, to preserve the local manifold structure of samples, a graph regularization is embedded in each view and the weights are assigned adaptively to different views. An alternate iteration algorithm is designed to solve the proposed model, and the convergence and computational complexity are also analyzed. Comparative experiments are conducted across nine multi-view datasets against seven state-of-the-art clustering methods showing the superior performances of the proposed MvMKDNMF.","PeriodicalId":56268,"journal":{"name":"IEEE Transactions on Signal and Information Processing over Networks","volume":"11 ","pages":"23-34"},"PeriodicalIF":3.0,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142858875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Bit Distributed Detection of Sparse Stochastic Signals Over Error-Prone Reporting Channels
Pub Date: 2024-11-20 | DOI: 10.1109/TSIPN.2024.3496253
Linlin Mao;Shefeng Yan;Zeping Sui;Hongbin Li
We consider a distributed detection problem in a wireless sensor network (WSN), where a large number of sensors cooperate to detect the presence of sparse stochastic signals. To trade off detection performance against system constraints, multi-bit quantizers are employed at the local sensors. Two quantization strategies, raw quantization (RQ) and likelihood ratio quantization (LQ), are examined. The multi-bit quantized signals are encoded into binary codewords and transmitted to the fusion center over error-prone reporting channels. Exploiting the locally most powerful test (LMPT) strategy, we devise two multi-bit LMPT detectors that fuse quantized raw observations and local likelihood ratios, respectively. Moreover, the asymptotic detection performance of the proposed quantized detectors is analyzed, and closed-form expressions for the detection and false alarm probabilities are derived. Furthermore, a multi-bit quantizer design criterion, covering both RQ and LQ, is proposed to achieve near-optimal asymptotic performance for the proposed detectors. The normalized Fisher information and asymptotic relative efficiency are derived, serving as tools to analyze and compensate for the information loss introduced by quantization. Simulation results validate the effectiveness of the proposed detectors, especially in scenarios with low signal-to-noise ratios and poor channel conditions.
{"title":"Multi-Bit Distributed Detection of Sparse Stochastic Signals Over Error-Prone Reporting Channels","authors":"Linlin Mao;Shefeng Yan;Zeping Sui;Hongbin Li","doi":"10.1109/TSIPN.2024.3496253","DOIUrl":"https://doi.org/10.1109/TSIPN.2024.3496253","url":null,"abstract":"We consider a distributed detection problem within a wireless sensor network (WSN), where a substantial number of sensors cooperate to detect the existence of sparse stochastic signals. To achieve a trade-off between detection performance and system constraints, multi-bit quantizers are employed at local sensors. Then, two quantization strategies, namely raw quantization (RQ) and likelihood ratio quantization (LQ), are examined. The multi-bit quantized signals undergo encoding into binary codewords and are subsequently transmitted to the fusion center via error-prone reporting channels. Upon exploiting the locally most powerful test (LMPT) strategy, we devise two multi-bit LMPT detectors in which quantized raw observations and local likelihood ratios are fused respectively. Moreover, the asymptotic detection performance of the proposed quantized detectors is analyzed, and closed-form expressions for the detection and false alarm probabilities are derived. Furthermore, the multi-bit quantizer design criterion, considering both RQ and LQ, is then proposed to achieve near-optimal asymptotic performance for our proposed detectors. The normalized Fisher information and asymptotic relative efficiency are derived, serving as tools to analyze and compensate for the loss of information introduced by the quantization. Simulation results validate the effectiveness of the proposed detectors, especially in scenarios with low signal-to-noise ratios and poor channel conditions.","PeriodicalId":56268,"journal":{"name":"IEEE Transactions on Signal and Information Processing over Networks","volume":"10 ","pages":"881-893"},"PeriodicalIF":3.0,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142713876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-20 | DOI: 10.1109/TSIPN.2024.3503416
Jhony H. Giraldo;Aref Einizade;Andjela Todorovic;Jhon A. Castro-Correa;Mohsen Badiey;Thierry Bouwmans;Fragkiskos D. Malliaros
Graph Neural Networks (GNNs) have shown great promise in modeling relationships between nodes in a graph, but capturing higher-order relationships remains a challenge for large-scale networks. Previous studies have primarily attempted to utilize information from higher-order neighbors by incorporating powers of the shift operator, such as the graph Laplacian or adjacency matrix, at the cost of increased computational and memory demands. Relying on graph spectral theory, we make a fundamental observation: the regular power and the Hadamard power of the Laplacian matrix behave similarly in the spectrum
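A toy NumPy check of the comparison the abstract sets up: the regular power L @ L fills in two-hop couplings (new nonzeros), while the Hadamard power L * L keeps the sparsity pattern of L, which is the efficiency angle. This snippet only contrasts the two spectra and the fill-in on a small path graph; it does not reproduce the paper's formal spectral result.

```python
import numpy as np

# Path graph on 5 nodes and its combinatorial Laplacian.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
L = np.diag(A.sum(axis=1)) - A

regular = np.linalg.eigvalsh(L @ L)   # regular matrix power: denser operator
hadamard = np.linalg.eigvalsh(L * L)  # Hadamard (element-wise) power: same support as L

print("eig(L @ L):", np.round(regular, 3))
print("eig(L * L):", np.round(hadamard, 3))
print("nonzeros in L @ L:", np.count_nonzero(L @ L),
      "| nonzeros in L * L:", np.count_nonzero(L * L))
```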