
Latest Publications from IEEE Transactions on Signal and Information Processing over Networks

Robust Time-Varying Graph Signal Recovery for Dynamic Physical Sensor Network Data
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-01-06 | DOI: 10.1109/TSIPN.2025.3525978
Eisuke Yamagata;Kazuki Naganuma;Shunsuke Ono
We propose a time-varying graph signal recovery method that leverages dynamic graphs to estimate the true time-varying graph signal from corrupted observations. Most conventional methods for time-varying graph signal recovery assume that the underlying graph housing the signals is static. However, given rapid advances in sensor technology, the setting in which the sensor network itself varies over time, like the signals it carries, has become highly practical. In this paper, we focus on such cases and formulate dynamic graph signal recovery as a constrained convex optimization problem that simultaneously estimates time-varying graph signals and sparsely modeled outliers. Our formulation uses two types of regularization, one based on the time-varying graph Laplacian and one on temporal differences, and separately models missing values with known positions and outliers with unknown positions to achieve robust estimation from highly degraded data. In addition, an algorithm based on a primal-dual splitting method is developed to solve the optimization problem efficiently. Extensive experiments on simulated drone remote sensing data and real-world sea surface temperature data demonstrate the advantages of the proposed method over existing methods.
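To make the formulation concrete, here is a minimal numerical sketch of the joint signal-and-outlier estimation described above. It uses plain proximal gradient descent rather than the authors' primal-dual splitting algorithm, and all function names, parameter values, and the simple 0/1 masking model are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def recover_tv_graph_signal(Y, M, laplacians, lam=1.0, mu=1.0, rho=0.1,
                            step=0.01, n_iter=500):
    """Jointly estimate a time-varying graph signal X (N nodes x T snapshots)
    and sparse outliers S from masked observations Y, minimizing
        ||M*(X + S - Y)||_F^2 + lam * sum_t x_t' L_t x_t   (graph smoothness)
        + mu * sum_t ||x_{t+1} - x_t||^2                   (temporal smoothness)
        + rho * ||S||_1                                    (sparse outliers)
    by proximal gradient descent (an illustrative solver, not the paper's
    primal-dual splitting method). M is a 0/1 mask of known positions."""
    N, T = Y.shape
    X = np.where(M > 0, Y, 0.0)              # start from the observed entries
    S = np.zeros_like(Y)
    for _ in range(n_iter):
        R = M * (X + S - Y)                  # masked data-fit residual
        grad_X = 2.0 * R + 2.0 * lam * np.column_stack(
            [laplacians[t] @ X[:, t] for t in range(T)])
        D = np.diff(X, axis=1)               # temporal differences x_{t+1}-x_t
        grad_X[:, :-1] -= 2.0 * mu * D
        grad_X[:, 1:] += 2.0 * mu * D
        X = X - step * grad_X                # gradient step on the signal
        S = soft_threshold(S - step * 2.0 * R, step * rho)  # prox step on S
    return X, S
```

A primal-dual splitting solver would handle the same objective with hard constraints and without the step-size coupling this simple scheme has; the sketch is only meant to show how the two regularizers and the sparse outlier term interact.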
Citations: 0
Label Guided Graph Optimized Convolutional Network for Semi-Supervised Learning
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-01-06 | DOI: 10.1109/TSIPN.2025.3525961
Ziyan Zhang;Bo Jiang;Jin Tang;Bin Luo
Graph Convolutional Networks (GCNs) have been widely studied for semi-supervised learning tasks. The graph convolution operations in most existing GCNs are composed of two parts: feature propagation (FP) on a neighborhood graph and feature transformation (FT) with a fully connected network. For semi-supervised learning, existing GCNs generally utilize label information only to train the parameters of the FT part by optimizing the loss function; they do not exploit label information in neighborhood feature propagation. Besides, due to the fixed graph topology used in FP, existing GCNs are vulnerable to structural noise and attacks. To address these issues, we propose a novel and robust Label Guided Graph Optimized Convolutional Network (LabelGOCN) model, which aims to fully exploit label information in the feature propagation of a GCN via pairwise constraint propagation. In LabelGOCN, the pairwise constraints provide a kind of 'weakly' supervised information that refines the graph topology and thus guides the graph convolution operations, yielding robust semi-supervised learning. In particular, LabelGOCN jointly refines the pairwise constraints and the GCN via a unified regularization model, which can boost their respective performance. Experiments on several benchmark datasets show the effectiveness and robustness of the proposed LabelGOCN on semi-supervised learning tasks.
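As a rough illustration of label-guided propagation, the sketch below builds a must-link constraint matrix from the labeled nodes, blends it with the observed adjacency, and then applies standard symmetrically normalized feature propagation. The blending rule, the weight alpha, and all names are assumptions for illustration; the actual LabelGOCN jointly optimizes the constraints and the network rather than applying a one-shot blend.

```python
import numpy as np

def label_guided_propagation(A, X, labels, mask, alpha=0.5):
    """One feature-propagation step on a topology refined by pairwise label
    constraints (illustrative sketch, not the LabelGOCN model itself).

    A      : (N, N) nonnegative adjacency matrix
    X      : (N, F) node features
    labels : (N,) integer labels, valid where mask is True
    alpha  : blend between the observed graph and the constraint graph
    """
    N = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    labeled = np.where(mask)[0]
    for i in labeled:                        # must-link constraints only,
        for j in labeled:                    # so the graph stays nonnegative
            if labels[i] == labels[j]:
                C[i, j] = 1.0
    A_ref = (1.0 - alpha) * A + alpha * C    # label-refined topology
    A_hat = A_ref + np.eye(N)                # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X   # normalized propagation
```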
Citations: 0
Event-Triggered Data-Driven Distributed LFC Using Controller-Dynamic-Linearization Method
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-01-06 | DOI: 10.1109/TSIPN.2025.3525950
Xuhui Bu;Yan Zhang;Yiming Zeng;Zhongsheng Hou
This paper is concerned with an event-triggered distributed load frequency control (LFC) method for multi-area interconnected power systems. First, because of the high dimensionality, nonlinearity, and uncertainty of the power system, the relevant model information cannot be fully obtained. To design the LFC algorithm when the model information is unknown, an equivalent functional relationship between the control signal and the area-control-error signal is established using a dynamic linearization technique. Second, a novel distributed load frequency control algorithm is proposed based on the controller-dynamic-linearization method, and the controller parameters are tuned online by constructing a radial basis function neural network. In addition, to reduce the computation and communication burden on the system, an event-triggered mechanism is designed in which whether data are transmitted at the current instant is completely determined by a triggering condition. Rigorous analysis shows that the proposed method renders the frequency deviation of the power system convergent to a bounded value. Finally, simulation results on a four-area power system verify the effectiveness of the proposed algorithm.
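The two central ingredients, compact-form dynamic linearization and an event-triggered update, can be sketched for a single toy area as below. The plant model, gains, trigger threshold, and disturbance are all invented for illustration; the paper's scheme is distributed across areas and tunes parameters online with an RBF network, both of which are omitted here.

```python
import numpy as np

def plant(y, u, d):
    """Toy nonlinear area dynamics with a load disturbance d (illustrative)."""
    return 0.6 * y / (1.0 + y**2) + 0.8 * u + d

def run_event_triggered_mfac(n_steps=300, eta=0.5, mu=1.0, rho=0.4, lam=1.0,
                             eps=0.02):
    """Event-triggered model-free adaptive control built on the compact-form
    dynamic linearization y(k+1) ~= y(k) + phi(k) * du(k); the controller
    sees no plant model, only input/output data."""
    y = np.zeros(n_steps + 1)
    u = np.zeros(n_steps)
    phi = np.ones(n_steps)              # pseudo-partial-derivative estimate
    y_sent = y[0]                       # last output sent over the network
    for k in range(1, n_steps):
        du = u[k - 1] - u[k - 2] if k >= 2 else 0.0
        dy = y[k] - y[k - 1]
        # projection-type update of the pseudo partial derivative
        phi[k] = phi[k - 1] + eta * du / (mu + du**2) * (dy - phi[k - 1] * du)
        # event trigger: transmit/update only on a large output innovation
        if abs(y[k] - y_sent) > eps:
            y_sent = y[k]
            u[k] = u[k - 1] + rho * phi[k] / (lam + phi[k]**2) * (0.0 - y_sent)
        else:
            u[k] = u[k - 1]             # otherwise hold the last control
        d = 0.2 if k >= 50 else 0.0     # step load disturbance at k = 50
        y[k + 1] = plant(y[k], u[k], d)
    return y, u
```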
Citations: 0
A Fixed-Time Convergent Distributed Algorithm for Time-Varying Optimal Resource Allocation Problem
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-12-18 | DOI: 10.1109/TSIPN.2024.3511258
Zeng-Di Zhou;Ge Guo;Renyongkang Zhang
This article proposes a distributed time-varying optimization approach that leverages a sliding mode technique to address the dynamic resource allocation problem. The algorithm integrates a fixed-time sliding mode component to ensure that the global equality constraints are met, coupled with a fixed-time distributed control mechanism based on a nonsmooth consensus idea to attain the system's optimal state. It is designed to operate with minimal communication overhead, requiring only a single variable exchange between neighboring agents. The algorithm achieves optimal resource allocation for time-varying cost functions with both identical and nonidentical Hessians, where in the latter case the costs may be non-quadratic. The practicality and superiority of the algorithm are validated by case studies.
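The communication pattern described above, one scalar exchanged per neighbor, is captured by the classic consensus-on-marginal-costs dynamics sketched below. This simple gradient flow converges only asymptotically; the fixed-time behavior in the paper comes from its sliding mode and nonsmooth consensus terms, which are not reproduced here. All parameter values are illustrative.

```python
import numpy as np

def allocate_resources(grad, L, x0, step=0.05, n_iter=2000):
    """Distributed resource allocation via x <- x - step * L @ grad(x).
    Because the rows of a graph Laplacian L sum to zero, sum(x) is invariant,
    so an initially feasible allocation keeps meeting the equality constraint
    while the agents' marginal costs are driven to consensus (the optimality
    condition). Asymptotic sketch, not the paper's fixed-time algorithm."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        x -= step * (L @ grad(x))   # each agent needs one scalar per neighbor
    return x

# Example: quadratic costs f_i(x_i) = a_i * x_i^2 on a 4-agent path graph
a = np.array([1.0, 2.0, 0.5, 1.5])
grad = lambda x: 2.0 * a * x                 # local marginal costs
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
x0 = np.array([4.0, 0.0, 0.0, 0.0])          # total demand fixed at 4
print(allocate_resources(grad, L, x0))       # marginal costs 2*a_i*x_i equalize
```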
Citations: 0
Memory-Enhanced Distributed Accelerated Algorithms for Coordinated Linear Computation
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-12-12 | DOI: 10.1109/TSIPN.2024.3511265
Shufen Ding;Deyuan Meng;Mingjun Du;Kaiquan Cai
In this paper, a memory-enhanced distributed accelerated algorithm is proposed for solving large-scale systems of linear equations within the context of multi-agent systems. By employing a local predictor consisting of a linear combination of each node's current and previous values, two memory taps are incorporated so as to accelerate the convergence of the distributed algorithm for coordinated computation. Moreover, consensus-based convergence results are established by analyzing the spectral radius of an augmented iterative matrix associated with the error system that arises from solving the equation. In addition, the connection between the convergence rate and the tunable parameters is developed through an examination of the spectral radius of the iterative matrix, and the optimal mixing parameter is systematically derived to achieve the fastest convergence rate. It is shown that whether the linear equation of interest possesses a unique solution or multiple solutions, the proposed distributed algorithm converges exponentially to a solution, independently of the initial conditions. In particular, both the theoretical analysis and simulation examples demonstrate that the proposed distributed algorithm achieves a faster convergence rate than conventional distributed algorithms for coordinated linear computation, provided that the adjustable parameters are appropriately selected.
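One way to picture a memory tap in coordinated linear computation is a heavy-ball term added to a projection-consensus iteration, as sketched below: each agent mixes neighbor estimates, projects onto the hyperplane of its own equation, and adds a multiple of its previous displacement. The mixing matrix, the momentum value beta, and the projection scheme are illustrative assumptions, not the paper's exact recursion or its optimal parameter design.

```python
import numpy as np

def memory_projection_consensus(A, b, W, n_iter=300, beta=0.3):
    """Solve A x = b cooperatively: agent i knows only (a_i, b_i), mixes the
    neighbors' estimates with weights W, projects onto {x : a_i' x = b_i},
    and adds a heavy-ball memory term beta * (X - X_prev)."""
    m, n = A.shape
    X = np.zeros((m, n))                      # row i = agent i's estimate
    X_prev = X.copy()
    row_norms2 = (A ** 2).sum(axis=1)
    for _ in range(n_iter):
        X_mix = W @ X                         # consensus mixing step
        resid = np.einsum('ij,ij->i', X_mix, A) - b
        X_proj = X_mix - (resid / row_norms2)[:, None] * A   # local projection
        X_prev, X = X, X_proj + beta * (X - X_prev)          # memory tap
    return X.mean(axis=0)

# Example: 3 agents, one equation each, on a fully connected triangle
A = np.array([[2.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 3.0]])
x_true = np.array([1.0, -1.0, 2.0])
W = np.full((3, 3), 0.25) + 0.25 * np.eye(3)  # doubly stochastic mixing
print(memory_projection_consensus(A, A @ x_true, W))  # approaches x_true
```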
Citations: 0
Auto-Weighted Multi-View Deep Non-Negative Matrix Factorization With Multi-Kernel Learning
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-12-04 | DOI: 10.1109/TSIPN.2024.3511262
Xuanhao Yang;Hangjun Che;Man-Fai Leung;Cheng Liu;Shiping Wen
Deep matrix factorization (DMF) can discover hierarchical structures within raw data by factorizing matrices layer by layer, allowing it to exploit latent information for superior clustering performance. However, DMF-based approaches face limitations when dealing with complex and nonlinear raw data. To address this issue, Auto-weighted Multi-view Deep Nonnegative Matrix Factorization with Multi-kernel Learning (MvMKDNMF) is proposed, incorporating multi-kernel learning into deep nonnegative matrix factorization. Specifically, samples are mapped into a kernel space that is a convex combination of several predefined kernels, avoiding manual kernel selection. Furthermore, to preserve the local manifold structure of the samples, a graph regularization is embedded in each view, and weights are assigned adaptively to the different views. An alternating iteration algorithm is designed to solve the proposed model, and its convergence and computational complexity are analyzed. Comparative experiments across nine multi-view datasets against seven state-of-the-art clustering methods show the superior performance of the proposed MvMKDNMF.
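The auto-weighting idea can be shown in a single-layer, linear-kernel sketch: each view gets its own basis matrix, all views share one coefficient matrix, and the view weights are recomputed from the reconstruction errors each round. The updates below are standard multiplicative NMF updates plus a common auto-weighting rule; the deep (layer-wise) factorization, kernel mappings, and graph regularization of MvMKDNMF are omitted, and gamma and the initialization are assumptions.

```python
import numpy as np

def auto_weighted_multiview_nmf(views, rank, n_iter=200, gamma=2.0, eps=1e-9):
    """Single-layer auto-weighted multi-view NMF: X_v ~= W_v @ H with a shared
    H, minimizing sum_v alpha_v ||X_v - W_v H||_F^2 by multiplicative updates,
    with alpha_v set automatically from each view's reconstruction error."""
    n = views[0].shape[1]
    rng = np.random.default_rng(0)
    H = np.abs(rng.standard_normal((rank, n)))
    Ws = [np.abs(rng.standard_normal((X.shape[0], rank))) for X in views]
    alphas = np.full(len(views), 1.0 / len(views))
    for _ in range(n_iter):
        for v, X in enumerate(views):            # per-view basis update
            Ws[v] *= (X @ H.T) / (Ws[v] @ H @ H.T + eps)
        num = sum(a * (W.T @ X) for a, W, X in zip(alphas, Ws, views))
        den = sum(a * (W.T @ W @ H) for a, W in zip(alphas, Ws)) + eps
        H *= num / den                           # shared-coefficient update
        errs = np.array([np.linalg.norm(X - W @ H)
                         for W, X in zip(Ws, views)])
        alphas = errs ** (1.0 / (1.0 - gamma))   # bigger error -> smaller weight
        alphas /= alphas.sum()
    return Ws, H, alphas
```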
Citations: 0
Multi-Bit Distributed Detection of Sparse Stochastic Signals Over Error-Prone Reporting Channels
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-20 | DOI: 10.1109/TSIPN.2024.3496253
Linlin Mao;Shefeng Yan;Zeping Sui;Hongbin Li
We consider a distributed detection problem within a wireless sensor network (WSN), where a substantial number of sensors cooperate to detect the existence of sparse stochastic signals. To trade off detection performance against system constraints, multi-bit quantizers are employed at the local sensors. Two quantization strategies, raw quantization (RQ) and likelihood ratio quantization (LQ), are examined. The multi-bit quantized signals are encoded into binary codewords and transmitted to the fusion center via error-prone reporting channels. Exploiting the locally most powerful test (LMPT) strategy, we devise two multi-bit LMPT detectors that fuse quantized raw observations and local likelihood ratios, respectively. Moreover, the asymptotic detection performance of the proposed quantized detectors is analyzed, and closed-form expressions for the detection and false-alarm probabilities are derived. Furthermore, a multi-bit quantizer design criterion covering both RQ and LQ is proposed to achieve near-optimal asymptotic performance for the proposed detectors. The normalized Fisher information and asymptotic relative efficiency are derived, serving as tools to analyze and compensate for the information loss introduced by quantization. Simulation results validate the effectiveness of the proposed detectors, especially in scenarios with low signal-to-noise ratios and poor channel conditions.
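A toy end-to-end chain for the RQ branch over an ideal reporting channel is sketched below: sensors observe a Bernoulli-Gaussian (sparse) signal in noise, quantize their raw observations with a uniform multi-bit quantizer, and the fusion center evaluates an LMPT-style statistic built from centred squared observations. The quantizer range, bit depth, and signal parameters are illustrative; the paper's detectors also cover LQ and explicitly model reporting-channel errors.

```python
import numpy as np

def quantize(x, n_bits, x_max):
    """Uniform midrise quantizer with 2**n_bits levels on [-x_max, x_max]."""
    levels = 2 ** n_bits
    step = 2.0 * x_max / levels
    idx = np.clip(np.floor((x + x_max) / step), 0, levels - 1)
    return -x_max + (idx + 0.5) * step

rng = np.random.default_rng(1)
n_sensors, p_active = 500, 0.05              # sparse activation probability
sig_var, noise_var = 4.0, 1.0
active = rng.random(n_sensors) < p_active    # sparse support under H1
x = np.where(active, rng.normal(0.0, np.sqrt(sig_var), n_sensors), 0.0)
y = x + rng.normal(0.0, np.sqrt(noise_var), n_sensors)
y_q = quantize(y, n_bits=3, x_max=4.0)       # 3-bit messages to the center
# LMPT-style statistic for a weak sparse Gaussian signal: sum of centred
# squared (quantized) observations, compared against a detection threshold.
T = np.sum(y_q ** 2 - noise_var)
print(f"test statistic T = {T:.1f}")
```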
Citations: 0
Higher-Order GNNs Meet Efficiency: Sparse Sobolev Graph Neural Networks
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-20 | DOI: 10.1109/TSIPN.2024.3503416
Jhony H. Giraldo;Aref Einizade;Andjela Todorovic;Jhon A. Castro-Correa;Mohsen Badiey;Thierry Bouwmans;Fragkiskos D. Malliaros
Graph Neural Networks (GNNs) have shown great promise in modeling relationships between nodes in a graph, but capturing higher-order relationships remains a challenge for large-scale networks. Previous studies have primarily attempted to utilize information from higher-order neighbors by incorporating powers of the shift operator, such as the graph Laplacian or adjacency matrix, at the cost of increased computational and memory demands. Relying on graph spectral theory, we make a fundamental observation: the regular and the Hadamard powers of the Laplacian matrix behave similarly in the spectrum. This observation has significant implications for capturing higher-order information in GNNs for tasks such as node classification and semi-supervised learning. Consequently, we propose a novel graph convolutional operator based on the sparse Sobolev norm of graph signals. Our approach, known as Sparse Sobolev GNN (S2-GNN), employs Hadamard products between matrices to maintain the sparsity level of the graph representations. S2-GNN utilizes a cascade of filters with increasing Hadamard powers to generate a diverse set of functions. We theoretically analyze the stability of S2-GNN to show the robustness of the model against possible graph perturbations. We also conduct a comprehensive evaluation of S2-GNN across various graph mining, semi-supervised node classification, and computer vision tasks. In these use cases, our algorithm is competitive with state-of-the-art GNNs in both accuracy and running time.
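The efficiency argument is easy to demonstrate: the Hadamard power of a (shifted) Laplacian keeps its sparsity pattern, whereas the matrix power fills in. A minimal sketch of the resulting filter cascade follows; the per-branch learnable weights, bias, and nonlinearity of a full S2-GNN layer are omitted, and epsilon and the branch count are illustrative.

```python
import numpy as np

def sparse_sobolev_cascade(L, X, rho=3, eps=1.0):
    """Return [S @ X, (S (.) S) @ X, ...] for Hadamard powers 1..rho of
    S = L + eps * I. Elementwise products never create new nonzeros, so each
    branch is as sparse as L itself (with scipy.sparse, use `.multiply`)."""
    S = L + eps * np.eye(L.shape[0])
    outputs, H = [], np.ones_like(S)
    for _ in range(rho):
        H = H * S                    # next Hadamard power, same sparsity
        outputs.append(H @ X)        # one branch of the filter bank
    return outputs                   # a layer would mix these with weights
```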
Citations: 0
Probability-Guaranteed Distributed Filtering for Nonlinear Systems on Basis of Nonuniform Samplings Subject to Envelope Constraints
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-15 | DOI: 10.1109/TSIPN.2024.3496254
Wei Wang;Chen Hu;Lifeng Ma;Xiaojian Yi
This paper investigates the probability-guaranteed distributed $H_\infty$ filtering problem for stochastic time-varying systems over sensor networks. The measurements from the sensing nodes are sampled nonuniformly before being received by the filters, and the sampling processes are modeled by a set of Markov chains. The aim is to design a distributed filter algorithm that meets a finite-horizon average $H_\infty$ performance while guaranteeing, with a prescribed probability, that all filtering errors remain bounded within a prespecified envelope. Sufficient conditions for the feasibility of the filtering scheme are established using convex optimization techniques, and the desired filtering gains are determined by resolving the relevant matrix inequalities at each time step. Finally, the effectiveness of the proposed filtering algorithm is shown via an illustrative numerical example.
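Structurally, such a filter couples a local innovation, applied only when the node's Markov sampling chain delivers a measurement, with a consensus correction from neighbors. The sketch below shows that structure only: the gain matrices here are free inputs, whereas the paper computes them from convex feasibility conditions to meet the $H_\infty$ and probabilistic envelope requirements; kappa, the two-state chain, and all names are assumptions.

```python
import numpy as np

def markov_sampling_step(states, P, rng):
    """Advance each node's two-state sampling chain (state 1 = sample arrives)."""
    return np.array([rng.choice(2, p=P[s]) for s in states])

def distributed_filter_step(x_hats, A, Cs, ys, Ks, neighbors, sampled,
                            kappa=0.1):
    """One time step of a consensus-based distributed filter with nonuniform
    sampling: node i predicts with A, adds its innovation only if sampled[i],
    then adds a consensus term over neighboring estimates."""
    new_estimates = []
    for i, x in enumerate(x_hats):
        pred = A @ x                                     # local prediction
        if sampled[i]:                                   # measurement arrived
            pred = pred + Ks[i] @ (ys[i] - Cs[i] @ x)    # innovation update
        cons = sum(x_hats[j] - x for j in neighbors[i])  # neighbor coupling
        new_estimates.append(pred + kappa * cons)
    return new_estimates
```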
Citations: 0
Asynchronous Message-Passing and Zeroth-Order Optimization Based Distributed Learning With a Use-Case in Resource Allocation in Communication Networks
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-15 | DOI: 10.1109/TSIPN.2024.3487421
Pourya Behmandpoor;Marc Moonen;Panagiotis Patrinos
Distributed learning and adaptation have received significant interest and found wide-ranging applications in machine learning and signal processing. While various approaches, such as shared-memory optimization, multi-task learning, and consensus-based learning (e.g., federated learning and learning over graphs), focus on optimizing either local costs or a global cost, their interconnections call for further exploration. This paper focuses on a scenario where agents collaborate on a common task (optimizing a global cost equal to the aggregated local costs) while effectively pursuing distinct individual tasks (optimizing individual local parameters in a local cost). Each agent's actions can affect other agents' performance through interactions. Notably, each agent has access only to its local zeroth-order oracle (i.e., its cost function value) and shares scalar values, rather than gradient vectors, with other agents, which improves communication bandwidth efficiency and preserves agent privacy. Agents employ zeroth-order optimization to update their parameters, and the asynchronous message-passing between them is subject to bounded but possibly random communication delays. The paper presents theoretical convergence analyses and establishes a convergence rate for nonconvex problems. Furthermore, it addresses the relevant use-case of deep-learning-based resource allocation in communication networks and conducts numerical experiments in which agents, acting as transmitters, collaboratively train their individual policies to maximize a global reward, e.g., a sum of data rates.
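The scalar-only information exchange pairs naturally with a two-point zeroth-order gradient estimator, as in the synchronous toy sketch below: each agent perturbs only its own parameter block and builds the global cost from the cost values the other agents report. The asynchrony, delays, and convergence machinery of the paper are left out, and the cost functions, step size, and smoothing radius are invented for illustration.

```python
import numpy as np

def zo_gradient(f, theta, mu=1e-2, rng=None):
    """Two-point zeroth-order gradient estimate: needs only cost values."""
    rng = rng if rng is not None else np.random.default_rng()
    u = rng.standard_normal(theta.shape)
    return (f(theta + mu * u) - f(theta - mu * u)) / (2.0 * mu) * u

def distributed_zo_learning(local_costs, dims, n_iter=500, step=0.05):
    """Each agent i descends the global cost (a sum of reported scalar local
    costs) along a zeroth-order estimate taken w.r.t. its own block only."""
    rng = np.random.default_rng(0)
    thetas = [np.zeros(d) for d in dims]
    for _ in range(n_iter):
        for i in range(len(thetas)):
            def global_cost(block, i=i):
                trial = [block if j == i else thetas[j]
                         for j in range(len(thetas))]
                return sum(c(trial) for c in local_costs)  # scalar exchange
            thetas[i] = thetas[i] - step * zo_gradient(global_cost,
                                                       thetas[i], rng=rng)
    return thetas

# Two coupled agents (e.g., mutually interfering transmitters), toy costs
costs = [lambda th: np.sum((th[0] - 1.0) ** 2) + 0.1 * th[0] @ th[1],
         lambda th: np.sum((th[1] + 0.5) ** 2) + 0.1 * th[0] @ th[1]]
print(distributed_zo_learning(costs, dims=[1, 1]))
```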
Citations: 0