Pub Date: 2026-01-23 | DOI: 10.1109/TSP.2026.3656119
Zhongtao Chen;Lei Cheng;Yik-Chung Wu;H. Vincent Poor
Block-term decomposition (BTD), particularly its rank-$\left(L_{r},L_{r},1\right)$ special case, is widely used in signal processing. Traditional methods for computing BTD either unrealistically assume that the number of blocks and the block ranks are known or require exhaustive tuning of these parameters. While sparsity-promoting regularization has been introduced to estimate these parameters more efficiently, it still requires regularization parameter tuning. Bayesian learning addresses these issues by employing sparsity-promoting priors on the number of blocks and the block ranks, but has so far been limited to fully observed BTD tensors. To process incomplete BTD tensors, only a few optimization-based methods have been proposed, and they continue to suffer from heavy parameter tuning. To enable tuning-free BTD completion, a prior that simultaneously enforces block-wise sparsity and within-block column-wise sparsity while incorporating graph structure is introduced within the Bayesian framework. Besides theoretically establishing the legitimacy of the prior distribution, a mean-field design is developed to obtain a closed-form updating variational inference (VI) algorithm without loss of graph information. Extensive experiments on both synthetic and real-world datasets demonstrate the superiority of the proposed method over existing optimization-based algorithms and over the Bayesian model without graph information, in terms of rank learning, tensor recovery, and factor recovery.
{"title":"Rank-Revealing Bayesian Block-Term Tensor Completion With Graph Information","authors":"Zhongtao Chen;Lei Cheng;Yik-Chung Wu;H. Vincent Poor","doi":"10.1109/TSP.2026.3656119","DOIUrl":"10.1109/TSP.2026.3656119","url":null,"abstract":"Block-term decomposition (BTD), particularly its rank-<inline-formula><tex-math>$\left(L_{r},L_{r},1\right)$</tex-math></inline-formula> special case, is widely used in signal processing. Traditional methods for computing BTD either unrealistically assume the number of blocks and block ranks are known or require exhaustive tuning of these parameters. While sparsity-promoting regularization has been introduced to estimate these parameters more efficiently, it still requires regularization parameter tuning. Bayesian learning addresses these issues by employing sparsity-promoting priors on the number of blocks and block ranks, but so far is limited to fully observed BTD tensors. To process incomplete BTD tensors, only a few optimization-based methods have been proposed, and they continue to suffer from heavy parameter tuning. To enable tuning-free BTD completion, a prior that simultaneously enforces block-wise sparsity and within-block column-wise sparsity while incorporating graph structure is introduced within the Bayesian framework. Besides theoretically establishing the legitimacy of the prior distribution, a mean-field design is developed to obtain a closed-form updating variational inference (VI) algorithm without loss of graph information. 
Extensive experiments on both synthetic datasets and real-world datasets demonstrate the superiority of the proposed method over existing optimization‐based algorithms and the Bayesian model without graph information, in terms of rank learning, tensor recovery, and factor recovery.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"74 ","pages":"654-669"},"PeriodicalIF":5.8,"publicationDate":"2026-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146042848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
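As a concrete illustration of the model class discussed in the abstract above, the following sketch builds a rank-(L_r, L_r, 1) block-term tensor in NumPy: each block contributes an outer product of a low-rank matrix factor A_r B_r^T with a vector c_r. The shapes, block ranks, and random factors are illustrative only, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K = 8, 9, 10          # tensor dimensions (made up for the sketch)
block_ranks = [2, 3]        # L_r for each of the R = 2 blocks

T = np.zeros((I, J, K))
for L in block_ranks:
    A = rng.standard_normal((I, L))   # factor with L_r columns
    B = rng.standard_normal((J, L))   # factor with L_r columns
    c = rng.standard_normal(K)        # the "1" mode of rank-(L_r, L_r, 1)
    # Add the block (A @ B.T) outer c: T_ijk += sum_l A[i,l] B[j,l] c[k]
    T += np.einsum('il,jl,k->ijk', A, B, c)

# Every frontal slice T[:, :, k] is a sum of matrices of rank <= L_r,
# so its rank is bounded by sum(block_ranks).
slice_rank = np.linalg.matrix_rank(T[:, :, 0])
```

A BTD algorithm would recover the factors (and, in the rank-revealing setting of the paper, the number of blocks and their ranks) from T, possibly with missing entries.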
Pub Date: 2026-01-22 | DOI: 10.1109/TSP.2026.3656887
Thu Ha Phi;Alexandre Hippert-Ferrer;Florent Bouchard;Arnaud Breloy
This paper addresses the problem of learning an undirected graph from data gathered at each node. Within Gaussian graphical models (GGM), the topology of such a graph can be linked to the support of the conditional correlation matrix of the data. The corresponding graph learning problem then scales as the square of the number of variables (nodes), which is usually problematic in large dimensions. To tackle this issue, we propose a graph learning framework that leverages a low-rank factorization of the conditional correlation matrix. To solve the resulting optimization problem, we derive the tools required to apply Riemannian optimization techniques to this particular structure. The proposal is then particularized to a low-rank constrained counterpart of the standard GGM estimation problem, i.e., the regularized maximum likelihood estimation of a precision matrix. Experiments on synthetic and real data demonstrate that a very efficient dimension-versus-performance trade-off can be achieved with this approach.
{"title":"Leveraging Low-Rank Factorizations of Conditional Correlation Matrices in Graph Learning","authors":"Thu Ha Phi;Alexandre Hippert-Ferrer;Florent Bouchard;Arnaud Breloy","doi":"10.1109/TSP.2026.3656887","DOIUrl":"10.1109/TSP.2026.3656887","url":null,"abstract":"This paper addresses the problem of learning an undirected graph from data gathered at each node. Within Gaussian graphical models (GGM), the topology of such graph can be linked to the support of the conditional correlation matrix of the data. The corresponding graph learning problem then scales as the square of number of variables (nodes), which is usually problematic for large dimension. To tackle this issue, we propose a graph learning framework that leverages a low-rank factorization of the conditional correlation matrix. In order to solve the resulting optimization problem, we derive tools required to apply Riemannian optimization techniques for this particular structure. The proposal is then particularized to a low-rank constrained counterpart of the standard GGM estimation problem, i.e., the regularized maximum likelihood estimation of a precision matrix. Experiments on synthetic and real data demonstrate that a very efficient dimension-versus-performance trade-off can be achieved with this approach.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"74 ","pages":"750-764"},"PeriodicalIF":5.8,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146042745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
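For background on the GGM link used in the abstract above, the following sketch (a generic textbook illustration, not the paper's low-rank algorithm; the precision matrix is made up) recovers a graph's edge set from the support of the conditional (partial) correlation matrix obtained by normalizing a precision matrix.

```python
import numpy as np

# A sparse 3x3 precision matrix Theta = Sigma^{-1}; its off-diagonal
# support encodes conditional dependence between variables.
Theta = np.array([[ 2.0, -0.8,  0.0],
                  [-0.8,  2.0, -0.5],
                  [ 0.0, -0.5,  1.5]])

# Partial correlation: C_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj)
d = 1.0 / np.sqrt(np.diag(Theta))
partial_corr = -np.outer(d, d) * Theta
np.fill_diagonal(partial_corr, 1.0)

# Edge set of the GGM graph = off-diagonal support of partial_corr.
edges = [(i, j) for i in range(3) for j in range(i + 1, 3)
         if abs(partial_corr[i, j]) > 1e-12]
```

Here nodes 0 and 2 are conditionally independent given node 1, so the edge (0, 2) is absent; the paper's contribution is making this kind of estimation scale by factorizing the conditional correlation matrix in low rank.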
{"title":"Achieving Full Multipath Diversity by Random Constellation Rotation: a Theoretical Perspective","authors":"Xuehan Wang, Jinhong Yuan, Jintao Wang, Kehan Huang","doi":"10.1109/tsp.2026.3657038","DOIUrl":"https://doi.org/10.1109/tsp.2026.3657038","url":null,"abstract":"","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"7 1","pages":""},"PeriodicalIF":5.4,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146043158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-21 | DOI: 10.1109/tsp.2026.3656569
Saidur R. Pavel, Yimin D. Zhang, Shunqiao Sun
{"title":"2D DOA Estimation of Coherent Signals Exploiting Forward-Backward Covariance Tensor","authors":"Saidur R. Pavel, Yimin D. Zhang, Shunqiao Sun","doi":"10.1109/tsp.2026.3656569","DOIUrl":"https://doi.org/10.1109/tsp.2026.3656569","url":null,"abstract":"","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"274 1","pages":""},"PeriodicalIF":5.4,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146042759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We investigate a novel characteristic of the conjugate function associated with a generic convex optimization problem, which can subsequently be leveraged for efficient dual decomposition methods. In particular, under mild assumptions, we show that there is a specific region in the domain of the conjugate function such that for any point in the region, there is always a ray originating from that point along which the gradients of the conjugate remain constant. We refer to this characteristic as a fixed gradient over rays (FGOR). We further show that this characteristic is inherited by the corresponding dual function. We then provide a thorough exposition of the application of the FGOR characteristic to dual subgradient methods. More importantly, we leverage FGOR to devise a simple stepsize rule that can be prepended to state-of-the-art stepsize methods, enabling them to be more efficient. Furthermore, we investigate how the FGOR characteristic is used when solving the global consensus problem, a prevalent formulation in diverse application domains. We show that FGOR can be exploited not only to expedite the convergence of dual decomposition methods but also to reduce the communication overhead. FGOR is extended to nonconvex formulations, and its advantages in stochastic optimization are demonstrated. Numerical experiments using quadratic objectives and regularized least squares regression with real datasets are conducted. The results show that FGOR can significantly improve the performance of existing stepsize methods and, on average, outperform state-of-the-art splitting methods in terms of both convergence behavior and communication efficiency.
{"title":"On the Characteristics of the Conjugate Function Enabling Effective Dual Decomposition Methods","authors":"Hansi Abeynanda;Chathuranga Weeraddana;Carlo Fischione","doi":"10.1109/TSP.2026.3656332","DOIUrl":"10.1109/TSP.2026.3656332","url":null,"abstract":"We investigate a novel characteristic of the conjugate function associated to a generic convex optimization problem, which can subsequently be leveraged for efficient dual decomposition methods. In particular, under mild assumptions, we show that there is a specific region in the domain of the conjugate function such that for any point in the region, there is always a ray originating from that point along which the gradients of the conjugate remain constant. We refer to this characteristic as a <italic>fixed gradient over rays</i> (FGOR). We further show that this characteristic is inherited by the corresponding dual function. Then we provide a thorough exposition of the application of the FGOR characteristic to dual subgradient methods. More importantly, we leverage FGOR to devise a simple stepsize rule that can be prepended with state-of-the-art stepsize methods enabling them to be more efficient. Furthermore, we investigate how the FGOR characteristic is used when solving the global consensus problem, a prevalent formulation in diverse application domains. We show that FGOR can be exploited not only to expedite the convergence of the dual decomposition methods but also to reduce the communication overhead. FGOR is extended to nonconvex formulations, and its advantages in stochastic optimization are demonstrated. Numerical experiments using quadratic objectives and a regularized least squares regression with real datasets are conducted. 
The results show that FGOR can significantly improve the performance of existing stepsize methods and outperform the state-of-the-art splitting methods on average in terms of both convergence behavior and communication efficiency.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"74 ","pages":"572-588"},"PeriodicalIF":5.8,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11359597","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146042760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
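For context on the baseline the abstract above improves, here is a minimal dual subgradient (dual ascent) iteration of the kind the FGOR stepsize rule is designed to accelerate. The toy problem, fixed stepsize, and iteration count are invented for the sketch; the FGOR rule itself is not reproduced here.

```python
import numpy as np

# Toy problem: min 0.5*||x||^2  subject to  a^T x = 1.
# Lagrangian L(x, lam) = 0.5*||x||^2 + lam*(a^T x - 1), minimized by x = -lam*a,
# and the dual gradient is a^T x(lam) - 1.

a = np.array([3.0, 4.0])           # ||a||^2 = 25
lam = 0.0                          # dual variable
step = 0.02                        # fixed stepsize (illustrative)

for _ in range(2000):
    x = -lam * a                   # minimizer of the Lagrangian over x
    grad = a @ x - 1.0             # gradient of the dual function at lam
    lam += step * grad             # dual ascent step

x_star = a / (a @ a)               # analytic primal solution for comparison
```

With a fixed stepsize the iteration converges linearly here; the paper's point is that a smarter stepsize rule, exploiting the fixed-gradient-over-rays structure of the dual, can reach the solution in far fewer (communication-costly) iterations.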
Pub Date: 2026-01-21 | DOI: 10.1109/TSP.2026.3656662
Chengxi Li;Ming Xiao;Mikael Skoglund
Communication bottlenecks and the presence of stragglers pose significant challenges in distributed learning (DL). To deal with these challenges, recent advances leverage unbiased compression functions and gradient coding. However, the potentially significant benefits of biased compression remain largely unexplored. To close this gap, we propose Compressed Gradient Coding with Error Feedback (COCO-EF), a novel DL method that combines gradient coding with biased compression to mitigate straggler effects and reduce communication costs. In each iteration, non-straggler devices encode local gradients from redundantly allocated training data, incorporate prior compression errors, and compress the results using biased compression functions before transmission. The server aggregates these compressed messages from the non-stragglers to approximate the global gradient for model updates. We provide rigorous theoretical convergence guarantees for COCO-EF and validate its superior learning performance over baseline methods through empirical evaluations. To the best of our knowledge, this is among the first works to rigorously demonstrate that biased compression offers substantial benefits in DL when gradient coding is employed to cope with stragglers.
{"title":"Biased Compression in Gradient Coding for Distributed Learning","authors":"Chengxi Li;Ming Xiao;Mikael Skoglund","doi":"10.1109/TSP.2026.3656662","DOIUrl":"10.1109/TSP.2026.3656662","url":null,"abstract":"Communication bottlenecks and the presence of stragglers pose significant challenges in distributed learning (DL). To deal with these challenges, recent advances leverage unbiased compression functions and gradient coding. However, the significant benefits of biased compression remain largely unexplored. To close this gap, we propose <bold>Co</b>mpressed Gradient <bold>Co</b>ding with <bold>E</b>rror <bold>F</b>eedback (COCO-EF), a novel DL method that combines gradient coding with biased compression to mitigate straggler effects and reduce communication costs. In each iteration, non-straggler devices encode local gradients from redundantly allocated training data, incorporate prior compression errors, and compress the results using biased compression functions before transmission. The server aggregates these compressed messages from the non-stragglers to approximate the global gradient for model updates. We provide rigorous theoretical convergence guarantees for COCO-EF and validate its superior learning performance over baseline methods through empirical evaluations. 
As far as we know, we are among the first to rigorously demonstrate that biased compression has substantial benefits in DL, when gradient coding is employed to cope with stragglers.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"74 ","pages":"514-530"},"PeriodicalIF":5.8,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146042758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
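As generic background on the error-feedback mechanism mentioned above, the following sketch shows biased top-k compression with an error-feedback memory, the standard construction COCO-EF builds on. The gradient-coding side is omitted, and all names and sizes are illustrative, not the paper's protocol.

```python
import numpy as np

def top_k(v, k):
    """Biased compressor: keep only the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(1)
g = rng.standard_normal(10)        # a (constant) local gradient
e = np.zeros_like(g)               # accumulated compression error

sent_sum = np.zeros_like(g)
for _ in range(200):
    msg = top_k(g + e, k=3)        # compress the gradient plus carried error
    e = (g + e) - msg              # feed the residual back into the next round
    sent_sum += msg                # what the server actually receives

# Error feedback keeps the residual bounded, so the running mean of the
# transmitted (sparse, biased) messages approaches the true gradient.
mean_sent = sent_sum / 200
```

Without the error memory, the small coordinates of g would never be transmitted at all; with it, every coordinate is eventually flushed, which is what makes convergence guarantees possible despite the compressor's bias.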
Pub Date: 2026-01-21 | DOI: 10.1109/tsp.2026.3655921
Hongyu Han, Sheng Zhang, Hing Cheung So
{"title":"Privacy-Preserving Distributed Adaptive Filtering via Input Perturbation and Amplitude-Shifted Data Exchange over Networks","authors":"Hongyu Han, Sheng Zhang, Hing Cheung So","doi":"10.1109/tsp.2026.3655921","DOIUrl":"https://doi.org/10.1109/tsp.2026.3655921","url":null,"abstract":"","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"117 1","pages":""},"PeriodicalIF":5.4,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146042757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-21 | DOI: 10.1109/tsp.2026.3656668
Chengen Liu, Victor M. Tenorio, Antonio G. Marques, Elvin Isufi
{"title":"Matched Topological Subspace Detector","authors":"Chengen Liu, Victor M. Tenorio, Antonio G. Marques, Elvin Isufi","doi":"10.1109/tsp.2026.3656668","DOIUrl":"https://doi.org/10.1109/tsp.2026.3656668","url":null,"abstract":"","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"395 1","pages":""},"PeriodicalIF":5.4,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146042756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-20 | DOI: 10.1109/tsp.2026.3655839
Siddhartha Parupudi, Gourab Ghatak
{"title":"An Algorithm for Fixed Budget Best Arm Identification with Combinatorial Exploration","authors":"Siddhartha Parupudi, Gourab Ghatak","doi":"10.1109/tsp.2026.3655839","DOIUrl":"https://doi.org/10.1109/tsp.2026.3655839","url":null,"abstract":"","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"24 1","pages":""},"PeriodicalIF":5.4,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146042772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-16 | DOI: 10.1109/TSP.2026.3654842
Xiaohuan Wu;Jin Qiu;Ji Sun;Wei Liu;Haiyang Zhang;Yonina C. Eldar
To achieve ultra-high-precision positioning, the extremely large-scale antenna array (ELAA), consisting of hundreds or even thousands of antenna elements, has garnered significant attention. However, owing to its increased aperture, an ELAA inevitably encounters both near-field effects and spatial non-stationarity. In the near-field region, the traditional far-field plane-wavefront assumption no longer holds, necessitating consideration of spherical-wave characteristics. Spatial non-stationarity arises when signals do not reach the entire array but instead impinge on only a subset of antennas, referred to as the signal’s visible region (VR). Both effects cause model mismatch and therefore reduce positioning accuracy. In this paper, we introduce an exact near-field signal model in the context of ELAA. Based on this model, we prove that the steering vectors of the source signals and the eigenvectors of the signal subspace become collinear as the number of antennas approaches infinity, which makes it easier to estimate the VR and source-location parameters. Accordingly, we develop an estimation method that effectively extracts the VR information of signals even when the VRs are discontinuous or overlapping. After obtaining the VR information, we propose three source localization methods that leverage the estimated VRs and eigenvectors. Simulation results demonstrate that the proposed methods achieve high-precision localization while reducing computational complexity, thereby overcoming the model mismatch induced by near-field and spatial non-stationarity effects in ELAA.
{"title":"Source Localization for Extremely Large-Scale Antenna Arrays Under Spatial Non-Stationarity and Near-Field Effects","authors":"Xiaohuan Wu;Jin Qiu;Ji Sun;Wei Liu;Haiyang Zhang;Yonina C. Eldar","doi":"10.1109/TSP.2026.3654842","DOIUrl":"10.1109/TSP.2026.3654842","url":null,"abstract":"To achieve ultra-high precision positioning, the extremely large-scale antenna array (ELAA), consisting of hundreds or even thousands of antenna elements, has garnered significant attention. However, due to increased antenna aperture, it inevitably encounters both near-field effects and spatial non-stationarity effects. In the near-field region, the traditional assumption of far-field plane wavefront no longer holds, necessitating consideration of spherical wave characteristics. Spatial non-stationarity arises when signals fail to reach the entire array, but instead only impinge on a subset of antennas, which is referred to as the signal’s visible region (VR). Both effects cause model mismatch and therefore reduce positioning accuracy. In this paper, we introduce an exact near-field signal model in the context of ELAA. Based on this model, we prove that the steering vectors of source signals and the eigenvectors of the signal subspace become collinear as the number of antennas approaches infinity, which makes it easier to estimate the VR and source location parameters. Accordingly, we develop an estimation method to effectively extract the VR information of signals even when the VRs are discontinuous or overlapping. After obtaining the VR information, we propose three source localization methods that leverage the estimated VR and eigenvectors. 
Simulation results demonstrate that the proposed methods achieve high-precision localization while reducing computational complexity, thereby overcoming the model mismatch induced by near-field effects and spatial non-stationarity effects in ELAA.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"74 ","pages":"685-700"},"PeriodicalIF":5.8,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145993254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
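To illustrate the near-field model mismatch described in the abstract above, the following sketch compares an exact spherical-wavefront steering vector with the far-field plane-wave approximation for a large uniform linear array. All geometry and parameters are invented for the example, and the VR/non-stationarity aspect is not modeled.

```python
import numpy as np

wavelength = 0.1
M = 256                                   # ELAA-scale element count
d = wavelength / 2
ant = np.arange(M) * d                    # element positions along the x-axis

src = np.array([3.0, 5.0])                # source (x, y), deep in the near field
dist = np.hypot(src[0] - ant, src[1])     # exact element-to-source distances
a_near = np.exp(-2j * np.pi * dist / wavelength)   # spherical-wavefront model

# Far-field model: plane wave arriving from the source direction seen
# from the array center.
center = ant.mean()
sin_theta = (src[0] - center) / np.hypot(src[0] - center, src[1])
a_far = np.exp(-2j * np.pi * (ant - center) * sin_theta / wavelength)

# Normalized correlation; 1.0 would mean the plane-wave model fits exactly.
match = np.abs(np.vdot(a_far, a_near)) / M
```

For a source well inside the Rayleigh distance of an aperture this large, the correlation collapses, which is exactly the model mismatch that degrades far-field localization methods and motivates the exact near-field model.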