Pub Date: 2025-12-12. DOI: 10.1109/LSP.2025.3643361
Andrew J. Christensen;Ananya Sen Gupta
Neural networks have achieved remarkable results across numerous scientific domains because of their ability to uncover complex patterns. However, despite their effectiveness, these networks rely on heuristic training of highly non-convex objective functions, limiting theoretical understanding and practical reliability. Recent work has shown that shallow neural networks with scalar outputs can be formulated as convex optimization problems, bridging empirical success with theory. In this work, we extend this framework to vector-valued outputs, introducing a convex formulation for two-layer ReLU networks based on an atomic norm and expressible as a semidefinite program (SDP). This yields a principled convex relaxation of multi-output networks that is both expressive and tractable. We validate the approach using standard SDP solvers, demonstrating its feasibility. These results extend convex neural network training beyond scalar outputs and provide a foundation for scalable, robust alternatives to current heuristic deep learning methods. Our method achieved a 7.3% increase in classification accuracy compared to a baseline convex multi-output network.
"Shallow Neural Network Training via Atomic Norms and Semidefinite Programming," IEEE Signal Processing Letters, vol. 33, pp. 321-325.
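The vector-output setting can be pictured through the atomic decomposition the abstract alludes to: a two-layer ReLU network with vector outputs is exactly a finite sum of atoms, each atom mapping x to relu(w_j . x) * v_j for a hidden direction w_j and an output vector v_j. The numpy sketch below illustrates only that identity; it is not the paper's atomic norm or its SDP relaxation:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# A two-layer ReLU network with vector output decomposes into "atoms":
# x -> relu(x @ w_j) * v_j. Atomic-norm formulations penalize the number
# (or total size) of such atoms. Dimensions here are arbitrary examples.
rng = np.random.default_rng(4)
d, m, c = 5, 3, 2          # input dim, hidden units (atoms), output dim
W = rng.normal(size=(d, m))
V = rng.normal(size=(m, c))
x = rng.normal(size=d)

full = relu(x @ W) @ V                        # standard forward pass
atoms = sum(relu(x @ W[:, j]) * V[j] for j in range(m))
print(np.allclose(full, atoms))  # True: the network is a sum of atoms
```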
Pub Date: 2025-12-12. DOI: 10.1109/LSP.2025.3643388
Alan Luo;Kaiwen Yuan
Vision Transformers (ViTs) have demonstrated exceptional performance in various vision tasks. However, they tend to underperform on smaller datasets due to their inherent lack of inductive biases. Current approaches address this limitation implicitly—often by pairing ViTs with pretext tasks or by distilling knowledge from convolutional neural networks (CNNs) to strengthen the prior. In contrast, Self-Organizing Maps (SOMs), a widely adopted self-supervised framework, are inherently structured to preserve topology and spatial organization, making them a promising candidate to directly address the limitations of ViTs in limited or small training datasets. Despite this potential, equipping SOMs with modern deep learning architectures remains largely unexplored. In this study, we conduct a novel exploration on how Vision Transformers (ViTs) and Self-Organizing Maps (SOMs) can empower each other, aiming to bridge this critical research gap. Our findings demonstrate that these architectures can synergistically enhance each other, leading to significantly improved performance in both unsupervised and supervised tasks.
"Simple Self-Organizing Map With Vision Transformers," IEEE Signal Processing Letters, vol. 33, pp. 331-335.
Pub Date: 2025-12-12. DOI: 10.1109/LSP.2025.3643348
Yebin Zheng;Haonan An;Guang Hua;Yongming Chen;Zhiping Lin
Generative adversarial networks (GANs) are a set of powerful generative models, among which CycleGAN, featuring the unique cycle-consistency loss, has gained special popularity. However, this unique structure and the cycle-consistency loss make watermarking CycleGAN particularly challenging, rendering existing deep neural network (DNN) watermarking methods, whether model-agnostic or GAN-specific, inapplicable. Meanwhile, existing DNN watermarking methods are intrusive in nature, requiring direct or indirect modification of model parameters for watermark embedding, which raises fidelity concerns. To solve the above problems, we propose the first nonintrusive and robust watermarking method for CycleGAN. We empirically show that without modifying the CycleGAN model, a user-defined watermark image can still be extracted from model outputs using a dedicated watermark decoder. Extensive experimental results verify that while achieving the so-called absolute fidelity, the proposed method is robust to various attacks, from image post-processing to model stealing.
"Nonintrusive Watermarking for CycleGAN," IEEE Signal Processing Letters, vol. 33, pp. 256-260.
Pub Date: 2025-12-11. DOI: 10.1109/LSP.2025.3643385
Filippo Fabiani;Andrea Simonetto
We study data-driven least squares (LS) problems with semidefinite (SD) constraints and derive finite-sample guarantees on the spectrum of their optimal solutions when these constraints are relaxed. In particular, we provide a high-confidence bound allowing one to solve a simpler program in place of the full SDLS problem, while ensuring that the eigenvalues of the resulting solution are $\varepsilon$-close to those enforced by the SD constraints. The developed certificate, which consistently shrinks as the amount of data increases, turns out to be easy to compute, distribution-free, and only requires independent and identically distributed samples. Moreover, when the SDLS is used to learn an unknown quadratic function, we establish bounds on the error between a gradient descent iterate minimizing the surrogate cost obtained with no SD constraints and the true minimizer.
"Concentration Inequalities for Semidefinite Least Squares Based on Data," IEEE Signal Processing Letters, vol. 33, pp. 326-330.
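The relax-then-repair idea can be illustrated with a minimal numpy sketch: solve the unconstrained LS problem, then project the estimate onto the PSD cone by eigenvalue clipping. The projection step is a standard construction used here for illustration only, not the letter's certificate or its concentration bound:

```python
import numpy as np

def project_psd(S):
    """Project a square matrix onto the PSD cone: symmetrize, then
    clip negative eigenvalues to zero (Frobenius-norm projection)."""
    S = (S + S.T) / 2.0
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

rng = np.random.default_rng(0)
# Unconstrained LS estimate of a 3x3 quadratic form; it may come out
# indefinite, which is exactly what the SD constraint would forbid.
A = rng.normal(size=(50, 9))
b = rng.normal(size=50)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
S_hat = x.reshape(3, 3)
S_psd = project_psd(S_hat)       # a posteriori spectrum repair
print(np.linalg.eigvalsh(S_psd).min() >= -1e-10)  # True
```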
Pub Date: 2025-12-11. DOI: 10.1109/LSP.2025.3643352
Zuomin Qu;Yimao Guo;Qianyue Hu;Wei Lu
Deepfakes pose significant societal risks, motivating the development of proactive defenses that embed adversarial perturbations in facial images to prevent manipulation. However, in this paper, we show that these preemptive defenses often lack robustness and reliability. We propose a novel approach, Low-Rank Adaptation (LoRA) patching, which injects a plug-and-play LoRA patch into Deepfake generators to bypass state-of-the-art defenses. A learnable gating mechanism adaptively controls the effect of the LoRA patch and prevents gradient explosions during fine-tuning. We also introduce a Multi-Modal Feature Alignment (MMFA) loss, encouraging the features of adversarial outputs to align with those of the desired outputs at the semantic level. Beyond bypassing, we present defensive LoRA patching, embedding visible warnings in the outputs as a complementary solution to mitigate this newly identified security vulnerability. With only 1,000 facial examples and a single epoch of fine-tuning, LoRA patching successfully defeats multiple proactive defenses. These results reveal a critical weakness in current paradigms and underscore the need for more robust Deepfake defense strategies.
"LoRA Patching: Exposing the Fragility of Proactive Defenses Against Deepfakes," IEEE Signal Processing Letters, vol. 33, pp. 286-290.
Pub Date: 2025-12-11. DOI: 10.1109/LSP.2025.3643359
Pingping Pan;Yunjian Zhang;You Li;Renzhong Guo
Time-frequency analysis (TFA) and ridge separation of non-stationary signals have long been research topics in signal processing. They are mutually dependent: informative time-frequency representations (TFRs) enable reliable ridge estimation, while accurate ridges refine TFRs by outlining component-wise time-frequency (TF) trajectories. However, the uncertainty principle limits TF resolution and ridge discriminability, and existing ridge tracking or optimization-based methods rely on empirical tuning and degrade with weak or closely spaced components, highlighting the need for a more robust and unified solution. This letter proposes a unified network that jointly performs TFA and ridge separation. It features a knowledge-guided short-time transform module for extracting discriminative TF features, coupled with an instance segmentation module with learnable queries that interacts with the extracted TF features to achieve ridge separation. This knowledge- and data-integrated framework enables fine-grained TFR construction and high-accuracy ridge separation, while eliminating manual parameter tuning and enhancing adaptability. Finally, experiments on simulated and real-world data validate its effectiveness.
"Learnable Time-Frequency Transform and Ridge Separation," IEEE Signal Processing Letters, vol. 33, pp. 296-300.
Pub Date: 2025-12-11. DOI: 10.1109/LSP.2025.3643357
Daisy Das;Nabamita Deb;Saswati Sanyal Choudhury
Pregnancy is often a time of increased stress and anxiety, both psychological and physical. A growing body of research examines the calming effects of relaxation techniques on the mother's brain; passively listening to meditative mantras is one such method. This study investigates neuronal phase synchronization in three distinct cognitive states in pregnant women with their eyes closed: resting state (RS), mantra listening (M), and after mantra listening (AM). EEG data were collected from 43 pregnant subjects using 32 scalp electrodes at a 128 Hz sampling rate, with two 2-minute trials in each state. To assess the temporal synchrony of brain oscillations, the Inter-Trial Coherence (ITC), a phase-locking metric that quantifies the stability of neural phase over multiple trials, was computed. Bandpass filtering followed by the Hilbert transform was used to assess ITC across the Theta (4–8 Hz), Alpha (8–13 Hz), and Beta (13–30 Hz) frequency bands. The Mantra condition yielded the greatest mean ITC: 0.9117 (Theta), 0.8891 (Alpha), and 0.8083 (Beta). The After-Mantra condition displayed moderate ITC levels of 0.6582 (Theta), 0.6510 (Alpha), and 0.6437 (Beta), whereas the Resting State produced 0.6392 (Theta), 0.6381 (Alpha), and 0.6368 (Beta). These results suggest that passive mantra listening improves brain phase synchrony, especially in the lower frequency bands, and could be a useful non-invasive method of meditative relaxation during pregnancy.
"Inter-Trial Coherence Reveals Enhanced Synchrony During Mantra Listening," IEEE Signal Processing Letters, vol. 33, pp. 291-295.
Pub Date: 2025-12-10. DOI: 10.1109/LSP.2025.3642765
Jianhong Ye;Haiquan Zhao;Yi Peng
Building upon the mean $p$-power error (MPE) criterion, the normalized subband $p$-norm (NSPN) algorithm demonstrates superior robustness in $\alpha$-stable noise environments ($1 < \alpha \leq 2$) through effective use of the low-order moments embedded in robust loss functions. Nevertheless, its performance degrades significantly when processing noisy input or additive noise characterized by $\alpha$-stable processes ($0 < \alpha \leq 1$). To overcome these limitations, we propose a novel fractional-order NSPN (FoNSPN) algorithm that incorporates the fractional-order stochastic gradient descent (FoSGD) method into the MPE framework. Additionally, this paper analyzes the convergence range of the step-size and the theoretical range of values for the fractional order $\beta$, and establishes a theoretical steady-state mean square deviation (MSD) model. Simulations conducted in diverse impulsive noise environments confirm the superiority of the proposed FoNSPN algorithm against existing state-of-the-art algorithms.
"P-Norm Based Fractional-Order Robust Subband Adaptive Filtering Algorithm for Impulsive Noise and Noisy Input," IEEE Signal Processing Letters, vol. 33, pp. 281-285.
Pub Date: 2025-12-09. DOI: 10.1109/LSP.2025.3642058
Petr Fiedler;Kamil Dedecius
The letter investigates the problem of distributed multitarget tracking with a network of sensors with limited and partially overlapping or non-overlapping fields of view. The information processing is based on information diffusion, where each sensor can communicate only with its adjacent neighbors. The communication comprises an adaptation phase suited for the exchange of measurements, followed by a combination phase where the estimates are shared and fused via the arithmetic average rule. Each phase is performed only once at each discrete time step, thus effectively reducing computational, memory, and communication overheads. An important part of the solution is the self-referencing mechanism, allowing the incorporation of only those neighbors' information that aligns with local estimates or enhances them. The simulation example demonstrates improved localization performance and resilience to misdetections.
"Self-Referencing Adapt-Then-Combine Information Diffusion Scheme for Distributed PHD Filtering," IEEE Signal Processing Letters, vol. 33, pp. 251-255.
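The adapt-then-combine pattern with arithmetic-average fusion can be sketched for a toy scalar-estimation network; the PHD-filter machinery and the self-referencing gating are omitted, so this shows only the two-phase communication structure on an assumed five-node line topology:

```python
import numpy as np

# Adapt-then-combine diffusion: each node first updates with its own
# measurement (adaptation), then averages estimates over its neighborhood
# (combination), once per time step.
rng = np.random.default_rng(3)
theta = 2.0                                       # unknown scalar
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3],
             3: [2, 3, 4], 4: [3, 4]}             # line network, self-loops
est = np.zeros(5)
mu = 0.1
for step in range(200):
    y = theta + rng.normal(scale=0.5, size=5)     # noisy local measurements
    psi = est + mu * (y - est)                    # adaptation phase
    est = np.array([psi[neighbors[k]].mean()      # combination phase:
                    for k in range(5)])           # arithmetic average rule
print(np.round(est, 2))                           # all nodes near theta
```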
Pub Date: 2025-12-08. DOI: 10.1109/LSP.2025.3634660
"List of Reviewers," IEEE Signal Processing Letters, vol. 32, pp. 4473-4484.