Semantic-Aware and Semi-Fragile Diffusion Watermarking for Proactive Deepfake Detection
Rui Sun; Yifan Zhang; Xiaolu Yu; Yuwei Dai; Yaofei Wang
Pub Date: 2026-01-19 | DOI: 10.1109/LSP.2026.3655346 | IEEE Signal Processing Letters, vol. 33, pp. 688-692

The rapid progress of deepfake technology, which primarily manipulates facial identity and image semantics, has made detection and defense critically important. Conventional global watermarking methods offer limited capacity for protecting key semantic content, as they typically rely on uniformly distributed watermarks across the entire image. This letter presents a method that weaves watermarks as intrinsic components into the semantic content of images (facial regions) in the latent space. By aligning watermark embedding regions with facial content, we establish an inherent fragility mechanism: any deepfake manipulation that modifies facial semantics inevitably disrupts the watermark, enabling precise detection. Simultaneously, adversarial training of the extractor ensures robustness against conventional signal processing operations. A local entropy perception module dynamically adjusts embedding intensity based on regional texture complexity, maintaining high perceptual fidelity. Extensive experiments indicate that, compared to advanced methods, the proposed approach maintains robustness against conventional benign operations while reliably detecting deepfake forgeries, thereby enabling precise protection of image semantic content.
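The local entropy perception idea above can be sketched numerically. The per-patch Shannon entropy computation is standard; the linear entropy-to-strength mapping, patch size, and all names below are illustrative assumptions, not the letter's learned module.

```python
import numpy as np

def local_entropy_map(img, patch=8, bins=16):
    """Per-patch Shannon entropy of 8-bit pixel intensities."""
    h, w = img.shape
    ent = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            block = img[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent

def embedding_strength(ent, lo=0.2, hi=1.0):
    # Map entropy linearly to a per-patch embedding intensity in [lo, hi]:
    # textured (high-entropy) patches tolerate stronger embedding.
    span = ent.max() - ent.min()
    return lo + (hi - lo) * (ent - ent.min()) / (span + 1e-12)

# Smooth top half, noisy bottom half: strength should follow texture.
rng = np.random.default_rng(0)
img = np.zeros((16, 16))
img[8:] = rng.integers(0, 256, size=(8, 16))
ent = local_entropy_map(img)
alpha = embedding_strength(ent)
```

Smooth regions get entropy near zero and hence the minimum strength `lo`, which is what keeps the watermark perceptually invisible there.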
Suppression of Nyquist Ringing in FFT-Based Sample Rate Conversion
Roope Salmi; Vesa Välimäki
Pub Date: 2026-01-15 | DOI: 10.1109/LSP.2026.3654546 | IEEE Signal Processing Letters, vol. 33, pp. 683-687
Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11354502

Sample rate conversion, a common task in audio signal processing, can be performed with high quality using the fast Fourier transform (FFT) on the whole audio file. Before returning to the time domain using the inverse FFT, the sample rate of the signal is changed by either truncating or zero-padding the frequency-domain buffer. This operation leaves a discontinuity in the spectrum, which causes time-domain ringing at that frequency. The ringing can be suppressed by tapering the highest frequency bins. This letter introduces the double Dolph-Chebyshev window, a frequency-domain tapering function with a configurable level of ringing outside its main lobe in the transform domain. In comparison to basic cosine tapering, the proposed method provides, for example, a 150-dB suppression 91% faster. This letter improves the accuracy of FFT-based sample rate conversion, making it a practical tool for signal processing.
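The truncate/zero-pad pipeline described above can be sketched in a few lines of numpy. The raised-cosine taper here stands in for the basic cosine tapering the letter uses as a baseline, not the proposed double Dolph-Chebyshev window; the function name and parameters are ours.

```python
import numpy as np

def fft_resample(x, n_out, taper_bins=0):
    """Resample x to n_out samples via the frequency domain.

    Downsampling truncates the spectrum; upsampling zero-pads it.
    taper_bins > 0 applies a raised-cosine fade to the highest retained
    bins, softening the spectral discontinuity that causes ringing.
    """
    n_in = len(x)
    X = np.fft.rfft(x)
    n_bins = n_out // 2 + 1
    if n_bins <= len(X):                 # downsampling: truncate
        Y = X[:n_bins].copy()
    else:                                # upsampling: zero-pad
        Y = np.zeros(n_bins, dtype=complex)
        Y[: len(X)] = X
    if taper_bins > 0:                   # fade out the top bins
        fade = 0.5 * (1 + np.cos(np.linspace(0, np.pi, taper_bins)))
        Y[-taper_bins:] *= fade
    # irfft normalizes by n_out, so rescale to preserve amplitude
    return np.fft.irfft(Y, n_out) * (n_out / n_in)

# Resample a 1 kHz sine from 48 kHz to 32 kHz (100 exact cycles).
t = np.arange(4800) / 48000
x = np.sin(2 * np.pi * 1000 * t)
y = fft_resample(x, 3200, taper_bins=32)
```

Because the tone sits far below the tapered band, it passes through untouched; only content near the new Nyquist frequency is faded.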
On the Asymptotic MSE-Optimality of Parametric Bayesian Channel Estimation in mmWave Systems
Franz Weißer; Wolfgang Utschick
Pub Date: 2026-01-15 | DOI: 10.1109/LSP.2026.3654532 | IEEE Signal Processing Letters, vol. 33, pp. 653-657
Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11354545

The mean square error (MSE)-optimal estimator is known to be the conditional mean estimator (CME). This letter introduces a parametric channel estimation technique based on Bayesian estimation. This technique uses the estimated channel parameters to parameterize the well-known LMMSE channel estimator. We first derive an asymptotic CME formulation that holds for a wide range of priors on the channel parameters. Based on this, we show that parametric Bayesian channel estimation is MSE-optimal for high signal-to-noise ratio (SNR) and/or long coherence intervals, i.e., many noisy observations provided within one coherence interval. Numerical simulations validate the derived formulations.
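A minimal sketch of the parameterized-LMMSE step the abstract describes, under an assumed uniform-linear-array model: estimated path angles define a low-rank channel covariance, which plugs into the standard LMMSE filter. The steering model, variances, and all names are illustrative, not the letter's exact formulation.

```python
import numpy as np

def steering(theta, n):
    # ULA steering vector, half-wavelength spacing (illustrative model)
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

def parametric_lmmse(y, thetas, gain_vars, sigma2):
    """LMMSE channel estimate parameterized by estimated path angles.

    C = sum_k var_k a(theta_k) a(theta_k)^H is the parameterized channel
    covariance; the filter is h_hat = C (C + sigma2 I)^{-1} y.
    """
    n = len(y)
    C = np.zeros((n, n), dtype=complex)
    for th, v in zip(thetas, gain_vars):
        a = steering(th, n)
        C += v * np.outer(a, a.conj())
    W = C @ np.linalg.inv(C + sigma2 * np.eye(n))
    return W @ y

# Toy example: one path, noisy snapshot; LMMSE shrinks the noise that
# lies outside the (here one-dimensional) channel subspace.
rng = np.random.default_rng(0)
n, sigma2 = 16, 0.01
g = rng.standard_normal() + 1j * rng.standard_normal()
h = g * steering(0.3, n)
y = h + np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
h_hat = parametric_lmmse(y, [0.3], [1.0], sigma2)
```

With perfectly estimated parameters the filter removes all noise components orthogonal to the path subspace, which is the mechanism behind the high-SNR optimality claim.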
CWSNet: A Building Layout Sensing Network With Corner and Wall Information Fusion From Through-the-Wall Radar
Shichao Zhong; Zhongjie Ma; Xiaolu Zeng; Renjie Liu; Xiaopeng Yang
Pub Date: 2026-01-15 | DOI: 10.1109/LSP.2026.3654540 | IEEE Signal Processing Letters, vol. 33, pp. 703-707

Building layout sensing with through-the-wall radar (TWR) plays a vital role in fields such as counter-terrorism operations and post-disaster rescue. Existing layout sensing methods based on TWR typically focus solely on either corner information or wall surface features, neglecting the complementarity between the two, which leads to low sensing accuracy in complex environments. To address this issue, we propose a Corner-Wall Sensing Network (CWSNet), a building layout sensing network that fuses corner and wall surface information. First, deep convolutional networks are used to extract wall and corner features from TWR images. Then, these complementary structural features are fused to form an integrated representation. Finally, a transformer-based dynamic graph reasoning module (DGRM) captures their spatial relationships, enabling high-precision layout sensing. Both simulated and real-world experimental datasets demonstrate that CWSNet significantly outperforms existing methods across multiple evaluation metrics, achieving superior wall localization accuracy and layout connectivity, while also exhibiting strong robustness and generalization capabilities.
MTT Resource Allocation in Space-Based Netted MIMO Radar Under Main-Lobe Clutter
Zhifu Jiang; Jianxin Wu; Lei Zhang
Pub Date: 2026-01-13 | DOI: 10.1109/LSP.2026.3653400 | IEEE Signal Processing Letters, vol. 33, pp. 624-628

High mobility of space-based radar (SBR) platforms risks target velocities falling below the minimum detectable velocity (MDV), rendering them undetectable in main-lobe clutter. Aiming at multi-target tracking (MTT) in space-based multiple-input multiple-output (MIMO) radar systems, this paper proposes a joint beam and dwell time allocation (JBTA) strategy. This strategy incorporates the MDV constraint and adopts the Bayesian Cramér-Rao Lower Bound (BCRLB) as the performance metric, where the BCRLB is a lower bound on the mean square error (MSE) of target state estimation. To solve the non-convex mixed-integer optimization problem of JBTA, a two-step decomposition approach is designed. Numerical results verify that JBTA effectively improves global MTT performance.
Robust Watermarking for 3D Mesh Models Based on Geometrically Weighted Aggregation
Fei Peng; Zhanhong Liu; Min Long
Pub Date: 2026-01-13 | DOI: 10.1109/LSP.2026.3653693 | IEEE Signal Processing Letters, vol. 33, pp. 648-652

To address the limitations of current 3D mesh watermarking in robustness and imperceptibility, this paper proposes a deep watermarking method based on a geometrically weighted aggregation mechanism. The message encoder and decoder networks are first improved to enable the effective embedding of 16-bit binary watermark information. An attack simulation module is then introduced to enhance the decoder's robustness against various distortions. Additionally, an adversarial discriminator is incorporated to guide the encoder in optimizing the embedding strategy, thereby minimizing geometric distortion. Furthermore, a cross-resolution strategy is developed to enable training on low-resolution meshes while performing watermark embedding and extraction on high-resolution meshes. Experimental results demonstrate that the proposed method outperforms existing mainstream approaches in terms of extraction accuracy, geometric fidelity, and imperceptibility.
FFE-DETR: Frequency-Aware Feature Enhancement for Object Detection in Low-Light Scenarios
Yufeng Li; Chao Song; Chuanlong Xie
Pub Date: 2026-01-13 | DOI: 10.1109/LSP.2026.3653616 | IEEE Signal Processing Letters, vol. 33, pp. 678-682

Object detection is a core task in computer vision, yet its performance is severely degraded in low-light environments, where foreground objects blend into the background, feature contrast is reduced, and object boundaries become blurred, ultimately impairing detection accuracy. To address this problem, we propose FFE-DETR, an end-to-end detection framework specifically designed for low-light scenes. The model incorporates a Frequency-Aware Feature Enhancer that applies Laplacian pyramid decomposition to separate low-frequency and high-frequency components. The low-frequency features are globally modeled to enhance foreground saliency and emphasize object boundaries, and the enhanced representation subsequently guides high-frequency detail restoration and noise suppression, yielding clearer and more discriminative features. In addition, a Multi-Scale Adaptive Feature Fusion module is introduced to efficiently integrate shallow texture information with deep semantic cues, enhancing the feature representation capability across different scales. Experimental results on widely used low-light benchmarks demonstrate that FFE-DETR consistently outperforms state-of-the-art methods and achieves significantly superior detection accuracy, highlighting its effectiveness and robustness.
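The Laplacian pyramid split into low- and high-frequency components can be illustrated with a toy numpy version that uses 2x2 averaging as the low-pass filter. The network's actual filters are not specified in the abstract, so everything below is an assumption-labeled sketch of the decomposition idea only.

```python
import numpy as np

def blur_down(x):
    # 2x2 average pooling: a cheap low-pass filter plus downsampling
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def up(x):
    # nearest-neighbour upsampling back to the previous resolution
    return np.kron(x, np.ones((2, 2)))

def laplacian_pyramid(img, levels=3):
    """Split img into high-frequency band-pass levels plus a low-frequency base."""
    highs, cur = [], img
    for _ in range(levels):
        low = blur_down(cur)
        highs.append(cur - up(low))   # detail removed by the low-pass
        cur = low
    return highs, cur

def reconstruct(highs, base):
    cur = base
    for h in reversed(highs):
        cur = up(cur) + h
    return cur

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
highs, base = laplacian_pyramid(img)
rec = reconstruct(highs, base)
```

The decomposition is exactly invertible by construction, which is why the low-frequency base can be enhanced independently and then recombined with the high-frequency detail bands.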
Bounded Mapping Frequency Estimation Algorithm for Low SNR Environments
Qingke Ma; Jiale Wang; Jie Lian; Xinyi Li; Benben Li; Qi Wang; Guolei Zhu
Pub Date: 2026-01-13 | DOI: 10.1109/LSP.2026.3653690 | IEEE Signal Processing Letters, vol. 33, pp. 629-633

Frequency estimation plays a vital role in various research fields, such as Doppler compensation in wireless communication. Traditional DFT-based methods for frequency estimation often suffer from reduced performance under low-SNR conditions. To overcome this limitation, we present a novel non-iterative estimation approach that employs a bounded mapping strategy. By concentrating on the real part of the spectrum and constraining the frequency correction within a defined range, our method effectively mitigates inaccuracies caused by noise. Our proposed algorithm for frequency estimation achieves accuracy comparable to iterative methods while significantly reducing computational complexity. Through simulations and experiments, we illustrate that our approach enhances estimation accuracy at lower SNR levels with a limited number of samples compared to existing techniques.
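The coarse-peak-plus-fine-correction structure the abstract describes can be illustrated with a classic non-iterative estimator. The correction formula below is the well-known Jacobsen three-bin estimator (which also uses only the real part of a spectral ratio), not the letter's bounded mapping; clipping the correction to half a bin mirrors the "defined range" idea.

```python
import numpy as np

def freq_estimate_bounded(x, fs):
    """Coarse DFT peak plus a fine correction hard-bounded to +/-0.5 bin."""
    n = len(x)
    X = np.fft.fft(x)
    k = int(np.argmax(np.abs(X[1 : n // 2]))) + 1     # coarse peak (skip DC)
    num = X[k + 1] - X[k - 1]
    den = 2 * X[k] - X[k - 1] - X[k + 1]
    delta = -np.real(num / den)                       # fine correction
    delta = float(np.clip(delta, -0.5, 0.5))          # bounded mapping
    return (k + delta) * fs / n

# Complex tone at an off-bin frequency (bin spacing is fs/n ~ 3.9 Hz).
fs, f0 = 1000.0, 123.4
t = np.arange(256) / fs
x = np.exp(2j * np.pi * f0 * t)
f_hat = freq_estimate_bounded(x, fs)
```

The clip is what keeps a noise-corrupted correction from jumping outside the peak's own bin, which is the failure mode that degrades unbounded interpolators at low SNR.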
Cross-View and Cross-Modal Contrastive Learning for Radar Object Detection
Qiaolong Qian; Yi Shi; Ruichao Hou; Haoyu Qin; Gangshan Wu
Pub Date: 2026-01-13 | DOI: 10.1109/LSP.2026.3653684 | IEEE Signal Processing Letters, vol. 33, pp. 594-598

Frequency-modulated continuous-wave radar is a cornerstone of advanced driver assistance systems thanks to its low cost and resilience to adverse weather. Yet the absence of explicit semantics makes radar annotation difficult, and the scarcity of large-scale labeled data limits the performance of radar perception models. To address this issue, we propose a self-supervised framework for object detection directly from Range-Azimuth-Doppler (RAD) cubes that learns transferable representations from unlabeled radar data. Specifically, we introduce cross-view contrastive learning to model correspondences among complementary views of the RAD cube, encouraging the network to capture spatial structure from multiple perspectives. In addition, an auxiliary cross-modal contrastive objective distills semantic knowledge from vision into radar. The joint objective integrates cross-view and cross-modal signals to strengthen radar feature representations. We further extend the framework to cross-domain pretraining using datasets from different sources. Experimental results demonstrate that the proposed method significantly improves radar object detection performance, especially with limited labeled data.
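The cross-view contrastive objective can be illustrated with a standard symmetric InfoNCE loss in numpy: matching rows of two view-embedding batches are positives, all other rows are negatives. The embeddings, temperature, and view pairing here are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Symmetric InfoNCE loss between two batches of view embeddings.

    z1[i] and z2[i] would be embeddings of two views of the same RAD
    cube; the loss pulls matching rows together and pushes the rest apart.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                 # temperature-scaled cosines
    idx = np.arange(len(z1))
    lse = np.log(np.exp(logits).sum(axis=1))
    loss_a = (lse - logits[idx, idx]).mean()         # anchor on view 1
    lse_t = np.log(np.exp(logits.T).sum(axis=1))
    loss_b = (lse_t - logits[idx, idx]).mean()       # anchor on view 2
    return 0.5 * (loss_a + loss_b)

# Aligned view pairs should score a much lower loss than mismatched ones.
rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))
aligned = info_nce(z, z + 0.01 * rng.standard_normal((8, 16)))
shuffled = info_nce(z, np.roll(z, 1, axis=0))
```

The same loss form serves the cross-modal objective by replacing the second view batch with vision-derived embeddings of the same scenes.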
Robust Exponential Hyperbolic Secant Algorithm for Active Control Against Impulsive Noise Environments
Tanveer Alam Khan; Somanath Pradhan
Pub Date: 2026-01-13 | DOI: 10.1109/LSP.2026.3653694 | IEEE Signal Processing Letters, vol. 33, pp. 663-667

The effectiveness of conventional active noise control (ANC) systems deteriorates significantly in impulsive noise environments. Over the past few years, the hyperbolic family of adaptive filtering algorithms has been extensively applied to suppress impulsive noise. This work introduces a new exponential hyperbolic secant adaptive filter for active control operation, which is well suited to impulsive noise scenarios. Additionally, the stability condition with respect to the learning rate, the steady-state behavior, and the computational complexity are also studied. Simulation results based on measured acoustic paths demonstrate the efficiency of the proposed algorithm in strong and dynamic impulsive environments.
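A hedged sketch of the general mechanism, robust adaptation through a bounded error nonlinearity: tanh stands in for the letter's exponential hyperbolic secant cost, and a simple system-identification setup replaces a full ANC loop with secondary-path filtering. All parameter values and names are ours.

```python
import numpy as np

def robust_lms(x, d, n_taps=8, mu=0.05, scale=1.0):
    """LMS-type adaptation with a bounded error nonlinearity.

    Passing the error through tanh caps the update magnitude, so an
    isolated impulse in d cannot blow up the weight vector the way it
    does in plain LMS.
    """
    w = np.zeros(n_taps)
    for i in range(n_taps - 1, len(x)):
        u = x[i - n_taps + 1 : i + 1][::-1]   # regressor, newest first
        e = d[i] - w @ u
        w += mu * np.tanh(e / scale) * u      # bounded gradient step
    return w

# Identify a known FIR path from data corrupted by rare large impulses.
rng = np.random.default_rng(1)
w_true = np.array([1.0, -0.5, 0.25, 0.1, 0.0, 0.0, 0.0, 0.0])
x = rng.standard_normal(5000)
d = np.convolve(x, w_true)[: len(x)]
impulses = rng.random(len(x)) < 0.005         # ~0.5% impulsive samples
d = d + impulses * 50.0 * rng.standard_normal(len(x))
w_hat = robust_lms(x, d)
```

With plain LMS the same 50-sigma impulses would inject updates roughly fifty times larger; the bounded nonlinearity limits each of them to a single-sample-sized step.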