Underwater images often exhibit common visual degradations, such as color distortion, loss of detail, and reduced sharpness, which inevitably compromise the effectiveness of underwater vision tasks. However, most underwater image restoration methods focus solely on learning degradation features from raw images, neglecting additional contextual information that could guide restoration, which limits the capability of deep models to restore image quality. In this letter, we propose Visual In-Context Learning (VICL) for underwater image restoration, which leverages degradation information from context to improve image quality. In VICL, a Degraded Context Extraction Block (DCEB) employs a self-attention mechanism to extract degradation information from context. In addition, a Context Spatial Feature Fusion Block (CSFFB) consists of a Degraded Context Guidance Block (DCGB) and a Multi-Feature Fusion Block (MFFB). The DCGB employs a cross-attention mechanism to fuse the degraded context with spatial features to guide restoration, while the MFFB replaces traditional encoder-decoder skip connections to better coordinate feature fusion. Extensive experiments on multiple underwater image benchmarks demonstrate that VICL outperforms state-of-the-art methods both quantitatively and visually.
Title: Visual In-Context Learning for Underwater Image Restoration
Authors: Guangqi Jiang;Ao Zhang;Yi Liu;Huibing Wang;Shoukun Xu
Pub Date: 2026-02-23 | DOI: 10.1109/LSP.2026.3667439
IEEE Signal Processing Letters, vol. 33, pp. 1072-1076.
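The DCGB-style guidance described above reduces to plain cross-attention with queries from spatial features and keys/values from the degraded context. The single-head form, random projections, and shapes below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(spatial, context, d_k):
    # Queries come from spatial features, keys/values from the degraded
    # context, so restoration features attend to degradation cues.
    rng = np.random.default_rng(0)          # random projections, demo only
    Wq = rng.standard_normal((spatial.shape[-1], d_k))
    Wk = rng.standard_normal((context.shape[-1], d_k))
    Wv = rng.standard_normal((context.shape[-1], d_k))
    Q, K, V = spatial @ Wq, context @ Wk, context @ Wv
    A = softmax(Q @ K.T / np.sqrt(d_k))     # (n_spatial, n_context) weights
    return A @ V                            # context-guided features

spatial = np.random.default_rng(1).standard_normal((16, 32))  # 16 spatial tokens
context = np.random.default_rng(2).standard_normal((8, 32))   # 8 context tokens
out = cross_attention(spatial, context, d_k=32)
print(out.shape)  # (16, 32)
```

Each of the 16 spatial tokens is replaced by a weighted mixture of the 8 context values, which is the sense in which the context "guides" the spatial features.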
Pub Date: 2026-02-23 | DOI: 10.1109/LSP.2026.3666849
Chin-Hung Chen;Ivana Nikoloska;Wim van Houtum;Yan Wu;Alex Alvarado
This paper addresses the well-known local-maximum problem of the expectation-maximization (EM) algorithm in blind inter-symbol interference (ISI) channel estimation. The problem primarily arises from phase and shift ambiguities due to poor initialization, which blind EM estimation is inherently unable to resolve. We propose an effective initialization refinement algorithm that uses the decoder output as a metric for model selection. A finite set of candidate models is generated from the physical properties of the channel and modulation format, incorporating joint detection of the phase and shift ambiguities. Our results show that the proposed algorithm significantly reduces the number of local-maximum cases, to nearly one-third for a 3-tap ISI channel under highly uncertain initial conditions. The improvement becomes more pronounced as initial errors increase and the channel memory grows. When used in a turbo equalizer, the proposed algorithm is required only in the first turbo iteration, thereby limiting any increase in complexity in subsequent iterations.
Title: Physics-Aware Initialization Refinement in Code-Aided EM for Blind Channel Estimation
IEEE Signal Processing Letters, vol. 33, pp. 1018-1022.
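The candidate-generation and model-selection step can be illustrated for a toy 3-tap channel with a sign (phase) ambiguity and a circular-shift ambiguity. The scoring function below is an oracle surrogate standing in for the decoder-output metric of the paper:

```python
import numpy as np

def candidate_models(h_est):
    # Phase (sign, for BPSK-like constellations) and shift ambiguities of a
    # blind estimate: the true channel may be any sign-flipped circular
    # shift of h_est, so enumerate the finite candidate set.
    cands = []
    for phase in (1.0, -1.0):
        for s in range(len(h_est)):
            cands.append(phase * np.roll(h_est, s))
    return cands

def select_model(cands, metric):
    # `metric` stands in for the decoder-output score (higher is better).
    scores = [metric(h) for h in cands]
    return cands[int(np.argmax(scores))]

h_true = np.array([0.9, 0.4, 0.2])
h_blind = -np.roll(h_true, 1)        # blind EM result with sign and shift errors
best = select_model(candidate_models(h_blind),
                    metric=lambda h: -np.sum((h - h_true) ** 2))  # oracle, demo only
print(np.allclose(best, h_true))  # True
```

The point is that the ambiguity set is small and structured (here 2 phases x 3 shifts = 6 candidates), so a single decoder-driven scoring pass can pick out the physically consistent initialization.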
Pub Date: 2026-02-17 | DOI: 10.1109/LSP.2026.3665633
Bingqi Shan;Ju Wang
Compressive sensing (CS)-based sparse Bayesian learning (SBL) algorithms for random frequency and pulse repetition interval agile (RFPA) radar exhibit poor robustness under low signal-to-noise ratio (SNR) conditions. To address this issue, this letter proposes a waveform-design-enhanced SBL (WDE-SBL) method. The method integrates waveform design into the SBL framework, which employs low-complexity complex Gaussian priors, without increasing the computational load. Specifically, for the first time, this letter derives the analytical expression of the grid energy associated with the SBL algorithm for RFPA radar, clearly revealing the contributions of its individual components. Based on an in-depth analysis of this expression, the proposed WDE-SBL method constructs an objective function that suppresses the noise floor and designs the frequency-hopping sequence accordingly. Simulation results demonstrate that, by incorporating waveform design, the proposed WDE-SBL can clearly identify targets and accurately estimate target parameters under low SNR conditions where other CS-based algorithms fail.
Title: WDE-SBL: Waveform Design for SBL-Based Low-SNR Target Parameter Estimation in RFPA Radar
IEEE Signal Processing Letters, vol. 33, pp. 1008-1012.
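A rough feel for why the hop sequence matters: the CS dictionary is built from the hop code, and the off-diagonal energy of its Gram matrix sets the leakage floor that the waveform design tries to suppress. The phase law below is a simplified stand-in for the RFPA signal model, not the paper's derived grid-energy expression:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pulses = 64
hops = rng.integers(0, 16, n_pulses)   # random frequency-hopping code
t = np.arange(n_pulses)                # slow-time pulse index

def atom(nu):
    # Unit-norm dictionary column for grid point nu; the phase law is a
    # simplified stand-in for the RFPA range-Doppler signal model.
    a = np.exp(2j * np.pi * hops * nu * t / n_pulses)
    return a / np.linalg.norm(a)

grid = np.linspace(0.0, 1.0, 33)
A = np.stack([atom(nu) for nu in grid], axis=1)
G = np.abs(A.conj().T @ A)             # Gram matrix of the dictionary
leak = np.max(G - np.eye(len(grid)))   # worst off-grid energy leakage
print(round(float(leak), 3))
```

Different hop codes produce different leakage floors; the paper's contribution is choosing the code to minimize exactly this kind of off-target energy so that SBL stays robust at low SNR.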
Deep autoencoder (DAE) frameworks have demonstrated their effectiveness in reducing channel state information (CSI) feedback overhead in massive multiple-input multiple-output (mMIMO) orthogonal frequency division multiplexing (OFDM) systems. However, existing CSI feedback models struggle to adapt to dynamic environments caused by user mobility, requiring retraining when encountering new CSI distributions. Moreover, returning to previously encountered environments often leads to performance degradation due to catastrophic forgetting. Continual learning involves enabling models to incorporate new information while maintaining performance on previously learned tasks. To address these challenges, we propose a generative adversarial network (GAN)-based learning approach for CSI feedback. By using a GAN generator as a memory unit, our method preserves knowledge from past environments and ensures consistently high performance across diverse scenarios without forgetting. Simulation results show that the proposed approach enhances the generalization capability of the DAE framework while maintaining low memory overhead. Furthermore, it can be seamlessly integrated with other advanced CSI feedback models, highlighting its robustness and adaptability.
Title: Generative Model-Aided Continual Learning for CSI Feedback in FDD mMIMO-OFDM Systems
Authors: Guijun Liu;Yuwen Cao;Tomoaki Ohtsuki;Jiguang He;Shahid Mumtaz
Pub Date: 2026-02-17 | DOI: 10.1109/LSP.2026.3665655
IEEE Signal Processing Letters, vol. 33, pp. 1013-1017.
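At the training-loop level, generative replay of this kind reduces to mixing generator samples into each new-environment batch. The generator below is a random stub standing in for the trained GAN, and all shapes are illustrative:

```python
import numpy as np

def replay_mix(new_batch, generator, n_replay):
    # Generative replay: augment each new-environment batch with samples
    # drawn from a generator trained on past environments, so the CSI
    # autoencoder keeps seeing old distributions and avoids forgetting.
    replay = generator(n_replay)
    return np.concatenate([new_batch, replay], axis=0)

rng = np.random.default_rng(0)
gen = lambda n: rng.standard_normal((n, 64))   # stub for the GAN generator
new_csi = rng.standard_normal((32, 64))        # CSI vectors from the new environment
batch = replay_mix(new_csi, gen, n_replay=16)
print(batch.shape)  # (48, 64)
```

The memory overhead is the generator's parameters rather than a stored sample buffer, which is why the approach stays lightweight as the number of encountered environments grows.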
Pub Date: 2026-02-17 | DOI: 10.1109/LSP.2026.3665652
Ziqi Yan;Zhichao Zhang
Wiener filtering in the joint time–vertex fractional Fourier transform (JFRFT) domain has proven highly effective for denoising time-varying graph signals. The traditional filtering model uses grid search to determine the transform-order pair and compute the filter coefficients, while the learnable variant optimizes them by gradient descent; both require complete prior information about the graph signals. To overcome this shortcoming, this letter proposes a data–model co-driven denoising approach, termed neural-network-aided joint time-vertex fractional Fourier filtering (JFRFFNet), which embeds the JFRFT-domain filtering model into a neural network and updates the transform-order pair and filter coefficients in a data-driven manner. This design enables effective denoising using only partial prior information. Experiments demonstrate that JFRFFNet achieves significant improvements in output signal-to-noise ratio over several state-of-the-art methods.
Title: JFRFFNet: A Data–Model Co-Driven Graph Signal Denoising Model With Partial Prior Information
IEEE Signal Processing Letters, vol. 33, pp. 1038-1042.
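For intuition, the Wiener coefficients that such a filter learns have, in the classical setting, the closed form |S|^2 / (|S|^2 + noise power) per transform-domain bin. The sketch below uses the plain DFT as a stand-in for the JFRFT and oracle spectra in place of learned coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
clean = np.sin(2 * np.pi * 5 * t / n)      # exactly periodic tone
noisy = clean + 0.5 * rng.standard_normal(n)

S = np.fft.fft(clean)                      # oracle signal spectrum (demo only)
X = np.fft.fft(noisy)
noise_pow = 0.25 * n                       # E|N(k)|^2 for white noise, variance 0.25
H = np.abs(S) ** 2 / (np.abs(S) ** 2 + noise_pow)   # Wiener gains per bin
denoised = np.real(np.fft.ifft(H * X))

def snr_db(ref, est):
    return 10 * np.log10(np.sum(ref ** 2) / np.sum((ref - est) ** 2))

improved = snr_db(clean, denoised) - snr_db(clean, noisy)
print(improved > 0)  # True
```

JFRFFNet's contribution is precisely that it does not need the oracle spectra used here: the transform-order pair and the gains are learned from data with only partial prior information.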
Pub Date: 2026-02-13 | DOI: 10.1109/LSP.2026.3664272
Lin Shi;Xinyu Liu;Li Zhao;Haiyang Zhang;Zhanlin Ji
Agricultural diseased-leaf image segmentation is a critical technology for precision agriculture and intelligent crop protection. To overcome the limitations of current segmentation methods, such as imprecise leaf-edge extraction, difficulty in detecting small disease lesions, and insufficient robustness against complex backgrounds, this paper proposes a diseased-leaf segmentation method based on an enhanced visual state space model, named MSVM-UNet (Multi-Scale Spatial Attention Vision Mamba U-Net). The method employs an encoder-decoder framework and integrates improved Visual State Space (VSS) modules in both the encoder and decoder, enhancing long-range dependency modeling and local-global feature fusion. In addition, a Multi-Scale Spatial Attention (MSSA) module is introduced in the skip connections to enhance cross-scale feature representation and capture fine boundary details of disease spots. To simulate real field imaging conditions, images are randomly flipped horizontally or vertically and their hue, saturation, and brightness are randomly adjusted before training. Experimental results demonstrate that, compared with mainstream methods, MSVM-UNet achieves significant performance improvements in agricultural diseased-leaf segmentation, reaching 80.44% mIoU and 92.56% Dice on the validation set, providing a practical solution for intelligent agricultural disease monitoring.
Title: MSVM-UNet: Multi-Scale Spatial Attention Enhanced Vision Mamba U-Net for Agricultural Disease Segmentation
IEEE Signal Processing Letters, vol. 33, pp. 1003-1007.
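The augmentation pipeline described above reduces to a few lines. This sketch implements the flips plus a brightness jitter only; the jitter range and the omission of the hue/saturation adjustments are simplifications:

```python
import numpy as np

def augment(img, rng):
    # Random horizontal/vertical flips plus a simple brightness jitter,
    # a lightweight stand-in for the full hue/saturation/brightness jitter.
    if rng.random() < 0.5:
        img = img[:, ::-1]           # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]           # vertical flip
    gain = rng.uniform(0.8, 1.2)     # brightness factor (illustrative range)
    return np.clip(img * gain, 0.0, 1.0)

rng = np.random.default_rng(0)
leaf = rng.random((64, 64, 3))       # dummy RGB leaf image in [0, 1]
aug = augment(leaf, rng)
print(aug.shape)  # (64, 64, 3)
```

Applying the same spatial flips to the segmentation mask (but not the photometric jitter) keeps image and label aligned, which is the standard practice this pipeline assumes.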
Pub Date: 2026-02-11 | DOI: 10.1109/LSP.2026.3663968
Alavala Siva Sankar Reddy;Ram Bilas Pachori
Non-stationary signals with rapidly evolving and overlapping spectral components present challenges for obtaining an accurate time–frequency distribution (TFD). Conventional dynamic mode decomposition (DMD) extracts mode frequencies and damping information but has difficulty representing the TFD of highly non-stationary signals. Multi-resolution DMD (MR-DMD) partially addresses this limitation; however, employing a fixed embedding dimension across all scales may result in inadequate adaptation to local signal dynamics and reduced time–frequency resolution. This paper presents an adaptive MR-DMD (AMR-DMD) technique that automatically selects the embedding dimension at each decomposition level by jointly balancing spectral and temporal resolution, guided by a time–frequency uncertainty criterion derived from the Heisenberg principle, for the analysis of real-valued signals. The method is further extended to complex-valued signals by separating positive- and negative-frequency components for complete spectral characterization. Hilbert spectral analysis (HSA) is applied to the modes obtained from AMR-DMD to generate the TFD. Experimental results on real, synthetic, and complex-valued signals demonstrate that the proposed AMR-DMD-based HSA method produces improved TFDs, offering sharper localization, lower reconstruction error, and a higher-quality reconstruction factor than HSA techniques based on existing decomposition methods.
Title: Adaptive Multi-Resolution Dynamic Mode Decomposition for Non-Stationary Signal Analysis
IEEE Signal Processing Letters, vol. 33, pp. 998-1002.
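A minimal Hankel-embedded DMD run shows the role of the embedding dimension m that AMR-DMD adapts per level. Here m is fixed and the signal is a single noiseless tone, so the dominant eigenvalues of the one-step propagator should land at the tone frequency:

```python
import numpy as np

fs = 100.0
t = np.arange(0, 2, 1 / fs)
x = np.cos(2 * np.pi * 7 * t)        # single 7 Hz tone, noiseless

m = 20                               # embedding dimension (fixed here; AMR-DMD adapts it)
H = np.stack([x[i:i + m] for i in range(len(x) - m)], axis=1)  # Hankel embedding
X1, X2 = H[:, :-1], H[:, 1:]
A = X2 @ np.linalg.pinv(X1)          # one-step linear propagator (exact DMD)
eig = np.linalg.eigvals(A)
freqs = np.abs(np.angle(eig)) * fs / (2 * np.pi)
dom = freqs[np.argmax(np.abs(eig))]  # dominant (near unit-modulus) mode
print(round(float(dom), 1))  # 7.0
```

Larger m sharpens frequency resolution at the cost of temporal localization, which is exactly the spectral/temporal trade-off the paper's uncertainty criterion balances when choosing m at each decomposition level.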
Pub Date: 2026-02-09 | DOI: 10.1109/LSP.2026.3662578
He Zheng;Guimei Zheng;Fangqing Wen;Yuwei Song;Feilong Lv
In this letter, we propose a deep learning-based off-grid Direction of Arrival (DOA) estimation method for low Signal-to-Noise Ratio (SNR) scenarios. Specifically, we develop a dual-branch neural network with residual connections that processes frequency-domain features, consisting of a coarse classification branch and a fine regression branch. The classification branch employs a multi-label approach to obtain on-grid results, while the regression branch predicts the residual between the classification outputs and ground-truth angles. This structural design effectively leverages classification results to avoid convergence difficulties associated with direct off-grid angle prediction, thereby enhancing DOA estimation accuracy. Simulation results demonstrate that under low SNR conditions, the proposed method outperforms existing approaches, including both classical model-based and other deep learning-based methods.
Title: An Off-Grid DOA Estimation Method Based on a Frequency-Domain ViT
IEEE Signal Processing Letters, vol. 33, pp. 1028-1032.
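The coarse-to-fine readout that combines the two heads can be sketched as follows; the classification probabilities and residuals below are synthetic stand-ins for the network's predictions:

```python
import numpy as np

def decode_doa(class_probs, residuals, grid):
    # Coarse-to-fine readout: the classification head picks the nearest
    # on-grid angle, the regression head adds the predicted off-grid
    # residual for that grid cell.
    k = int(np.argmax(class_probs))
    return grid[k] + residuals[k]

grid = np.arange(-60.0, 61.0, 1.0)             # 1-degree on-grid angles
true_doa = 17.3
probs = np.exp(-0.5 * (grid - true_doa) ** 2)  # peaks at the nearest grid cell (17)
res = np.full_like(grid, 0.3)                  # ideal residual for this target
doa = decode_doa(probs, res, grid)
print(doa)  # 17.3
```

Because the regression head only has to predict a residual bounded by half the grid spacing, it avoids the convergence difficulties of regressing the full angle directly, which is the design rationale stated in the abstract.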
Pub Date: 2026-02-06 | DOI: 10.1109/LSP.2026.3661734
Zining Wang;Yao Li;Yancheng Li;Junjie Xu;Jianyong Zheng;Lu Sun
The security of wireless cyber-physical systems (CPSs) has attracted widespread attention over the past decade. Existing research mainly focuses on the specific attack paradigm of a single attacker or eavesdropper. In this letter, we consider a smart intruder with two options: eavesdropping and DoS attack. The intruder can switch between these options based on a global cost function constructed as a trade-off between the intruder's estimation error, its energy consumption, and the estimation error at the estimator side. By modeling the problem as a Markov decision process (MDP), we derive structural properties of the optimal schedule analytically and explicitly. We prove that the optimal intrusion schedule retains a threshold structure when the estimator's holding time is fixed. Furthermore, when the intruder's holding time is fixed, the optimal strategy consists of two cases, both of which follow a threshold structure while exhibiting opposing forms of the optimal solution.
Numerical examples and comparisons with state-of-the-art methods are presented to demonstrate the correctness and effectiveness of the proposed results.
Title: Optimal Hybrid Intrusion Schedule Against State Estimation for Cyber-Physical Systems
IEEE Signal Processing Letters, vol. 33, pp. 888-892.
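A threshold structure of this kind can be reproduced by value iteration on a toy intruder MDP where the state is the estimator's holding time, eavesdropping lets it grow, and a DoS attack resets it. The costs and deterministic transitions below are illustrative constructions, not the paper's model:

```python
import numpy as np

n, gamma = 10, 0.95
c_eav = np.linspace(0.0, 1.0, n)   # eavesdropping cost grows with holding time (illustrative)
c_dos = np.full(n, 0.6)            # flat DoS cost (illustrative)
nxt = np.minimum(np.arange(n) + 1, n - 1)

V = np.zeros(n)
for _ in range(500):               # value iteration to convergence
    Q_eav = c_eav + gamma * V[nxt]         # keep eavesdropping: holding time grows
    Q_dos = c_dos + gamma * V[0]           # DoS attack: resets the holding time
    V = np.minimum(Q_eav, Q_dos)

policy = (Q_dos < Q_eav).astype(int)       # 1 = attack, 0 = eavesdrop
print(policy)                              # switches from 0 to 1 once: a threshold
```

Because the eavesdropping cost is monotone in the state while the reset action's cost-to-go is constant, the optimal action can flip at most once as the holding time grows, which is the mechanism behind threshold-type optimal schedules.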
Pub Date: 2026-02-06 | DOI: 10.1109/LSP.2026.3661739
Haibo Jin;Baokai Zhang
Generalization of prognostic models across varying operating conditions is a primary challenge for methods that rely on transient signal morphology. We introduce the Geometric Deviation (GD), an information-theoretic health indicator derived from intrinsic system dynamics on a latent manifold. The GD quantifies the Kullback-Leibler divergence between the current state and a learned nominal trajectory, which is probabilistically modeled in stages on a manifold reconstructed via Isomap. Experimental results on the NASA battery dataset show that the GD-based prognostic approach outperforms state-of-the-art deep learning baselines by a notable margin in cross-condition generalization.
Title: Geometric Deviation: An Information-Theoretic Health Indicator for Cross-Condition Prognostics
IEEE Signal Processing Letters, vol. 33, pp. 873-877.
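For Gaussian stage models, the divergence behind an indicator like the GD has a closed form. The 1-D stage model and the numeric values below are simplifying assumptions, chosen only to show the indicator growing as the state leaves the nominal trajectory:

```python
import numpy as np

def kl_gauss(mu0, var0, mu1, var1):
    # KL( N(mu0, var0) || N(mu1, var1) ) for 1-D Gaussians; stands in for
    # the divergence between the current latent state and the learned
    # nominal stage model (1-D stage model is a simplifying assumption).
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

nominal = (0.0, 1.0)            # healthy-stage latent statistics
healthy_now = (0.1, 1.0)        # state still near the nominal trajectory
degraded_now = (2.5, 1.5)       # state drifting off the manifold
gd_h = kl_gauss(*healthy_now, *nominal)
gd_d = kl_gauss(*degraded_now, *nominal)
print(gd_h < gd_d)  # True
```

Because the divergence is computed against a learned nominal model rather than against condition-specific signal shapes, the indicator is insensitive to operating-condition changes that merely reparametrize the healthy trajectory, which is the intuition behind its cross-condition robustness.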