MIMO Through-the-Wall Radar Micro-Doppler Signature Augmentation Method Based on Multi-Channel Information Fusion
Weicheng Gao;Shui Liu;Jinshuo Wang;Xiaodong Qu;Xiaopeng Yang
IEEE Signal Processing Letters, vol. 33, pp. 579-583 (Pub Date: 2026-01-12, DOI: 10.1109/LSP.2026.3652951)
Through-the-wall radar (TWR) can monitor and analyze the motion characteristics and activity patterns of indoor human targets, offering non-contact operation, high flexibility, and privacy protection. However, existing TWR human activity recognition (HAR) techniques developed for single-channel radar capture limited Doppler information, making accurate recognition difficult when the direction of human motion is not parallel to the radar's observation direction. To solve this problem, this letter proposes a multi-input-multi-output (MIMO) TWR micro-Doppler signature augmentation method based on multi-channel information fusion. First, a multi-channel Doppler profile feature fusion method based on multi-scale wavelets with low-rank decomposition is presented. Then, a motion parameter estimation method based on Broyden-Fletcher-Goldfarb-Shanno (BFGS) global optimization is proposed, and the fused Doppler profile is transformed using the estimated orientation of human motion. Numerical simulations and measured experiments demonstrate the effectiveness of the proposed method.
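The BFGS motion-parameter estimation step can be illustrated with a toy fit. The model below (per-channel radial speed d_i = v*cos(theta - a_i) at channel aspect angle a_i) and all variable names are illustrative assumptions, not the letter's actual formulation:

```python
import numpy as np
from scipy.optimize import minimize

def estimate_motion(channel_angles, radial_speeds, x0=(1.0, 0.0)):
    """Fit speed v and heading theta from per-channel radial speeds
    d_i ~ v * cos(theta - a_i) via BFGS least squares."""
    a = np.asarray(channel_angles)
    d = np.asarray(radial_speeds)

    def loss(p):
        v, theta = p
        return np.sum((d - v * np.cos(theta - a)) ** 2)

    res = minimize(loss, x0, method="BFGS")
    return res.x  # (v_hat, theta_hat)

# synthetic check: true speed 1.4 m/s, heading 0.6 rad
a = np.linspace(0, np.pi / 2, 8)
d = 1.4 * np.cos(0.6 - a)
v_hat, th_hat = estimate_motion(a, d, x0=(1.0, 0.5))
```

With a reasonable initialization, the smooth least-squares surface lets plain BFGS recover speed and heading; a global-optimization wrapper, as the letter describes, would restart from multiple initial points to escape the cosine's sign ambiguity.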
LRCPN: A Lightweight Parallel Scheme for Underwater Acoustic Modulation Recognition
Bingzhang Wu;Shaoxuan Li;Ziyao Pan;Rongxin Zhang;Wei Su
IEEE Signal Processing Letters, vol. 33, pp. 516-520 (Pub Date: 2026-01-12, DOI: 10.1109/LSP.2026.3652122)
This letter proposes a lightweight parallel recurrent–convolutional scheme to improve generalization capability and recognition accuracy while maintaining low computational complexity in resource-constrained underwater acoustic channels. In this scheme, the lightweight convolutional network is used to extract time–frequency features, and the lightweight recurrent network with gated recurrent units is used to capture long-term temporal phase correlations, thereby alleviating the Doppler-induced phase rotation and inter-symbol interference in time-varying multipath underwater acoustic channels. Sea-trial data are collected during shallow-water sea trials with strictly separated training and evaluation datasets. Experimental results on ten underwater acoustic modulation types show that the proposed scheme improves recognition accuracy by 6.2% and reduces computational cost by 22.4%, while exhibiting stronger generalization capability compared with benchmark schemes.
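The gated recurrent unit (GRU) at the heart of the recurrent branch follows the standard update/reset-gate equations; a minimal NumPy sketch (dimensions and random weights are illustrative, not the paper's configuration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: update gate z, reset gate r, candidate state
    h_tilde, then convex interpolation between old and candidate."""
    z = sigmoid(Wz @ x + Uz @ h)
    r = sigmoid(Wr @ x + Ur @ h)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
dim_x, dim_h = 4, 3
# weight matrices for (Wz, Uz, Wr, Ur, Wh, Uh)
params = [rng.normal(scale=0.5, size=(dim_h, d)) for d in (dim_x, dim_h) * 3]
h = np.zeros(dim_h)
for t in range(10):
    h = gru_step(rng.normal(size=dim_x), h, *params)
```

The convex combination `(1 - z) * h + z * h_tilde` is what lets the unit carry long-term phase context across symbols while staying numerically bounded.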
Multimodal Cosine Similarity Transformer for Gloss-Guided Sign Language Recognition
Lu Li;Qinkun Xiao;Peiran Liu
IEEE Signal Processing Letters, vol. 33, pp. 673-677 (Pub Date: 2026-01-12, DOI: 10.1109/LSP.2026.3653403)
Continuous sign language recognition (CSLR) requires fine-grained alignment between visual sequences and gloss annotations under weak supervision, which is challenged by modality heterogeneity and ambiguous frame-to-gloss correspondence. We propose a Multimodal Cosine Similarity Transformer (MMCST) to address these issues. MMCST integrates RGB and keypoint heatmap features via gated fusion, and aligns them with gloss embeddings through a Gloss-Conditioned Cosine-Normalized Attention (GCNA) mechanism that stabilizes cross-modal alignment. To further enhance semantic consistency, we introduce Gloss-aware Contrastive Regularization (GLCR). The fused representation is modeled by a cosine-similarity Transformer and decoded with CTC. Experimental results show that MMCST achieves consistent improvements over strong baselines, and ablation studies confirm the effectiveness of gated fusion, GCNA, and GLCR in improving semantic alignment and yielding smoother training dynamics.
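Cosine-normalized attention of the kind GCNA builds on can be sketched directly; the temperature `tau` and the shapes here are assumptions for illustration, not the MMCST hyperparameters:

```python
import numpy as np

def cosine_attention(Q, K, V, tau=10.0):
    """Attention whose logits are cosine similarities: rows of Q and K
    are L2-normalized, so logits lie in [-1, 1] and tau sets sharpness."""
    Qn = Q / np.linalg.norm(Q, axis=-1, keepdims=True)
    Kn = K / np.linalg.norm(K, axis=-1, keepdims=True)
    logits = tau * (Qn @ Kn.T)
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

rng = np.random.default_rng(1)
out = cosine_attention(rng.normal(size=(5, 8)),   # 5 queries
                       rng.normal(size=(7, 8)),   # 7 keys
                       rng.normal(size=(7, 8)))   # 7 values
```

Because the logits are bounded in [-tau, tau], the softmax cannot saturate on a single large-magnitude key, which is the kind of stabilization the letter attributes to cosine normalization.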
Local-CGFC: A Local Cumulant Generating Function Classification Rule
Bo Tang;Steven Kay;Kaushallya Adhikari
IEEE Signal Processing Letters, vol. 33, pp. 546-550 (Pub Date: 2026-01-12, DOI: 10.1109/LSP.2026.3652119)
A classification rule based on the cumulant generating function of the training data, called the Cumulant Generating Function Classifier (CGFC), has recently been proposed and has shown promising performance in terms of classification accuracy and robustness against noise. This paper first presents a new information-theoretic explanation of the CGFC, showing that it performs classification by minimizing sample mutual information. The original CGFC is a global model; a new variant, called Local-CGFC, is introduced in this paper to obtain a local classification rule. Experimental studies on real-life datasets demonstrate the effectiveness of the proposed classifier and illustrate its potential for a range of real-world applications.
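One plausible way to classify with an empirical cumulant generating function (CGF) — not necessarily the CGFC's exact rule — is a large-deviations comparison: assign a sample to the class under which its empirical rate function is smallest, i.e. under which it is least surprising. Everything below is an illustrative sketch:

```python
import numpy as np

def empirical_cgf(data, t_grid):
    """K(t) = log mean(exp(t * x)) over a grid of t values (1-D data)."""
    return np.log(np.mean(np.exp(np.outer(t_grid, data)), axis=1))

def rate_classify(x, class_data, t_grid=np.linspace(-2, 2, 81)):
    """Assign x to the class with the smallest empirical rate function
    I_c(x) = max_t (t*x - K_c(t))."""
    scores = []
    for data in class_data:
        K = empirical_cgf(data, t_grid)
        scores.append(np.max(t_grid * x - K))
    return int(np.argmin(scores))

rng = np.random.default_rng(2)
c0 = rng.normal(-1.0, 1.0, 500)   # class 0 training samples
c1 = rng.normal(+1.0, 1.0, 500)   # class 1 training samples
label = rate_classify(1.2, [c0, c1])
```

The rate function is zero near a class mean and grows away from it, so the rule behaves like a likelihood test built purely from the training sample's CGF, without fitting a parametric density.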
Toward Detecting Hidden Functionalities in Deep Learning Models
Guobiao Li;Sheng Li;Zhenxing Qian;Xinpeng Zhang
IEEE Signal Processing Letters, vol. 33, pp. 541-545 (Pub Date: 2026-01-06, DOI: 10.1109/LSP.2026.3651083)
Deep functionality hiding is an emerging technique that embeds confidential or sensitive functions within seemingly benign deep learning models (DLMs), which perform ordinary machine learning tasks. This enables such models to execute covert tasks while remaining undetected. Despite the rapid progress in deep functionality hiding, countermeasures remain unexplored. In this paper, we propose Distribution Offset Analysis (DOA), a novel method for detecting hidden functionalities in DLMs. Our key insight is that the weight distribution of a benign DLM typically follows a Gaussian distribution, whereas a container DLM with hidden functionalities exhibits notable statistical deviations from this Gaussian pattern. In our methodology, we first compute the distributional distance (i.e., offsets) between the model's weights and an ideal Gaussian distribution. We then fuse these offsets with weight features into a unified representation, which is subsequently used to train a meta-classifier for hidden functionality detection. Through extensive experiments, we demonstrate the effectiveness of the proposed DOA method, which achieves an average detection rate of over 87% against existing state-of-the-art deep functionality hiding techniques.
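The core "distribution offset" idea — measuring how far a model's weights deviate from a fitted Gaussian — can be sketched with a Kolmogorov-Smirnov distance. The KS statistic is one simple choice of distributional distance; the paper's actual feature construction may differ:

```python
import numpy as np
from scipy import stats

def gaussian_offset(weights):
    """KS distance between the weight sample and a Gaussian fitted by
    moments -- one simple 'distribution offset' feature."""
    w = np.asarray(weights).ravel()
    mu, sigma = w.mean(), w.std()
    return stats.kstest(w, "norm", args=(mu, sigma)).statistic

rng = np.random.default_rng(3)
benign = rng.normal(0, 0.02, 20000)          # init-like Gaussian weights
stego = np.concatenate([benign[:10000],      # half the weights replaced by
                        rng.uniform(-0.05, 0.05, 10000)])  # a payload-like part
```

A meta-classifier, as in the letter, would consume such offsets (together with raw weight features) across many layers and models rather than thresholding a single scalar.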
An Analytical Implementation of the Rosenblatt Transformation
Steven Kay;Kaushallya Adhikari;Kaan Icer
IEEE Signal Processing Letters, vol. 33, pp. 511-515 (Pub Date: 2026-01-06, DOI: 10.1109/LSP.2026.3651227)
A new approach to the analytical implementation of the Rosenblatt transformation is described. It leverages the properties of the empirical probability density function, which is the standard estimate of an unknown density. Its utility therefore lies in applications where training data are available for the unknown density. These applications include data-driven algorithms for detection/classification and other statistical signal processing problems where the underlying probabilistic description of the data is unknown. As an illustration, an application to anomaly detection is described in detail using Gaussian and radar datasets.
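In one dimension, the Rosenblatt transformation reduces to pushing data through the CDF; a sketch using the empirical CDF of a training sample (scalar case only — the full transformation chains conditional CDFs across dimensions):

```python
import numpy as np

def ecdf_transform(train, x):
    """Map x through the empirical CDF of the training sample; for data
    drawn from the same distribution the output is ~Uniform(0, 1)."""
    train = np.sort(np.asarray(train))
    return np.searchsorted(train, x, side="right") / train.size

rng = np.random.default_rng(4)
train = rng.standard_normal(5000)            # training data, density unknown
u = ecdf_transform(train, rng.standard_normal(2000))  # in-distribution test data
```

Under the null hypothesis that test data follow the training distribution, the outputs are approximately Uniform(0, 1); anomalies pile up near 0 or 1, which is what makes the transform useful for anomaly detection.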
A Grating Lobes Suppression Method for MIMO Imaging Radar Based on Phase-Coherence-Guided Adaptive Threshold Classification
Yiran Zhao;Jinze Li;Shisheng Guo;Zhuohang Shi
IEEE Signal Processing Letters, vol. 33, pp. 501-505 (Pub Date: 2026-01-06, DOI: 10.1109/LSP.2026.3651005)
The sparse array configuration of multi-input multi-output imaging radar leads to a severe grating-lobe problem in the imaging process, which significantly degrades final image quality. Although the traditional Phase Coherence Factor can partially mitigate these grating lobes, it suffers from limitations such as attenuation of the main-lobe energy. To overcome these drawbacks, this paper proposes a novel grating-lobe suppression method based on phase-coherence-guided adaptive threshold classification. This method first adaptively determines a classification threshold by analyzing the phase coherence features of the target main lobe. Using this threshold, all grids in the radar image are classified into two categories, and distinct schemes are applied to compute their respective weighting factors. Finally, grating lobes in the image are suppressed by weighting the original radar image. Both numerical simulations and field experiments confirm the effectiveness of the proposed method, which achieves a higher peak sidelobe ratio than conventional methods, demonstrating promising practical value.
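A phase-coherence weighting of the general family the letter starts from can be sketched as follows; this particular form (magnitude of the mean unit phasor across channels) is a common textbook variant, not necessarily the paper's Phase Coherence Factor definition:

```python
import numpy as np

def phase_coherence_factor(channel_signals, gamma=1.0):
    """Per-pixel coherence weight from per-channel complex returns:
    unit-magnitude phasors are averaged; aligned phases give |mean|
    near 1, scattered phases give |mean| near 0."""
    phasors = channel_signals / np.abs(channel_signals)
    coherence = np.abs(np.mean(phasors, axis=0))
    return np.clip(coherence, 0.0, 1.0) ** gamma

rng = np.random.default_rng(5)
n_ch = 16
target = np.exp(1j * rng.normal(0.0, 0.05, n_ch))        # nearly aligned phases
grating = np.exp(1j * rng.uniform(-np.pi, np.pi, n_ch))  # scrambled phases
```

At a true target all channels agree in phase and the weight stays near 1; at a grating lobe the phases scatter and the weight collapses, so multiplying the image by this factor suppresses the lobe — at the cost of some main-lobe attenuation, the limitation the letter's adaptive classification targets.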
DDPTA: Zero-Shot Learning for Skeleton-Based Action Recognition
Jinjie Wang;Bi Zeng;Shenghong Zhong;Pengfei Wei;Xiaoting Gao
IEEE Signal Processing Letters, vol. 33, pp. 506-510 (Pub Date: 2026-01-02, DOI: 10.1109/LSP.2025.3650464)
Traditional skeleton-based action recognition methods rely on large labeled datasets, which are costly to collect and unsuitable for hazardous actions, thereby limiting generalization. To overcome these limitations, recent works adopt zero-shot learning by using rich textual descriptions to guide the alignment and recognition of unlabeled skeleton features. However, these methods still struggle with similar actions (e.g., reading vs. writing), due to ambiguity arising from noise in both modalities. We propose the Discriminative Dual-Prototype Text Alignment (DDPTA) framework. Our framework introduces a novel dual-prototype design with tailored refinement strategies to effectively distill these two complementary prototypes. For the Spatial Prototype, our CycleSpatial module first distills the action's core joint form from noisy spatial features, which is then guided by a Sieve-based Alignment. For the Temporal Prototype, our MambaTempo module leverages the Selective State Space Model to extract representations across distinct temporal stages, enabling fine-grained alignment with descriptions of different time periods. Extensive experiments demonstrate the superior performance of our method, showcasing its effectiveness in advancing the field of zero-shot skeleton-based action recognition.
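The zero-shot decision rule that such skeleton-text alignment ultimately reduces to is nearest-prototype matching in a shared embedding space; the 3-D embeddings and class names below are toy assumptions:

```python
import numpy as np

def zero_shot_classify(skeleton_feat, text_protos):
    """Nearest-prototype rule: cosine similarity between a skeleton
    embedding (d,) and each class's text embedding, rows of (C, d)."""
    f = skeleton_feat / np.linalg.norm(skeleton_feat)
    P = text_protos / np.linalg.norm(text_protos, axis=1, keepdims=True)
    return int(np.argmax(P @ f))

protos = np.array([[1.0, 0.0, 0.0],    # e.g. text embedding for "reading"
                   [0.0, 1.0, 0.0],    # e.g. "writing"
                   [0.0, 0.0, 1.0]])   # e.g. "waving"
pred = zero_shot_classify(np.array([0.2, 0.9, 0.1]), protos)
```

Unseen classes are recognized simply by adding their text embeddings as new prototype rows — no skeleton training data for those classes is required, which is the point of the zero-shot setting.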
An Improved Sufficient Condition for Weighted $\ell_{r}-\ell_{1}$ Minimization
Jianwen Huang;Feng Zhang;Xinling Liu;Runbin Tang;Jinping Jia;Runke Wang
IEEE Signal Processing Letters, vol. 33, pp. 698-702 (Pub Date: 2026-01-01, DOI: 10.1109/LSP.2025.3650438)
The weighted $\ell_{r}-\ell_{1}$ minimization with weight $\alpha$ has been widely employed to robustly estimate a high-dimensional sparse signal $x$ from the underdetermined linear measurements $y=Ax+z$, where $A$ and $z$ are the measurement matrix and noise, respectively. In this paper, we demonstrate that if the restricted isometry constant (RIC) $\delta_{s}$ of $A$ satisfies $\delta_{s} < 1/(1+3t/\sqrt{5})$, where $t$ depends on the sparsity level $s$ for known model parameters $\alpha$ and $r$, then any sparse signal $x$ is guaranteed to be robustly reconstructed by solving the weighted $\ell_{r}-\ell_{1}$ minimization in the noisy setting. The obtained condition is shown to be much better than the state-of-the-art ones.
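The RIC $\delta_{s}$ in the sufficient condition is the smallest $\delta$ with $(1-\delta)\|x\|_{2}^{2} \leq \|Ax\|_{2}^{2} \leq (1+\delta)\|x\|_{2}^{2}$ for all $s$-sparse $x$. Computing it exactly is intractable, but a Monte-Carlo probe gives a lower bound; the Gaussian matrix and its $1/\sqrt{m}$ scaling below are illustrative choices:

```python
import numpy as np

def ric_probe(A, s, trials=2000, seed=0):
    """Monte-Carlo lower bound on the order-s RIC of A: worst observed
    deviation of ||Ax||^2 from 1 over random s-sparse unit vectors
    (a probe, not a certified constant)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    worst = 0.0
    for _ in range(trials):
        idx = rng.choice(n, size=s, replace=False)
        x = np.zeros(n)
        x[idx] = rng.standard_normal(s)
        x /= np.linalg.norm(x)
        worst = max(worst, abs(np.linalg.norm(A @ x) ** 2 - 1.0))
    return worst

rng = np.random.default_rng(6)
m, n = 128, 256
A = rng.standard_normal((m, n)) / np.sqrt(m)   # scaled for near-isometry
delta_lb = ric_probe(A, s=5)
```

If even this lower bound exceeds a sufficient-condition threshold of the letter's form, the guarantee cannot be invoked for that matrix; a small probe value is necessary but not sufficient, since the true RIC maximizes over all supports.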
Interpreting the Trispectrum as the Cross-Spectrum of the Wigner-Ville Distribution
Christopher K Kovach, Stephen V Gliske, Erin M Radcliffe, Sam Shipley, John A Thompson, Aviva Abosch
IEEE Signal Processing Letters, vol. 33, pp. 221-225 (Pub Date: 2026-01-01, Epub: 2025-12-05, DOI: 10.1109/lsp.2025.3640510)
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12829976/pdf/
The fourth-order time-invariant spectrum, or trispectrum, has a simple derivation as the cross-spectrum among frequency bands in the Wigner-Ville distribution (WVD). Viewed this way, the trispectrum gains intuitive meaning as a measure of the linear dependence of power across frequencies, which yields some insight into its structure and interpretation. We highlight, in particular, a two-dimensional subdomain as useful for identifying modulated oscillations when the modulating envelope is non-negative or lowpass. Spectral characteristics of the carrier and modulating signals are revealed along separate axes of a two-dimensional representation of this domain. The application of this framework, combined with a previously described additive decomposition technique for higher-order spectra, is demonstrated by the blind identification and separation of sleep spindles and beta bursts in EEG.
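A crude proxy for the quantity the letter studies — linear dependence of power across frequency bands — is the correlation of spectrogram power between two bins. This is only a second-order stand-in for the trispectral cross-spectrum, shown here to make the shared-envelope intuition concrete; the signal parameters are illustrative:

```python
import numpy as np
from scipy import signal

def band_power_correlation(x, fs, f1, f2, nperseg=256):
    """Correlation of spectrogram power between two frequency bins --
    a simple proxy for 'linear dependence of power across frequencies'."""
    f, _, S = signal.spectrogram(x, fs=fs, nperseg=nperseg, noverlap=192)
    i1, i2 = np.argmin(np.abs(f - f1)), np.argmin(np.abs(f - f2))
    return np.corrcoef(S[i1], S[i2])[0, 1]

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
env = 1.0 + 0.8 * np.sin(2 * np.pi * 1.0 * t)   # one slow, non-negative envelope
rng = np.random.default_rng(7)
x = (env * np.sin(2 * np.pi * 80 * t)           # two carriers sharing it
     + env * np.sin(2 * np.pi * 200 * t)
     + 0.1 * rng.standard_normal(t.size))
rho = band_power_correlation(x, fs, 80.0, 200.0)
```

Two carriers sharing one slow envelope produce strongly correlated band powers; the trispectral subdomain the letter highlights resolves this dependence jointly over carrier and envelope frequency, which a single correlation coefficient cannot.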