
Latest publications in IEEE Signal Processing Letters

Deep Unrolled Networks for Nonnegative Least Squares Problem: Analysis and Application
IF 3.9 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-03-02 | DOI: 10.1109/LSP.2026.3669434
Akash Sen;C.S. Sastry
The problem of nonnegative least squares (NNLS) has numerous applications in signal analysis. Recently, algorithm unrolling has gained significant attention due to its superior approximation results compared to iterative methods. In this paper, we discuss the NNLS problem in an interpretable data-driven setup using the unrolled proximal gradient descent method (UPGDM), and establish its analytical guarantees. An advantage of this method over its conventional counterparts is that, once the network is trained, it provides faster and better inference for input data. In particular, this paper provides convergence guarantees for the network by bounding the number of training samples required for zero training error. Further, it demonstrates the relevance of UPGDM through an application in Electrical Impedance Tomography.
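The iteration being unrolled can be sketched as follows: each network layer is one proximal gradient step for NNLS, with ReLU acting as the proximal operator of the nonnegativity constraint. A minimal NumPy sketch, assuming fixed untrained step sizes (the function name, step-size choice, and problem dimensions are illustrative, not from the paper):

```python
import numpy as np

def upgdm_forward(A, b, step_sizes):
    """Forward pass of an unrolled proximal gradient network for NNLS.

    Each 'layer' is one proximal gradient step on 0.5*||Ax - b||^2 subject
    to x >= 0; the proximal operator of the nonnegativity constraint is
    ReLU. In a trained UPGDM the per-layer step sizes would be learned;
    here they are fixed to 1/L for illustration.
    """
    x = np.zeros(A.shape[1])
    for eta in step_sizes:
        grad = A.T @ (A @ x - b)             # gradient of the data term
        x = np.maximum(x - eta * grad, 0.0)  # ReLU = prox of x >= 0
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.maximum(rng.standard_normal(10), 0.0)  # nonnegative ground truth
b = A @ x_true
eta = 1.0 / np.linalg.norm(A, 2) ** 2              # classical 1/L step size
x_hat = upgdm_forward(A, b, [eta] * 500)           # 500 unrolled layers
print(np.linalg.norm(A @ x_hat - b))               # residual near zero
```

In a learned version, the per-layer step sizes (and possibly per-layer matrices) become trainable parameters, which is what makes the network both interpretable and faster at inference than running the iteration to convergence.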
IEEE Signal Processing Letters, vol. 33, pp. 1150-1154. Citations: 0
Image Dehazing Using Patch-Wise Nonlinear Brightness Prior
IF 3.9 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-02-27 | DOI: 10.1109/LSP.2026.3668456
Xiaoyue Wu;Tianyi Lyu;Mingye Ju
Currently available dehazing methods, whether based on hand-crafted priors or learned from datasets, typically ignore the brightness consistency between hazy images and their dehazed results, which often leads to over-enhancement and color cast. To address this issue, we first investigate a patch-wise nonlinear brightness prior (PNBP) that explicitly characterizes the relationship between the brightness of hazy patches and that of their clear counterparts. By combining PNBP with the atmospheric scattering model, the single image dehazing problem can be recast as a restoration formula with only three parameters, substantially shrinking the solution space for haze removal. Under a multi-objective joint optimization that simultaneously considers information gain, exposure, and preservation of the pixel-histogram distribution, this restoration formula can directly produce high-quality dehazed images. Thanks to PNBP, our method inherits brightness consistency from the prior and thereby avoids the risk of over-enhancement while reducing the possibility of color cast. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art approaches in terms of defogging quality, robustness, and computational efficiency.
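The atmospheric scattering model the abstract builds on can be inverted directly once its parameters are known. A minimal sketch, assuming the atmospheric light and transmission map are given (estimating them via PNBP is the paper's actual contribution and is not reproduced here):

```python
import numpy as np

def dehaze(I, A, t, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    I: hazy image, A: atmospheric light, t: transmission map. The PNBP-based
    estimation of these parameters is not reproduced; A and t are assumed
    known for this illustration.
    """
    t = np.maximum(t, t_min)   # floor t to avoid amplifying noise in dense haze
    return (I - A) / t + A

# synthetic round trip: hazing a clear image and then inverting recovers it
rng = np.random.default_rng(1)
J = rng.uniform(0.0, 1.0, size=(4, 4))   # "clear" image
A, t = 0.9, np.full((4, 4), 0.6)         # uniform haze parameters
I = J * t + A * (1 - t)
print(np.allclose(dehaze(I, A, t), J))   # True
```

The "only three parameters" claim in the abstract corresponds to collapsing this recovery into a parametric restoration formula, so the search space is the three parameters rather than a full transmission map.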
IEEE Signal Processing Letters, vol. 33, pp. 1140-1144. Citations: 0
Fuzzy Measure-Guided Semi-Supervised Breast Cancer Image Segmentation Network
IF 3.9 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-02-26 | DOI: 10.1109/LSP.2026.3668611
Ran Zhang;Kaihong Guo
Histopathological breast cancer images often suffer from structural heterogeneity and unclear or complex boundaries. Acquiring pixel-level annotations is costly, limiting the effectiveness and generalizability of traditional segmentation methods. To address these challenges, we propose the Fuzzy Measure-Guided Semi-Supervised Breast Cancer Image Segmentation Network (FuzMGNet). This approach combines fuzzy measures with Convolutional Recurrent Neural Networks (CRNN) and employs Choquet integral-based non-additive feature fusion. A pseudo-labeling guidance mechanism is used to improve boundary delineation. FuzMGNet captures multi-scale contextual information through hierarchical convolutional encoding and spatial modeling using recurrent units. The fuzzy measure dynamically adjusts feature fusion strategies, enhancing the network's adaptability across different images. The Choquet integral strengthens the model's ability to handle complex dependencies, improving segmentation accuracy. Finally, the pseudo-labeling mechanism enables effective training with limited labeled data. Experimental results show that FuzMGNet significantly outperforms traditional deep learning segmentation methods on the MIAS, BreakHis, and BACH datasets.
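The Choquet integral mentioned in the abstract is the discrete integral of scores with respect to a fuzzy measure; unlike a weighted sum, it can encode interactions between features. A minimal sketch with a hypothetical two-feature measure (the numbers are illustrative, not from the paper):

```python
import numpy as np

def choquet(values, mu):
    """Discrete Choquet integral of feature scores w.r.t. a fuzzy measure.

    mu maps a frozenset of feature indices to its measure, with
    mu(frozenset()) = 0 and monotonicity. Sorting by score and weighting by
    marginal measures is what lets the fusion be non-additive.
    """
    order = np.argsort(values)[::-1]        # indices by descending score
    total, prev = 0.0, 0.0
    coalition = set()
    for i in order:
        coalition.add(int(i))
        m = mu(frozenset(coalition))
        total += values[i] * (m - prev)     # weight by the marginal measure
        prev = m
    return total

# toy two-feature measure with positive interaction (hypothetical numbers)
measure = {frozenset(): 0.0, frozenset({0}): 0.3,
           frozenset({1}): 0.4, frozenset({0, 1}): 1.0}
print(choquet(np.array([0.8, 0.5]), measure.__getitem__))  # 0.8*0.3 + 0.5*0.7 = 0.59
```

Because mu({0, 1}) exceeds mu({0}) + mu({1}) here, the two features jointly count for more than their individual weights would suggest, which is the kind of dependency an additive fusion rule cannot express.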
IEEE Signal Processing Letters, vol. 33, pp. 1145-1149. Citations: 0
Multi-View Manifold-Adaptive Kernel Regression for Speech Classification From EEG Signals
IF 3.9 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-02-26 | DOI: 10.1109/LSP.2026.3668169
Xie He;Qi Cui;Chang Wu;Yong Peng;Wanzeng Kong
Decoding speech intentions from electroencephalogram (EEG) data is the primary task in speech brain–computer interface (BCI) systems. It remains challenging due to unclear discriminative task-aware features and underlying nonlinear properties, in addition to the well-known low signal-to-noise ratio of EEG data. Existing approaches typically rely either on single-domain features or on feature learning by deep neural networks; therefore, they either fail to capture comprehensive signal patterns, or require large amounts of EEG data to fit their parameter spaces and often have limited interpretability. To address these limitations, we propose a Multi-view Manifold-Adaptive Kernel Regression (MMKR) model for speech recognition from EEG signals. By treating temporal, spectral, and statistical EEG representations as complementary feature views, view-specific manifold-adaptive kernels are constructed in MMKR to incorporate local graph structure into kernel similarity; in addition, a data-driven adaptive view weighting mechanism is used to characterize their contributions. We evaluate MMKR on both overt and imagined speech EEG datasets, and the results demonstrate that MMKR achieves superior classification accuracy and robustness compared to representative single-view, multi-view, and kernel-based baselines. Moreover, analyses of the local manifold-modulated kernel matrix and the learned view contributions are provided.
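The two ingredients described above, graph-modulated per-view kernels and weighted view combination, can be sketched as follows. Masking an RBF kernel by a k-NN graph is a simplified stand-in for the paper's manifold-adaptive kernels, and the view weights are fixed here rather than learned (dimensions and parameters are illustrative):

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gaussian RBF kernel matrix for row-vector samples X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def knn_adjacency(K, k):
    """Symmetric k-NN graph built from kernel similarities."""
    n = K.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(K[i])[::-1][1:k + 1]  # top-k neighbours, self excluded
        A[i, nbrs] = 1.0
    return np.maximum(A, A.T)

def manifold_kernel(X, gamma, k):
    """RBF kernel masked by the local graph, so similarity is kept only
    along manifold edges (a simplified stand-in for the paper's
    manifold-adaptive kernels)."""
    K = rbf_kernel(X, gamma)
    A = knn_adjacency(K, k)
    np.fill_diagonal(A, 1.0)
    return K * A

# combine per-view kernels with view weights (fixed here; learned in MMKR)
rng = np.random.default_rng(2)
views = [rng.standard_normal((8, d)) for d in (5, 3, 4)]  # temporal/spectral/statistical
weights = [0.5, 0.3, 0.2]
K = sum(w * manifold_kernel(X, gamma=0.5, k=3) for w, X in zip(weights, views))
print(K.shape)  # (8, 8)
```

The combined matrix K would then feed a standard kernel regression; making `weights` data-driven is the adaptive view weighting the abstract describes.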
IEEE Signal Processing Letters, vol. 33, pp. 1077-1081. Citations: 0
MIMO Radar Waveform Design in Spectrum-Crowded Environments With Uncertain Steering Vectors
IF 3.9 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-02-24 | DOI: 10.1109/LSP.2026.3667841
Meiyingzi Xu;Wei Yang;Wenpeng Zhang
The growing density of communication and radar devices renders multiple-input multiple-output (MIMO) radar systems susceptible to severe detection performance degradation caused by even slight steering vector mismatches. To ensure robust detection under such mismatches, this paper presents a robust waveform design approach based on a steering vector uncertainty-constrained Max-Min signal-to-interference-plus-noise ratio (SINR) formulation. Compared to conventional Max-SINR designs, the proposed method optimizes waveforms that maintain high SINR even in the presence of steering vector errors. The problem incorporates constraints for spectral compatibility and peak-to-average power ratio (PAPR). To solve this non-convex problem, we develop an efficient iterative algorithm that employs successive convex approximation (SCA) to transform the original problem into a sequence of convex subproblems, which are then solved in parallel via the alternating direction method of multipliers (ADMM). Numerical simulations show a reduction in convergence time of up to 30% compared to existing techniques.
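Of the constraints mentioned in the abstract, the PAPR one is easy to make concrete. The sketch below computes the peak-to-average power ratio and enforces a limit by magnitude clipping with energy renormalization; this is a common heuristic, not the paper's ADMM-based projection:

```python
import numpy as np

def papr(s):
    """Peak-to-average power ratio of a discrete waveform."""
    p = np.abs(s) ** 2
    return p.max() / p.mean()

def clip_to_papr(s, max_papr):
    """Push a waveform toward a PAPR limit by magnitude clipping, keeping
    total energy and per-sample phase. A simple heuristic, not the ADMM
    step used in the paper.
    """
    energy = np.sum(np.abs(s) ** 2)
    peak = np.sqrt(max_papr * energy / len(s))  # peak amplitude the limit allows
    clipped = np.where(np.abs(s) > peak, peak * np.exp(1j * np.angle(s)), s)
    clipped = clipped * np.sqrt(energy / np.sum(np.abs(clipped) ** 2))
    return clipped

rng = np.random.default_rng(3)
s = rng.standard_normal(64) + 1j * rng.standard_normal(64)
print(papr(s), papr(clip_to_papr(s, 2.0)))  # clipping lowers the PAPR
# a unimodular (constant-modulus) code attains the minimum PAPR of 1
print(papr(np.exp(1j * rng.uniform(0, 2 * np.pi, 64))))
```

A low PAPR limit keeps transmit amplifiers in their linear range, which is why waveform designs like the one above trade some design freedom for hardware friendliness; the extreme case `max_papr = 1` forces a constant-modulus waveform.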
IEEE Signal Processing Letters, vol. 33, pp. 1136-1139. Citations: 0
UCSMC: An Underwater Compressed Sensing With Measurement Compression Framework
IF 3.9 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-02-23 | DOI: 10.1109/LSP.2026.3667068
Yanqi Zhang;Liquan Shen;Mengyao Li;Shiwei Wang;Junjie Zhu;Minjian Chen
Thriving ocean applications demand efficient underwater image compression over bandwidth-limited acoustic channels. Recent works combine compressed sensing with measurement compression to improve compression ratios. However, as underwater attenuation weakens structural cues, sampling methods tend to overlook structural information and yield poor reconstructions. Meanwhile, sampling leaves discrete measurements with weak intra-image correlations, making it difficult for entropy models within measurement compression to predict accurate probability distributions. In this paper, we propose an Underwater Compressed Sensing with Measurement Compression (UCSMC) framework comprising Sketch-Assisted Sampling (SAS) and Spatial-Dictionary-based Mixture Entropy Coding (SDMEC) for low-bit-rate reconstruction. Specifically, in sampling, we incorporate a sketch with underwater priors to drive the sampling process, steering more measurements toward critical structural regions and ultimately improving reconstruction quality. Additionally, we introduce a learnable spatial dictionary storing per-location entropy statistics in the underwater domain, which indicates local estimation difficulty and guides adaptive attention allocation in the entropy model, thereby improving probability estimation accuracy. Experimental results show our method outperforms previous schemes in reconstruction quality and measurement compression efficiency.
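The idea of steering more measurements toward structural regions can be sketched with block-based compressed sensing. Mean gradient magnitude stands in here for the paper's sketch guidance, and the rate rule is an assumption for illustration only:

```python
import numpy as np

def adaptive_block_cs(image, base_rate, blk=8):
    """Block-based compressed sensing that spends more measurements on
    structurally rich patches. Mean gradient magnitude stands in for the
    paper's sketch guidance (an assumption for illustration).
    """
    h, w = image.shape
    measurements = []
    for r in range(0, h, blk):
        for c in range(0, w, blk):
            patch = image[r:r + blk, c:c + blk]
            gy, gx = np.gradient(patch)
            edge = np.sqrt(gx ** 2 + gy ** 2).mean()   # structural richness
            rate = min(1.0, base_rate * (1 + edge))    # boost rate on edges
            m = max(1, int(rate * patch.size))
            Phi = np.random.default_rng(r * w + c).standard_normal((m, patch.size))
            measurements.append(Phi @ patch.ravel())   # y = Phi x
    return measurements

img = np.zeros((16, 16))
img[:, 4:] = 1.0                       # a single vertical structure
ms = adaptive_block_cs(img, base_rate=0.2)
print([len(m) for m in ms])            # blocks containing the edge get more rows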
IEEE Signal Processing Letters, vol. 33, pp. 1067-1071. Citations: 0
Parametric Chunk Quantization Algorithm for Fast Passive Emitter Localization
IF 3.9 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-02-23 | DOI: 10.1109/LSP.2026.3666895
Mingrui Wu;Huan Hao;Ran Tao
Passive emitter localization using airborne platforms presents a challenging grid-search problem, complicated by complex platform motion and unknown carrier frequency offsets. Conventional methods often address motion compensation and frequency offset correction as separate, inefficient, and potentially error-prone steps. This paper introduces the Parametric Chunk Quantization (PCQ) algorithm, a unified framework that accelerates the grid search while jointly compensating for both factors. Inspired by product quantization, PCQ divides the received signal and candidate phase histories into chunks, which are then parametrically approximated as linear frequency modulated (LFM) components. By leveraging a precomputed lookup table of inner products between these LFM surrogates, PCQ dramatically reduces the computational cost of the grid search. Simulations using real-world UAV trajectory data demonstrate that PCQ achieves significant acceleration over conventional methods while maintaining competitive localization accuracy. The proposed technique offers a generalizable approach for accelerating parameter estimation in problems involving piecewise-LFM signals.
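The lookup-table idea behind PCQ can be sketched as follows: build a small dictionary of unit-norm LFM chunk surrogates, precompute all pairwise inner products once, and then score candidates by summing table entries instead of recomputing chunk correlations per grid point. Dictionary sizes and grids below are hypothetical:

```python
import numpy as np

def lfm_chunk(n, f0, k):
    """Unit-norm LFM atom with start frequency f0 (cycles/sample) and
    chirp rate k."""
    t = np.arange(n)
    s = np.exp(2j * np.pi * (f0 * t + 0.5 * k * t ** 2))
    return s / np.sqrt(n)

# a small dictionary of LFM surrogates (sizes and grids are hypothetical)
n = 32
atoms = np.stack([lfm_chunk(n, f0, 0.001) for f0 in np.linspace(-0.2, 0.2, 9)])

# the PCQ idea: precompute all pairwise chunk inner products once...
table = atoms.conj() @ atoms.T          # table[a, b] = <atom_b, atom_a>

def approx_inner(sig_codes, cand_codes):
    """...then score any candidate against the signal as a sum of table
    lookups over chunks, instead of recomputing chunk correlations for
    every grid point."""
    return sum(table[a, b] for a, b in zip(sig_codes, cand_codes))

# when signal chunks coincide with dictionary atoms the lookup is exact:
# three matching unit-norm chunks give a correlation of 3
print(approx_inner([0, 3, 5], [0, 3, 5]))
```

The saving is the usual product-quantization trade: one table of size (dictionary)^2 is amortized over every grid point evaluated, at the cost of the LFM-approximation error per chunk.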
IEEE Signal Processing Letters, vol. 33, pp. 1062-1066. Citations: 0
Approximating Analytically-Intractable Likelihood Densities With Deterministic Arithmetic for Optimal Particle Filtering
IF 3.9 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-02-23 | DOI: 10.1109/LSP.2026.3664784
Orestis Kaparounakis;Yunqi Zhang;Phillip Stanley-Marbell
Particle filtering algorithms have enabled practical solutions to problems in autonomous robotics (self-driving cars, UAVs, warehouse robots), target tracking, and econometrics, with further applications in speech processing and medicine (patient monitoring). Yet, their inherent weakness at representing the likelihood of the observation (which often leads to particle degeneracy) remains unaddressed for real-time resource-constrained systems. Improvements such as the optimal proposal and auxiliary particle filter mitigate this issue under specific circumstances and with increased computational cost. This work presents a new particle filtering method and its implementation, which enables tunably-approximative representation of arbitrary likelihood densities as program transformations of parametric distributions. Our method leverages a recent computing platform that can perform deterministic computation on probability distribution representations (UxHw) without relying on stochastic methods. For non-Gaussian non-linear systems and with an optimal-auxiliary particle filter, we benchmark the likelihood evaluation error and speed for a total of 294 840 evaluation points. For such models, the results show that the UxHw method leads to as much as 37.7x speedup compared to the Monte Carlo alternative. For narrow uniform measurement uncertainty, the particle filter falsely assigns zero likelihood as much as 81.89% of the time whereas UxHw achieves 1.52% false-zero rate. The UxHw approach achieves filter RMSE improvement of as much as 18.9% (average 3.3%) over the Monte Carlo alternative.
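The false-zero failure mode the letter quantifies is easy to reproduce: under narrow uniform measurement noise, a standard particle filter assigns exactly zero weight to every particle whenever none lands inside the likelihood's support. A minimal sketch (the scalar model and numbers are illustrative, not the paper's benchmark):

```python
import numpy as np

def uniform_likelihood(y, particles, half_width):
    """Likelihood under narrow uniform measurement noise: nonzero only for
    particles within +/- half_width of the observation (the density is
    constant on its support, so an indicator suffices for weighting)."""
    return (np.abs(particles - y) <= half_width).astype(float)

rng = np.random.default_rng(4)
particles = rng.normal(0.0, 1.0, size=100)  # particles from a prior proposal
y = 5.0                                     # observation far in the tail
w = uniform_likelihood(y, particles, half_width=0.05)
print(w.sum())  # 0.0: no particle falls in the support, weights degenerate
```

At this point resampling is undefined (the weights cannot be normalized), which is the degeneracy that representing the likelihood as a distribution-valued computation, as in the UxHw approach, is meant to avoid.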
{"title":"Approximating Analytically-Intractable Likelihood Densities With Deterministic Arithmetic for Optimal Particle Filtering","authors":"Orestis Kaparounakis;Yunqi Zhang;Phillip Stanley-Marbell","doi":"10.1109/LSP.2026.3664784","DOIUrl":"https://doi.org/10.1109/LSP.2026.3664784","url":null,"abstract":"Particle filtering algorithms have enabled practical solutions to problems in autonomous robotics (self-driving cars, UAVs, warehouse robots), target tracking, and econometrics, with further applications in speech processing and medicine (patient monitoring). Yet, their inherent weakness at representing the likelihood of the observation (which often leads to particle degeneracy) remains unaddressed for real-time resource-constrained systems. Improvements such as the optimal proposal and auxiliary particle filter mitigate this issue under specific circumstances and with increased computational cost. This work presents a new particle filtering method and its implementation, which enables tunably-approximative representation of arbitrary likelihood densities as program transformations of parametric distributions. Our method leverages a recent computing platform that can perform deterministic computation on probability distribution representations (UxHw) without relying on stochastic methods. For non-Gaussian non-linear systems and with an optimal-auxiliary particle filter, we benchmark the likelihood evaluation error and speed for a total of 294 840 evaluation points. For such models, the results show that the UxHw method leads to as much as 37.7x speedup compared to the Monte Carlo alternative. For narrow uniform measurement uncertainty, the particle filter falsely assigns zero likelihood as much as 81.89% of the time whereas UxHw achieves 1.52% false-zero rate. 
The UxHw approach achieves filter RMSE improvement of as much as 18.9% (average 3.3%) over the Monte Carlo alternative.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"33 ","pages":"1033-1037"},"PeriodicalIF":3.9,"publicationDate":"2026-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147362525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
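The letter's headline failure mode — a particle filter falsely assigning zero likelihood under narrow uniform measurement uncertainty — is easy to reproduce without the UxHw platform. The plain-NumPy sketch below is illustrative only (the particle spread, half-width, and seed are assumptions, not the paper's benchmark): a bootstrap-style weight update collapses because the uniform density's narrow support misses almost every particle.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_likelihood(y, predicted, half_width):
    """Point evaluation of a uniform measurement density of the given
    half-width, centred on each particle's predicted observation."""
    inside = np.abs(y - predicted) <= half_width
    return inside / (2.0 * half_width)

# Toy 1-D state: 500 particles spread around the true state.
true_state = 0.0
particles = rng.normal(true_state, 1.0, size=500)
y = true_state + rng.uniform(-0.01, 0.01)  # narrow uniform measurement noise

w = uniform_likelihood(y, particles, half_width=0.01)
print("zero-weight fraction:", np.mean(w == 0.0))  # nearly all particles killed
```

With a half-width of 0.01 against unit-spread particles, almost every weight is exactly zero — the degeneracy the letter quantifies as an 81.89% false-zero rate for the Monte Carlo baseline.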
Reduced Complexity Blind Recognition Method of LDPC Codes Over a Candidate Set
IF 3.9 CAS Zone 2 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2026-02-23 DOI: 10.1109/LSP.2026.3667070
Zhuolun Wu;Yushan Zhang;Wei Zhang;Yanyan Liu
Adaptive modulation and coding (AMC) systems require the transmission of control signals, thereby reducing overall system transmission efficiency. Channel-coding blind recognition is key to solving this problem. This paper proposes a reduced-complexity method for blind recognition of low-density parity-check (LDPC) coding parameters within a given candidate set, thereby enhancing the operating efficiency of AMC systems. A code-rate-based classification step first screens the candidate set, avoiding superfluous computation over the candidate parity-check matrices. Furthermore, computational complexity is reduced by applying the offset min-sum algorithm (OMSA) in the parity-check stage. Subsequently, the Z-score is used to measure the difference between the observed data and the theoretical distribution. Compared with the best existing recognition methods, the proposed algorithm offers clear advantages in computational complexity while delivering virtually identical recognition performance.
{"title":"Reduced Complexity Blind Recognition Method of LDPC Codes Over a Candidate Set","authors":"Zhuolun Wu;Yushan Zhang;Wei Zhang;Yanyan Liu","doi":"10.1109/LSP.2026.3667070","DOIUrl":"https://doi.org/10.1109/LSP.2026.3667070","url":null,"abstract":"Adaptive modulation and coding (AMC) systems require the transmission of control signals, thereby reducing overall system transmission efficiency. The channel coding blind recognition technique is key to solving this problem. This paper proposes a reduced-complexity method for blind recognition of low-density parity-check (LDPC) coding parameters within a given candidate set, thereby enhancing the work efficiency of AMC systems. This paper applies a method based on code rate for classification evaluation, circumventing superfluous calculations for several candidate parity-check matrices. Furthermore, computational complexity is reduced by applying the offset min-sum algorithm (OMSA) to the parity-check stage. Subsequently, the Z-score is used to measure the difference between the actual data and the theoretical distribution. Compared with the best existing recognition methods, the proposed algorithm offers clear advantages in computational complexity and is virtually identical in recognition performance.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"33 ","pages":"1047-1051"},"PeriodicalIF":3.9,"publicationDate":"2026-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147362449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
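The statistical idea behind candidate screening can be illustrated without the full OMSA pipeline: a matched parity-check matrix satisfies almost all checks on received words, while a mismatched candidate's checks behave like fair coin flips, and a Z-score against the Binomial(n, 1/2) null separates the two. The sketch below is a hedged stand-in, not the letter's method — a toy (7,4) Hamming code replaces an LDPC code, and the "wrong" candidate matrix is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def zscore_satisfied_checks(H, words):
    """Z-score of the satisfied-check count against the Binomial(n, 1/2)
    behaviour expected when H does not match the transmitted code."""
    syndromes = (H @ words) % 2
    n = syndromes.size
    k = np.count_nonzero(syndromes == 0)
    return (k - 0.5 * n) / np.sqrt(0.25 * n)

# Systematic (7,4) Hamming code: H = [A | I3], G = [I4 | A^T].
A = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 1, 1]])
H_true = np.hstack([A, np.eye(3, dtype=int)])
G = np.hstack([np.eye(4, dtype=int), A.T])
H_wrong = (H_true + np.eye(3, 7, dtype=int)) % 2  # hypothetical mismatched candidate

# 400 random codewords (columns), passed through a 2% bit-flip channel.
info = rng.integers(0, 2, size=(4, 400))
words = (G.T @ info) % 2
words ^= (rng.random(words.shape) < 0.02).astype(int)

z_true = zscore_satisfied_checks(H_true, words)
z_wrong = zscore_satisfied_checks(H_wrong, words)
print(f"true H : z = {z_true:.1f}")   # large positive: checks mostly satisfied
print(f"wrong H: z = {z_wrong:.1f}")  # near zero: coin-flip behaviour
```

Screening each candidate with a statistic like this before any decoding is one way to see why restricting attention to a candidate set cuts computation.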
An Improved Time Series Similarity Measurement via Elliptical Information Granules
IF 3.9 CAS Zone 2 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2026-02-23 DOI: 10.1109/LSP.2026.3666904
Sheng Du;Chunyang Chu;Zixin Huang;Yunlong Wu;Witold Pedrycz
The role of granular computing in time series analysis is becoming increasingly important. Currently, there remains scope for enhancing the specificity with which information granules describe data. To improve this specificity, a novel elliptical information granule is designed for time series similarity measurement in this paper. An elliptical information granule is defined by its centre and its long and short semi-axes. Elliptical information granules can be constructed based on justifiability and specificity, guided by the principle of justifiable granularity. Multiple elliptical information granules are constructed using fuzzy C-means clustering and compactness principles. A time series similarity measurement method is then developed based on the geometric similarity of elliptical information granules. Experimental results show that the constructed elliptical information granules provide a more specific information description than rectangular information granules, while offering significant advantages in time series similarity measurement. The proposed method has significant potential for time series analysis and modelling.
{"title":"An Improved Time Series Similarity Measurement via Elliptical Information Granules","authors":"Sheng Du;Chunyang Chu;Zixin Huang;Yunlong Wu;Witold Pedrycz","doi":"10.1109/LSP.2026.3666904","DOIUrl":"https://doi.org/10.1109/LSP.2026.3666904","url":null,"abstract":"The role of granular computing in time series analysis is becoming increasingly important. Currently, there remains scope for enhancing the specificity of information description by information granules. To improve the specificity, a novel elliptical information granule is designed for time series similarity measurement in this paper. An elliptical information granule is defined by its centre and its long and short half-axes. Elliptical information granules can be constructed based on justifiability and specificity, guided by the principle of justifiable granularity. Multiple elliptical information granules are constructed using fuzzy C-means clustering and compactness principles. A time series similarity measurement method is then developed based on the geometric similarity of elliptical information granules. Experimental result shows that the constructed elliptical information granules provide a more specific information description compared to rectangular information granules, while offering significant advantages in time series similarity measurement. The proposed method has significant potential for time series analysis and modelling.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"33 ","pages":"1043-1046"},"PeriodicalIF":3.9,"publicationDate":"2026-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147362369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
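The letter's exact similarity definition is not reproduced in the abstract, but its geometric flavour can be sketched. In the hedged example below, an axis-aligned elliptical granule is defined by its centre and two semi-axes, and similarity is estimated as a grid-sampled intersection-over-union of the two ellipse regions — an illustrative stand-in for the paper's geometric measure, with the granule class and sampling density being assumptions.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class EllipticalGranule:
    """Axis-aligned elliptical information granule: centre (cx, cy)
    and long/short semi-axes (a, b)."""
    cx: float
    cy: float
    a: float
    b: float

    def contains(self, x, y):
        # Points inside or on the ellipse boundary.
        return ((x - self.cx) / self.a) ** 2 + ((y - self.cy) / self.b) ** 2 <= 1.0

def jaccard_similarity(g1, g2, n=400):
    """Geometric similarity as intersection-over-union of the two ellipse
    regions, estimated on an n-by-n grid covering both granules."""
    xmin = min(g1.cx - g1.a, g2.cx - g2.a)
    xmax = max(g1.cx + g1.a, g2.cx + g2.a)
    ymin = min(g1.cy - g1.b, g2.cy - g2.b)
    ymax = max(g1.cy + g1.b, g2.cy + g2.b)
    xs, ys = np.meshgrid(np.linspace(xmin, xmax, n), np.linspace(ymin, ymax, n))
    in1, in2 = g1.contains(xs, ys), g2.contains(xs, ys)
    return np.logical_and(in1, in2).sum() / np.logical_or(in1, in2).sum()

g1 = EllipticalGranule(0.0, 0.0, 2.0, 1.0)
g2 = EllipticalGranule(0.5, 0.0, 2.0, 1.0)  # same shape, shifted centre
g3 = EllipticalGranule(5.0, 5.0, 2.0, 1.0)  # disjoint granule

s_overlap = jaccard_similarity(g1, g2)
s_disjoint = jaccard_similarity(g1, g3)
print("overlapping granules:", s_overlap)   # substantial similarity
print("disjoint granules  :", s_disjoint)   # no overlap
```

Any measure of this kind rewards agreement in both centre and semi-axes, which is what lets granule-level comparison stand in for pointwise comparison of the underlying time series.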