Pub Date: 2026-03-02 | DOI: 10.1109/LSP.2026.3669434
Akash Sen;C.S. Sastry
The problem of nonnegative least squares (NNLS) has numerous applications in signal analysis. Recently, algorithm unrolling has gained significant attention due to its superior approximation results compared to iterative methods. In this paper, we discuss the NNLS problem in an interpretable data-driven setup using the unrolled proximal gradient descent method (UPGDM), and establish its analytical guarantees. An advantage of this method over its conventional counterparts is that, once the network is trained, it provides faster and better inference for input data. In particular, this paper provides convergence guarantees for the network by bounding the number of training samples needed for zero training error. Further, it demonstrates the relevance of UPGDM through an application in Electrical Impedance Tomography.
{"title":"Deep Unrolled Networks for Nonnegative Least Squares Problem: Analysis and Application","authors":"Akash Sen;C.S. Sastry","doi":"10.1109/LSP.2026.3669434","DOIUrl":"https://doi.org/10.1109/LSP.2026.3669434","url":null,"abstract":"The problem of <i>nonnegative least squares</i> (NNLS) has numerous applications in signal analysis. Recently, the algorithm unrolling has gained significant attention due to its superior approximation results compared to iterative methods. In this paper, we discuss the NNLS problem in an interpretable data-driven setup using <i>unrolled proximal gradient descent method</i> (UPGDM), and establish its analytical guarantees. An advantage of this method over its conventional counterparts is that it provides a faster and better inference for an input data, once the network is trained. In particular, this paper provides convergence guarantees of the network by bounding the number of training samples for zero training error. Further, it demonstrates relevance of UPGDM through an application in Electrical Impedance Tomography.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"33 ","pages":"1150-1154"},"PeriodicalIF":3.9,"publicationDate":"2026-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147440519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
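The base iteration that UPGDM unrolls into network layers is, at its core, projected gradient descent for NNLS: a gradient step on the least-squares term followed by the proximal operator of the nonnegativity constraint, which is simply projection onto the nonnegative orthant. A minimal NumPy sketch of that classical iteration (function name and fixed step size are illustrative; the unrolled network instead learns per-layer parameters from training data):

```python
import numpy as np

def nnls_pgd(A, b, n_iters=200):
    """Projected gradient descent for min_x ||Ax - b||^2 subject to x >= 0."""
    # Constant step size from the Lipschitz constant of the gradient.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)
        # Prox of the nonnegativity indicator = projection onto R_+^n.
        x = np.maximum(x - grad / L, 0.0)
    return x
```

Unrolling turns a fixed number of these iterations into network layers whose step sizes (and possibly matrices) become trainable parameters, which is what enables the faster trained-inference behavior described above.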
Pub Date: 2026-02-27 | DOI: 10.1109/LSP.2026.3668456
Xiaoyue Wu;Tianyi Lyu;Mingye Ju
Currently available dehazing methods, whether based on hand-crafted priors or learned from datasets, typically ignore the brightness consistency between hazy images and their dehazed results, which often leads to over-enhancement and color cast. To address this issue, we first investigate a patch-wise nonlinear brightness prior (PNBP) that explicitly characterizes the relationship between the brightness of hazy patches and that of their clear counterparts. By combining PNBP with the atmospheric scattering model, the single image dehazing problem can be recast as a restoration formula with only three parameters, substantially shrinking the solution space for haze removal. Under a multi-objective joint optimization that simultaneously considers information gain, exposure, and preservation of the pixel-histogram distribution, this restoration formula can directly produce high-quality dehazed images. Thanks to PNBP, our method inherits brightness consistency from the prior and thereby avoids the risk of over-enhancement while reducing the possibility of color cast. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art approaches in terms of defogging quality, robustness, and computational efficiency.
{"title":"Image Dehazing Using Patch-Wise Nonlinear Brightness Prior","authors":"Xiaoyue Wu;Tianyi Lyu;Mingye Ju","doi":"10.1109/LSP.2026.3668456","DOIUrl":"https://doi.org/10.1109/LSP.2026.3668456","url":null,"abstract":"Currently available dehazing methods, whether based on hand-crafted priors or learned from datasets, typically ignore the brightness consistency between hazy images and their dehazed results, which often leads to over-enhancement and color cast. To address this issue, we first investigate a patch-wise nonlinear brightness prior (PNBP) that explicitly characterizes the relationship between the brightness of hazy patches and that of their clear counterparts. By combining PNBP with the atmospheric scattering model, the single image dehazing problem can be recast as a restoration formula with only three parameters, substantially shrinking the solution space for haze removal. Under a multi-objective joint optimization that simultaneously considers information gain, exposure, and preservation of the pixel-histogram distribution, this restoration formula can directly produce high-quality dehazed images. Thanks to PNBP, our method inherits brightness consistency from the prior and thereby avoids the risk of over-enhancement while reducing the possibility of color cast. 
Extensive experiments demonstrate that the proposed method outperforms state-of-the-art approaches in terms of defogging quality, robustness, and computational efficiency.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"33 ","pages":"1140-1144"},"PeriodicalIF":3.9,"publicationDate":"2026-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147440546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
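For context, the atmospheric scattering model this letter builds on is I = J·t + A·(1 − t), where I is the hazy image, J the clear scene, t the transmission, and A the atmospheric light. Given estimates of t and A (which is where PNBP and the three-parameter restoration formula come in), dehazing amounts to inverting this model. A minimal sketch of the inversion, with an assumed lower bound `t_min` to keep the division stable:

```python
import numpy as np

def recover_scene(I, t, A, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    I : hazy image, floats in [0, 1], shape (H, W, 3)
    t : transmission map, shape (H, W)
    A : atmospheric light, length-3 vector
    """
    t = np.clip(t, t_min, 1.0)[..., None]  # floor t to avoid division blow-up
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)
```

The floor on t is the standard guard against amplifying noise in dense-haze regions; the letter's contribution lies in how t and A are constrained, not in this algebraic step.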
Pub Date: 2026-02-26 | DOI: 10.1109/LSP.2026.3668611
Ran Zhang;Kaihong Guo
Histopathological breast cancer images often suffer from structural heterogeneity and unclear or complex boundaries. Acquiring pixel-level annotations is costly, limiting the effectiveness and generalizability of traditional segmentation methods. To address these challenges, we propose the Fuzzy Measure-Guided Semi-Supervised Breast Cancer Image Segmentation Network (FuzMGNet). This approach combines fuzzy measures with Convolutional Recurrent Neural Networks (CRNN) and employs Choquet integral-based non-additive feature fusion. A pseudo-labeling guidance mechanism is used to improve boundary delineation. FuzMGNet captures multi-scale contextual information through hierarchical convolutional encoding and spatial modeling using recurrent units. The fuzzy measure dynamically adjusts feature fusion strategies, enhancing the network's adaptability across different images. The Choquet integral strengthens the model's ability to handle complex dependencies, improving segmentation accuracy. Finally, the pseudo-labeling mechanism enables effective training with limited labeled data. Experimental results show that FuzMGNet significantly outperforms traditional deep learning segmentation methods on the MIAS, BreakHis, and BACH datasets.
{"title":"Fuzzy Measure-Guided Semi-Supervised Breast Cancer Image Segmentation Network","authors":"Ran Zhang;Kaihong Guo","doi":"10.1109/LSP.2026.3668611","DOIUrl":"https://doi.org/10.1109/LSP.2026.3668611","url":null,"abstract":"Histopathological breast cancer images often suffer from structural heterogeneity and unclear or complex boundaries. Acquiring pixel-level annotations is costly, limiting the effectiveness and generalizability of traditional segmentation methods. To address these challenges, we propose the Fuzzy Measure-Guided Semi-Supervised Breast Cancer Image Segmentation Network (FuzMGNet). This approach combines fuzzy measures with Convolutional Recurrent Neural Networks (CRNN) and employs Choquet integral-based non-additive feature fusion. A pseudo-labeling guidance mechanism is used to improve boundary delineation. FuzMGNet captures multi-scale contextual information through hierarchical convolutional encoding and spatial modeling using recurrent units. The fuzzy measure dynamically adjusts feature fusion strategies, enhancing the network's adaptability across different images. The Choquet integral strengthens the model's ability to handle complex dependencies, improving segmentation accuracy. Finally, the pseudo-labeling mechanism enables effective training with limited labeled data. 
Experimental results show that FuzMGNet significantly outperforms traditional deep learning segmentation methods on the MIAS, BreakHis, and BACH datasets.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"33 ","pages":"1145-1149"},"PeriodicalIF":3.9,"publicationDate":"2026-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147440580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
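The Choquet integral at the heart of FuzMGNet's non-additive fusion aggregates feature scores with respect to a fuzzy measure, so interactions between feature subsets can be rewarded or penalized rather than summed independently. A small self-contained sketch of the standard discrete Choquet integral (the toy measure in the test is hypothetical; the paper derives its measure from the data):

```python
def choquet_integral(values, mu):
    """Discrete Choquet integral of `values` with respect to fuzzy measure `mu`.

    `mu` maps frozensets of feature indices to [0, 1], with mu(empty set) = 0,
    mu(all indices) = 1, and monotonicity under set inclusion.
    """
    order = sorted(range(len(values)), key=lambda i: values[i])
    remaining = set(order)
    total, prev = 0.0, 0.0
    for i in order:
        # Pay the increment (values[i] - prev), weighted by the measure of the
        # coalition of features whose score is at least values[i].
        total += (values[i] - prev) * mu[frozenset(remaining)]
        prev = values[i]
        remaining.discard(i)
    return total
```

For an additive measure this reduces to an ordinary weighted sum; non-additive measures are what give the fusion its ability to model the complex feature dependencies mentioned above.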
Pub Date: 2026-02-26 | DOI: 10.1109/LSP.2026.3668169
Xie He;Qi Cui;Chang Wu;Yong Peng;Wanzeng Kong
Decoding speech intentions from electroencephalogram (EEG) data is the primary task in speech brain–computer interface (BCI) systems. It remains challenging due to unclear discriminative task-aware features and underlying nonlinear properties, in addition to the well-known low signal-to-noise ratio of EEG data. Existing approaches typically rely either on single-domain features or on feature learning by deep neural networks; therefore, they either fail to capture comprehensive signal patterns, or require large EEG datasets to fit their parameter spaces and often have limited interpretability. To address these limitations, we propose a Multi-view Manifold-Adaptive Kernel Regression (MMKR) model for speech recognition from EEG signals. By treating temporal, spectral, and statistical EEG representations as complementary feature views, MMKR constructs view-specific manifold-adaptive kernels that incorporate local graph structure into kernel similarity; in addition, a data-driven adaptive view weighting mechanism characterizes the contribution of each view. We evaluate MMKR on both overt and imagined speech EEG datasets, and the results demonstrate that MMKR achieves superior classification accuracy and robustness compared to representative single-view, multi-view, and kernel-based baselines. Moreover, analyses of the local manifold-modulated kernel matrix and the learned view contributions are provided.
{"title":"Multi-View Manifold-Adaptive Kernel Regression for Speech Classification From EEG Signals","authors":"Xie He;Qi Cui;Chang Wu;Yong Peng;Wanzeng Kong","doi":"10.1109/LSP.2026.3668169","DOIUrl":"https://doi.org/10.1109/LSP.2026.3668169","url":null,"abstract":"Decoding speech intentions from electroencephalogram (EEG) data is the primary task in speech brain–computer interface (BCI) systems, which remains challenging due to the unclear discriminative task-aware features, and underlying nonlinear properties besides the well-known low signal-to-noise ratio of EEG data. Existing approaches typically rely either on single-domain features or performing feature learning by deep neural networks; therefore, they either fail to capture comprehensive signal patterns, or typically require large-sized EEG data to fit the parameter spaces and often have limited interpretability. To address these limitations, we propose a Multi-view Manifold-Adaptive Kernel Regression (MMKR) model for speech recognition from EEG signals in this paper. By treating temporal, spectral, and statistical EEG representations as complementary feature views, view-specific manifold-adaptive kernels are constructed in MMKR to incorporate local graph structure into kernel similarity; besides, a data-driven adaptive view weighting mechanism is used to characterize their contributions. We evaluate MMKR on both overt and imagined speech EEG datasets and the results demonstrate that MMKR achieves superior classification accuracy and robustness compared to some representative single-view, multi-view, and kernel-based baselines. 
Moreover, analysis on the local manifold-modulated kernel matrix and the learned view contributions are provided.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"33 ","pages":"1077-1081"},"PeriodicalIF":3.9,"publicationDate":"2026-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147362448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
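The basic multi-view construction — one kernel per feature view, combined with nonnegative weights before ridge-style regression — can be sketched as below. The plain RBF kernels and fixed weights are simplifications: MMKR additionally modulates each kernel by local graph/manifold structure and learns the view weights from data.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between row-sample matrices X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def multiview_krr(views, y, weights, lam=1e-2):
    """Kernel ridge regression on a convex combination of per-view kernels.

    views   : list of (n, d_v) feature matrices, one per view
    weights : nonnegative view weights (fixed here; learned in MMKR)
    """
    n = len(y)
    K = sum(w * rbf_kernel(V, V) for w, V in zip(weights, views))
    # Dual coefficients: (K + lam * I) alpha = y
    alpha = np.linalg.solve(K + lam * np.eye(n), y)
    return K, alpha
```

A test point's prediction is then the weighted-kernel row against the training samples times `alpha`, exactly as in single-view kernel ridge regression.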
Pub Date: 2026-02-24 | DOI: 10.1109/LSP.2026.3667841
Meiyingzi Xu;Wei Yang;Wenpeng Zhang
The growing density of communication and radar devices renders multiple-input multiple-output (MIMO) radar systems susceptible to severe detection performance degradation caused by even slight steering vector mismatches. To ensure robust detection under such mismatches, this paper presents a robust waveform design approach based on a steering vector uncertainty-constrained Max-Min signal-to-interference-plus-noise ratio (SINR) formulation. Compared to conventional Max-SINR designs, the proposed method optimizes waveforms that maintain high SINR even in the presence of steering vector errors. The problem incorporates constraints for spectral compatibility and peak-to-average power ratio (PAPR). To solve this non-convex problem, we develop an efficient iterative algorithm that employs successive convex approximation (SCA) to transform the original problem into a sequence of convex subproblems, which are then solved in parallel via the alternating direction method of multipliers (ADMM). Numerical simulations show a reduction in convergence time of up to 30% compared to existing techniques.
{"title":"MIMO Radar Waveform Design in Spectrum-Crowded Environments With Uncertain Steering Vectors","authors":"Meiyingzi Xu;Wei Yang;Wenpeng Zhang","doi":"10.1109/LSP.2026.3667841","DOIUrl":"https://doi.org/10.1109/LSP.2026.3667841","url":null,"abstract":"The growing density of communication and radar devices renders multiple-input multiple-output (MIMO) radar systems susceptible to severe detection performance degradation caused by even slight steering vector mismatches. To ensure robust detection under such mismatches, this paper presents a robust waveform design approach based on a steering vector uncertainty-constrained Max-Min signal-to-interference-plus-noise ratio (SINR) formulation. Compared to conventional Max-SINR designs, the proposed method optimizes waveforms that maintain high SINR even in the presence of steering vector errors. The problem incorporates constraints for spectral compatibility and peak-to-average power ratio (PAPR). To solve this non-convex problem, we develop an efficient iterative algorithm that employs successive convex approximation (SCA) to transform the original problem into a sequence of convex subproblems, which are then solved in parallel via the alternating direction method of multipliers (ADMM). Numerical simulations show a reduction in convergence time of up to 30% compared to existing techniques.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"33 ","pages":"1136-1139"},"PeriodicalIF":3.9,"publicationDate":"2026-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147440505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Thriving ocean applications demand efficient underwater image compression over bandwidth-limited acoustic channels. Recent works combine compressed sensing with measurement compression to improve compression ratios. However, as underwater attenuation weakens structural cues, sampling methods tend to overlook structural information and yield poor reconstructions. Meanwhile, sampling leaves discrete measurements with weak intra-image correlations, making it difficult for entropy models within measurement compression to predict accurate probability distributions. In this paper, we propose an Underwater Compressed Sensing with Measurement Compression (UCSMC) framework including Sketch-Assisted Sampling (SAS) and Spatial-Dictionary-based Mixture Entropy Coding (SDMEC) for low-bit-rate reconstruction. Specifically, in sampling, we incorporate a sketch informed by underwater priors to drive the sampling process, steering more measurements toward critical structural regions and ultimately improving reconstruction quality. Additionally, we introduce a learnable spatial dictionary storing per-location entropy statistics in the underwater domain, which indicates local estimation difficulty and guides adaptive attention allocation in the entropy model, thereby improving probability estimation accuracy. Experimental results show our method outperforms previous schemes in reconstruction quality and measurement compression efficiency.
{"title":"UCSMC: An Underwater Compressed Sensing With Measurement Compression Framework","authors":"Yanqi Zhang;Liquan Shen;Mengyao Li;Shiwei Wang;Junjie Zhu;Minjian Chen","doi":"10.1109/LSP.2026.3667068","DOIUrl":"https://doi.org/10.1109/LSP.2026.3667068","url":null,"abstract":"Thriving ocean applications demand efficient underwater image compression over bandwidth-limited acoustic channels. Recent works combine compressed sensing with measurement compression to improve compression ratios. However, as underwater attenuation weakens structural cues, sampling methods tend to overlook structural information and yield poor reconstructions. Meanwhile, sampling leaves discrete measurements with weak intra-image correlations, making it difficult for entropy models within measurement compression to predict accurate probability distributions. In this paper, we propose an Underwater Compressed Sensing with Measurement Compression (UCSMC) framework including Sketch-Assisted Sampling (SAS) and Spatial-Dictionary-based Mixture Entropy Coding (SDMEC) for low-bit-rate reconstruction. Specifically, in sampling, we incorporate sketch with underwater priors to drive the sampling process, steering more measurements toward critical structural regions and ultimately improving reconstruction quality. Additionally, we introduce a learnable spatial dictionary storing per-location entropy statistical characteristics in the underwater domain, which indicates local estimation difficulty and guides adaptive attention allocation in the entropy model, thereby improving probability estimation accuracy. 
Experimental results show our method outperforms previous schemes in reconstruction quality and measurement compression efficiency.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"33 ","pages":"1067-1071"},"PeriodicalIF":3.9,"publicationDate":"2026-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147362332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
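The sampling side of such a pipeline can be illustrated with a toy block-wise compressed-sensing scheme in which a crude structural cue stands in for the paper's sketch prior: blocks with more edge energy receive more random measurements. Everything here (block size, rates, the gradient-energy cue, the function name) is illustrative, not the SAS module itself:

```python
import numpy as np

def adaptive_block_sampling(img, block=8, base_rate=0.1, boost=0.3, rng=None):
    """Block-wise CS sampling: blocks with high gradient energy (a stand-in
    for a structural sketch prior) get a higher measurement rate."""
    rng = np.random.default_rng(rng)
    H, W = img.shape
    measurements = []
    for r in range(0, H, block):
        for c in range(0, W, block):
            patch = img[r:r + block, c:c + block]
            x = patch.ravel()
            # Crude structural cue: mean gradient magnitude inside the block.
            gy, gx = np.gradient(patch)
            rate = base_rate + boost * min(1.0, np.hypot(gx, gy).mean() * 4)
            m = max(1, int(rate * x.size))
            Phi = rng.standard_normal((m, x.size)) / np.sqrt(m)  # random sensing matrix
            measurements.append((Phi, Phi @ x))
    return measurements
```

The per-block measurement vectors are what would then be quantized and entropy-coded; the weak correlation between them is precisely what motivates the spatial-dictionary entropy model described above.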
Pub Date: 2026-02-23 | DOI: 10.1109/LSP.2026.3666895
Mingrui Wu;Huan Hao;Ran Tao
Passive emitter localization using airborne platforms presents a challenging grid-search problem, complicated by complex platform motion and unknown carrier frequency offsets. Conventional methods often address motion compensation and frequency offset correction as separate, inefficient, and potentially error-prone steps. This paper introduces the Parametric Chunk Quantization (PCQ) algorithm, a unified framework that accelerates the grid search while jointly compensating for both factors. Inspired by product quantization, PCQ divides the received signal and candidate phase histories into chunks, which are then parametrically approximated as linear frequency modulated (LFM) components. By leveraging a precomputed lookup table of inner products between these LFM surrogates, PCQ dramatically reduces the computational cost of the grid search. Simulations using real-world UAV trajectory data demonstrate that PCQ achieves significant acceleration over conventional methods while maintaining competitive localization accuracy. The proposed technique offers a generalizable approach for accelerating parameter estimation in problems involving piecewise-LFM signals.
{"title":"Parametric Chunk Quantization Algorithm for Fast Passive Emitter Localization","authors":"Mingrui Wu;Huan Hao;Ran Tao","doi":"10.1109/LSP.2026.3666895","DOIUrl":"https://doi.org/10.1109/LSP.2026.3666895","url":null,"abstract":"Passive emitter localization using airborne platforms presents a challenging grid-search problem, complicated by complex platform motion and unknown carrier frequency offsets. Conventional methods often address motion compensation and frequency offset correction as separate, inefficient, and potentially error-prone steps. This paper introduces the Parametric Chunk Quantization (PCQ) algorithm, a unified framework that accelerates the grid search while jointly compensating for both factors. Inspired by product quantization, PCQ divides the received signal and candidate phase histories into chunks, which are then parametrically approximated as linear frequency modulated (LFM) components. By leveraging a precomputed lookup table of inner products between these LFM surrogates, PCQ dramatically reduces the computational cost of the grid search. Simulations using real-world UAV trajectory data demonstrate that PCQ achieves significant acceleration over conventional methods while maintaining competitive localization accuracy. 
The proposed technique offers a generalizable approach for accelerating parameter estimation in problems involving piecewise-LFM signals.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"33 ","pages":"1062-1066"},"PeriodicalIF":3.9,"publicationDate":"2026-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147362231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Particle filtering algorithms have enabled practical solutions to problems in autonomous robotics (self-driving cars, UAVs, warehouse robots), target tracking, and econometrics, with further applications in speech processing and medicine (patient monitoring). Yet, their inherent weakness at representing the likelihood of the observation (which often leads to particle degeneracy) remains unaddressed for real-time resource-constrained systems. Improvements such as the optimal proposal and auxiliary particle filter mitigate this issue under specific circumstances and with increased computational cost. This work presents a new particle filtering method and its implementation, which enables tunably-approximative representation of arbitrary likelihood densities as program transformations of parametric distributions. Our method leverages a recent computing platform that can perform deterministic computation on probability distribution representations (UxHw) without relying on stochastic methods. For non-Gaussian non-linear systems and with an optimal-auxiliary particle filter, we benchmark the likelihood evaluation error and speed for a total of 294 840 evaluation points. For such models, the results show that the UxHw method leads to as much as 37.7x speedup compared to the Monte Carlo alternative. For narrow uniform measurement uncertainty, the particle filter falsely assigns zero likelihood as much as 81.89% of the time whereas UxHw achieves 1.52% false-zero rate. The UxHw approach achieves filter RMSE improvement of as much as 18.9% (average 3.3%) over the Monte Carlo alternative.
{"title":"Approximating Analytically-Intractable Likelihood Densities With Deterministic Arithmetic for Optimal Particle Filtering","authors":"Orestis Kaparounakis;Yunqi Zhang;Phillip Stanley-Marbell","doi":"10.1109/LSP.2026.3664784","DOIUrl":"https://doi.org/10.1109/LSP.2026.3664784","url":null,"abstract":"Particle filtering algorithms have enabled practical solutions to problems in autonomous robotics (self-driving cars, UAVs, warehouse robots), target tracking, and econometrics, with further applications in speech processing and medicine (patient monitoring). Yet, their inherent weakness at representing the likelihood of the observation (which often leads to particle degeneracy) remains unaddressed for real-time resource-constrained systems. Improvements such as the optimal proposal and auxiliary particle filter mitigate this issue under specific circumstances and with increased computational cost. This work presents a new particle filtering method and its implementation, which enables tunably-approximative representation of arbitrary likelihood densities as program transformations of parametric distributions. Our method leverages a recent computing platform that can perform deterministic computation on probability distribution representations (UxHw) without relying on stochastic methods. For non-Gaussian non-linear systems and with an optimal-auxiliary particle filter, we benchmark the likelihood evaluation error and speed for a total of 294 840 evaluation points. For such models, the results show that the UxHw method leads to as much as 37.7x speedup compared to the Monte Carlo alternative. For narrow uniform measurement uncertainty, the particle filter falsely assigns zero likelihood as much as 81.89% of the time whereas UxHw achieves 1.52% false-zero rate. 
The UxHw approach achieves filter RMSE improvement of as much as 18.9% (average 3.3%) over the Monte Carlo alternative.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"33 ","pages":"1033-1037"},"PeriodicalIF":3.9,"publicationDate":"2026-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147362525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
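The false-zero failure mode this letter quantifies is easy to reproduce: with a narrow uniform observation density, prior-drawn particles that miss the support all receive exactly zero weight, leaving the filter with no usable posterior. A deterministic toy illustration (the numbers are illustrative, not taken from the paper's benchmark):

```python
import numpy as np

def uniform_likelihood(z, particles, half_width):
    """Per-particle likelihood of observation z under Uniform(x - w, x + w) noise."""
    inside = np.abs(z - particles) <= half_width
    return inside / (2.0 * half_width)

# Two particles near, but not exactly at, the observation.
particles = np.array([0.0, 0.5])
z = 0.25

wide = uniform_likelihood(z, particles, half_width=1.0)    # both inside the support
narrow = uniform_likelihood(z, particles, half_width=0.1)  # support misses both
# Total particle weight: 1.0 in the wide case vs exactly 0.0 in the narrow case,
# so normalization fails and the filter degenerates.
```

Representing the likelihood as a distribution-level object, as the UxHw approach does, avoids evaluating it only at a finite set of particle positions and hence avoids this collapse.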
Pub Date: 2026-02-23 | DOI: 10.1109/LSP.2026.3667070
Zhuolun Wu;Yushan Zhang;Wei Zhang;Yanyan Liu
Adaptive modulation and coding (AMC) systems require the transmission of control signals, thereby reducing overall system transmission efficiency. The channel coding blind recognition technique is key to solving this problem. This paper proposes a reduced-complexity method for blind recognition of low-density parity-check (LDPC) coding parameters within a given candidate set, thereby enhancing the efficiency of AMC systems. It first applies a code-rate-based classification evaluation, circumventing superfluous calculations for several candidate parity-check matrices. Furthermore, computational complexity is reduced by applying the offset min-sum algorithm (OMSA) to the parity-check stage. Subsequently, the Z-score is used to measure the difference between the actual data and the theoretical distribution. Compared with the best existing recognition methods, the proposed algorithm offers clear advantages in computational complexity and is virtually identical in recognition performance.
{"title":"Reduced Complexity Blind Recognition Method of LDPC Codes Over a Candidate Set","authors":"Zhuolun Wu;Yushan Zhang;Wei Zhang;Yanyan Liu","doi":"10.1109/LSP.2026.3667070","DOIUrl":"https://doi.org/10.1109/LSP.2026.3667070","url":null,"abstract":"Adaptive modulation and coding (AMC) systems require the transmission of control signals, thereby reducing overall system transmission efficiency. The channel coding blind recognition technique is key to solving this problem. This paper proposes a reduced-complexity method for blind recognition of low-density parity-check (LDPC) coding parameters within a given candidate set, thereby enhancing the work efficiency of AMC systems. This paper applies a method based on code rate for classification evaluation, circumventing superfluous calculations for several candidate parity-check matrices. Furthermore, computational complexity is reduced by applying the offset min-sum algorithm (OMSA) to the parity-check stage. Subsequently, the Z-score is used to measure the difference between the actual data and the theoretical distribution. Compared with the best existing recognition methods, the proposed algorithm offers clear advantages in computational complexity and is virtually identical in recognition performance.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"33 ","pages":"1047-1051"},"PeriodicalIF":3.9,"publicationDate":"2026-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147362449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
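The Z-score step admits a compact sketch: for a candidate parity-check matrix, count how many checks the received hard-decision blocks satisfy, then standardize that count against the chance level of 1/2 expected under a wrong candidate. A minimal GF(2) version (the toy H in the test is hypothetical; the paper's candidates are full LDPC matrices and its check stage uses OMSA soft information):

```python
import numpy as np

def syndrome_zscore(bits, H):
    """Z-score of the satisfied-check count against the 50% chance level.

    bits : (num_blocks, n) hard-decision codeword candidates over GF(2)
    H    : (m, n) candidate parity-check matrix over GF(2)
    """
    syndromes = (bits @ H.T) % 2          # 0 entries = satisfied checks
    n_checks = syndromes.size
    satisfied = n_checks - syndromes.sum()
    # Under a wrong H, each check is satisfied with probability 1/2, so the
    # count is approximately Binomial(n_checks, 0.5).
    return (satisfied - 0.5 * n_checks) / np.sqrt(0.25 * n_checks)
```

A large positive Z-score flags the candidate whose checks are satisfied far more often than chance, i.e. the matrix actually used by the encoder.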
The role of granular computing in time series analysis is becoming increasingly important. Currently, there remains scope for enhancing the specificity of information description by information granules. To improve the specificity, a novel elliptical information granule is designed for time series similarity measurement in this paper. An elliptical information granule is defined by its centre and its long and short half-axes. Elliptical information granules can be constructed based on justifiability and specificity, guided by the principle of justifiable granularity. Multiple elliptical information granules are constructed using fuzzy C-means clustering and compactness principles. A time series similarity measurement method is then developed based on the geometric similarity of elliptical information granules. Experimental results show that the constructed elliptical information granules provide a more specific information description compared to rectangular information granules, while offering significant advantages in time series similarity measurement. The proposed method has significant potential for time series analysis and modelling.
{"title":"An Improved Time Series Similarity Measurement via Elliptical Information Granules","authors":"Sheng Du;Chunyang Chu;Zixin Huang;Yunlong Wu;Witold Pedrycz","doi":"10.1109/LSP.2026.3666904","DOIUrl":"https://doi.org/10.1109/LSP.2026.3666904","url":null,"abstract":"The role of granular computing in time series analysis is becoming increasingly important. Currently, there remains scope for enhancing the specificity of information description by information granules. To improve the specificity, a novel elliptical information granule is designed for time series similarity measurement in this paper. An elliptical information granule is defined by its centre and its long and short half-axes. Elliptical information granules can be constructed based on justifiability and specificity, guided by the principle of justifiable granularity. Multiple elliptical information granules are constructed using fuzzy C-means clustering and compactness principles. A time series similarity measurement method is then developed based on the geometric similarity of elliptical information granules. Experimental result shows that the constructed elliptical information granules provide a more specific information description compared to rectangular information granules, while offering significant advantages in time series similarity measurement. 
The proposed method has significant potential for time series analysis and modelling.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"33 ","pages":"1043-1046"},"PeriodicalIF":3.9,"publicationDate":"2026-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147362369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
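One concrete way to realize a geometric similarity between two elliptical granules — each given by a centre (cx, cy) and half-axes (a, b) — is intersection-over-union of their areas, approximated on a regular grid. This is only an illustrative stand-in; the letter defines its own geometric similarity on the granule parameters:

```python
import numpy as np

def ellipse_mask(cx, cy, a, b, xs, ys):
    """Boolean grid mask of points inside an axis-aligned ellipse."""
    X, Y = np.meshgrid(xs, ys)
    return ((X - cx) / a) ** 2 + ((Y - cy) / b) ** 2 <= 1.0

def granule_similarity(e1, e2, grid=400, span=5.0):
    """Jaccard (intersection-over-union) similarity of two elliptical
    granules (cx, cy, a, b), approximated on a grid covering [-span, span]^2."""
    xs = np.linspace(-span, span, grid)
    ys = np.linspace(-span, span, grid)
    m1 = ellipse_mask(*e1, xs, ys)
    m2 = ellipse_mask(*e2, xs, ys)
    union = np.logical_or(m1, m2).sum()
    return np.logical_and(m1, m2).sum() / union if union else 1.0
```

Identical granules score 1, disjoint granules score 0, and partial overlap falls in between, which is the qualitative behavior any granule-level similarity for time series windows needs.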