
Digital Signal Processing: Latest Publications

GLS: A hybrid deep learning model for radar emitter signal sorting
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-02-26 | DOI: 10.1016/j.dsp.2025.105117
Liangang Qi , Hongzhuo Chen , Qiang Guo , Shuai Huang , Mykola Kaliuzhnyi
Radar emitter signal sorting is a pivotal aspect of radar reconnaissance signal processing. The increasing density of the electromagnetic environment in modern radar pulse streams, coupled with the growing complexity and variability of operational modes and signal forms, results in extremely limited reference data. Consequently, most existing sorting methods fall short of the performance requirements of modern electronic warfare. To enhance sorting performance under limited samples and labeled data, this paper proposes a radar emitter signal sorting model, GLS, based on ResGCN-BiLSTM-SE. First, we propose a novel adaptive weighted adjacency matrix construction method that aggregates multi-scale local and global features. Building on this, GLS combines a residual graph convolutional network (ResGCN) with a bidirectional long short-term memory (BiLSTM) network: the ResGCN extracts attribute features from interleaved radar pulse sequences, while the BiLSTM captures the temporal dependencies in the sequences after feature extraction. Finally, an improved squeeze-and-excitation (SE) module performs weighted fusion of critical channel information from the spatial and temporal features. Simulation results demonstrate that the proposed method not only achieves higher accuracy under small-sample conditions than existing methods, but also exhibits strong robustness in challenging scenarios involving measurement errors, missing pulses, and spurious pulses.
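The squeeze-and-excitation step at the end of the pipeline can be illustrated with a minimal NumPy sketch of the standard SE operation (squeeze by global pooling, excitation by a bottleneck MLP with a sigmoid gate, then per-channel scaling). The paper's improved SE module and its fusion of spatial and temporal branches are not specified in the abstract, so the function name, weight shapes, and layout below are illustrative assumptions.

```python
import numpy as np

def se_reweight(features, w1, b1, w2, b2):
    """Standard squeeze-and-excitation channel reweighting (illustrative).

    features: (channels, time) feature map from one branch.
    w1, b1, w2, b2: weights of the two dense layers of the excitation
    step (w1: (hidden, channels), w2: (channels, hidden)).
    """
    # Squeeze: global average pooling over the temporal axis.
    z = features.mean(axis=1)                      # (channels,)
    # Excitation: bottleneck MLP, ReLU then sigmoid gate in (0, 1).
    h = np.maximum(w1 @ z + b1, 0.0)               # (hidden,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))       # (channels,)
    # Scale: per-channel reweighting of the original features.
    return features * s[:, None]
```

Because the gate values lie strictly in (0, 1), the output never exceeds the input in magnitude; the network learns which channels to suppress.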
{"title":"GLS: A hybrid deep learning model for radar emitter signal sorting","authors":"Liangang Qi ,&nbsp;Hongzhuo Chen ,&nbsp;Qiang Guo ,&nbsp;Shuai Huang ,&nbsp;Mykola Kaliuzhnyi","doi":"10.1016/j.dsp.2025.105117","DOIUrl":"10.1016/j.dsp.2025.105117","url":null,"abstract":"<div><div>Radar emitter signal sorting is a pivotal aspect of radar reconnaissance signal processing. The increasing density of the electromagnetic environment in modern radar pulse streams, coupled with the growing complexity and variability of operational modes and signal forms, results in extremely limited reference data. Consequently, most existing sorting methods fall short of meeting the performance requirements of modern electronic warfare. To enhance sorting performance under conditions of limited samples and labeled data, this paper proposes a radar emitter signal sorting model based on ResGCN-BiLSTM-SE (GLS). Firstly, we propose a novel adaptive weighted adjacency matrix construction method that aggregates multi-scale information of local and global features. Based on this, for GLS networks, the graph convolutional network (ResGCN) is combined with the bidirectional long short-term memory (BiLSTM) network. The GCN is employed to extract attribute features from interleaved radar pulse sequences, while the BiLSTM is utilized to deeply capture the temporal dependence in interleaved pulse sequences after feature extraction. Finally, an improved squeeze-and-excitation (SE) module is applied to perform weighted fusion of critical channel information from both spatial and temporal features. 
Simulation results demonstrate that the proposed method not only achieves higher accuracy under small sample conditions compared to existing methods, but also exhibits strong robustness in challenging scenarios involving measurement errors, missing pulses, and spurious pulses.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"161 ","pages":"Article 105117"},"PeriodicalIF":2.9,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143509856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
HAMSA: Hybrid attention transformer and multi-scale alignment aggregation network for video super-resolution
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-02-25 | DOI: 10.1016/j.dsp.2025.105098
Hanguang Xiao , Hao Wen , Xin Wang , Kun Zuo , Tianqi Liu , Wei Wang , Yong Xu
Video Super-Resolution (VSR) aims to enhance the resolution of video frames by utilizing multiple adjacent low-resolution frames. For cross-frame information extraction, most existing methods employ optical flow or offsets learned through deformable convolution to perform alignment. However, due to the complexity of real-world motion, estimating flow or motion offsets is challenging and can be inaccurate. To address this problem, we propose a novel hybrid attention transformer and multi-scale alignment aggregation network for video super-resolution, named HAMSA. HAMSA adopts a U-shaped architecture to achieve progressive alignment in a multi-scale manner. Specifically, we first develop a hybrid attention transformer (HAT) feature extraction module, which uses the proposed channel motion attention (CMA) to extract features that facilitate inter-frame alignment. Second, we design a U-shaped multi-scale feature alignment (MSFA) module that ensures precise motion estimation between frames by starting from large-scale features, gradually aligning them to smaller scales, and then restoring them via skip connections and upsampling. In addition, to further refine the alignment process, we introduce a non-local feature aggregation (NLFA) module, which applies non-local operations to minimize alignment errors and enhance detail fidelity, thereby improving the overall quality of the super-resolved video frames. Extensive experiments on the Vid4, Vimeo90k-T, and REDS4 datasets demonstrate that HAMSA achieves superior VSR performance compared with other state-of-the-art (SOTA) methods while maintaining a good balance between model size and performance.
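The flow-based alignment the abstract refers to ultimately reduces to warping a neighboring frame toward the reference with a dense offset field. A minimal NumPy sketch of such a bilinear warp follows; it is the generic operation, not HAMSA's learned multi-scale alignment, and the function name is an assumption for illustration.

```python
import numpy as np

def warp_bilinear(frame, flow):
    """Warp a grayscale frame with a dense flow field.

    frame: (H, W) array; flow: (H, W, 2) array of (dx, dy) offsets,
    so output[y, x] samples frame[y + dy, x + dx] with bilinear
    interpolation, clamping samples to the image border.
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(xs + flow[..., 0], 0, w - 1.0)
    sy = np.clip(ys + flow[..., 1], 0, h - 1.0)
    x0 = np.floor(sx).astype(int); y0 = np.floor(sy).astype(int)
    x1 = np.minimum(x0 + 1, w - 1); y1 = np.minimum(y0 + 1, h - 1)
    fx = sx - x0; fy = sy - y0
    # Interpolate the two rows, then blend vertically.
    top = frame[y0, x0] * (1 - fx) + frame[y0, x1] * fx
    bot = frame[y1, x0] * (1 - fx) + frame[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```

In a VSR network the flow (or the deformable-convolution offsets) is predicted per pixel; inaccurate flow produces ghosting in the warped frame, which motivates HAMSA's refinement modules.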
{"title":"HAMSA: Hybrid attention transformer and multi-scale alignment aggregation network for video super-resolution","authors":"Hanguang Xiao ,&nbsp;Hao Wen ,&nbsp;Xin Wang ,&nbsp;Kun Zuo ,&nbsp;Tianqi Liu ,&nbsp;Wei Wang ,&nbsp;Yong Xu","doi":"10.1016/j.dsp.2025.105098","DOIUrl":"10.1016/j.dsp.2025.105098","url":null,"abstract":"<div><div>Video Super-Resolution (VSR) aims to enhance the resolution of video frames by utilizing multiple adjacent low-resolution frames. For across-frame information extraction, most existing methods usually employ the optical flow or learned offsets through deformable convolution to perform alignment. However, due to the complexity of real-world motions, the estimating of flow or motion offsets can be inaccurate while challenging. To address this problem, we propose a novel hybrid attention transformer and multi-scale alignment aggregation network for video super-resolution, named HAMSA. The proposed HAMSA adopts a U-shaped architecture to achieve progressive alignment using a multi-scale manner. Specifically, we develop a hybrid attention transformer (HAT) feature extraction module, which uses the proposed channel motion attention (CMA) to extract features that facilitate inter-frame alignment. Second, we first design a U-shaped multi-scale feature alignment (MSFA) module that ensures precise motion estimation between different frames by starting from large-scale features, gradually aligning them to smaller scales, and then restoring them using skip connections and upsampling. In addition, to further refine the alignment process, we introduce a non-local feature aggregation (NLFA) module, which serves to apply non-local operations to minimize alignment errors and enhance the detail fidelity, thereby improving the overall quality of the super-resolved video frames. 
Extensive experiments on the Vid4, Vimeo90k-T, and REDS4 datasets demonstrate that our HAMSA achieves superior VSR performance compared to other state-of-the-art (SOTA) methods while maintaining a good balance between model size and performance.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"161 ","pages":"Article 105098"},"PeriodicalIF":2.9,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143487879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
High-capacity reversible data hiding in encrypted images based on multi-predictions and efficient parametric binary tree labeling
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-02-24 | DOI: 10.1016/j.dsp.2025.105096
Hua Ren , Tongtong Chen , Ming Li , Zhen Yue , Danjie Han , Guangrong Bai
Reversible data hiding in encrypted images (RDHEI) enables the embedding of secret data into encrypted images while preserving the ability to fully recover the original images. Existing schemes typically leverage pixel redundancies for data embedding, but they are constrained by their choices of predictors and coding rules, which can result in inefficient bit utilization and increased auxiliary data. This paper presents a novel high-capacity RDHEI method to address these issues. We propose a multi-prediction strategy combining the median edge detector (MED) and the gradient-adjusted predictor (GAP) to improve prediction accuracy. Additionally, we introduce an efficient parametric binary tree labeling approach that categorizes image pixels into embeddable, self-recording, and non-embeddable classes, reducing the number of auxiliary bits generated. Experimental results show that our method achieves embedding rates of 3.177, 3.098, 2.722, and 2.6533 bits per pixel (bpp) on the BOSSbase, BOWS-2, UCID, and CT-COVID datasets, respectively, while preserving the security and reversibility of the original image.
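Of the two predictors combined here, the median edge detector is a standard component (it is the predictor used in JPEG-LS): each pixel is predicted from its west, north, and north-west neighbors, switching between edge and planar modes. A direct sketch follows; GAP and the paper's fusion of the two predictors are omitted.

```python
import numpy as np

def med_predict(image):
    """Median edge detector (MED) prediction, as in JPEG-LS.

    Predicts each pixel from its west (a), north (b), and north-west (c)
    neighbours; the first row and column are left as-is (no causal
    context is available there).
    """
    img = image.astype(np.int64)
    pred = img.copy()
    h, w = img.shape
    for y in range(1, h):
        for x in range(1, w):
            a, b, c = img[y, x - 1], img[y - 1, x], img[y - 1, x - 1]
            if c >= max(a, b):          # horizontal edge above: take west/north min
                pred[y, x] = min(a, b)
            elif c <= min(a, b):        # vertical edge to the left
                pred[y, x] = max(a, b)
            else:                       # smooth region: planar prediction
                pred[y, x] = a + b - c
    return pred
```

On a smooth horizontal ramp the prediction is exact, so the residuals to be embedded are zero; RDHEI schemes exploit exactly this concentration of small residuals.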
{"title":"High-capacity reversible data hiding in encrypted images based on multi-predictions and efficient parametric binary tree labeling","authors":"Hua Ren ,&nbsp;Tongtong Chen ,&nbsp;Ming Li ,&nbsp;Zhen Yue ,&nbsp;Danjie Han ,&nbsp;Guangrong Bai","doi":"10.1016/j.dsp.2025.105096","DOIUrl":"10.1016/j.dsp.2025.105096","url":null,"abstract":"<div><div>Reversible data hiding in encrypted images (RDHEI) enables the embedding of secret data into encrypted images while preserving the ability to fully recover the original images. Existing schemes typically leverage pixel redundancies for data embedding, but they are constrained by the choices of predictors and coding rules, which may result in inefficient bit utilization and increased auxiliary data. This paper presents a novel high-capacity RDHEI method to address these issues. We propose a multi-prediction strategy combining the median edge detector (MED) and the gradient-adjusted predictor (GAP) to improve prediction accuracy. Additionally, we introduce an efficient parametric binary tree labeling approach to categorize image pixels into embeddable, self-recording, and non-embeddable categories, which reduces the generation of auxiliary bits. 
Experimental results show that our method achieves embedding rates of 3.177, 3.098, 2.722, and 2.6533 bit per pixel (bpp) on the BOSSbase, BOWS-2, UCID, and CT-COVID datasets, respectively, while preserving the security and reversibility of the original image.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"161 ","pages":"Article 105096"},"PeriodicalIF":2.9,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
SGRNet: Semantic-guided Retinex network for low-light image enhancement
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-02-24 | DOI: 10.1016/j.dsp.2025.105087
Yun Wei, Lei Qiu
Under low-light conditions, details and edges in images are often difficult to discern. The semantic information of an image is related to how humans understand the image's content; in low-light image enhancement (LLIE), it helps to recognize different objects, scenes, and edges, and can serve as prior knowledge to guide LLIE methods. However, existing semantic-guided LLIE methods still have shortcomings, such as semantic incoherence and insufficient target perception. To address these issues, a semantic-guided low-light image enhancement network (SGRNet) is proposed to strengthen the role of semantic priors in the enhancement process. Following Retinex theory, low-light images are decomposed into illumination and reflectance with the aid of semantic maps. A semantic perception module, which integrates semantic and structural information into the image features, stabilizes the image structure and illumination distribution. A heterogeneous affinity module, which incorporates high-resolution intermediate features at different scales into the enhancement network, reduces the loss of image detail during enhancement. Additionally, a self-calibration attention module is designed to decompose the reflectance, leveraging its cross-channel interaction capabilities to maintain color consistency. Extensive experiments on seven real datasets demonstrate the superiority of this method in preserving illumination distribution, details, and color consistency in enhanced images.
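The Retinex model underlying SGRNet factors an image into reflectance and illumination, I = R · L. A minimal classical sketch is shown below, using a box-filtered copy of the image as the illumination estimate; SGRNet's learned, semantics-guided decomposition is far more elaborate, so everything here (kernel size, filter choice) is an illustrative assumption.

```python
import numpy as np

def retinex_decompose(image, k=7, eps=1e-6):
    """Classical single-scale Retinex-style decomposition I = R * L.

    Illumination L is a smoothed (separable box-filtered) copy of the
    image; reflectance R is recovered as I / L.
    """
    pad = k // 2
    padded = np.pad(image.astype(float), pad, mode='edge')
    kernel = np.ones(k) / k
    # Separable box filter: average along rows, then along columns.
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode='valid'), 1, padded)
    illum = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode='valid'), 0, rows)
    reflect = image / (illum + eps)
    return reflect, illum
```

Enhancement methods then brighten the illumination component (e.g. with a gamma curve) and recombine it with the reflectance, which is where SGRNet injects its semantic priors.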
{"title":"SGRNet: Semantic-guided Retinex network for low-light image enhancement","authors":"Yun Wei,&nbsp;Lei Qiu","doi":"10.1016/j.dsp.2025.105087","DOIUrl":"10.1016/j.dsp.2025.105087","url":null,"abstract":"<div><div>Under low-light conditions, details and edges in images are often difficult to discern. Semantic information of an image is related to the human understanding of the image's content. In low-light image enhancement (LLIE), it helps to recognize different objects, scenes and edges in images. Specifically, it can serve as prior knowledge to guide LLIE methods. However, existing semantic-guided LLIE methods still have shortcomings, such as semantic incoherence and insufficient target perception. To address those issues, a semantic-guided low-light image enhancement network (SGRNet) is proposed to improve the role of semantic priors in the enhancement process. Based on Retinex, low-light images are decomposed into illumination and reflectance with the aid of semantic maps. The semantic perception module, integrating semantic and structural information into images, can stabilize image structure and illumination distribution. The heterogeneous affinity module, incorporating high-resolution intermediate features of different scales into the enhancement net, can reduce the loss of image details during enhancement. Additionally, a self-calibration attention module is designed to decompose the reflectance, leveraging its cross-channel interaction capabilities to maintain color consistency. 
Extensive experiments on seven real datasets demonstrate the superiority of this method in preserving illumination distribution, details, and color consistency in enhanced images.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"161 ","pages":"Article 105087"},"PeriodicalIF":2.9,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Monotonically accelerated proximal gradient for nonnegative tensor decomposition
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-02-24 | DOI: 10.1016/j.dsp.2025.105097
Deqing Wang
Efficient tensor decomposition requires stable and convergent optimization algorithms. The accelerated proximal gradient (APG) is a workhorse algorithm for nonnegative tensor decomposition. For large-scale tensors, APG is typically used to optimize the subproblems within a block coordinate descent framework. However, APG cannot guarantee monotonic convergence during optimization. In this paper, we develop monotonically accelerated algorithms to improve the efficiency of tensor decomposition. We propose four criteria to monitor the convergence state of the subproblem and, based on each criterion, derive monotonic convergence rules for it. We evaluate the monotonically accelerated algorithms in six experiments covering a wide range of tensor types. The experimental results demonstrate that the proposed algorithms with monotonic convergence monitoring achieve significant acceleration and high precision compared with their unmonitored counterparts. Finally, we present a rule for selecting the monotonic monitoring criterion for different types of tensors.
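The idea of monitoring and enforcing monotonicity can be sketched on the simplest such subproblem, nonnegative least squares: run Nesterov-accelerated projected gradient and restart the momentum whenever an accelerated step would increase the objective. This restart rule is one illustrative criterion only; the paper proposes and compares four monitoring criteria that are not reproduced here.

```python
import numpy as np

def apg_nnls(A, b, iters=300):
    """APG for min 0.5*||Ax - b||^2 subject to x >= 0, with a simple
    monotonicity safeguard: if an accelerated step increases the
    objective, discard it and restart the momentum."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
    f = lambda v: 0.5 * np.sum((A @ v - b) ** 2)
    f_prev = f(x)
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = np.maximum(y - grad / L, 0.0)  # projected gradient step
        if f(x_new) > f_prev:                  # monotonicity violated:
            y, t = x, 1.0                      # restart from last monotone iterate
            continue
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t, f_prev = x_new, t_new, f(x_new)
    return x
```

After a restart the next step is a plain projected-gradient step, which cannot increase the objective with step size 1/L, so the safeguarded sequence of accepted iterates is monotone by construction.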
{"title":"Monotonically accelerated proximal gradient for nonnegative tensor decomposition","authors":"Deqing Wang","doi":"10.1016/j.dsp.2025.105097","DOIUrl":"10.1016/j.dsp.2025.105097","url":null,"abstract":"<div><div>Efficient tensor decomposition requires stable and convergent optimization algorithms. The accelerated proximal gradient (APG) is a workhorse algorithm for nonnegative tensor decomposition. For large-scale tensors, APG is always implemented to optimize the subproblems in the block coordinate descent framework. However, APG cannot guarantee monotonic convergence in the optimization process. In this paper, we develop monotonically accelerated algorithms to improve the efficiency of tensor decomposition. We propose four criteria to monitor the convergence state in the subproblem. Based on each criterion, we propose monotonic convergence rules for the subproblem. We evaluate the monotonically accelerated algorithms via six experiments covering a wide range of types of tensors. The experimental results demonstrate that our proposed algorithms with monotonic convergence monitoring have significant acceleration effects and high precision compared with those without monitoring. After the experiments, we present the selection rule of the monotonic monitoring criterion for different types of tensors.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"161 ","pages":"Article 105097"},"PeriodicalIF":2.9,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143510530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Fully differential decoder for decoding lattice codes using neural networks
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-02-24 | DOI: 10.1016/j.dsp.2025.105088
Mohammad-Reza Sadeghi, Hassan Noghrei
Short-length lattice codes are crucial in various applications, including channel estimation and quantization. This paper introduces a novel weighted lattice decoder (WLD) that utilizes a parametric function to process decoder inputs and incorporates a weighted belief propagation (BP) algorithm. To further enhance the accuracy of the decoder's estimates, a new two-part multi-loss function is proposed. This approach significantly improves the performance of E8, Barnes-Wall BW8, and BCH lattice codes. The proposed WLD demonstrates notable improvements in the error-floor region, achieving gains of up to 1.4 dB and 2.3 dB on the symbol error rate (SER) curve compared to the primary BP decoder and the Neural Network Lattice Decoding Algorithm, respectively. By leveraging these advancements, the WLD offers a more robust and efficient decoding solution, making it highly suitable for real-time applications where low latency and high accuracy are paramount.
{"title":"Fully differential decoder for decoding lattice codes using neural networks","authors":"Mohammad-Reza Sadeghi,&nbsp;Hassan Noghrei","doi":"10.1016/j.dsp.2025.105088","DOIUrl":"10.1016/j.dsp.2025.105088","url":null,"abstract":"<div><div>Short-length lattice codes are crucial in various applications, including channel estimation and quantization. This paper introduces a novel weighted lattice decoder (WLD) that utilizes a parametric function to process decoder inputs and incorporates a weighted Belief Propagation (BP) algorithm. To further enhance the accuracy of the decoder's estimations, a new two-part multiloss function is proposed. This innovative approach significantly improves the performance of <span><math><msub><mrow><mi>E</mi></mrow><mrow><mn>8</mn></mrow></msub></math></span>, Barns-Wall <span><math><msub><mrow><mtext>BW</mtext></mrow><mrow><mn>8</mn></mrow></msub></math></span>, and BCH lattice codes. The proposed WLD demonstrates notable improvements in the error-floor region, achieving gains of up to 1.4 dB and 2.3 dB on the Symbol Error Rate (SER) curve compared to the primary BP decoder and the Neural Network Lattice Decoding Algorithm, respectively. 
By leveraging these advancements, the WLD offers a more robust and efficient decoding solution, making it highly suitable for real-time applications where low latency and high accuracy are paramount.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"161 ","pages":"Article 105088"},"PeriodicalIF":2.9,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Multi-terminal modulation classification network with rain attenuation interference for UAV MIMO-OFDM communications using blind signal reconstruction and gradient integration optimization
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-02-24 | DOI: 10.1016/j.dsp.2025.105071
Gongjing Zhang , Nan Yan , Jiashu Dai , Zeliang An , Yifa Li
The field of Automatic Modulation Classification (AMC) has emerged as a critical component in the advancement of next-generation intelligent Unmanned Aerial Vehicles (UAVs), 6G cognitive space communications, and spectrum regulation initiatives. Our research introduces an innovative AMC algorithm tailored for UAV MIMO-OFDM communication systems. The algorithm leverages blind signal reconstruction, constellation density matrix analysis, multi-terminal decision fusion, and optimized model training to enhance performance. It begins by applying blind source separation to reconstruct the signals and bolster their representation capabilities. Subsequently, we introduce a novel feature, the Enhanced Constellation Density Matrix (CDM), crafted to withstand UAV channel interference while providing a robust representation of the constellation diagram. Building upon this foundation, we propose the UAV-Decision Fusion Network (UAV-DFNet), an advanced network that takes CDM features as inputs to deeply mine signal characteristics and achieve superior recognition accuracy. To further refine the classification precision, we incorporate two strategies, multi-terminal decision fusion and gradient integration, into the UAV-DFNet. Comprehensive experimental results substantiate the effectiveness and superiority of our UAV-DFNet classifier over existing deep learning (DL)-based classifiers, demonstrating its potential to significantly advance the state of the art in UAV cognitive communications and beyond.
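A plain (non-enhanced) constellation density matrix is simply a normalized 2D histogram of the received complex symbols on the I/Q plane; the enhancements described in the paper are not reproduced here, and the bin count and axis limits below are illustrative choices.

```python
import numpy as np

def constellation_density(symbols, bins=32, lim=2.0):
    """Constellation density matrix: normalized 2D histogram of complex
    baseband symbols on an I/Q grid spanning [-lim, lim] on each axis."""
    edges = np.linspace(-lim, lim, bins + 1)
    H, _, _ = np.histogram2d(symbols.real, symbols.imag,
                             bins=[edges, edges])
    return H / max(H.sum(), 1)   # normalize so the matrix sums to one
```

For a noiseless QPSK burst the mass concentrates in exactly four cells; channel noise and interference smear these clusters, and it is this smeared density image that a CNN-style classifier such as UAV-DFNet consumes.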
{"title":"Multi-terminal modulation classification network with rain attenuation interference for UAV MIMO-OFDM communications using blind signal reconstruction and gradient integration optimization","authors":"Gongjing Zhang ,&nbsp;Nan Yan ,&nbsp;Jiashu Dai ,&nbsp;Zeliang An ,&nbsp;Yifa Li","doi":"10.1016/j.dsp.2025.105071","DOIUrl":"10.1016/j.dsp.2025.105071","url":null,"abstract":"<div><div>The field of Automatic Modulation Classification (AMC) has emerged as a critical component in the advancement of next-generation intelligent Unmanned Aerial Vehicles (UAVs), 6G cognitive space communications, and spectrum regulation initiatives. Our research introduces an innovative AMC algorithm tailored for UAV MIMO-OFDM communication systems. This algorithm leverages blind signal reconstruction, constellation density matrix analysis, multi-terminal decision fusion, and model optimization training to enhance performance. The algorithm begins with the application of blind source separation to reconstruct signals and bolster their representation capabilities. Subsequently, we introduce a novel feature, the Enhanced Constellation Density Matrix (CDM), crafted to withstand the challenges posed by UAV channel interferences while providing a robust representation of the constellation diagram. Building upon this foundation, we propose the UAV-Decision Fusion Network (UAV-DFNet), an advanced network that utilizes CDM features as inputs to deeply mine signal characteristics and achieve superior signal recognition accuracy. To further refine the classification precision, we implement dual strategies: multi-terminal decision fusion and gradient integration, into the UAV-DFNet. 
Comprehensive experimental results substantiate the effectiveness and superiority of our UAV-DFNet classifier over existing deep learning (DL)-based classifiers, demonstrating its potential to significantly advance the state of the art in UAV cognitive communications and beyond.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"161 ","pages":"Article 105071"},"PeriodicalIF":2.9,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143487876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Blind source separation method based on blind compression transformation under impulsive noise
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-02-24 | DOI: 10.1016/j.dsp.2025.105095
Zhiwei Zhang , Hongyuan Gao , Qinglin Zhu , Yufeng Wang , Jiayi Wang
When strong impulsive noise is present in the observed signals, existing blind source separation (BSS) methods become inaccurate or even ineffective, and the parameter settings of existing noise suppression methods rely on prior knowledge to ensure good performance, so they cannot be applied to the BSS problem. To address these problems, this paper proposes a BSS method that still achieves effective signal separation under impulsive noise. A new compression transformation function that does not depend on any prior knowledge, named the blind compression transformation (BCT), is designed to process the observed signals. The received signals are processed by the proposed BCT, and the short-time Fourier transform (STFT) is then applied to the processed signals to perform separation in the frequency domain. An adaptive energy-correlation permutation algorithm based on frequency correction is designed to resolve the permutation ambiguity in the frequency domain, and the inverse short-time Fourier transform (ISTFT) is applied to recover the source signals. In general, the proposed method suppresses impulsive noise without any prior knowledge and resolves permutation ambiguity without empirically set thresholds, achieving effective signal separation under impulsive noise. The superior performance of the proposed method is demonstrated through numerical simulations of the considered scenarios.
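The separation pipeline hinges on an STFT analysis/ISTFT synthesis pair; a minimal NumPy version is sketched below, using a periodic Hann window at 50% overlap, which satisfies the constant-overlap-add (COLA) condition so that interior samples reconstruct exactly. The BCT preprocessing and the frequency-domain separation stage are the paper's contributions and are not reproduced.

```python
import numpy as np

def stft(x, n=256):
    """STFT with a periodic Hann window and 50% overlap (hop = n // 2)."""
    hop = n // 2
    win = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(n) / n)  # periodic Hann
    frames = [x[i:i + n] * win for i in range(0, len(x) - n + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

def istft(S, n=256):
    """Overlap-add inverse STFT; exact in the interior because the
    periodic Hann window at 50% overlap sums to one (COLA)."""
    hop = n // 2
    frames = np.fft.irfft(S, n=n, axis=1)
    x = np.zeros(hop * (len(frames) - 1) + n)
    for k, f in enumerate(frames):
        x[k * hop:k * hop + n] += f
    return x
```

In the full method, each frequency bin of the mixture STFT is separated independently, the per-bin permutations are then aligned, and `istft` reassembles the time-domain sources.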
{"title":"Blind source separation method based on blind compression transformation under impulsive noise","authors":"Zhiwei Zhang ,&nbsp;Hongyuan Gao ,&nbsp;Qinglin Zhu ,&nbsp;Yufeng Wang ,&nbsp;Jiayi Wang","doi":"10.1016/j.dsp.2025.105095","DOIUrl":"10.1016/j.dsp.2025.105095","url":null,"abstract":"<div><div>When strong impulsive noise exists in observed signals, the existing blind source separation (BSS) methods are less accurate or even ineffective, and the parameter settings of existing noise suppression methods rely on prior knowledge to ensure good performance, thus cannot be applied to the BSS problem. To address the above problems, this paper proposes a BSS method that can still achieve effective signal separation under impulsive noise. A new compression transformation function that does not depend on any prior knowledge is designed to process the observed signals, named the blind compression transformation (BCT) function. The received observed signals are processed using the proposed BCT, and then the short-time Fourier transformation (STFT) is performed on the processed observed signals to complete the signal separation in the frequency domain. An adaptive energy correlation permutation algorithm based on frequency correction is designed to solve the permutation ambiguity in the frequency domain, and the inverse short-time Fourier transformation (ISTFT) is performed to achieve the source signals recovery. In general, the proposed method can suppress impulsive noise without any prior knowledge and solve permutation ambiguity without empirically setting threshold, which achieves effective signal separation under impulsive noise. 
The superior performance of our proposed method is evaluated through numerical simulations for the considered scenarios.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"161 ","pages":"Article 105095"},"PeriodicalIF":2.9,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
3D localization using lensless event sensors for fast-moving objects
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-02-24 | DOI: 10.1016/j.dsp.2025.105077
Yue You , Yihong Wang , Yu Cai , Mingzhu Zhu , Bingwei He
A novel event sensor-based object localization method is proposed in this paper. It addresses the accuracy limitations of event sensors caused by their limited spatial resolution and binary grayscale levels. The method uses flickering beacons and replaces the event camera's lens with a mask printed with a marker field. This configuration distributes location-coded events across the entire sensor instead of confining them to a small region, as in traditional methods. Major algorithms, including pattern simulation and optimized matching, are designed to achieve 3D localization and pose estimation. Experiments show a 17.3% accuracy improvement over state-of-the-art event-based methods in average translation error, consistent across varying distances and angles. This demonstrates its suitability for surgical navigation, virtual reality, and other precise, real-time localization tasks.
{"title":"3D localization using lensless event sensors for fast-moving objects","authors":"Yue You ,&nbsp;Yihong Wang ,&nbsp;Yu Cai ,&nbsp;Mingzhu Zhu ,&nbsp;Bingwei He","doi":"10.1016/j.dsp.2025.105077","DOIUrl":"10.1016/j.dsp.2025.105077","url":null,"abstract":"<div><div>A novel event sensor-based object localization method is proposed in this paper. It addresses the accuracy limitations of event sensors caused by their limited spatial resolution and binary grayscale levels. The method uses flickering beacons and replaces the event camera's lens with a mask printed with a marker field. This configuration distributes location-coded events across the entire sensor instead of confining them to a small region, as in traditional methods. Major algorithms, including pattern simulation and optimized matching, are designed to achieve 3D localization and pose estimation. Experiments show a <strong>17.3%</strong> accuracy improvement over state-of-the-art event-based methods in average translation error, consistent across varying distances and angles. This demonstrates its suitability for <strong>surgical navigation</strong>, <strong>virtual reality</strong>, and other precise, real-time localization tasks.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"161 ","pages":"Article 105077"},"PeriodicalIF":2.9,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A novel particle filter with noisy input
IF 2.9 3区 工程技术 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-02-24 DOI: 10.1016/j.dsp.2025.105086
Xinyu Zhang , Miao Gao , Tiancheng Li , Jiemin Duan , Yingmin Yi , Junli Liang
In nonlinear systems, system inputs play a critical role in achieving control objectives, yet they are highly susceptible to noise during measurement and execution. Ignoring input noise can cause the standard particle filter (SPF) algorithm to produce biased estimates. To address this issue, this study first analyzes how input noise biases the SPF estimate. A novel particle filter (PF) is then proposed that is robust to noisy inputs: it incorporates information from both process noise and input noise to construct a new importance density. Drawing inspiration from Gibbs sampling, the method hierarchically and independently samples input and state variables from this importance density, which accounts for both input and state randomness. The input random variable is eliminated through Monte Carlo independent resampling of the two variables, yielding the final state estimate. To validate the proposed method, three comparative experiments were conducted against the SPF, the combined particle filter (CPF), and the auxiliary particle filter (APF). The results demonstrate that the new PF outperforms the SPF in handling nonlinear, non-Gaussian systems with noisy inputs and effectively mitigates deviations caused by input noise.
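This is not the paper's algorithm (the new importance density and hierarchical sampling are only described at a high level); the snippet below is a minimal sketch of the core idea, under invented system parameters: each particle draws its own realization of the noisy executed input, so input uncertainty enters the proposal alongside process noise instead of being ignored.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy scalar system: mildly nonlinear transition, linear measurement.
# The *executed* input differs from the commanded one by Gaussian input
# noise (std q_u); drawing a fresh input per particle is a simplified
# stand-in for the paper's hierarchical (Gibbs-inspired) sampling.
q_x, q_u, r = 0.3, 0.5, 0.2                  # process/input/measurement stds
f = lambda x, u: 0.5 * x + 0.1 * np.cos(x) + u
h = lambda x: x

T, N = 50, 500
u_cmd = np.sin(0.1 * np.arange(T))           # commanded input sequence

# Simulate the truth with noisy executed inputs.
x_true = np.zeros(T)
y = np.zeros(T)
for k in range(1, T):
    u_exec = u_cmd[k] + rng.normal(0.0, q_u)
    x_true[k] = f(x_true[k - 1], u_exec) + rng.normal(0.0, q_x)
    y[k] = h(x_true[k]) + rng.normal(0.0, r)

# Bootstrap-style PF that also samples the uncertain input per particle.
parts = rng.normal(0.0, 1.0, N)
est = np.zeros(T)
for k in range(1, T):
    u_part = u_cmd[k] + rng.normal(0.0, q_u, N)   # per-particle input draw
    parts = f(parts, u_part) + rng.normal(0.0, q_x, N)
    w = np.exp(-0.5 * ((y[k] - h(parts)) / r) ** 2)
    w /= w.sum()
    est[k] = np.dot(w, parts)
    parts = rng.choice(parts, N, p=w)             # multinomial resampling

rmse = float(np.sqrt(np.mean((est - x_true) ** 2)))
```

Dropping the `u_part` draw and propagating every particle with the commanded input `u_cmd[k]` reproduces the SPF behavior the abstract criticizes: the filter then attributes input noise to the process model and its estimates drift.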
{"title":"A novel particle filter with noisy input","authors":"Xinyu Zhang ,&nbsp;Miao Gao ,&nbsp;Tiancheng Li ,&nbsp;Jiemin Duan ,&nbsp;Yingmin Yi ,&nbsp;Junli Liang","doi":"10.1016/j.dsp.2025.105086","DOIUrl":"10.1016/j.dsp.2025.105086","url":null,"abstract":"<div><div>In nonlinear systems, system inputs play a critical role in achieving control objectives, yet they are highly susceptible to noise during measurement and execution. Ignoring input noise can cause the standard particle filter (SPF) algorithm to produce biased estimates. To address this issue, this study begins by analyzing how input noise contributes to the deviation in the SPF at first. A novel particle filter (PF) then is proposed, designed to be robust against noisy inputs by incorporating information from both process noise and input noise. This approach constructs a new importance density. Drawing inspiration from Gibbs sampling, the method hierarchically and independently samples input and state variables from the new importance density, which accounts for both input and state randomness. The input random variable is eliminated through Monte Carlo independent resampling of the two variables, yielding the final state estimate. To validate the proposed method, three comparative experiments were conducted, evaluating the SPF, the combined particle filter (CPF), and the auxiliary particle filter (APF) algorithms. 
The results demonstrate that the new PF outperforms SPF in handling nonlinear, non-Gaussian systems with noisy inputs and effectively mitigates deviations caused by input noise.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"161 ","pages":"Article 105086"},"PeriodicalIF":2.9,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143511795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0