
Latest Publications in Signal Processing

Two-stage reversible data hiding in encrypted domain with public key embedding mechanism
IF 3.4 Tier 2 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-02-24 DOI: 10.1016/j.sigpro.2025.109918
Yan Ke , Jia Liu , Yiliang Han
Reversible data hiding in encrypted domain (RDH-ED) performs encryption and data embedding to fulfill privacy protection and access control simultaneously. The key distribution in current RDH-ED primarily follows a symmetric mechanism, resulting in limitations in key management and distribution. Therefore, a public-key embedding (PKE) mechanism in RDH-ED is proposed to address these limitations, where embedding permission is open to the public while extraction is under control. A two-stage RDH-ED scheme with the PKE mechanism is then designed for images based on learning with errors (LWE). The algorithm of the first stage is redundancy recoding in the LWE encrypted domain (RR-LWE), applied to the ciphertext encrypted from any pixel bit. A public embedding key is specially constructed, and N bits of data can be embedded per ciphertext. The algorithm of the second stage is difference expansion in the LWE encrypted domain (DE-LWE), applied to the ciphertext of the entire image after RR-LWE; it transfers the bit operations of DE from the spatial domain into the LWE encrypted domain. We theoretically derive the necessary conditions for embedding correctness and security. Experimental results demonstrate the superior security and efficiency of the proposed algorithms: RR-LWE achieves an embedding capacity of up to 24 bits per pixel (bpp), and DE-LWE further enhances that by approximately 0.5 bpp.
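The difference expansion (DE) that the second stage lifts into the encrypted domain is easiest to see in the spatial domain. The sketch below is the classic pixel-pair DE (expand the difference, keep the average); the pairing scheme and overflow handling are simplifying assumptions, not the paper's exact LWE-domain construction:

```python
def de_embed(x, y, bit):
    """Embed one bit into a pixel pair via difference expansion (DE).

    The difference h = x - y is expanded to h' = 2*h + bit while the
    integer average l is preserved, so the pair is exactly recoverable.
    (Overflow clipping is omitted here for brevity.)
    """
    h = int(x) - int(y)
    l = (int(x) + int(y)) // 2
    h2 = 2 * h + bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the embedded bit and the original pixel pair."""
    h2 = int(x2) - int(y2)
    l = (int(x2) + int(y2)) // 2
    bit = h2 & 1
    h = h2 // 2  # floor division inverts h2 = 2*h + bit
    return bit, l + (h + 1) // 2, l - h // 2
```

Because the embedded bit sits in the least significant bit of the expanded difference, extraction and restoration are exact, which is the reversibility property DE-LWE carries over to LWE ciphertexts.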
Citations: 0
Multi-focus image fusion based on visual depth and fractional-order differentiation operators embedding convolution norm
IF 3.4 Tier 2 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-02-24 DOI: 10.1016/j.sigpro.2025.109955
Yongli Xian , Guangxin Zhao , Xuejian Chen , Congzheng Wang
Multi-focus image fusion technology integrates the focused regions of multiple source images to produce a single, all-in-focus image. However, existing methods have drawbacks, including image artifacts, color distortion, and ambiguous boundaries. In this paper, a spatial-domain two-stage fusion approach is proposed to address these challenges. In the first stage, a fractional-order differentiation operator embedding convolution norm is proposed to amplify pixel texture, while a weighted fusion is applied to obtain an initial fusion result. Here, the absolute difference map between the initial fusion result and the source images is used as the focus information, ensuring the accuracy of the initial decision map. In the second stage, the source images and pseudo-depth information are jointly used to construct the feature vector of the K-nearest-neighbors matting (KNNM) algorithm, refining the decision map to obtain a final decision map with smoother boundaries. Experimental results indicate that the proposed method outperforms existing representative algorithms in both qualitative and quantitative evaluations.
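The first stage — a focus-measure-weighted initial fusion followed by a decision map from absolute differences against the sources — can be sketched as below. Gradient energy stands in for the paper's fractional-order operator, and all function names are illustrative:

```python
import numpy as np

def focus_measure(img):
    """Per-pixel focus measure: gradient energy. This is a simple
    stand-in for the paper's fractional-order differentiation operator."""
    gy, gx = np.gradient(img.astype(float))
    return gx ** 2 + gy ** 2

def initial_fusion(a, b):
    """Weighted initial fusion of two source images, then a decision map
    from the absolute difference between the fused result and each source:
    the source closer to the fused result at a pixel is taken as in-focus."""
    fa, fb = focus_measure(a), focus_measure(b)
    w = fa / (fa + fb + 1e-12)           # focus-driven fusion weight
    fused = w * a + (1 - w) * b          # initial fusion result
    decision = np.abs(fused - a) <= np.abs(fused - b)
    return np.where(decision, a, b), decision
```

In the full method this initial decision map is then refined by KNN matting with pseudo-depth features; the sketch stops at the first stage.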
Citations: 0
Analog filters based on the Mittag-Leffler functions
IF 3.4 Tier 2 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-02-23 DOI: 10.1016/j.sigpro.2025.109953
Anis Allagui , Ahmed S. Elwakil , Julia Nako , Costas Psychalinos
We propose and study a new class of filters (named hereinafter the Mittag-Leffler filters) based on the Mittag-Leffler function E_{α,β}(z) in its single-parameter or double-parameter forms by transposing its argument to the frequency domain, i.e. z = -s = -jω. A unique feature of these filters is that their impulse response is a Gaussian-like (delta-like) deformed and delayed impulse function, for which we derive exact expressions using the Fox H-function. We also study the frequency response of this class of filters and obtain lower-order, realizable integer-order approximations of its transfer functions. A second-order curve-fitting approximation is then used to realize the filters on a Field Programmable Analog Array platform and experimentally verify the theory.
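The Mittag-Leffler function at the heart of these filters has the power series E_{α,β}(z) = Σ_k z^k / Γ(αk + β). A minimal truncated-series evaluator, usable to sample the transfer function at z = -jω for moderate |z| (dedicated algorithms are needed for large arguments; this is a numerical sketch, not the paper's analog realization):

```python
import math

def mittag_leffler(z, alpha, beta=1.0, tol=1e-12, kmax=200):
    """Truncated power series E_{alpha,beta}(z) = sum_k z**k / Gamma(alpha*k + beta)."""
    s = 0.0 + 0.0j
    zk = 1.0 + 0.0j                     # z**k, updated incrementally
    for k in range(kmax):
        g = alpha * k + beta
        if g > 171.0:                   # math.gamma overflows beyond ~171.6
            break
        term = zk / math.gamma(g)
        s += term
        if abs(term) < tol * max(abs(s), 1.0):
            break
        zk *= z
    return s

def ml_frequency_response(omega, alpha, beta=1.0):
    """Transfer-function sample E_{alpha,beta}(-j*omega), following the
    substitution z = -s = -j*omega used to define the filters."""
    return mittag_leffler(-1j * omega, alpha, beta)
```

Useful sanity checks come from the classical special cases E_{1,1}(z) = e^z and E_{2,1}(-x²) = cos(x).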
Citations: 0
Two non-convex optimization approaches for joint transmit waveform and receive filter design
IF 3.4 Tier 2 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-02-22 DOI: 10.1016/j.sigpro.2025.109952
Mohammad Mahdi Omati, Seyed Mohammad Karbasi, Arash Amini
This study presents two innovative approaches for jointly optimizing the radar transmit waveform and receive filter to improve the signal-to-interference-plus-noise ratio (SINR) for extended targets under signal-dependent interference. We operate under the assumption of incomplete information about the target impulse response (TIR), which is confined within a predefined uncertainty set. To ensure robustness against this uncertainty, we frame the problem as a max–min (worst-case) optimization. Additionally, we impose a constant modulus constraint on the transmit waveform, since it has the lowest possible peak-to-average power ratio (PAPR), to guarantee our system operates close to saturation. To solve this, both approaches use a sequential optimization procedure, alternating between the transmit-waveform and receive-filter subproblems. The first approach employs the alternating direction method of multipliers (ADMM), decomposing each subproblem into a semi-definite programming (SDP) problem and a least-squares problem with a fixed rank constraint, solvable via the singular value decomposition (SVD). The second approach tackles the problem over two Riemannian manifolds: the sphere manifold for the receive filter and the product of complex circles for the transmit signal. By applying manifold optimization, the constrained problem is transformed into an unconstrained one within a restricted search space. The max–min problem is reformulated as a minimization problem, yielding a closed-form expression involving log-sum-exp. This is solved using the Riemannian conjugate gradient descent (RCG) algorithm, which builds on Euclidean conjugate gradient descent and utilizes the manifold's properties, such as the Riemannian metric and retraction. Our numerical results demonstrate the robustness and effectiveness of these methods across various uncertainty sets and target types.
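The two geometric ingredients of the Riemannian approach — the retractions onto the product of complex circles (constant-modulus waveform) and the unit sphere (receive filter) — and one alternating step can be sketched as follows. The SINR expression and the closed-form optimal filter for a fixed waveform are standard MVDR-style definitions, used here for illustration rather than the paper's exact formulation:

```python
import numpy as np

def retract_complex_circles(x):
    """Retraction onto the product of complex circles: normalize each
    entry to unit modulus (the constant-modulus constraint)."""
    return x / np.abs(x)

def retract_sphere(w):
    """Retraction onto the unit sphere (receive-filter constraint)."""
    return w / np.linalg.norm(w)

def sinr(s, w, H, R):
    """SINR = |w^H H s|^2 / (w^H R w) for target response H and
    interference-plus-noise covariance R (illustrative definitions)."""
    num = abs(np.vdot(w, H @ s)) ** 2
    den = np.real(np.vdot(w, R @ w))
    return num / den

def best_filter(s, H, R):
    """For a fixed waveform, the SINR-optimal filter is proportional to
    R^{-1} H s (Cauchy-Schwarz in the R inner product)."""
    return retract_sphere(np.linalg.solve(R, H @ s))
```

Alternating between `best_filter` and a waveform update followed by `retract_complex_circles` reproduces the sequential structure both approaches share.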
Citations: 0
A bio-inspired approach to line segment detection utilizing orientation-selective neurons
IF 3.4 Tier 2 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-02-21 DOI: 10.1016/j.sigpro.2025.109950
Daipeng Yang , Bo Peng , Xi Wu
Line segment detection is essential for tasks like SLAM, camera pose estimation, and 3D reconstruction. Although many excellent line segment detection methods have been proposed, detecting more true positives while rejecting false positives remains challenging. The human visual system can effectively perceive line segments in complex environments through a processing pathway involving multiple visual cortices. Inspired by this, we propose a novel bio-inspired line segment detection method that mimics the perception of line segments in the visual cortex. Our method models orientation-selective neurons in the primary and secondary visual cortices. Based on the preferred orientations of these neurons, we integrate them to mimic the function of orientation and curvature domains in the fourth visual cortex, generating continuous and smooth edge segments. A post-processing step, including least squares line fitting and gap merging, is employed to obtain line segments. We evaluated our method against other state-of-the-art methods on YorkUrban-LineSegment and Wireframe. Results show that our method achieves a higher F-score, improving by 2.9% and 2.1%, respectively, while ensuring both precision and recall. Additionally, in 3D reconstruction, our method produces more complete and accurate scenes with fewer fragments and omissions compared to other methods. Our code is available at https://github.com/DaipengYang7/BILSD.
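The least squares line fitting in the post-processing step can be sketched as a total-least-squares (orthogonal) fit via SVD, which, unlike ordinary y-on-x regression, also handles near-vertical edge segments; this is a generic sketch, not the paper's exact fitting routine:

```python
import numpy as np

def fit_line_tls(points):
    """Total-least-squares line fit to edge points via SVD.

    Returns (centroid, direction): the line passes through the centroid
    of the points along the dominant right singular vector, minimizing
    the sum of squared orthogonal distances."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def point_line_distance(p, centroid, direction):
    """Orthogonal distance from a point to the fitted line
    (2D cross product of the offset with the unit direction)."""
    d = np.asarray(p, dtype=float) - centroid
    return np.abs(d[0] * direction[1] - d[1] * direction[0])
```

Gap merging can then join collinear segments whose endpoints lie within a distance threshold of each other's fitted lines.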
Citations: 0
CurvPnP: Plug-and-play blind image restoration with deep curvature denoiser
IF 3.4 Tier 2 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-02-21 DOI: 10.1016/j.sigpro.2025.109951
Yutong Li , Huibin Chang , Yuping Duan
Due to the development of deep learning-based denoisers, the plug-and-play strategy has achieved great success in image restoration problems. However, existing plug-and-play image restoration methods are designed for non-blind Gaussian denoising, such as that of Zhang et al. (2022), and their performance visibly deteriorates for unknown noise. To push the limits of plug-and-play image restoration, we propose a novel image restoration framework with a blind Gaussian prior, which can deal with more complicated image restoration problems in the real world. More specifically, we build a curvature regularization image restoration model by regarding the noise level as a variable, where the regularization term is realized by a two-stage blind Gaussian denoiser consisting of a noise estimation subnetwork and a denoising subnetwork. We also introduce curvature regularization into the encoder–decoder architecture and the supervised attention module to achieve a highly flexible and effective network. Numerous experimental results are provided to demonstrate the advantages of our deep curvature denoiser and the resulting plug-and-play blind image restoration method over state-of-the-art denoising methods. Our model is shown to recover fine image details and tiny structures even when the noise level is unknown across different image restoration tasks. The source codes are available at https://github.com/Duanlab123/CurvPnP.
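The plug-and-play idea itself is a simple alternation: a data-fidelity step tied to the degradation model and a prior step delegated to a denoiser. A minimal half-quadratic-splitting sketch is given below, with `denoiser` standing in for the deep curvature denoiser (the assumption is only that any Gaussian denoiser taking a noise-level argument can be plugged in; the paper's exact scheme may differ):

```python
import numpy as np

def pnp_hqs(y, A, denoiser, sigma, iters=20, mu=1.0):
    """Plug-and-play half-quadratic splitting for y = A x + noise.

    Alternates a regularized least-squares data step with a denoising
    step; `denoiser(v, s)` is any Gaussian denoiser at noise level s."""
    n = A.shape[1]
    AtA, Aty = A.T @ A, A.T @ y
    z = Aty.copy()
    for _ in range(iters):
        # data-fidelity step: argmin_x ||A x - y||^2 + mu ||x - z||^2
        x = np.linalg.solve(AtA + mu * np.eye(n), Aty + mu * z)
        # prior step: denoise at the current effective noise level
        z = denoiser(x, sigma / np.sqrt(mu))
    return z
```

In the blind setting, the noise-estimation subnetwork would supply `sigma` instead of it being a user input.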
Citations: 0
A quantum reversible color-to-grayscale conversion scheme via image encryption based on true random numbers and two-dimensional quantum walks
IF 3.4 Tier 2 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-02-19 DOI: 10.1016/j.sigpro.2025.109949
Nianqiao Li, Zhenjun Tang
Reversible Color-to-Grayscale Conversion (RCGC) is a method for converting color images to grayscale while retaining sufficient information to reconstruct the original color image when needed. This study proposes a Quantum RCGC (QRCGC) scheme that integrates quantum encryption techniques. The scheme first uses a finite number of true random numbers as seeds, which are then extended using a two-dimensional quantum walks system to generate sufficiently large random matrices for performing bitwise XOR operations with the original image. Subsequently, a quantum confusion technique is proposed, combining quantum block Arnold scrambling, cyclic shifts, and subsequence exchanges, which enhances the complexity of the relationship between the keys and the ciphertext in parallel. Additionally, a quantum diffusion technique is designed, efficiently generating hash values via a two-dimensional quantum walks system to verify image integrity. These hash values are used as content-based key inputs in a chaotic system to generate quantum secure matrices for diffusing the image information. Finally, a quantum bidirectional conversion operation is designed to achieve lossless reversible conversion between color and grayscale images. Experimental results show that the QRCGC scheme demonstrates significant advantages in terms of security, efficiency, and information retention.
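The bitwise XOR masking that anchors the encryption stage is losslessly reversible because XOR with the same mask is its own inverse. In the sketch below, NumPy's PRNG stands in for the paper's quantum-walk-extended true randomness — that substitution is purely illustrative:

```python
import numpy as np

def xor_mask(img, seed):
    """Bitwise-XOR masking of a uint8 image with a seeded pseudo-random
    matrix. The same call decrypts, since (img ^ m) ^ m == img.
    (NumPy's PRNG stands in for quantum-walk-generated randomness.)"""
    rng = np.random.default_rng(seed)
    mask = rng.integers(0, 256, size=img.shape, dtype=np.uint8)
    return img ^ mask
```

Exact reversibility of every stage (XOR, scrambling, diffusion) is what allows the grayscale image to be converted back to the original color image without loss.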
Citations: 0
Impact of space–time covariance matrix estimation on bin-wise eigenvalue and eigenspace perturbations
IF 3.4 Tier 2 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-02-19 DOI: 10.1016/j.sigpro.2025.109946
Connor Delaosa , Jennifer Pestana , Ian K. Proudler , Stephan Weiss
In the context of broadband multichannel signal processing, problems can often be formulated using a space–time covariance matrix, and solved using a diagonalisation of this quantity via a polynomial or analytic eigenvalue decomposition (EVD). In this paper, we address the impact that an estimation of the space–time covariance has on the factors of such a decomposition. In order to address this, we consider a linear unbiased estimator based on Gaussian distributed data, and characterise the variance of this estimate, as well as the variance of the error between the estimate and the ground truth. These quantities in turn enable us to find expressions for the bin-wise perturbation of the eigenvalues, which depends on the error variance of the estimate, and for the bin-wise perturbation of the eigenspaces, which depends both on the error variance and on the eigenvalue distance. We adapt a number of known bounds for ordinary matrices and demonstrate the fit of these bounds in simulations. In order to minimise the error variance of the estimate, and hence the perturbation of the EVD factors, we discuss a way to optimise the lag support of the space–time covariance estimate without access to the ground truth on which the estimate is based.
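A linear unbiased estimate of the space–time covariance sequence R[τ] = E{x[n] x^H[n−τ]} can be sketched as below; dividing each lag by N−|τ| (rather than N) makes the estimate unbiased at the cost of higher variance at large lags, which is exactly why the lag support must be chosen carefully. This is a generic estimator sketch, not necessarily the paper's precise formulation:

```python
import numpy as np

def space_time_cov_estimate(X, max_lag):
    """Unbiased estimate of R[tau] = E{x[n] x^H[n - tau]} from M-channel
    data X of shape (M, N), for tau = -max_lag, ..., max_lag.

    Returns an array of shape (2*max_lag + 1, M, M); dividing by N - |tau|
    makes each lag estimate unbiased."""
    M, N = X.shape
    R = np.empty((2 * max_lag + 1, M, M), dtype=complex)
    for i, tau in enumerate(range(-max_lag, max_lag + 1)):
        t = abs(tau)
        A, B = (X[:, t:], X[:, :N - t]) if tau >= 0 else (X[:, :N - t], X[:, t:])
        R[i] = A @ B.conj().T / (N - t)
    return R
```

By construction the estimate satisfies the Hermitian symmetry R[−τ] = R[τ]^H exactly, which a valid space–time covariance must have.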
Citations: 0
Joint time-vertex fractional Fourier transform
IF 3.4 Tier 2 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-02-18 DOI: 10.1016/j.sigpro.2025.109944
Tuna Alikaşifoğlu , Bünyamin Kartal , Eray Özgünay , Aykut Koç
Graph signal processing (GSP) facilitates the analysis of high-dimensional data on non-Euclidean domains by utilizing graph signals defined on graph vertices. In addition to static data, each vertex can provide continuous time-series signals, transforming graph signals into time-series signals on each vertex. The joint time-vertex Fourier transform (JFT) framework offers spectral analysis capabilities to analyze these joint time-vertex signals. Analogous to the fractional Fourier transform (FRT) extending the ordinary Fourier transform (FT), we introduce the joint time-vertex fractional Fourier transform (JFRT) as a generalization of JFT. The JFRT enables fractional analysis for joint time-vertex processing by extending Fourier analysis to fractional orders in both temporal and vertex domains. We theoretically demonstrate that JFRT generalizes JFT and maintains properties such as index additivity, reversibility, reduction to identity, and unitarity for specific graph topologies. Additionally, we derive Tikhonov regularization-based denoising in the JFRT domain, ensuring robust and well-behaved solutions. Comprehensive numerical experiments on synthetic and real-world datasets highlight the effectiveness of JFRT in denoising and clustering tasks, outperforming state-of-the-art approaches.
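The standard construction behind fractional Fourier-type transforms is the fractional matrix power via eigendecomposition, T^a = V diag(λ^a) V^{-1}, which directly yields the index-additivity and reduction-to-identity properties the paper proves for JFRT. A minimal sketch on the ordinary DFT (the JFRT applies the same idea jointly in the time and vertex domains; this is not the paper's code):

```python
import numpy as np

def frac_power(T, a):
    """Fractional power T^a of a diagonalizable transform via its
    eigendecomposition T = V diag(lam) V^{-1}: T^a = V diag(lam^a) V^{-1}.
    Index additivity T^a T^b = T^{a+b} follows because the same
    eigenbasis and branch of lam^a are used for every order."""
    lam, V = np.linalg.eig(T)
    return V @ np.diag(lam.astype(complex) ** a) @ np.linalg.inv(V)

def dft_matrix(n):
    """Unitary DFT matrix of size n."""
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)
```

With a = 1 the construction reduces to the ordinary transform, and with a = 0 to the identity, mirroring the properties stated in the abstract.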
Nonlinear chirp mode extraction: A new efficient method to decompose nonstationary signals
IF 3.4 CAS Zone 2 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-02-15 DOI: 10.1016/j.sigpro.2025.109943
Cuiwentong Xu, Yuhe Liao
Current signal decomposition methods face difficulties such as mode mixing, low efficiency, and the need for prior knowledge. In view of this, this paper proposes a new method, called Nonlinear Chirp Mode Extraction (NCME), for adaptively extracting nonlinear chirp modes from nonstationary signals. The method can adaptively decompose a signal into a desired mode and a residual mode without any prior knowledge. A functional filter is used to tackle the mode-mixing problem and thereby improve the constrained optimization, helping to extract the desired mode accurately. Prior knowledge for initializing the number of modes in the signal is then no longer required, and the desired mode can be extracted directly from the signal. Both computational efficiency and accuracy are greatly improved. The effectiveness and advantages of NCME are verified with simulated and measured signals. The results show that NCME can extract nonlinear chirp modes with higher precision, noise robustness, and computational efficiency than the comparative methods.
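The NCME algorithm itself is not reproduced here. As a rough illustration of what "extracting a nonlinear chirp mode" means, the sketch below demodulates one chirp component using an (assumed known) instantaneous frequency, lowpasses the resulting baseband, and remodulates — NCME, by contrast, needs no such prior knowledge. All signal parameters and the `cutoff` value are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def extract_chirp_mode(x, fs, inst_freq, cutoff=30.0):
    """Demodulate x with a known instantaneous frequency (Hz), lowpass the
    baseband, and remodulate to recover one chirp mode."""
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs
    baseband = x * np.exp(-1j * phase)  # shifts the target mode to DC
    b, a = butter(4, cutoff / (fs / 2))
    low = filtfilt(b, a, baseband.real) + 1j * filtfilt(b, a, baseband.imag)
    return 2 * np.real(low * np.exp(1j * phase))

# Two-component test signal: a linear chirp (50 -> 70 Hz) plus a 150 Hz tone.
fs, T = 1000.0, 1.0
t = np.arange(int(fs * T)) / fs
f1 = 50 + 20 * t
phase1 = 2 * np.pi * np.cumsum(f1) / fs
x = np.cos(phase1) + np.cos(2 * np.pi * 150 * t)
mode = extract_chirp_mode(x, fs, f1)
```

The chirp component is recovered because demodulation moves it to DC while the 150 Hz tone and the image term remain far above the lowpass cutoff throughout the sweep.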