Pub Date: 2025-01-18
DOI: 10.1016/j.sigpro.2025.109898
Sebastian Rodriguez, Marc Rébillat, Shweta Paunikar, Pierre Margerit, Eric Monteiro, Francisco Chinesta, Nazih Mechbal
Lamb Wave (LW) based Structural Health Monitoring (SHM) aims to monitor the health state of thin structures. An Initial Wave Packet (IWP) is sent into the structure and interacts with boundaries, discontinuities, and any damage present, thus generating many wave packets. A further difficulty of LW-based SHM is that at least two dispersive LW modes exist simultaneously. The Matching Pursuit Method (MPM), which approximates a signal as a sum of delayed and scaled atoms taken from a known dictionary, is limited to nondispersive signals and relies on an a priori known dictionary, and is thus ill-suited to LW-based SHM. The Single Atom Convolutional MPM, which addresses dispersion by decomposing a signal into delayed and dispersed atoms and limits the learned dictionary to a single atom, is proposed here as an alternative. Its performance is demonstrated on numerical and experimental signals, and it is applied to damage monitoring. Beyond LW-based SHM, the method is very general and applicable to a large class of signal processing problems.
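For context, the classical matching pursuit that this work builds upon can be sketched in a few lines: a greedy decomposition that repeatedly picks the dictionary atom most correlated with the residual. This is a generic sketch with a toy orthonormal dictionary, not the dispersive single-atom variant proposed in the paper:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy MP: approximate `signal` as a sum of scaled atoms chosen
    from the columns of `dictionary` (assumed to have unit-norm atoms)."""
    residual = signal.astype(float).copy()
    approx = np.zeros_like(residual)
    for _ in range(n_iter):
        # Pick the atom most correlated with the current residual.
        corr = dictionary.T @ residual
        k = np.argmax(np.abs(corr))
        approx += corr[k] * dictionary[:, k]
        residual -= corr[k] * dictionary[:, k]
    return approx, residual

# Toy example: a dictionary of two orthonormal atoms recovers the
# signal exactly in two iterations.
D = np.eye(4)[:, :2]
x = 3.0 * D[:, 0] + 1.5 * D[:, 1]
xhat, r = matching_pursuit(x, D, n_iter=2)
```

With correlated (or, as in LW signals, dispersed) atoms, the greedy choice is no longer exact, which is the limitation the paper addresses.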
Title: Single atom convolutional matching pursuit: Theoretical framework and application to Lamb waves based structural health monitoring
Signal Processing, vol. 231, Article 109898
Reversible data hiding in encrypted images (RDHEI) is an effective method for privacy protection, medical diagnosis, and covert communication. It can also facilitate the management of large numbers of encrypted images. Despite recent advances in RDHEI methods, several issues remain, including inefficient compression and excessive auxiliary data. To address these issues, this paper proposes a novel RDHEI method based on prediction error modification (PEM) and basic block compression (BBC). PEM greatly increases the occurrence of PE “0” (a PE with a value of 0) and reduces the information entropy of the PEs by decreasing the positive PEs or increasing the negative PEs. The modified PEs are then divided into non-overlapping blocks, which are subsequently compressed with the proposed BBC technique. After PEM and image compression, the secret data is embedded into the encrypted image to generate the marked image, from which authorized recipients can extract the hidden payload and recover the original image losslessly. Experimental results show that the proposed method is highly resistant to statistical analysis, brute-force, and differential attacks, and outperforms some state-of-the-art methods in terms of embedding capacity.
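The prediction-error-modification idea can be illustrated with a deliberately simplified sketch. The left-neighbor predictor, the threshold `t`, and the exact mapping below are assumptions for illustration only; the paper's actual predictor and the reversibility bookkeeping (auxiliary data) are not reproduced:

```python
import numpy as np

def prediction_errors(img):
    """Left-neighbor predictor (a stand-in for the paper's predictor):
    PE(i, j) = img[i, j] - img[i, j-1]; the first column is kept as-is."""
    pe = np.zeros_like(img, dtype=int)
    pe[:, 1:] = img[:, 1:].astype(int) - img[:, :-1].astype(int)
    pe[:, 0] = img[:, 0]
    return pe

def modify_pes(pe, t=1):
    """Illustrative PEM step: pull small PEs toward 0 to raise the
    frequency of PE == 0 and shrink the entropy of the PE histogram,
    as the abstract describes (threshold and mapping are hypothetical)."""
    out = pe.copy()
    out[(pe > 0) & (pe <= t)] -= 1   # decrease small positive PEs
    out[(pe < 0) & (pe >= -t)] += 1  # increase small negative PEs
    return out

img = np.array([[100, 101, 100], [50, 50, 51]], dtype=np.uint8)
pe = prediction_errors(img)
mod = modify_pes(pe)
```

After modification the PE histogram concentrates at 0, which is what makes the subsequent block compression effective.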
Title: Reversible data hiding in encrypted images using prediction error modification and basic block compression
Pub Date: 2025-01-18
DOI: 10.1016/j.sigpro.2025.109896
Signal Processing, vol. 231, Article 109896
Pub Date: 2025-01-18
DOI: 10.1016/j.sigpro.2025.109899
Tianqi Li, Nan Chen, Zhike Peng, Qingbo He
The escalating integration of multi-sensor systems across diverse high-dimensional monitoring areas generates vast quantities of large-scale multivariate signals. However, the theoretical foundation for efficient multivariate signal processing is currently lacking, making it challenging to fully exploit the value of multivariate signals when rapid response is required. Here, we introduce super Fourier analysis (SFA), which extends traditional Fourier analysis with principles from multivariate statistics for highly efficient processing of multivariate signals. By integrating multi-channel information and reducing the data dimensionality, SFA inherently handles the correlation across channels and has low time complexity. Within the SFA framework, we derive and define the super Fourier series, the super Fourier transform, and the discrete super Fourier transform. The mode alignment and noise resilience properties of SFA are analyzed. As an example, variational mode decomposition, a classic univariate signal processing method, is extended to the multivariate context based on SFA. Our demonstrations include simulated signals, multi-channel electroencephalography, global sea surface temperature, and motion microscopy, highlighting SFA’s potential for rapid and large-scale multivariate signal processing. SFA’s efficiency and effectiveness promise applications in areas with large numbers of sensors or channels, making the processing of multivariate signals as simple as that of univariate signals.
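The "fuse channels, reduce dimensionality, then apply Fourier analysis" pattern the abstract describes can be illustrated with a PCA-plus-FFT toy. This is not the paper's definition of SFA, only a sketch of the general idea that correlated channels can be fused into one series before a single transform:

```python
import numpy as np

def pca_fourier(x):
    """Illustrative only: fuse channels by projecting onto the leading
    principal component, then take one FFT of the fused series, instead
    of one FFT per channel."""
    xc = x - x.mean(axis=0, keepdims=True)   # x: (n_samples, n_channels)
    cov = xc.T @ xc / (len(xc) - 1)
    w = np.linalg.eigh(cov)[1][:, -1]        # leading eigenvector
    fused = xc @ w                           # one series for all channels
    return np.fft.rfft(fused), w

# Two correlated channels carrying the same 5 Hz tone (fs = 100 Hz, 1 s).
t = np.arange(0, 1, 0.01)
tone = np.sin(2 * np.pi * 5 * t)
x = np.stack([tone, 0.8 * tone], axis=1)
spec, w = pca_fourier(x)
peak = int(np.argmax(np.abs(spec)))          # bin k corresponds to k Hz here
```

Because both channels share the tone, a single spectrum of the fused series carries the joint information at a fraction of the per-channel cost.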
Title: Super Fourier analysis: A highly efficient framework for multivariate signal processing
Signal Processing, vol. 231, Article 109899
Pub Date: 2025-01-18
DOI: 10.1016/j.sigpro.2025.109890
Weijun Long, Qinghua Zhao, Chuan Du
To solve the pattern synthesis problem of opportunistic array radar, a mathematical model for dynamic array-element pattern synthesis and an opportunity-unit reconstruction scheme, based on correlated chance programming under uncertain scenarios, are established. The model is optimized to maximize the reliability of meeting task requirements, and is solved by fuzzy simulation and mathematical analysis under the given constraints. In addition, a parallel differential evolution algorithm with secondary mutation (SMPDE) is proposed to improve the adaptability of opportunistic array radar to different environments and tasks. By constructing Hankel matrices from different ideal radar arrays, feasible and base regions for the initial population evolution can be obtained. On this basis, elements are added or deleted according to the requirements, and new elements are flexibly arranged through differential evolution to obtain the best performance. During the iteration, the time point of the secondary mutation is determined in real time from the evolution curve, and potentially inferior evolution is terminated promptly. Simulation and analysis show that the model achieves maximum reliability of the task target in the current environment for the pattern reconstruction problem, especially for opportunistic array radar with multiple dynamic elements. Moreover, the model can flexibly reconstruct and optimize the pattern according to different requirements under different conditions, further enhancing the overall robustness of the system.
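The differential-evolution core that SMPDE extends can be sketched as the classic DE/rand/1/bin generation step; the secondary-mutation scheduling and the Hankel-matrix-based initialization from the paper are not modeled here, and the sphere function stands in for the radar pattern objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def de_step(pop, fitness, f=0.5, cr=0.9):
    """One generation of classic DE/rand/1/bin (the base algorithm that
    SMPDE builds on): mutate with a scaled difference of two random
    members, crossover with the parent, keep the better of the two."""
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[a] + f * (pop[b] - pop[c])
        cross = rng.random(d) < cr
        cross[rng.integers(d)] = True          # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) <= fitness(pop[i]):  # greedy selection
            new_pop[i] = trial
    return new_pop

# Minimize the sphere function as a stand-in objective.
sphere = lambda x: float(np.sum(x ** 2))
pop = rng.normal(size=(20, 3))
for _ in range(100):
    pop = de_step(pop, sphere)
best = min(sphere(ind) for ind in pop)
```

The paper's contribution layers a real-time-triggered secondary mutation on top of this loop to escape stalled evolution curves.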
Title: Method for reconstructing the directional pattern of opportunistic array radar with dynamic elements
Signal Processing, vol. 231, Article 109890
Pub Date: 2025-01-17
DOI: 10.1016/j.sigpro.2024.109883
Qinghua Zhang, Liangtian He, Shaobing Gao, Liang-Jian Deng, Jun Liu
Deep image prior (DIP) has demonstrated remarkable efficacy in addressing various imaging inverse problems by capitalizing on the inherent biases of deep convolutional architectures to implicitly regularize the solutions. However, its application to color images has been hampered by the conventional DIP method’s treatment of color channels in isolation, ignoring their important inter-channel correlations. To mitigate this limitation, we extend the DIP framework from the real domain to the quaternion domain, introducing a novel quaternion-based deep image prior (QDIP) model specifically tailored for color image restoration. Moreover, to enhance the recovery performance of QDIP and alleviate its susceptibility to overfitting, we propose incorporating the concept of regularization by denoising (RED), which leverages existing denoisers to regularize inverse problems, and integrate the RED scheme into our QDIP model. Extensive experiments on color image denoising, deblurring, and super-resolution demonstrate that the proposed QDIP and QDIP-RED algorithms perform competitively with many state-of-the-art alternatives, in both quantitative and qualitative assessments. The code and data are available at: https://github.com/qiuxuanzhizi/QDIP-RED.
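The RED scheme referenced here has a simple gradient-descent form: the prior term penalizes the gap between an image and its denoised version. The sketch below pairs it with a toy moving-average denoiser on a 1-D denoising task; the paper instead couples RED with a quaternion DIP network, so everything below is an illustration of the RED update only:

```python
import numpy as np

def box_denoise(x):
    """Toy denoiser: 3-tap moving average (stands in for a learned one)."""
    padded = np.pad(x, 1, mode="edge")
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

def red_restore(y, lam=0.5, mu=0.2, n_iter=200):
    """Gradient-descent form of regularization by denoising (RED):
    minimize 0.5*||x - y||^2 + 0.5*lam*x^T(x - D(x)), whose gradient is
    (x - y) + lam*(x - D(x)) for a suitable denoiser D."""
    x = y.copy()
    for _ in range(n_iter):
        grad = (x - y) + lam * (x - box_denoise(x))
        x = x - mu * grad
    return x

rng = np.random.default_rng(1)
clean = np.ones(64)
noisy = clean + 0.3 * rng.normal(size=64)
restored = red_restore(noisy)
err_noisy = float(np.mean((noisy - clean) ** 2))
err_rest = float(np.mean((restored - clean) ** 2))
```

Swapping `box_denoise` for a stronger denoiser is exactly the plug-in flexibility that makes RED attractive as a companion to DIP.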
Title: Quaternion-based deep image prior with regularization by denoising for color image restoration
Signal Processing, vol. 231, Article 109883
Pub Date: 2025-01-13
DOI: 10.1016/j.sigpro.2025.109893
Congwei Feng, Huawei Chen
Wideband beamformers are known to be sensitive to sensor imperfections, especially for small-sized sensor arrays. Mean performance optimization (MPO) is a commonly used criterion for designing robust wideband beamformers in the presence of sensor imperfections; it aims to synthesize the mean beampattern. However, existing designs for robust wideband beamformers cannot guarantee precise control of the mean beampattern. In this paper, we propose an MPO-criterion-based robust design approach for steerable wideband beamformers (SWBs) using a weighted spatial response variation (SRV) measure. By exploiting the increased degrees of freedom provided by the weighted SRV, the proposed robust SWB design can achieve a frequency-invariant mean beampattern in which both the mainlobe inconsistency and the sidelobe level can be precisely controlled. We develop a theory, and a corresponding algorithm, for finding the weighting function of the weighted-SRV-based cost function that achieves precise mean beampattern control. Some insights into the effect of sensor imperfections on the achievable frequency invariance are also revealed. The effectiveness of the proposed design is verified by simulation results.
Title: Robust beampattern control for steerable frequency-invariant beamforming in the presence of sensor imperfections
Signal Processing, vol. 231, Article 109893
Pub Date: 2025-01-13
DOI: 10.1016/j.sigpro.2025.109886
Kai Wu , Jing Dong , Guifu Hu , Chang Liu , Wenwu Wang
Deep unfolding attempts to combine the interpretability of traditional model-based algorithms with the learning ability of deep neural networks by unrolling model-based algorithms as neural networks. Following this framework, several conventional dictionary learning algorithms have been unfolded into networks. However, existing deep unfolding networks for dictionary learning are built on formulations with pre-defined priors, e.g., the ℓ1-norm, or learn priors using convolutional neural networks with limited receptive fields. To address these issues, we propose a transformer-based deep unfolding network for dictionary learning (TDU-DLNet). The network is developed by unrolling a general formulation of dictionary learning with an implicit prior on the representation coefficients. The prior is learned by a transformer-based network in which an inter-stage feature fusion module is introduced to reduce information loss between stages. The effectiveness and superiority of the proposed method are validated on image denoising. Experiments on widely used datasets demonstrate that the proposed method achieves competitive results with fewer parameters compared with deep learning and other deep unfolding methods.
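Deep unfolding is easiest to see with ISTA, the sparse-coding iteration behind LISTA-style networks: each iteration below becomes one "layer", and in a trained unfolded network the matrices and thresholds would be learned parameters. This is a generic illustration of unrolling with a fixed ℓ1 prior, not TDU-DLNet itself:

```python
import numpy as np

def soft_threshold(x, theta):
    """Proximal operator of the l1-norm: the nonlinearity of ISTA."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(y, D, n_layers=500, theta=0.05):
    """Each 'layer' is one ISTA iteration
        x <- soft(x + (1/L) D^T (y - D x), theta / L).
    In a deep-unfolding network, D^T/L and theta become per-layer
    learnable weights; here they stay fixed."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(x + (D.T @ (y - D @ x)) / L, theta / L)
    return x

# Sparse-code recovery with a random unit-norm dictionary.
rng = np.random.default_rng(2)
D = rng.normal(size=(30, 50))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(50)
x_true[[3, 17]] = [1.0, -2.0]
y = D @ x_true
x_hat = unrolled_ista(y, D)
```

Unrolling then trains those per-layer parameters end-to-end, which is what lets a few learned layers match many fixed iterations.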
Title: TDU-DLNet: A transformer-based deep unfolding network for dictionary learning
Signal Processing, vol. 231, Article 109886
Pub Date: 2025-01-13
DOI: 10.1016/j.sigpro.2025.109891
Dongyao Bi, Lijun Zhang, Jie Chen
Underwater acoustic target recognition (UATR) is challenging due to the complex underwater environment and limited prior knowledge. Deep learning (DL)-based UATR methods have demonstrated their effectiveness by extracting discriminative features from time–frequency (T–F) spectrograms. However, existing methods lack robustness and the ability to capture the time–frequency correlation inherent in the T–F representation. To this end, we first introduce the Wavelet Scattering Transform (WST) to obtain the T–F scattering coefficients of underwater acoustic signals. We then treat the scattering coefficients as multivariate time-series data and design a new Two-Stream Time–Frequency (newTSTF) transformer. This model simultaneously extracts temporal and frequency-related features from the scattering coefficients, enhancing accuracy. In particular, we introduce a Non-stationary encoder to recover the temporal features lost during normalization. Experimental results on real-world data demonstrate that our model achieves high accuracy in UATR.
Title: A new Two-Stream Temporal-Frequency transformer network for underwater acoustic target recognition
Signal Processing, vol. 231, Article 109891
Pub Date: 2025-01-13
DOI: 10.1016/j.sigpro.2025.109894
Jasin Machkour, Michael Muma, Daniel P. Palomar
We propose the Terminating-Random Experiments (T-Rex) selector, a fast variable selection method for high-dimensional data. The T-Rex selector controls a user-defined target false discovery rate (FDR) while maximizing the number of selected variables. This is achieved by fusing the solutions of multiple early terminated random experiments. The experiments are conducted on a combination of the original predictors and multiple sets of randomly generated dummy predictors. A finite-sample proof of the FDR control property, based on martingale theory, is provided. Numerical simulations confirm that the FDR is controlled at the target level while allowing for high power. We prove that the dummies can be sampled from any univariate probability distribution with finite expectation and variance. The computational complexity of the proposed method is linear in the number of variables. The T-Rex selector outperforms state-of-the-art methods for FDR control in numerical experiments and on a simulated genome-wide association study (GWAS), while its sequential computation time is more than two orders of magnitude lower than that of the strongest benchmark methods. The open-source R package TRexSelector, containing the implementation of the T-Rex selector, is available on CRAN.
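A single early-terminated random experiment can be sketched as follows. The dummy count, the correlation-based forward selection, and the stopping rule below are simplifications chosen for illustration: the actual selector fuses many such runs and calibrates the number of dummies and the termination point to the target FDR:

```python
import numpy as np

rng = np.random.default_rng(3)

def one_random_experiment(X, y, n_dummies=50, t_stop=1):
    """One run in the spirit of T-Rex: append random dummy predictors,
    forward-select by absolute correlation with the residual, and stop
    as soon as `t_stop` dummies have been selected. Real variables
    chosen before termination are the run's candidate set."""
    n, p = X.shape
    dummies = rng.normal(size=(n, n_dummies))   # any finite-variance dist. works
    Xa = np.hstack([X, dummies])
    resid = y.astype(float).copy()
    selected, hit_dummies = [], 0
    while hit_dummies < t_stop:
        corr = np.abs(Xa.T @ resid) / np.linalg.norm(Xa, axis=0)
        corr[selected] = -np.inf                # never re-pick a variable
        j = int(np.argmax(corr))
        selected.append(j)
        if j >= p:                              # a dummy was caught
            hit_dummies += 1
        else:                                   # deflate by the real predictor
            xj = Xa[:, j]
            resid -= (xj @ resid) / (xj @ xj) * xj
    return [j for j in selected if j < p]

n, p = 200, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = 5.0
y = X @ beta + rng.normal(size=n)
picked = one_random_experiment(X, y)
```

Because dummies are known nulls, the rate at which they get caught calibrates how many of the picked real variables are likely false discoveries.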
Title: The terminating-random experiments selector: Fast high-dimensional variable selection with false discovery rate control
Signal Processing, vol. 231, Article 109894
Pub Date: 2025-01-12
DOI: 10.1016/j.sigpro.2024.109880
Zhe Zhang, Ke Wang, Lan Cheng, Xinying Xu
In recent years, single-image super-resolution has made great progress thanks to the rapid development of deep learning, but texture recovery in images with complex scenes remains challenging. To improve texture recovery, we propose an adaptive image super-resolution method with a multi-feature prior to model the diverse mapping relations from low-resolution images to their high-resolution counterparts. Experimental results show that the proposed method recovers more faithful and vivid textures than static methods and other adaptive methods based on a single-feature prior. The proposed dynamic module can be flexibly introduced into any static model to further improve its performance. Our code is available at: https://github.com/zzsmg/ASRMF.
Title: ASRMF: Adaptive image super-resolution based on dynamic-parameter DNN with multi-feature prior
Signal Processing, vol. 231, Article 109880