Convolutional neural network-based infrared image super resolution under low light environment
Pub Date: 2017-08-01, DOI: 10.23919/EUSIPCO.2017.8081318
Tae Young Han, Yong Jun Kim, B. Song
Convolutional neural networks (CNNs) have been successfully applied to visible-image super-resolution (SR). In this paper, we propose a CNN-based SR algorithm for up-scaling near-infrared (NIR) images under low-light conditions using the corresponding visible image. Our algorithm first extracts high-frequency (HF) components from the low-resolution (LR) NIR image and its corresponding high-resolution (HR) visible image, and then feeds them as multiple inputs to the CNN. Next, the CNN outputs the HR HF component of the input NIR image. Finally, the HR NIR image is synthesized by adding the HR HF component to the up-scaled LR NIR image. Simulation results show that the proposed algorithm outperforms state-of-the-art methods in terms of both qualitative and quantitative metrics.
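A minimal illustrative sketch of the HF-decomposition and synthesis pipeline described in this abstract, not the authors' implementation: the Gaussian low-pass split, the x4 scale factor, and the placeholder `cnn` callable are assumptions added here for illustration.

```python
# Illustrative sketch of the HF-decomposition SR pipeline described in the abstract.
# The Gaussian low-pass split, the scale factor, and the placeholder `cnn` are
# assumptions; the paper's actual CNN architecture is not reproduced here.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def high_freq(img, sigma=2.0):
    """Return the high-frequency residual of an image (image minus low-pass)."""
    return img - gaussian_filter(img, sigma)

def super_resolve_nir(lr_nir, hr_visible, cnn, scale=4):
    """Fuse an LR NIR image with a registered HR visible guide image."""
    up_nir = zoom(lr_nir, scale, order=3)            # spline up-scaling of the NIR input
    hf_nir = high_freq(up_nir)                       # HF component of the up-scaled NIR image
    hf_vis = high_freq(hr_visible)                   # HF component of the guiding visible image
    hf_hr = cnn(np.stack([hf_nir, hf_vis], axis=0))  # CNN predicts the HR HF component
    return up_nir + hf_hr                            # synthesize the HR NIR image

# Dummy "CNN" (identity on the NIR channel) just to make the sketch executable.
dummy_cnn = lambda x: x[0]
out = super_resolve_nir(np.random.rand(32, 32), np.random.rand(128, 128), dummy_cnn)
print(out.shape)  # (128, 128)
```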
{"title":"Convolutional neural network-based infrared image super resolution under low light environment","authors":"Tae Young Han, Yong Jun Kim, B. Song","doi":"10.23919/EUSIPCO.2017.8081318","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081318","url":null,"abstract":"Convolutional neural networks (CNN) have been successfully applied to visible image super-resolution (SR) methods. In this paper, for up-scaling near-infrared (NIR) image under low light environment, we propose a CNN-based SR algorithm using corresponding visible image. Our algorithm firstly extracts high-frequency (HF) components from low-resolution (LR) NIR image and its corresponding high-resolution (HR) visible image, and then takes them as the multiple inputs of the CNN. Next, the CNN outputs HR HF component of the input NIR image. Finally, HR NIR image is synthesized by adding the HR HF component to the up-scaled LR NIR image. Simulation results show that the proposed algorithm outperforms the state-of-the-art methods in terms of qualitative as well as quantitative metrics.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123484504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-complexity non-uniform penalized affine projection algorithms for active noise control
Pub Date: 2017-08-01, DOI: 10.23919/EUSIPCO.2017.8081413
F. Albu, Yingsong Li, Yanyan Wang
This paper describes new algorithms that incorporate a non-uniform norm constraint into the zero-attracting and reweighted modified filtered-x affine projection (or pseudo affine projection) algorithms for active noise control. Simulations indicate that the proposed algorithms achieve better performance for primary and secondary paths with various sparseness levels, with only an insignificant increase in numerical complexity. It is also shown that the version using a linear function instead of the reweighted term leads to the best results, particularly for combinations of sparse or semi-sparse primary and secondary paths.
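As a rough illustration of the sparsity-promoting ("zero-attracting") idea mentioned above, here is a sketch of a zero-attracting filtered-x LMS update, a much simpler relative of the penalized affine projection algorithms in the paper; the step size, the attraction weight, and the identity secondary path are assumptions made only for this example.

```python
# Hedged sketch: a zero-attracting filtered-x LMS update, a simpler relative of the
# penalized affine projection algorithms discussed above. Step size mu, attraction
# weight rho, and the identity secondary path are illustrative assumptions.
import numpy as np

def zafxlms_step(w, x_buf, x_filt_buf, d, mu=1e-2, rho=1e-4):
    """One adaptation step.
    w          : current control-filter weights (length L)
    x_buf      : last L reference samples (newest first)
    x_filt_buf : reference filtered through the secondary-path estimate
    d          : disturbance measured at the error microphone
    """
    y = np.dot(w, x_buf)              # anti-noise from the control filter
    e = d + y                         # error signal (secondary path taken as identity here)
    grad = mu * e * x_filt_buf        # standard filtered-x LMS gradient term
    zero_attract = rho * np.sign(w)   # l1-type penalty pulling small taps toward zero
    return w - grad - zero_attract, e

w = np.zeros(8)
x = np.random.randn(8)
w, e = zafxlms_step(w, x, x, d=0.5)
print(e)
```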
{"title":"Low-complexity non-uniform penalized affine projection algorithms for active noise control","authors":"F. Albu, Yingsong Li, Yanyan Wang","doi":"10.23919/EUSIPCO.2017.8081413","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081413","url":null,"abstract":"This paper describes new algorithms that incorporates the non-uniform norm constraint into the zero-attracting and reweighted modified filtered-x affine projection or pseudo affine projection algorithms for active noise control. The simulations indicate that the proposed algorithms can obtain better performance for primary and secondary paths with various sparseness levels with insignificant numerical complexity increase. It is also shown that the version using a linear function instead of the reweighted term leads to the best results, particularly for combinations of sparse or semi-sparse primary and secondary paths.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"395 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124448014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Concept detection and face pose estimation using lightweight convolutional neural networks for steering drone video shooting
Pub Date: 2017-08-01, DOI: 10.23919/EUSIPCO.2017.8081171
N. Passalis, A. Tefas
Unmanned Aerial Vehicles, also known as drones, are becoming increasingly popular for video shooting tasks since they are capable of capturing spectacular aerial shots. Deep learning techniques, such as Convolutional Neural Networks (CNNs), can be utilized to assist various aspects of the flying and shooting process, allowing one human to operate one or more drones at once. However, using deep learning techniques on drones is not straightforward because of their computational power and memory constraints. In this work, a quantization-based method for learning lightweight convolutional networks is proposed. The ability of the proposed approach to significantly reduce the model size and increase both the feed-forward speed and the accuracy is demonstrated on two different drone-related tasks, i.e., human concept detection and face pose estimation.
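For context, the sketch below shows one common way to shrink a network for embedded use, codebook (k-means) quantization of layer weights; it is not necessarily the quantization scheme proposed in the paper, and the layer shape and codebook size are arbitrary assumptions.

```python
# Generic illustration of codebook (k-means) weight quantization for model
# compression. Not necessarily the paper's scheme; shapes and codebook size
# are arbitrary assumptions.
import numpy as np
from scipy.cluster.vq import kmeans2

def quantize_weights(w, n_centroids=16):
    """Replace each weight by its nearest centroid; store centroids + indices."""
    flat = w.reshape(-1, 1).astype(np.float64)
    centroids, labels = kmeans2(flat, n_centroids, minit="++")
    return centroids.ravel(), labels.astype(np.uint8).reshape(w.shape)

def dequantize(centroids, labels):
    return centroids[labels]

w = np.random.randn(64, 3, 3, 3).astype(np.float32)   # a small conv kernel tensor
centroids, idx = quantize_weights(w)
w_hat = dequantize(centroids, idx)
print("reconstruction error:", float(np.mean((w - w_hat) ** 2)))
```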
{"title":"Concept detection and face pose estimation using lightweight convolutional neural networks for steering drone video shooting","authors":"N. Passalis, A. Tefas","doi":"10.23919/EUSIPCO.2017.8081171","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081171","url":null,"abstract":"Unmanned Aerial Vehicles, also known as drones, are becoming increasingly popular for video shooting tasks since they are capable of capturing spectacular aerial shots. Deep learning techniques, such as Convolutional Neural Networks (CNNs), can be utilized to assist various aspects of the flying and the shooting process allowing one human to operate one or more drones at once. However, using deep learning techniques on drones is not straightforward since computational power and memory constraints exist. In this work, a quantization-based method for learning lightweight convolutional networks is proposed. The ability of the proposed approach to significantly reduce the model size and increase both the feed-forward speed and the accuracy is demonstrated on two different drone-related tasks, i.e., human concept detection and face pose estimation.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124122885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of I-vector and GMM-UBM approaches to speaker identification with TIMIT and NIST 2008 databases in challenging environments
Pub Date: 2017-08-01, DOI: 10.23919/eusipco.2017.8081264
Musab T. S. Al-Kaltakchi, W. L. Woo, S. Dlay, J. Chambers
In this paper, two models, the I-vector and the Gaussian Mixture Model-Universal Background Model (GMM-UBM), are compared for the speaker identification task. Four feature combinations of I-vectors with seven fusion techniques are considered: maximum, mean, weighted sum, cumulative, interleaving, and concatenation for both two and four features. In addition, an Extreme Learning Machine (ELM) is exploited to identify speakers, and the Speaker Identification Accuracy (SIA) is then calculated. Both systems are evaluated on 120 speakers from the TIMIT and NIST 2008 databases for clean speech. Furthermore, a comprehensive evaluation is made under Additive White Gaussian Noise (AWGN) conditions and with three types of Non-Stationary Noise (NSN), both with and without handset effects, for the TIMIT database. The results show that the I-vector approach is better than the GMM-UBM for both clean and AWGN conditions without a handset, whereas the GMM-UBM achieves better accuracy for the NSN types.
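A small sketch of the kind of score-level fusion rules listed in this abstract (maximum, mean, weighted sum); the scores and weights below are toy values, not the paper's features or tuned parameters.

```python
# Hedged sketch of simple score-level fusion rules (maximum, mean, weighted sum).
# Toy values only; not the paper's feature streams or tuned weights.
import numpy as np

def fuse_scores(score_sets, rule="mean", weights=None):
    """score_sets: list of per-speaker score vectors from different feature streams."""
    s = np.vstack(score_sets)                 # shape (n_streams, n_speakers)
    if rule == "max":
        fused = s.max(axis=0)
    elif rule == "mean":
        fused = s.mean(axis=0)
    elif rule == "wsum":
        w = np.asarray(weights, dtype=float)
        fused = (w[:, None] * s).sum(axis=0) / w.sum()
    else:
        raise ValueError(rule)
    return int(np.argmax(fused))              # index of the identified speaker

stream_a = np.array([0.1, 0.7, 0.2])          # scores from one feature stream
stream_b = np.array([0.3, 0.4, 0.3])          # scores from another stream
print(fuse_scores([stream_a, stream_b], rule="wsum", weights=[0.6, 0.4]))
```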
{"title":"Comparison of I-vector and GMM-UBM approaches to speaker identification with TIMIT and NIST 2008 databases in challenging environments","authors":"Musab T. S. Al-Kaltakchi, W. L. Woo, S. Dlay, J. Chambers","doi":"10.23919/eusipco.2017.8081264","DOIUrl":"https://doi.org/10.23919/eusipco.2017.8081264","url":null,"abstract":"In this paper, two models, the I-vector and the Gaussian Mixture Model-Universal Background Model (GMM-UBM), are compared for the speaker identification task. Four feature combinations of I-vectors with seven fusion techniques are considered: maximum, mean, weighted sum, cumulative, interleaving and concatenated for both two and four features. In addition, an Extreme Learning Machine (ELM) is exploited to identify speakers, and then Speaker Identification Accuracy (SIA) is calculated. Both systems are evaluated for 120 speakers from the TIMIT and NIST 2008 databases for clean speech. Furthermore, a comprehensive evaluation is made under Additive White Gaussian Noise (AWGN) conditions and with three types of Non Stationary Noise (NSN), both with and without handset effects for the TIMIT database. The results show that the I-vector approach is better than the GMM-UBM for both clean and AWGN conditions without a handset. However, the GMM-UBM had better accuracy for NSN types.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126546217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A collaborative method for positioning based on GNSS inter agent range estimation
Pub Date: 2017-08-01, DOI: 10.23919/EUSIPCO.2017.8081704
Alex Minetto, C. Cristodaro, F. Dovis
The limited availability and lack of continuity of Global Navigation Satellite System (GNSS) service in harsh environments is a critical issue for Intelligent Transport Systems (ITS) applications that rely on positioning. This work is developed within the framework of vehicle-to-everything (V2X) communication, with the aim of guaranteeing continuous position availability to all agents in the network when GNSS is unavailable for a subset of them. The simultaneous observation of shared satellites is exploited to estimate the Non-Line-Of-Sight Inter-Agent Range (IAR) within a real-time-connected network of receivers. The effectiveness of a hybrid localization algorithm based on the integration of standard GNSS measurements and linearised IAR estimates is demonstrated. The hybrid position estimation is solved through a self-adaptive iterative algorithm to find the position of receivers experiencing GNSS outages.
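The sketch below illustrates the generic building block behind such hybrid fixes, an iterative (Gauss-Newton) least-squares position estimate from range measurements to points with known positions, whether satellites or collaborating agents; the anchor coordinates, noise level, and iteration count are toy assumptions, not the paper's algorithm or data.

```python
# Hedged sketch: Gauss-Newton least-squares positioning from range measurements to
# known anchor positions (satellites or collaborating agents). Toy values only.
import numpy as np

def ls_position(anchors, ranges, x0, n_iter=10):
    """anchors: (N, 2) known positions; ranges: (N,) measured distances; x0: initial guess."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        diff = x - anchors                      # vectors from each anchor to the estimate
        pred = np.linalg.norm(diff, axis=1)     # predicted ranges
        H = diff / pred[:, None]                # Jacobian of the range model (unit vectors)
        dx, *_ = np.linalg.lstsq(H, ranges - pred, rcond=None)
        x = x + dx                              # Gauss-Newton update
    return x

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
truth = np.array([30.0, 60.0])
ranges = np.linalg.norm(anchors - truth, axis=1) + np.random.normal(0, 0.5, 4)
print(ls_position(anchors, ranges, x0=[50.0, 50.0]))
```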
{"title":"A collaborative method for positioning based on GNSS inter agent range estimation","authors":"Alex Minetto, C. Cristodaro, F. Dovis","doi":"10.23919/EUSIPCO.2017.8081704","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081704","url":null,"abstract":"The limited availability and the lack of continuity in the service of Global Positioning Satellite Systems (GNSS) in harsh environments is a critical issue for Intelligent Transport Systems (ITS) applications relying on the position. This work is developed within the framework of vehicle-to-everything (V2X) communication, with the aim to guarantee a continuous position availability to all the agents belonging to the network when GNSS is not available for a subset of them. The simultaneous observation of shared satellites is exploited to estimate the Non-Line-Of-Sight Inter-Agent Range within a real-time-connected network of receivers. It is demonstrated the effectiveness of a hybrid localization algorithm based on the the integration of standard GNSS measurements and linearised IAR estimates. The hybrid position estimation is solved through a self-adaptive iterative algorithm to find the position of receivers experiencing GNSS outages.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125447000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Oriented asymmetric kernels for corner detection
Pub Date: 2017-08-01, DOI: 10.23919/eusipco.2017.8081313
H. Abdulrahman, Baptiste Magnier, P. Montesinos
Corners and junctions play an important role in many image analysis applications. Nevertheless, the features extracted by the majority of the algorithms proposed in the literature do not correspond to the exact position of the corners. In this paper, an approach for corner detection based on the combination of different asymmetric kernels is proposed. The information captured by the directional kernels makes it possible to describe precisely all the grayscale variations and the directions of the edges crossing around the considered pixel. Compared with other corner detection algorithms on synthetic and real images, the proposed approach remains more stable and more robust to noise.
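In the same spirit of directional analysis, the sketch below applies a bank of oriented, elongated filters around each pixel; the kernel shape, the angles, and the crude min/max response combination are illustrative assumptions and do not reproduce the asymmetric kernels proposed in the paper.

```python
# Generic sketch of an oriented (anisotropic) filter bank applied to an image.
# Kernel shape, angles, and the response combination are illustrative assumptions,
# not the paper's asymmetric kernels.
import numpy as np
from scipy.ndimage import convolve, rotate

def oriented_kernel(size=15, sigma_u=4.0, sigma_v=1.0, angle=0.0):
    """Elongated Gaussian derivative kernel rotated to the given angle (degrees)."""
    ax = np.arange(size) - size // 2
    u, v = np.meshgrid(ax, ax, indexing="ij")
    g = np.exp(-(u**2 / (2 * sigma_u**2) + v**2 / (2 * sigma_v**2)))
    k = -v / sigma_v**2 * g                          # derivative across the short axis
    return rotate(k, angle, reshape=False, order=1)

def directional_responses(img, angles):
    return np.stack([convolve(img, oriented_kernel(angle=a)) for a in angles])

img = np.zeros((64, 64)); img[:32, :32] = 1.0        # a synthetic corner
resp = directional_responses(img, angles=np.arange(0, 180, 22.5))
corner_map = resp.max(axis=0) - resp.min(axis=0)     # crude measure of directional spread
print(corner_map.shape)
```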
{"title":"Oriented asymmetric kernels for corner detection","authors":"H. Abdulrahman, Baptiste Magnier, P. Montesinos","doi":"10.23919/eusipco.2017.8081313","DOIUrl":"https://doi.org/10.23919/eusipco.2017.8081313","url":null,"abstract":"Corners and junctions play an important role in many image analysis applications. Nevertheless, these features extracted by the majority of the proposed algorithms in the literature do not correspond to the exact position of the corners. In this paper, an approach for corner detection based on the combination of different asymmetric kernels is proposed. Informations captured by the directional kernels enable to describe precisely all the grayscale variations and the directions of the crossing edges around the considered pixel. Compared to other corner detection algorithms on synthetic and real images, the proposed approach remains more stable and robust to noise than the comparative methods.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125606323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Denoising galaxy spectra with coupled dictionary learning
Pub Date: 2017-08-01, DOI: 10.23919/EUSIPCO.2017.8081257
K. Fotiadou, Grigorios Tsagkatakis, B. Moraes, F. Abdalla, P. Tsakalides
The Euclid satellite aims to accurately measure the global properties of the Universe, with particular emphasis on the properties of the mysterious Dark Energy that is driving the acceleration of its expansion. One of its two main observational probes relies on accurate measurements of the radial distances of galaxies through the identification of important features in their individual light spectra that are redshifted due to their receding velocity. However, several challenges for robust automated spectroscopic redshift estimation remain unsolved, one of which is the characterization of the types of spectra present in the observed galaxy population. This paper proposes a denoising technique that exploits the mathematical frameworks of Sparse Representations and Coupled Dictionary Learning, and tests it on simulated Euclid-like noisy spectroscopic templates. The reconstructed spectral profiles are able to improve the accuracy, reliability, and robustness of automated redshift estimation methods. The key contribution of this work is the design of a novel model that considers coupled feature spaces, composed of high- and low-quality spectral profiles, when applied to the spectroscopic data denoising problem. The coupled dictionary learning technique is formulated within the context of the Alternating Direction Method of Multipliers, optimizing each variable via closed-form expressions. Experimental results suggest that the proposed coupled dictionary learning scheme successfully reconstructs spectral profiles from their corresponding noisy versions, even under extreme noise scenarios.
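The coupled-dictionary idea can be illustrated with a small sketch: sparse-code a noisy spectrum in a "low-quality" dictionary and reconstruct it with the coupled "high-quality" dictionary that shares the same sparse codes. The random dictionaries and the OMP coding below are assumptions for illustration; the paper learns the coupled dictionaries via ADMM, which is not shown here.

```python
# Hedged sketch of coupled-dictionary denoising: code in the LQ dictionary,
# reconstruct with the coupled HQ dictionary. Random dictionaries and OMP coding
# are illustrative; the paper's ADMM-based dictionary learning is not shown.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
n_features, n_atoms, k = 128, 256, 8

D_hq = rng.standard_normal((n_features, n_atoms))            # "high-quality" dictionary
D_hq /= np.linalg.norm(D_hq, axis=0)
D_lq = D_hq + 0.1 * rng.standard_normal(D_hq.shape)          # coupled "low-quality" dictionary

code = np.zeros(n_atoms)
code[rng.choice(n_atoms, k, replace=False)] = rng.standard_normal(k)
clean = D_hq @ code
noisy = D_lq @ code + 0.05 * rng.standard_normal(n_features)

alpha = orthogonal_mp(D_lq, noisy, n_nonzero_coefs=k)        # sparse code w.r.t. LQ dictionary
denoised = D_hq @ alpha                                      # reconstruct with HQ dictionary
print("relative error:", np.linalg.norm(denoised - clean) / np.linalg.norm(clean))
```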
{"title":"Denoising galaxy spectra with coupled dictionary learning","authors":"K. Fotiadou, Grigorios Tsagkatakis, B. Moraes, F. Abdalla, P. Tsakalides","doi":"10.23919/EUSIPCO.2017.8081257","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081257","url":null,"abstract":"The Euclid satellite aims to measure accurately the global properties of the Universe, with particular emphasis on the properties of the mysterious Dark Energy that is driving the acceleration of its expansion. One of its two main observational probes relies on accurate measurements of the radial distances of galaxies through the identification of important features in their individual light spectra that are redshifted due to their receding velocity. However, several challenges for robust automated spectroscopic redshift estimation remain unsolved, one of which is the characterization of the types of spectra present in the observed galaxy population. This paper proposes a denoising technique that exploits the mathematical frameworks of Sparse Representations and Coupled Dictionary Learning, and tests it on simulated Euclid-like noisy spectroscopic templates. The reconstructed spectral profiles are able to improve the accuracy, reliability and robustness of automated redshift estimation methods. The key contribution of this work is the design of a novel model which considers coupled feature spaces, composed of high- and low-quality spectral profiles, when applied to the spectroscopic data denoising problem. The coupled dictionary learning technique is formulated within the context of the Alternating Direction Method of Multipliers, optimizing each variable via closed-form expressions. Experimental results suggest that the proposed powerful coupled dictionary learning scheme reconstructs successfully spectral profiles from their corresponding noisy versions, even with extreme noise scenarios.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128191061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic prediction of spirometry readings from cough and wheeze for monitoring of asthma severity
Pub Date: 2017-08-01, DOI: 10.23919/EUSIPCO.2017.8081165
MV AchuthRao, N. Kausthubha, Shivani Yadav, D. Gope, U. Krishnaswamy, P. Ghosh
We consider the task of automatically predicting spirometry readings from cough and wheeze audio signals for asthma severity monitoring. Spirometry is a pulmonary function test used to measure forced expiratory volume in one second (FEV1) and forced vital capacity (FVC) when a subject exhales into the spirometry sensor after taking a deep breath. FEV1%, FVC%, and their ratio are typically used to determine asthma severity. Accurate prediction of these spirometry readings from cough and wheeze could help patients non-invasively monitor their asthma severity in the absence of spirometry. We use the statistical spectrum description (SSD) as the cue from the cough and wheeze signals to predict the spirometry readings using support vector regression (SVR). We perform experiments with cough and wheeze recordings from 16 healthy persons and 12 patients. We find that cough is a better predictor of spirometry readings than the wheeze signal. FEV1%, FVC%, and their ratio are predicted with root mean squared errors of 11.06%, 10.3%, and 0.08, respectively. We also perform a three-class asthma severity level classification with the predicted FEV1% and obtain an accuracy of 77.77%.
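A minimal sketch of the regression step described above, support vector regression from audio-derived features to FEV1%. The random feature matrix stands in for the SSD cue, and the targets, split, and hyperparameters are illustrative assumptions rather than the paper's setup.

```python
# Hedged sketch: SVR from audio-derived features to FEV1%. The random "features"
# stand in for the SSD cue; data and hyperparameters are illustrative only.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.standard_normal((28, 40))                 # one SSD-like feature vector per subject
y = 60 + 25 * rng.random(28)                      # synthetic FEV1% targets

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X[:20], y[:20])                         # train on the first 20 subjects
pred = model.predict(X[20:])                      # predict FEV1% for the held-out subjects
print("RMSE:", mean_squared_error(y[20:], pred) ** 0.5)
```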
{"title":"Automatic prediction of spirometry readings from cough and wheeze for monitoring of asthma severity","authors":"MV AchuthRao, N. Kausthubha, Shivani Yadav, D. Gope, U. Krishnaswamy, P. Ghosh","doi":"10.23919/EUSIPCO.2017.8081165","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081165","url":null,"abstract":"We consider the task of automatically predicting spirometry readings from cough and wheeze audio signals for asthma severity monitoring. Spirometry is a pulmonary function test used to measure forced expiratory volume in one second (FEV1) and forced vital capacity (FVC) when a subject exhales in the spirometry sensor after taking a deep breath. FEV1%, FVC% and their ratio are typically used to determine the asthma severity. Accurate prediction of these spirometry readings from cough and wheeze could help patients to non-invasively monitor their asthma severity in the absence of spirometry. We use statistical spectrum description (SSD) as the cue from cough and wheeze signal to predict the spirometry readings using support vector regression (SVR). We perform experiments with cough and wheeze recordings from 16 healthy persons and 12 patients. We find that the coughs are better predictor of spirometry readings compared to the wheeze signal. FEV1%, FVC% and their ratio are predicted with root mean squared error of 11.06%, 10.3% and 0.08 respectively. We also perform a three class asthma severity level classification with predicted FEV1% and obtain an accuracy of 77.77%.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"287 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115891046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Crowdsource-based signal strength field estimation by Gaussian processes
Pub Date: 2017-08-01, DOI: 10.23919/EUSIPCO.2017.8081401
Irene Santos, P. Djurić
We address the problem of estimating a spatial field of signal strength from measurements of low accuracy. The measurements are obtained by users whose locations are inaccurately estimated. The spatial field is defined on a grid of nodes with known locations. The users report their locations and received signal strength to a central unit where all the measurements are processed. After the measurements are processed, the estimated spatial field of signal strength is updated. We use a propagation model of the signal that includes an unknown path loss exponent. Furthermore, our model takes into account the inaccurate locations of the reporting users. In this paper, we employ a Bayesian approach to crowdsourcing that is based on Gaussian Processes. Unlike methods that provide only point estimates, this approach yields the complete joint distribution of the spatial field. We demonstrate the performance of our method and compare it with that of other methods through computer simulations. The results show that our approach outperforms the other approaches.
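The sketch below shows the generic ingredients of such an estimator, a log-distance path-loss model generating noisy reports and a Gaussian-process regression over a grid of nodes. The transmitter location, kernel, and noise levels are illustrative assumptions and do not capture the paper's treatment of unknown path-loss exponents or inaccurate user locations.

```python
# Hedged sketch: GP regression of a signal-strength field on a grid from noisy,
# scattered user reports. Model parameters and kernel are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
tx = np.array([50.0, 50.0])                                  # assumed transmitter location

def rss(p, p0=-30.0, path_loss_exp=3.0):
    d = np.linalg.norm(p - tx, axis=1) + 1.0
    return p0 - 10.0 * path_loss_exp * np.log10(d)           # log-distance model (dB)

reports = rng.uniform(0, 100, size=(60, 2))                  # reported user locations
y = rss(reports) + rng.normal(0, 2.0, 60)                    # noisy strength reports

gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(4.0), normalize_y=True)
gp.fit(reports, y)

gx, gy = np.meshgrid(np.linspace(0, 100, 25), np.linspace(0, 100, 25))
grid = np.column_stack([gx.ravel(), gy.ravel()])             # grid of field nodes
mean, std = gp.predict(grid, return_std=True)                # predictive mean and uncertainty
print(mean.shape, std.shape)
```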
{"title":"Crowdsource-based signal strength field estimation by Gaussian processes","authors":"Irene Santos, P. Djurić","doi":"10.23919/EUSIPCO.2017.8081401","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081401","url":null,"abstract":"We address the problem of estimating a spatial field of signal strength from measurements of low accuracy. The measurements are obtained by users whose locations are inaccurately estimated. The spatial field is defined on a grid of nodes with known locations. The users report their locations and received signal strength to a central unit where all the measurements are processed. After the processing of the measurements, the estimated spatial field of signal strength is updated. We use a propagation model of the signal that includes an unknown path loss exponent. Furthermore, our model takes into account the inaccurate locations of the reporting users. In this paper, we employ a Bayesian approach for crowdsourcing that is based on Gaussian Processes. Unlike methods that provide only point estimates, with this approach we get the complete joint distribution of the spatial field. We demonstrate the performance of our method and compare it with the performance of some other methods by computer simulations. The results show that our approach outperforms the other approaches.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131362023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A global optimization approach for rational sparsity promoting criteria
Pub Date: 2017-08-01, DOI: 10.23919/EUSIPCO.2017.8081188
M. Castella, J. Pesquet
We consider the problem of recovering an unknown signal observed through a nonlinear model and corrupted by additive noise. More precisely, the nonlinear degradation consists of a convolution followed by a nonlinear rational transform. As prior information, the original signal is assumed to be sparse. We tackle the problem by minimizing a least-squares fit criterion penalized by a Geman-McClure-like potential. In order to find a globally optimal solution to this rational minimization problem, we transform it into a generalized moment problem, for which a hierarchy of semidefinite programming relaxations can be used. To overcome computational limitations on the number of involved variables, the structure of the problem is carefully exploited, yielding a sparse relaxation able to deal with up to several hundred optimized variables. Our experiments show the good performance of the proposed approach.
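To make the criterion concrete, the sketch below writes out a least-squares data term penalized by a Geman-McClure-type potential and minimizes it with a generic local optimizer. The paper's contribution is a global solution via a moment/SDP relaxation hierarchy, which this sketch does not implement; the operator H, the observations, and the parameters lambda and delta are arbitrary toy values.

```python
# Hedged sketch: least-squares fit penalized by a Geman-McClure-like potential,
# minimized with a generic *local* optimizer. The paper's global moment/SDP
# relaxation is not implemented here; all values are toy assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, m = 20, 40
x_true = np.zeros(n)
x_true[rng.choice(n, 3, replace=False)] = rng.standard_normal(3)   # a sparse ground truth
H = rng.standard_normal((m, n))                                    # toy linear degradation
y = H @ x_true + 0.01 * rng.standard_normal(m)

lam, delta = 0.5, 0.1

def objective(x):
    fit = np.sum((y - H @ x) ** 2)                 # least-squares data term
    penalty = np.sum(x**2 / (x**2 + delta**2))     # Geman-McClure-like potential
    return fit + lam * penalty

res = minimize(objective, x0=np.zeros(n), method="L-BFGS-B")
print("support found:", np.flatnonzero(np.abs(res.x) > 1e-2))
```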
{"title":"A global optimization approach for rational sparsity promoting criteria","authors":"M. Castella, J. Pesquet","doi":"10.23919/EUSIPCO.2017.8081188","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081188","url":null,"abstract":"We consider the problem of recovering an unknown signal observed through a nonlinear model and corrupted with additive noise. More precisely, the nonlinear degradation consists of a convolution followed by a nonlinear rational transform. As a prior information, the original signal is assumed to be sparse. We tackle the problem by minimizing a least-squares fit criterion penalized by a Geman-McClure like potential. In order to find a globally optimal solution to this rational minimization problem, we transform it in a generalized moment problem, for which a hierarchy of semidefinite programming relaxations can be used. To overcome computational limitations on the number of involved variables, the structure of the problem is carefully addressed, yielding a sparse relaxation able to deal with up to several hundreds of optimized variables. Our experiments show the good performance of the proposed approach.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132309331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}