Performance limits of energy detection systems with massive receiver arrays
Pub Date: 2015-12-16 | DOI: 10.1109/CAMSAP.2015.7383772
Lishuai Jing, Z. Utkovski, E. Carvalho, P. Popovski
Energy detection (ED) is an attractive technique for symbol detection at receivers equipped with a large number of antennas, for example in millimeter wave communication systems. This paper investigates the performance bounds of ED with pulse amplitude modulation (PAM) in large antenna arrays under single-stream transmission and fast-fading assumptions. The analysis leverages information-theoretic tools and a semi-numerical approach to provide bounds on the information rate, which are shown to be tight in the low and high signal-to-noise ratio (SNR) regimes, respectively. For a fixed constellation size, the impact of the number of antennas and SNR on the achievable information rate is investigated. Based on the results, heuristics are provided for choosing the cardinality of the adaptive modulation scheme as a function of the SNR and the number of antennas.
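As a concrete illustration of the detection principle analysed here, the following sketch simulates non-coherent energy detection of a PAM symbol with a large receive array under i.i.d. Rayleigh fast fading. The array size, PAM levels, SNR and the nearest-expected-energy decision rule are illustrative choices, not the paper's settings or its bound computation.

```python
import numpy as np

# Minimal sketch (not the paper's bound derivation): non-coherent energy
# detection of a real PAM amplitude with M receive antennas under i.i.d.
# Rayleigh fast fading.  All parameter values are illustrative.
rng = np.random.default_rng(0)
M = 128                                        # number of receive antennas
amplitudes = np.array([1.0, 2.0, 3.0, 4.0])    # hypothetical 4-PAM levels
snr = 10.0                                     # linear SNR per antenna

def transmit_and_detect(sym_idx):
    a = amplitudes[sym_idx]
    h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    n = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2 * snr)
    y = h * a + n
    # ED statistic: average received energy over the array
    e = np.mean(np.abs(y) ** 2)
    # Decide on the symbol whose expected energy (|a|^2 + noise power) is closest
    expected = amplitudes ** 2 + 1.0 / snr
    return np.argmin(np.abs(expected - e))

errors = sum(transmit_and_detect(s) != s for s in rng.integers(0, 4, 2000))
print("symbol error rate ~", errors / 2000)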
{"title":"Performance limits of energy detection systems with massive receiver arrays","authors":"Lishuai Jing, Z. Utkovski, E. Carvalho, P. Popovski","doi":"10.1109/CAMSAP.2015.7383772","DOIUrl":"https://doi.org/10.1109/CAMSAP.2015.7383772","url":null,"abstract":"Energy detection (ED) is an attractive technique for symbol detection at receivers equipped with a large number of antennas, for example in millimeter wave communication systems. This paper investigates the performance bounds of ED with pulse amplitude modulation (PAM) in large antenna arrays under single stream transmission and fast fading assumptions. The analysis leverages information-theoretic tools and semi-numerical approach to provide bounds on the information rate, which are shown to be tight in the low and high signal-to-noise ratio (SNR) regimes, respectively. For a fixed constellation size, the impact of the number of antennas and SNR on the achievable information rate is investigated. Based on the results, heuristics are provided for the choice of the cardinality of the adaptive modulation scheme as a function of the SNR and the number of antennas.","PeriodicalId":223156,"journal":{"name":"2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"15 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133918971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nonlinear spectral unmixing using residual component analysis and a Gamma Markov random field
Pub Date: 2015-12-16 | DOI: 10.1109/CAMSAP.2015.7383762
Y. Altmann, M. Pereyra, S. Mclaughlin
This paper presents a new Bayesian nonlinear unmixing model for hyperspectral images. The proposed model represents pixel reflectances as linear mixtures of end-members, corrupted by an additional combination of nonlinear terms (with respect to the end-members) and additive Gaussian noise. A central contribution of this work is to use a Gamma Markov random field to capture the spatial structure and correlations of the nonlinear terms, and by doing so to significantly improve estimation performance. In order to perform hyperspectral image unmixing, the Gamma Markov random field is embedded in a hierarchical Bayesian model representing the image observation process and prior knowledge, followed by inference with a Markov chain Monte Carlo algorithm that jointly estimates the model parameters of interest and marginalises latent variables. Simulations conducted with synthetic and real data show the accuracy of the proposed spectral unmixing (SU) and nonlinearity estimation strategy for the analysis of hyperspectral images.
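For intuition, the following sketch generates one pixel according to an observation model of this type: a linear mixture of endmembers plus a bilinear residual term and Gaussian noise. The endmember spectra, abundances and nonlinearity coefficients are hypothetical, and the sketch does not implement the Gamma Markov random field prior or the MCMC inference.

```python
import numpy as np

# Illustrative generative model only (not the Gamma-MRF sampler): one pixel
# as a linear mixture of endmembers plus a bilinear nonlinear residual and
# Gaussian noise.  Names and sizes are hypothetical.
rng = np.random.default_rng(1)
L, R = 50, 3                                   # spectral bands, endmembers
M = np.abs(rng.standard_normal((L, R)))        # endmember spectra (columns)
a = rng.dirichlet(np.ones(R))                  # abundances (sum to one)

# Nonlinear residual built from pairwise endmember interactions
gamma = 0.05 * rng.random(R * (R - 1) // 2)    # nonlinearity coefficients
cross = np.column_stack([M[:, i] * M[:, j]
                         for i in range(R) for j in range(i + 1, R)])
phi = cross @ gamma

sigma = 0.01
y = M @ a + phi + sigma * rng.standard_normal(L)   # observed pixel spectrum
print("nonlinear-to-linear energy ratio:", np.linalg.norm(phi) / np.linalg.norm(M @ a))
```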
{"title":"Nonlinear spectral unmixing using residual component analysis and a Gamma Markov random field","authors":"Y. Altmann, M. Pereyra, S. Mclaughlin","doi":"10.1109/CAMSAP.2015.7383762","DOIUrl":"https://doi.org/10.1109/CAMSAP.2015.7383762","url":null,"abstract":"This paper presents a new Bayesian nonlinear unmixing model for hyperspectral images. The proposed model represents pixel reflectances as linear mixtures of end-members, corrupted by an additional combination of nonlinear terms (with respect to the end-members) and additive Gaussian noise. A central contribution of this work is to use a Gamma Markov random field to capture the spatial structure and correlations of the nonlinear terms, and by doing so to improve significantly estimation performance. In order to perform hyperspectral image unmixing, the Gamma Markov random field is embedded in a hierarchical Bayesian model representing the image observation process and prior knowledge, followed by inference with a Markov chain Monte Carlo algorithm that jointly estimates the model parameters of interest and marginalises latent variables. Simulations conducted with synthetic and real data show the accuracy of the proposed SU and nonlinearity estimation strategy for the analysis of hyperspectral images.","PeriodicalId":223156,"journal":{"name":"2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122194046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sparse-based estimators improvement in case of basis mismatch
Pub Date: 2015-12-13 | DOI: 10.1109/CAMSAP.2015.7383825
Stephanie Bernhardt, R. Boyer, S. Marcos, P. Larzabal
Compressed sensing theory promises the acquisition of sparse signals from a limited number of samples. It also resolves under-determined systems of linear equations when the unknown vector is sparse. These promising applications have generated growing interest in the field over the past decade. In compressed sensing, the sparse signal is estimated using knowledge of the dictionary that was used to sample it. However, dictionary mismatch often occurs in practice, in which case the estimation algorithm relies on uncertain knowledge of the dictionary. This mismatch introduces an estimation bias even when the noise is low and the support (i.e. the location of the non-zero amplitudes) is perfectly estimated. In this paper we consider a dictionary affected by a structured mismatch, a type of error of particular interest in sparse estimation applications. We propose the Bias-Correction Estimator (BiCE), a post-processing step that enhances the non-zero amplitude estimation of any sparse-based estimator in the presence of a structured dictionary mismatch. We give the theoretical Bayesian mean square error of the proposed estimator and show its statistical efficiency in the low noise variance regime.
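The bias mechanism can be seen with a few lines of code: the sketch below builds a noiseless observation with a structured frequency offset and estimates the amplitude by least squares on the known support using the nominal dictionary atom, showing a residual bias even without noise. The grid, offset and amplitude are illustrative, and the sketch does not implement the BiCE correction itself.

```python
import numpy as np

# Sketch of the structured-mismatch problem (not the BiCE correction): the
# true signal lives at an off-grid frequency, the estimator uses the nominal
# on-grid dictionary, so the amplitude estimate stays biased even when the
# noise vanishes and the support is known.  Values are illustrative.
rng = np.random.default_rng(2)
N, K = 64, 128                         # samples, dictionary size
grid = np.arange(K) / K                # nominal frequency grid
delta = 0.3 / K                        # structured grid offset (mismatch)
k0, x0 = 17, 2.0                       # true support index and amplitude

atom = lambda f: np.exp(2j * np.pi * f * np.arange(N)) / np.sqrt(N)
y = x0 * atom(grid[k0] + delta)        # noiseless off-grid observation

a_nominal = atom(grid[k0])             # atom assumed by the estimator
x_hat = a_nominal.conj() @ y           # least-squares amplitude on known support
print("amplitude bias at zero noise:", abs(x_hat) - x0)
```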
{"title":"Sparse-based estimators improvement in case of Basis mismatch","authors":"Stephanie Bernhardt, R. Boyer, S. Marcos, P. Larzabal","doi":"10.1109/CAMSAP.2015.7383825","DOIUrl":"https://doi.org/10.1109/CAMSAP.2015.7383825","url":null,"abstract":"Compressed sensing theory promises to sample sparse signals using a limited number of samples. It also resolves the problem of under-determined systems of linear equations when the unknown vector is sparse. Those promising applications induced a growing interest for this field in the past decade. In compressed sensing, the sparse signal estimation is performed using the knowledge of the dictionary used to sample the signal. However, dictionary mismatch often occurs in practical applications, in which case the estimation algorithm uses an uncertain dictionary knowledge. This mismatch introduces an estimation bias even when the noise is low and the support (i.e. location of non-zero amplitudes) is perfectly estimated. In this paper we consider that the dictionary suffers from a structured mismatch, this type of error being of particular interest in sparse estimation applications. We propose the Bias-Correction Estimator (BiCE) post-processing step which enhances the non-zero amplitude estimation of any sparse-based estimator in the presence of a structured dictionary mismatch. We give the theoretical Bayesian Mean Square Error of the proposed estimator and show its statistical efficiency in the low noise variance regime.","PeriodicalId":223156,"journal":{"name":"2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130929213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EEG source localization based on a structured sparsity prior and a partially collapsed Gibbs sampler
Pub Date: 2015-12-13 | DOI: 10.1109/CAMSAP.2015.7383786
F. Costa, H. Batatia, T. Oberlin, J. Tourneret
In this paper, we propose a hierarchical Bayesian model that approximates the ℓ2,0 mixed-norm regularization by a multivariate Bernoulli-Laplace prior in order to solve the EEG inverse problem while promoting spatially structured sparsity. The posterior distribution of this model is too complex to derive closed-form expressions of the standard Bayesian estimators. An MCMC method is proposed to sample this posterior and estimate the model parameters from the generated samples. The algorithm is based on a partially collapsed Gibbs sampler and a dual-dipole random-shift proposal for the non-zero positions. The brain activity and all other model parameters are jointly estimated in a completely unsupervised framework. The results obtained on synthetic data with controlled ground truth show the good performance of the proposed method when compared to the ℓ2,1 approach in different scenarios, and its capacity to estimate point-like source activity.
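For reference, the ℓ2,0 mixed norm approximated by the Bernoulli-Laplace prior is, in the standard notation assumed here (a source matrix with one row per dipole and one column per time sample), the count of dipoles that are active over the analysis window:

```latex
% Standard definition assumed here (notation not taken from the paper): for a
% source matrix X in R^{D x T} with one row x_d per dipole, the l_{2,0} mixed
% norm counts the dipoles whose time course is not identically zero.
\[
  \|\mathbf{X}\|_{2,0}
  \;=\; \#\bigl\{\, d \in \{1,\dots,D\} \;:\; \|\mathbf{x}_d\|_2 \neq 0 \,\bigr\},
  \qquad
  \|\mathbf{x}_d\|_2 = \Bigl(\sum_{t=1}^{T} x_{d,t}^2\Bigr)^{1/2}.
\]
```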
{"title":"EEG source localization based on a structured sparsity prior and a partially collapsed Gibbs sampler","authors":"F. Costa, H. Batatia, T. Oberlin, J. Tourneret","doi":"10.1109/CAMSAP.2015.7383786","DOIUrl":"https://doi.org/10.1109/CAMSAP.2015.7383786","url":null,"abstract":"In this paper, we propose a hierarchical Bayesian model approximating the ℓ20 mixed-norm regularization by a multivariate Bernoulli Laplace prior to solve the EEG inverse problem by promoting spatial structured sparsity. The posterior distribution of this model is too complex to derive closed-form expressions of the standard Bayesian estimators. An MCMC method is proposed to sample this posterior and estimate the model parameters from the generated samples. The algorithm is based on a partially collapsed Gibbs sampler and a dual dipole random shift proposal for the non-zero positions. The brain activity and all other model parameters are jointly estimated in a completely unsupervised framework. The results obtained on synthetic data with controlled ground truth show the good performance of the proposed method when compared to the ℓ21 approach in different scenarios, and its capacity to estimate point-like source activity.","PeriodicalId":223156,"journal":{"name":"2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126719193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recursive hybrid CRB for Markovian systems with time-variant measurement parameters
Pub Date: 2015-12-13 | DOI: 10.1109/CAMSAP.2015.7383839
J. Galy, A. Renaux, É. Chaumette, F. Vincent, P. Larzabal
In statistical signal processing, hybrid parameter estimation refers to the case where the parameter vector to be estimated contains both deterministic and random parameters. Computationally tractable hybrid Cramér-Rao lower bounds for discrete-time Markovian dynamic systems depending on unknown time-invariant deterministic parameters have recently been derived. However, in many applications (radar, sonar, telecommunications, ...), the unknown deterministic parameters of the measurement model are time-variant, which prevents the use of the aforementioned bounds. The aim of this communication is therefore to tackle this issue by introducing new computationally tractable hybrid Cramér-Rao lower bounds.
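For context, the classical (non-recursive) hybrid Cramér-Rao bound combines the deterministic and random parameters in a single hybrid information matrix; the textbook form recalled below is background only, not the recursive bound derived in the paper.

```latex
% Textbook hybrid CRB (background, not the paper's recursive bound): for an
% observation y, random parameters theta_r with a known prior and deterministic
% parameters theta_d, under the usual regularity conditions the error
% covariance of any unbiased estimator of theta = [theta_d; theta_r] satisfies
\[
  \mathbf{H}(\boldsymbol{\theta}_d)
  = \mathbb{E}_{\mathbf{y},\boldsymbol{\theta}_r}
    \!\left[ -\,\frac{\partial^2 \ln p(\mathbf{y},\boldsymbol{\theta}_r;\boldsymbol{\theta}_d)}
                     {\partial\boldsymbol{\theta}\,\partial\boldsymbol{\theta}^{\mathsf T}} \right],
  \qquad
  \operatorname{MSE}\bigl(\hat{\boldsymbol{\theta}}\bigr) \;\succeq\; \mathbf{H}^{-1}(\boldsymbol{\theta}_d),
  \qquad
  \boldsymbol{\theta} = \bigl[\boldsymbol{\theta}_d^{\mathsf T},\,\boldsymbol{\theta}_r^{\mathsf T}\bigr]^{\mathsf T}.
\]
```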
{"title":"Recursive hybrid CRB for Markovian systems with time-variant measurement parameters","authors":"J. Galy, A. Renaux, É. Chaumette, F. Vincent, P. Larzabal","doi":"10.1109/CAMSAP.2015.7383839","DOIUrl":"https://doi.org/10.1109/CAMSAP.2015.7383839","url":null,"abstract":"In statistical signal processing, hybrid parameter estimation refers to the case where the parameters vector to estimate contains both deterministic and random parameters. Lately computationally tractable hybrid Cramér-Rao lower bounds for discrete-time Markovian dynamic systems depending on unknown time invariant deterministic parameters has been released. However in many applications (radar, sonar, telecoms, ...) the unknown deterministic parameters of the measurement model are time variant which prevents from using the aforementioned bounds. It is therefore the aim of this communication to tackle this issue by introducing new computationally tractable hybrid Cramér-Rao lower bounds.","PeriodicalId":223156,"journal":{"name":"2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133252324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Importance sampling strategy for non-convex randomized block-coordinate descent
Pub Date: 2015-12-13 | DOI: 10.1109/CAMSAP.2015.7383796
Rémi Flamary, A. Rakotomamonjy, G. Gasso
As the number of samples and the dimensionality of optimization problems in statistics and machine learning explode, block-coordinate descent algorithms have gained popularity since they reduce the original problem to several smaller ones. The coordinates to be optimized are usually selected randomly according to a given probability distribution. We introduce an importance sampling strategy that helps randomized coordinate descent algorithms focus on blocks that are still far from convergence. The framework applies to problems composed of the sum of two possibly non-convex terms, one being separable and non-smooth. We compare our algorithm to a full-gradient proximal approach, to a randomized block-coordinate algorithm with uniform sampling, and to cyclic block-coordinate descent. Experimental evidence shows the clear benefit of the importance sampling strategy.
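A generic flavour of the idea is sketched below on a convex surrogate problem: blocks are sampled with probabilities driven by a running per-block progress score instead of uniformly. The problem (a lasso), the score rule and all parameter values are stand-ins, not the authors' non-convex setting or their exact sampling strategy.

```python
import numpy as np

# Importance-sampled proximal block-coordinate descent on a lasso problem.
# This is only an illustration of non-uniform block sampling; the paper
# targets non-convex composite objectives with a different sampling rule.
rng = np.random.default_rng(3)
n, d, B = 200, 400, 20                 # samples, features, number of blocks
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = rng.standard_normal(d) * (rng.random(d) < 0.05)
b = A @ x_true + 0.01 * rng.standard_normal(n)
lam, step = 0.01, 0.5
blocks = np.array_split(np.arange(d), B)

x = np.zeros(d)
scores = np.ones(B)                    # running per-block "importance" scores
for it in range(2000):
    p = scores / scores.sum()
    j = rng.choice(B, p=p)             # sample a block by importance
    idx = blocks[j]
    grad = A[:, idx].T @ (A @ x - b)   # partial gradient for the block
    z = x[idx] - step * grad
    x_new = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of l1
    scores[j] = 0.9 * scores[j] + 0.1 * (np.linalg.norm(x_new - x[idx]) + 1e-8)
    x[idx] = x_new
print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```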
{"title":"Importance sampling strategy for non-convex randomized block-coordinate descent","authors":"Rémi Flamary, A. Rakotomamonjy, G. Gasso","doi":"10.1109/CAMSAP.2015.7383796","DOIUrl":"https://doi.org/10.1109/CAMSAP.2015.7383796","url":null,"abstract":"As the number of samples and dimensionality of optimization problems related to statistics and machine learning explode, block coordinate descent algorithms have gained popularity since they reduce the original problem to several smaller ones. Coordinates to be optimized are usually selected randomly according to a given probability distribution. We introduce an importance sampling strategy that helps randomized coordinate descent algorithms to focus on blocks that are still far from convergence. The framework applies to problems composed of the sum of two possibly non-convex terms, one being separable and non-smooth. We have compared our algorithm to a full gradient proximal approach as well as to a randomized block coordinate algorithm that considers uniform sampling and cyclic block coordinate descent. Experimental evidences show the clear benefit of using an importance sampling strategy.","PeriodicalId":223156,"journal":{"name":"2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127589828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding big data spectral clustering
Pub Date: 2015-12-13 | DOI: 10.1109/CAMSAP.2015.7383728
Romain Couillet, F. Benaych-Georges
This article introduces an original approach to understanding the behavior of standard kernel spectral clustering algorithms (such as the Ng-Jordan-Weiss method) for large-dimensional datasets. Precisely, using advanced methods from random matrix theory and assuming Gaussian data vectors, we show that the Laplacian of the kernel matrix can asymptotically be well approximated by an analytically tractable equivalent random matrix. The study of the latter unveils the mechanisms at play, in particular the impact of the choice of kernel function, and reveals some theoretical limits of the method. Despite our Gaussian assumption, we also observe that the predicted theoretical behavior closely matches that observed on real datasets (taken from the MNIST database).
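For readers unfamiliar with the baseline algorithm, a minimal Ng-Jordan-Weiss spectral clustering routine is sketched below on toy two-dimensional data; the kernel bandwidth and the data are illustrative and unrelated to the large-dimensional random-matrix regime studied in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

# Minimal Ng-Jordan-Weiss spectral clustering: Gaussian affinity, normalized
# Laplacian spectrum, row-normalized leading eigenvectors, then k-means.
def njw_spectral_clustering(X, k, sigma=1.0):
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma ** 2))            # Gaussian kernel matrix
    np.fill_diagonal(K, 0.0)                      # NJW uses a zero-diagonal affinity
    d = K.sum(axis=1)
    L_sym = (K / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]   # D^{-1/2} K D^{-1/2}
    w, V = np.linalg.eigh(L_sym)
    U = V[:, -k:]                                 # k leading eigenvectors
    U = U / np.linalg.norm(U, axis=1, keepdims=True)          # row normalization
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)

# Two well-separated Gaussian blobs as a toy example
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
print(njw_spectral_clustering(X, 2))
```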
{"title":"Understanding big data spectral clustering","authors":"Romain Couillet, F. Benaych-Georges","doi":"10.1109/CAMSAP.2015.7383728","DOIUrl":"https://doi.org/10.1109/CAMSAP.2015.7383728","url":null,"abstract":"This article introduces an original approach to understand the behavior of standard kernel spectral clustering algorithms (such as the Ng-Jordan-Weiss method) for large dimensional datasets. Precisely, using advanced methods from the field of random matrix theory and assuming Gaussian data vectors, we show that the Laplacian of the kernel matrix can asymptotically be well approximated by an analytically tractable equivalent random matrix. The study of the latter unveils the mechanisms into play and in particular the impact of the choice of the kernel function and some theoretical limits of the method. Despite our Gaussian assumption, we also observe that the predicted theoretical behavior is a close match to that experienced on real datasets (taken from the MNIST database).","PeriodicalId":223156,"journal":{"name":"2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126979749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimization of a Geman-McClure like criterion for sparse signal deconvolution
Pub Date: 2015-12-13 | DOI: 10.1109/CAMSAP.2015.7383798
M. Castella, J. Pesquet
This paper deals with the problem of recovering a sparse unknown signal from a set of observations. The latter are obtained by convolution of the original signal, corrupted by additive noise. We tackle the problem by minimizing a least-squares fit criterion penalized by a Geman-McClure-like potential. The resulting criterion is a rational function, which makes it possible to formulate its minimization as a generalized problem of moments for which a hierarchy of semidefinite programming relaxations can be proposed. These convex relaxations yield a monotone sequence of values which converges to the global optimum. To overcome the computational limitations due to the large number of involved variables, a stochastic block-coordinate descent method is proposed. The algorithm has been implemented and shows promising results.
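To make the cost function concrete, the sketch below minimises a least-squares fit penalised by a Geman-McClure potential with plain gradient descent on a toy deconvolution problem; the paper instead addresses the global optimisation through a moment/SDP relaxation and a stochastic block-coordinate scheme, so the filter, penalty parameters and optimiser here are purely illustrative.

```python
import numpy as np

# Illustration of a Geman-McClure penalised deconvolution criterion, minimised
# here by plain gradient descent (a local method, not the paper's global
# moment/SDP approach).  Filter, lam, delta and step are hypothetical.
rng = np.random.default_rng(5)
N = 200
x_true = np.zeros(N)
x_true[rng.choice(N, 8, replace=False)] = 3 * rng.standard_normal(8)
h = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2); h /= h.sum()    # blur kernel
y = np.convolve(x_true, h, mode="same") + 0.01 * rng.standard_normal(N)

lam, delta, step = 0.05, 0.1, 0.3
phi = lambda x: x ** 2 / (x ** 2 + delta)                 # Geman-McClure potential
dphi = lambda x: 2 * delta * x / (x ** 2 + delta) ** 2

x = np.zeros(N)
for _ in range(500):
    r = np.convolve(x, h, mode="same") - y
    grad = np.convolve(r, h[::-1], mode="same") + lam * dphi(x)   # adjoint + penalty
    x -= step * grad
print("final cost:", 0.5 * np.sum((np.convolve(x, h, mode="same") - y) ** 2)
      + lam * phi(x).sum())
```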
{"title":"Optimization of a Geman-McClure like criterion for sparse signal deconvolution","authors":"M. Castella, J. Pesquet","doi":"10.1109/CAMSAP.2015.7383798","DOIUrl":"https://doi.org/10.1109/CAMSAP.2015.7383798","url":null,"abstract":"This paper deals with the problem of recovering a sparse unknown signal from a set of observations. The latter are obtained by convolution of the original signal and corruption with additive noise. We tackle the problem by minimizing a least-squares fit criterion penalized by a Geman-McClure like potential. The resulting criterion is a rational function, which makes it possible to formulate its minimization as a generalized problem of moments for which a hierarchy of semidefinite programming relaxations can be proposed. These convex relaxations yield a monotone sequence of values which converges to the global optimum. To overcome the computational limitations due to the large number of involved variables, a stochastic block-coordinate descent method is proposed. The algorithm has been implemented and shows promising results.","PeriodicalId":223156,"journal":{"name":"2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124689798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tensor decomposition exploiting structural constraints for brain source imaging
Pub Date: 2015-12-13 | DOI: 10.1109/CAMSAP.2015.7383766
H. Becker, A. Karfoul, L. Albera, R. Gribonval, J. Fleureau, P. Guillotel, A. Kachenoura, L. Senhadji, I. Merlet
The separation of electroencephalography (EEG) sources is a typical application of tensor decompositions in biomedical engineering. The objective of most approaches studied in the literature is to provide separate spatial maps and time signatures for the identified sources. However, for some applications, a precise localization of each source is required. To achieve this, a two-step approach has been proposed: the sources are separated using the canonical polyadic decomposition in the first step, and the results of the tensor decomposition are then used to estimate distributed sources in the second step, using the so-called disk algorithm. In this paper, we propose to combine the tensor decomposition and the source localization in a single step. To this end, we directly impose structural constraints, based on a priori information on the possible source locations, on the factor matrix of spatial characteristics. The resulting optimization problem is solved using the alternating direction method of multipliers, which is incorporated in the alternating least squares tensor decomposition algorithm. Realistic simulations with epileptic EEG data confirm that the proposed single-step source localization approach outperforms the previously developed two-step approach.
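The unconstrained building block, a canonical polyadic decomposition computed by alternating least squares, is sketched below on a synthetic third-order tensor; the paper augments the spatial-factor update with ADMM to enforce the structural constraints, which this sketch omits. Tensor sizes and rank are illustrative.

```python
import numpy as np

# Plain CPD by alternating least squares on a synthetic rank-3 tensor.  The
# paper's method adds ADMM-based structural constraints on the spatial factor,
# which are not shown here.
rng = np.random.default_rng(6)
I, J, K, R = 30, 40, 20, 3
A = rng.standard_normal((I, R)); B = rng.standard_normal((J, R)); C = rng.standard_normal((K, R))
T = np.einsum("ir,jr,kr->ijk", A, B, C) + 0.01 * rng.standard_normal((I, J, K))

def khatri_rao(U, V):
    # column-wise Khatri-Rao product; row index runs over (row of U, row of V)
    return np.einsum("ir,jr->ijr", U, V).reshape(-1, U.shape[1])

T1 = T.reshape(I, J * K)                          # mode-1 unfolding
T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)       # mode-2 unfolding
T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)       # mode-3 unfolding

Ah = rng.standard_normal((I, R)); Bh = rng.standard_normal((J, R)); Ch = rng.standard_normal((K, R))
for _ in range(100):
    Ah = T1 @ np.linalg.pinv(khatri_rao(Bh, Ch)).T
    Bh = T2 @ np.linalg.pinv(khatri_rao(Ah, Ch)).T
    Ch = T3 @ np.linalg.pinv(khatri_rao(Ah, Bh)).T
That = np.einsum("ir,jr,kr->ijk", Ah, Bh, Ch)
print("relative fit error:", np.linalg.norm(T - That) / np.linalg.norm(T))
```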
{"title":"Tensor decomposition exploiting structural constraints for brain source imaging","authors":"H. Becker, A. Karfoul, L. Albera, R. Gribonval, J. Fleureau, P. Guillotel, A. Kachenoura, L. Senhadji, I. Merlet","doi":"10.1109/CAMSAP.2015.7383766","DOIUrl":"https://doi.org/10.1109/CAMSAP.2015.7383766","url":null,"abstract":"The separation of Electroencephalography (EEG) sources is a typical application of tensor decompositions in biomedical engineering. The objective of most approaches studied in the literature consists in providing separate spatial maps and time signatures for the identified sources. However, for some applications, a precise localization of each source is required. To achieve this, a two-step approach has been proposed. The idea of this approach is to separate the sources using the canonical polyadic decomposition in the first step and to employ the results of the tensor decomposition to estimate distributed sources in the second step, using the so-called disk algorithm. In this paper, we propose to combine the tensor decomposition and the source localization in a single step. To this end, we directly impose structural constraints, which are based on a priori information on the possible source locations, on the factor matrix of spatial characteristics. The resulting optimization problem is solved using the alternating direction method of multipliers, which is incorporated in the alternating least squares tensor decomposition algorithm. Realistic simulations with epileptic EEG data confirm that the proposed single-step source localization approach outperforms the previously developed two-step approach.","PeriodicalId":223156,"journal":{"name":"2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131217963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ℓ0-optimization for channel and DOA sparse estimation
Pub Date: 2015-12-13 | DOI: 10.1109/CAMSAP.2015.7383797
Adilson Chinatto, Emmanuel Soubies, C. Junqueira, J. Romano, P. Larzabal, J. Barbot, L. Blanc-Féraud
This paper is devoted to two classical sparse problems in array processing: channel estimation and DOA estimation. After reviewing some background and recent results in ℓ0 optimization, it is shown how the latter can be used, at the same computational cost, to obtain improvements over ℓ1 optimization for sparse estimation.
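As an illustration of the sparse DOA formulation (not of the specific ℓ0 optimization results the paper builds on), the sketch below runs iterative hard thresholding on an angular grid with a half-wavelength ULA steering dictionary for a two-source problem; array size, grid and noise level are hypothetical.

```python
import numpy as np

# Generic l0-constrained DOA sketch: iterative hard thresholding on an angular
# grid with a half-wavelength ULA steering dictionary.  The paper relies on
# more recent l0 optimization results; this only illustrates the setup.
rng = np.random.default_rng(7)
M, G, K = 16, 180, 2                   # sensors, grid points, sources
grid = np.deg2rad(np.linspace(-90, 90, G))
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(grid)))   # steering dictionary

true_idx = np.array([60, 100])         # indices of the true DOAs on the grid
s = rng.standard_normal(K) + 1j * rng.standard_normal(K)
y = A[:, true_idx] @ s + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

x = np.zeros(G, dtype=complex)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(200):
    x = x + step * A.conj().T @ (y - A @ x)          # gradient step on the fit
    keep = np.argsort(np.abs(x))[-K:]                # keep the K largest entries
    mask = np.zeros(G, dtype=bool); mask[keep] = True
    x[~mask] = 0                                     # hard thresholding
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[np.abs(x) > 0])))
```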
{"title":"ℓ0-optimization for channel and DOA sparse estimation","authors":"Adilson Chinatto, Emmanuel Soubies, C. Junqueira, J. Romano, P. Larzabal, J. Barbot, L. Blanc-Féraud","doi":"10.1109/CAMSAP.2015.7383797","DOIUrl":"https://doi.org/10.1109/CAMSAP.2015.7383797","url":null,"abstract":"This paper is devoted to two classical sparse problems in array processing: Channel estimation and DOA estimation. It is shown after some background and some recent results in ℓ0 optimization how this latter can be used, at the same computational cost, in order to obtain improvement in comparison with ℓ1 optimization for sparse estimation.","PeriodicalId":223156,"journal":{"name":"2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133141906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}