{"title":"FAST CONVOLUTIONAL SPARSE CODING WITH ℓ0 PENALTY","authors":"P. Rodríguez","doi":"10.1109/INTERCON.2018.8526377","DOIUrl":null,"url":null,"abstract":"Given a set of dictionary filters, the most widely used formulation of the convolutional sparse coding (CSC) problem is Convolutional BPDN (CBPDN), in which an image is represented as a sum over a set of convolutions of coefficient maps; usually, the coefficient maps are ℓ<inf>1</inf>-norm penalized in order to enforce a sparse solution. Recent theoretical results, have provided meaningful guarantees for the success of popular ℓ<inf>1</inf>-norm penalized CSC algorithms in the noiseless case. However, experimental results related to the ℓ<inf>0</inf>-norm penalized CSC case have not been addressed.In this paper we propose a two-step ℓ<inf>0</inf>-norm penalized CSC (ℓ<inf>0</inf>-CSC) algorithm, which outperforms (convergence rate, reconstruction performance and sparsity) known solutions to the ℓ<inf>0</inf>-CSC problem. Furthermore, our proposed algorithm, which is a convolutional extension of our previous work [1], originally develop for the ℓ<inf>0</inf> regularized optimization problem, includes an escape strategy to avoid being trapped in a saddle points or in inferior local solutions, which are common in nonconvex optimization problems, such those that use the ℓ<inf>0</inf>-norm as the penalty function.","PeriodicalId":305576,"journal":{"name":"2018 IEEE XXV International Conference on Electronics, Electrical Engineering and Computing (INTERCON)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE XXV International Conference on Electronics, Electrical Engineering and Computing (INTERCON)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INTERCON.2018.8526377","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Given a set of dictionary filters, the most widely used formulation of the convolutional sparse coding (CSC) problem is Convolutional BPDN (CBPDN), in which an image is represented as a sum over a set of convolutions of coefficient maps; usually, the coefficient maps are ℓ1-norm penalized in order to enforce a sparse solution. Recent theoretical results have provided meaningful guarantees for the success of popular ℓ1-norm penalized CSC algorithms in the noiseless case. However, experimental results for the ℓ0-norm penalized CSC case have not been addressed. In this paper we propose a two-step ℓ0-norm penalized CSC (ℓ0-CSC) algorithm, which outperforms known solutions to the ℓ0-CSC problem in convergence rate, reconstruction performance, and sparsity. Furthermore, our proposed algorithm, which is a convolutional extension of our previous work [1], originally developed for the ℓ0-regularized optimization problem, includes an escape strategy to avoid being trapped at saddle points or in inferior local solutions, which are common in nonconvex optimization problems such as those that use the ℓ0-norm as the penalty function.
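For reference, the CBPDN objective described above and the ℓ0 variant targeted by this paper can be written in standard CSC notation (a reconstruction from the abstract, not quoted from the paper): with dictionary filters d_m, coefficient maps x_m, image s, and penalty parameter λ,

\min_{\{x_m\}} \; \frac{1}{2} \Big\| \sum_m d_m * x_m - s \Big\|_2^2 + \lambda \sum_m \| x_m \|_1 \quad \text{(CBPDN, } \ell_1 \text{ penalty)}

\min_{\{x_m\}} \; \frac{1}{2} \Big\| \sum_m d_m * x_m - s \Big\|_2^2 + \lambda \sum_m \| x_m \|_0 \quad (\ell_0\text{-CSC})

where * denotes convolution and \| x \|_0 counts the nonzero entries of x; the ℓ0 term makes the second problem nonconvex.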
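As a baseline illustration only, the sketch below shows a generic proximal-gradient (iterative hard-thresholding) iteration for the ℓ0-CSC objective, with the convolutional data term evaluated in the Fourier domain. It is not the paper's two-step algorithm and omits the escape strategy; the function name l0_csc_iht, the fixed step size, and the zero-padded filter layout are assumptions made for this example.

```python
import numpy as np

def l0_csc_iht(s, D, lam, step=1.0, n_iter=100):
    """Generic hard-thresholding sketch for the l0-CSC objective
    (1/2)||sum_m d_m * x_m - s||_2^2 + lam * sum_m ||x_m||_0.

    s : (H, W) image; D : (M, H, W) dictionary filters zero-padded to the
    image size. This is a baseline, not the paper's two-step algorithm.
    """
    Df = np.fft.fft2(D, axes=(-2, -1))   # filters in the Fourier domain
    Sf = np.fft.fft2(s)
    X = np.zeros_like(D)                 # one coefficient map per filter
    for _ in range(n_iter):
        Xf = np.fft.fft2(X, axes=(-2, -1))
        Rf = (Df * Xf).sum(axis=0) - Sf  # residual of the reconstruction
        # Gradient of the data term w.r.t. each coefficient map
        grad = np.fft.ifft2(np.conj(Df) * Rf, axes=(-2, -1)).real
        X = X - step * grad              # gradient step on the data term
        # Hard thresholding: exact proximal operator of the l0 penalty
        X[X**2 < 2.0 * lam * step] = 0.0
    return X
```

The hard-thresholding step is what distinguishes ℓ0-penalized solvers from the soft-thresholding used in ℓ1-penalized CBPDN algorithms; because it is the proximal operator of a nonconvex penalty, iterations of this kind can stall at saddle points or poor local solutions, which is the situation the paper's escape strategy is designed to address.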