Pub Date: 2017-04-27. DOI: 10.1109/MLSP.2017.8168119
Szu-Wei Fu, Ting-yao Hu, Yu Tsao, Xugang Lu
This paper aims to address two issues in current speech enhancement methods: 1) the difficulty of phase estimation; 2) the inability of a single objective function to consider multiple metrics simultaneously. To solve the first problem, we propose a novel convolutional neural network (CNN) model for complex spectrogram enhancement, namely estimating clean real and imaginary (RI) spectrograms from noisy ones. The reconstructed RI spectrograms are directly used to synthesize enhanced speech waveforms. In addition, since the log-power spectrogram (LPS) can be represented as a function of the RI spectrograms, its reconstruction is also considered as another target. Thus, a unified objective function that combines these two targets (reconstruction of RI spectrograms and LPS) is equivalent to simultaneously optimizing two commonly used objective metrics: segmental signal-to-noise ratio (SSNR) and log-spectral distortion (LSD). The learning process is therefore called multi-metrics learning (MML). Experimental results confirm the effectiveness of the proposed CNN with RI spectrograms and MML in terms of improved standardized evaluation metrics on a speech enhancement task.
"Complex spectrogram enhancement by convolutional neural network with multi-metrics learning." In: 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1-6.
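The key observation in the abstract is that the LPS is a deterministic function of the RI spectrograms, LPS = log(R² + I²), so one loss can supervise both. A minimal sketch of such a combined objective (the weighting `alpha` and the stability constant `eps` are illustrative assumptions, not from the paper):

```python
import numpy as np

def lps(real, imag, eps=1e-8):
    """Log-power spectrogram as a function of the RI spectrograms."""
    return np.log(real**2 + imag**2 + eps)

def mml_loss(est_ri, clean_ri, alpha=0.5):
    """Combined objective: RI reconstruction error (related to SSNR)
    plus LPS reconstruction error (related to LSD)."""
    (er, ei), (cr, ci) = est_ri, clean_ri
    loss_ri = np.mean((er - cr)**2 + (ei - ci)**2)
    loss_lps = np.mean((lps(er, ei) - lps(cr, ci))**2)
    return alpha * loss_ri + (1 - alpha) * loss_lps
```

In practice this would be the training loss of the CNN, with gradients flowing through both terms.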
Pub Date: 2017-02-08. DOI: 10.1109/MLSP.2017.8168194
Afonso M. Teodoro, J. Bioucas-Dias, Mário A. T. Figueiredo
Recent frameworks, such as the so-called plug-and-play framework, allow us to leverage developments in image denoising to tackle other, more involved, problems in image processing. As the name suggests, state-of-the-art denoisers are plugged into an iterative algorithm that alternates between a denoising step and the inversion of the observation operator. While these tools offer flexibility, the convergence of the resulting algorithm may be difficult to analyse. In this paper, we plug a state-of-the-art denoiser, based on a Gaussian mixture model, into the iterations of an alternating direction method of multipliers and prove that the algorithm is guaranteed to converge. Moreover, we build upon the concept of scene-adapted priors, where we learn a model targeted to the specific scene being imaged, and apply the proposed method to the hyperspectral sharpening problem.
"Scene-Adapted plug-and-play algorithm with convergence guarantees." In: 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1-6.
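The alternation the abstract describes follows the standard plug-and-play ADMM template for a linear observation model y = Ax + noise. A minimal dense-matrix sketch, with the paper's scene-adapted GMM denoiser replaced by a generic `denoise` callable (the penalty `rho` and iteration count are illustrative):

```python
import numpy as np

def pnp_admm(y, A, denoise, rho=1.0, iters=60):
    """Plug-and-play ADMM: alternate inversion of the observation
    operator with a plugged-in denoising step."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    M = A.T @ A + rho * np.eye(n)   # system matrix for the inversion step
    Aty = A.T @ y
    for _ in range(iters):
        x = np.linalg.solve(M, Aty + rho * (z - u))   # data-fidelity step
        z = denoise(x + u)                            # denoising step
        u = u + x - z                                 # dual update
    return z
```

With an identity operator and an identity "denoiser" the iterates converge to y, which is a quick sanity check of the template; the paper's contribution is proving convergence when the denoiser is a (scene-adapted) GMM denoiser.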
Pub Date: 2017-01-01. DOI: 10.1109/MLSP.2017.8168123
Cem Tekin, E. Turğay
In this paper, we formulate a new contextual bandit problem with two objectives, where one objective dominates the other. Unlike single-objective bandit problems, in which the learner obtains a random scalar reward for each arm it selects, in the proposed problem the learner obtains a random reward vector, where each component corresponds to one of the objectives. The goal of the learner is to maximize its total reward in the non-dominant objective while ensuring that it maximizes its reward in the dominant objective. In this case, the optimal arm given a context is the one that maximizes the expected reward in the non-dominant objective among all arms that maximize the expected reward in the dominant objective. For this problem, we propose the multi-objective contextual multi-armed bandit algorithm (MOC-MAB) and prove that it achieves sublinear regret with respect to the optimal context-dependent policy. We then compare the performance of the proposed algorithm with other state-of-the-art bandit algorithms. The proposed contextual bandit model and algorithm have a wide range of real-world applications involving multiple, possibly conflicting objectives, ranging from wireless communication to medical diagnosis and recommender systems.
"Multi-Objective contextual bandits with a dominant objective." In: 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1-6.
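The optimality criterion above is lexicographic: restrict attention to the arms that maximize the dominant expected reward, then break ties by the non-dominant one. A sketch of that selection rule given (oracle) expected rewards for a fixed context — MOC-MAB itself must estimate these from samples with confidence bounds, and the `tol` slack is an illustrative assumption:

```python
import numpy as np

def select_arm(mu_dom, mu_nondom, tol=1e-9):
    """Among arms (near-)maximizing the dominant expected reward,
    pick the one maximizing the non-dominant expected reward."""
    mu_dom = np.asarray(mu_dom, dtype=float)
    mu_nondom = np.asarray(mu_nondom, dtype=float)
    candidates = np.flatnonzero(mu_dom >= mu_dom.max() - tol)
    return int(candidates[np.argmax(mu_nondom[candidates])])
```

For example, with dominant rewards [1.0, 1.0, 0.5] and non-dominant rewards [0.2, 0.9, 1.0], arm 2 is excluded despite its high non-dominant reward, and arm 1 is optimal.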
Pub Date: 2017-01-01. DOI: 10.1109/MLSP.2017.8168180
Tzu-Chien Fu, Wei-Chen Chiu, Y. Wang
Cross-resolution face recognition tackles the problem of matching face images with different resolutions. Although state-of-the-art convolutional neural network (CNN) based methods have reported promising performance on standard face recognition problems, such models cannot sufficiently describe images whose resolutions differ from those seen during training, and thus cannot solve the above task. In this paper, we propose the Guided Convolutional Neural Network (Guided-CNN), a novel CNN architecture with parallel sub-CNN models acting as guide and learners. Unique loss functions are introduced, which serve as joint supervision for images within and across resolutions. Our experiments verify not only the use of our model for cross-resolution recognition, but also its applicability to recognizing face images with different degrees of occlusion.
"Learning guided convolutional neural networks for cross-resolution face recognition." In: 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1-5.
Pub Date: 2017-01-01. DOI: 10.1109/MLSP.2017.8168181
Yen-Cheng Liu, Wei-Chen Chiu, Sheng-De Wang, Y. Wang
Generating photo-realistic images from sketches of multiple styles is one of the most challenging tasks in image synthesis, with important applications such as facial composites for suspects. While machine learning techniques have been applied to this problem, the requirement of collecting sketch and face photo image pairs limits the use of a learned model for rendering sketches of different styles. In this paper, we propose a novel deep learning model, Domain-Adaptive Generative Adversarial Networks (DA-GAN). DA-GAN performs cross-style sketch-to-photo inversion, which mitigates the differences across input sketch styles without the need to collect a large number of sketch and face image pairs for training. In experiments, we show that our method produces satisfactory results and performs favorably against state-of-the-art approaches.
"Domain-Adaptive generative adversarial networks for sketch-to-photo inversion." In: 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1-6.
Pub Date: 2017-01-01. DOI: 10.1109/MLSP.2017.8168121
Jen-Chieh Tsai, Jen-Tzung Chien
Traditional domain adaptation methods attempt to learn a shared representation for matching distributions between the source domain and the target domain, without characterizing the individual information in each domain. Such a solution suffers from mixing individual information with the shared features, which considerably constrains domain adaptation performance. To relax this constraint, it is crucial to extract both shared information and individual information. This study captures both kinds of information via a new domain separation network, in which the shared features are extracted and purified via separate modeling of the individual information in both domains. In particular, hybrid adversarial learning is incorporated in a separation network as well as an adaptation network, where the associated discriminators are jointly trained for domain separation and adaptation according to min-max optimization over the separation loss and the domain discrepancy, respectively. Experiments on different tasks show the merit of the proposed adversarial domain separation and adaptation.
"Adversarial domain separation and adaptation." In: 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1-6.
Pub Date: 2017-01-01. DOI: 10.1109/MLSP.2017.8168190
Tsaipei Wang
This paper describes an iterative data-driven algorithm for automatically labeling coronary vessel segments in MDCT images. Such techniques are useful for effective presentation and communication of findings on coronary vessel pathology by physicians and computer-assisted diagnosis systems. The experiments are conducted on the 18 sets of coronary vessel data in the Rotterdam Coronary Artery Algorithm Evaluation Framework that contain segment labels assigned by medical experts. Our algorithm shows both good accuracy and efficiency compared to previous work on this task.
"Iterative data-driven coronary vessel labeling." In: 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1-6.
Pub Date: 2017-01-01. DOI: 10.1109/MLSP.2017.8168120
K. Tsou, Jen-Tzung Chien
Recurrent neural networks (RNNs) based on long short-term memory (LSTM) have been successfully developed for single-channel source separation. Temporal information is learned using dynamic states that evolve through time and are stored as an internal memory. Separation performance is constrained by the limitation of this internal memory, which cannot sufficiently preserve long-term characteristics of the different sources. This study addresses this limitation by incorporating an external memory into the RNN, and accordingly presents a memory augmented neural network for source separation. In particular, we employ a neural Turing machine to learn a separation model for sequential signals of speech and noise in the presence of different speakers and noise types. Experiments show that speech enhancement based on the memory augmented neural network consistently outperforms that using a deep neural network and LSTM in terms of the short-time objective intelligibility (STOI) measure.
"Memory augmented neural network for source separation." In: 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1-6.
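The external memory of a neural Turing machine is accessed by content-based addressing: a controller emits a key, the memory rows are scored by cosine similarity, and a softmax over those scores weights the read. A minimal sketch of that read operation (the sharpness parameter `beta` is an illustrative assumption; the paper's controller and write mechanism are omitted):

```python
import numpy as np

def content_read(memory, key, beta=5.0):
    """NTM-style content-based read: softmax over cosine similarities
    between the key and each memory row, then a weighted sum of rows."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    sims = memory @ key / norms          # cosine similarity per row
    w = np.exp(beta * sims)
    w /= w.sum()                         # attention weights over rows
    return w @ memory, w
```

With a sharp `beta`, the read concentrates on the row most similar to the key, which is how long-term source characteristics can be retrieved long after the internal RNN state has forgotten them.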
Pub Date: 2016-11-17. DOI: 10.1109/MLSP.2017.8168183
Andrey Kuzmin, Dmitry Mikushin, V. Lempitsky
We present a new deep learning-based approach to dense stereo matching. Compared to previous works, our approach does not use deep learning of pixel appearance descriptors, employing very fast classical matching scores instead. At the same time, it uses a deep convolutional network to predict the local parameters of the cost-volume aggregation process, which in this paper we implement using a differentiable domain transform. By treating this transform as a recurrent neural network, we are able to train the whole system, which includes cost-volume computation, cost-volume aggregation (smoothing), and winner-takes-all disparity selection, end-to-end. The resulting method is highly efficient at test time while achieving good matching accuracy. On the KITTI 2012 and KITTI 2015 benchmarks, it achieves error rates of 5.08% and 6.34%, respectively, while running at 29 frames per second on a modern GPU.
"End-to-End learning of cost-volume aggregation for real-time dense stereo." In: 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1-6.
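The domain transform mentioned above is, at its core, a one-dimensional recursive filter y[t] = (1 - w[t])·x[t] + w[t]·y[t-1], which is exactly an RNN recurrence and hence trainable end-to-end. A one-pass sketch of that recurrence (in the paper the per-pixel weights `w` are predicted by a CNN; here they are simply given):

```python
import numpy as np

def domain_transform_1d(x, w):
    """One recursive-filtering pass of the domain transform:
    y[t] = (1 - w[t]) * x[t] + w[t] * y[t - 1]."""
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]
    for t in range(1, len(x)):
        y[t] = (1.0 - w[t]) * x[t] + w[t] * y[t - 1]
    return y
```

Weights near 0 pass the signal through unchanged (no smoothing, e.g. across depth edges), while weights near 1 propagate the previous value (strong aggregation over smooth regions); full cost-volume aggregation applies such passes along rows and columns in both directions.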
Pub Date: 2016-10-25. DOI: 10.1109/MLSP.2017.8168130
A. Grigorievskiy, Neil D. Lawrence, S. Särkkä
We propose a parallelizable sparse inverse formulation Gaussian process (SpInGP) for temporal models. It uses a sparse precision GP formulation and sparse matrix routines to speed up the computations. Due to the state-space formulation used in the algorithm, the time complexity of the basic SpInGP is linear, and because all the computations are parallelizable, the parallel form of the algorithm is sublinear in the number of data points. We provide example algorithms to implement the sparse matrix routines and experimentally test the method using both simulated and real data.
"Parallelizable sparse inverse formulation Gaussian processes (SpInGP)." In: 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1-6.
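The sparse precision structure that state-space formulations induce can be seen in the simplest instance: an Ornstein-Uhlenbeck (exponential-kernel) GP on an evenly spaced grid is an AR(1) process, whose inverse covariance is tridiagonal, so solves cost O(n) instead of O(n³). A minimal sketch verifying this (the correlation `phi` and variance `sigma2` values are illustrative; SpInGP handles the general block-tridiagonal case):

```python
import numpy as np

def ou_precision(n, phi, sigma2):
    """Tridiagonal precision matrix of an OU / AR(1) Gaussian process
    on an even grid: the inverse of K[i, j] = sigma2 * phi**|i - j|."""
    Q = np.zeros((n, n))
    c = 1.0 / (sigma2 * (1.0 - phi**2))
    for i in range(n):
        Q[i, i] = c * (1.0 + phi**2) if 0 < i < n - 1 else c
        if i + 1 < n:
            Q[i, i + 1] = Q[i + 1, i] = -c * phi
    return Q

n, phi, sigma2 = 6, 0.7, 2.0
K = sigma2 * phi ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Q = ou_precision(n, phi, sigma2)
assert np.allclose(Q @ K, np.eye(n))  # the dense kernel's inverse really is tridiagonal
```

Banded or sparse solvers exploit this structure directly, which is what makes the basic algorithm linear and its parallel form sublinear in the number of data points.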