Pub Date: 2017-09-01 | DOI: 10.1109/MLSP.2017.8168182
G. Gokdogan, Elif Vural
An important research topic of recent years has been the understanding and analysis of manifold-modeled data for clustering and classification applications. Most clustering methods developed for data with a non-linear, low-dimensional structure rely on local linearity assumptions. However, clustering algorithms based on locally linear representations can tolerate difficult sampling conditions only to some extent, and may fail on sparsely sampled data manifolds or in high-curvature regions. In this paper, we consider a setting where each cluster is concentrated around a manifold and propose a manifold clustering algorithm that relies on the observation that the variation of the tangent space must be consistent along curves over the same data manifold. To achieve robustness against noise, manifold intersections, and high curvature, we propose a progressive clustering approach: observing the variation of the tangent space, we first detect the non-problematic manifold regions and form pre-clusters from the data samples belonging to such reliable regions. Next, these pre-clusters are merged into larger clusters subject to constraints on both the distance and the tangent space variation. Finally, the samples identified as problematic are also assigned to the computed clusters to finalize the clustering. Experiments with synthetic and real datasets show that the proposed method outperforms comparable manifold clustering algorithms based on Euclidean distance and sparse representations.
Title: Progressive clustering of manifold-modeled data based on tangent space variations
Published in: 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1-6
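The central quantity in this approach, the variation of the local tangent space between nearby samples, can be illustrated with a minimal sketch. Tangent spaces are estimated here by local PCA and compared via principal angles; the neighbourhood size and manifold dimension are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

def local_tangent(points, i, k=10, d=1):
    """Estimate the d-dimensional tangent space at sample i from the
    PCA of its k nearest neighbours."""
    dists = np.linalg.norm(points - points[i], axis=1)
    nbrs = points[np.argsort(dists)[1:k + 1]]
    centered = nbrs - nbrs.mean(axis=0)
    # Right singular vectors give the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:d].T                       # ambient_dim x d basis

def tangent_variation(U1, U2):
    """Largest principal angle between two tangent bases (radians)."""
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return np.arccos(np.clip(s.min(), -1.0, 1.0))

# Two nearby points on a circle have nearly parallel tangents, so the
# variation along the curve is small and changes smoothly.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
U_a = local_tangent(circle, 0)
U_b = local_tangent(circle, 1)
print(tangent_variation(U_a, U_b))  # small angle, close to 0
```

On a well-sampled region the angle between neighbouring tangents stays small; a large jump flags a high-curvature region or a manifold intersection, which is what the pre-clustering step avoids.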
Pub Date: 2017-09-01 | DOI: 10.1109/MLSP.2017.8168148
G. Biagetti, P. Crippa, L. Falaschetti, C. Turchetti
Polynomials have been shown to be useful basis functions for the identification of nonlinear systems. However, estimating the unknown coefficients requires expensive algorithms, as occurs, for instance, with an optimal least-squares approach. Bernstein polynomials have the property that the coefficients are the values of the function to be approximated at points on a fixed grid, thus avoiding a time-consuming training stage. This paper presents a novel machine learning approach to regression, based on new functions named particle-Bernstein polynomials, which is particularly suitable for solving multivariate regression problems. Several experimental results show the validity of the technique for the identification of nonlinear systems, and the better performance achieved with respect to standard techniques.
Title: Machine learning regression based on particle-Bernstein polynomials for nonlinear system identification
Published in: MLSP 2017, pp. 1-6
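The property cited above holds for classical Bernstein polynomials: the coefficients are simply function values on a uniform grid, so no system of equations has to be solved. A minimal sketch of the classical construction (not the paper's particle-Bernstein variant):

```python
import numpy as np
from math import comb

def bernstein_eval(f_vals, x):
    """Degree-n Bernstein approximation of f on [0, 1].

    The coefficients are just the samples f(k/n) on a uniform grid:
    there is no fitting or training stage."""
    n = len(f_vals) - 1
    k = np.arange(n + 1)
    coeff = np.array([comb(n, j) for j in k], dtype=float)
    # basis[i, j] = C(n, j) * x_i^j * (1 - x_i)^(n - j)
    basis = coeff * np.power.outer(x, k) * np.power.outer(1.0 - x, n - k)
    return basis @ np.asarray(f_vals, dtype=float)

# Approximate sin(pi * x) directly from 31 grid samples.
n = 30
grid = np.arange(n + 1) / n
approx = bernstein_eval(np.sin(np.pi * grid), np.array([0.25, 0.5]))
print(approx)  # close to sin(pi/4) and sin(pi/2), with O(1/n) smoothing error
```

The trade-off is slow (O(1/n)) convergence for smooth functions, which is one motivation for modified constructions such as the paper's.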
Pub Date: 2017-08-31 | DOI: 10.1109/MLSP.2017.8168152
Morten Kolbæk, Dong Yu, Z. Tan, J. Jensen
In this paper we propose to use utterance-level Permutation Invariant Training (uPIT) for simultaneous speaker-independent multi-talker speech separation and denoising. Specifically, we train deep bi-directional Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs) using uPIT for single-channel, speaker-independent, multi-talker speech separation in multiple noisy conditions, including both synthetic and real-life noise signals. Our experiments focus on the generalizability and noise robustness of models that rely on various types of a priori knowledge, e.g., the noise type and the number of simultaneous speakers. We show that deep bi-directional LSTM RNNs trained using uPIT in noisy environments can improve the Signal-to-Distortion Ratio (SDR) as well as the Extended Short-Time Objective Intelligibility (ESTOI) measure on the speaker-independent multi-talker separation and denoising task, for various noise types and Signal-to-Noise Ratios (SNRs). Specifically, we first show that LSTM RNNs can achieve large SDR and ESTOI improvements when evaluated on known noise types, and that a single model is capable of handling multiple noise types with only a slight decrease in performance. Furthermore, we show that a single LSTM RNN can handle both two-speaker and three-speaker noisy mixtures without a priori knowledge of the exact number of speakers. Finally, we show that LSTM RNNs trained using uPIT generalize well to noise types not seen during training.
Title: Joint separation and denoising of noisy multi-talker speech using recurrent neural networks and permutation invariant training
Published in: MLSP 2017, pp. 1-6
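The uPIT criterion itself is simple to state: evaluate the separation loss under every output-to-speaker assignment, but commit to a single assignment for the whole utterance and keep the cheapest one. A minimal sketch with a plain MSE loss on raw signals (the paper applies the idea to masked spectral representations):

```python
import numpy as np
from itertools import permutations

def upit_loss(estimates, references):
    """Utterance-level PIT: try every output-to-speaker permutation,
    but use ONE permutation for the whole utterance and return the
    minimum loss, so the assignment cannot flip from frame to frame."""
    n_spk = len(references)
    best = np.inf
    for perm in permutations(range(n_spk)):
        mse = np.mean([np.mean((estimates[p] - references[s]) ** 2)
                       for s, p in enumerate(perm)])
        best = min(best, mse)
    return best

# Estimates that match the references up to speaker order give zero loss.
rng = np.random.default_rng(0)
a, b = rng.standard_normal(1000), rng.standard_normal(1000)
print(upit_loss([b, a], [a, b]))  # 0.0: the right permutation is found
```

The per-utterance minimum is what makes training speaker-independent: the network never needs to know which output slot corresponds to which talker.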
Pub Date: 2017-08-30 | DOI: 10.1109/MLSP.2017.8168140
Miriam Cha, Youngjune Gwon, H. T. Kung
Recent generative adversarial network (GAN) approaches can automatically synthesize realistic images from descriptive text. Despite reasonable overall quality, the generated images often exhibit visible flaws and lack structural definition for the object of interest. In this paper, we aim to extend the state of the art in GAN-based text-to-image synthesis by improving the perceptual quality of the generated images. Differentiated from previous work, our synthetic image generator optimizes perceptual loss functions that measure pixel, feature-activation, and texture differences against a natural image. We present visually more compelling synthetic images of birds and flowers generated from text descriptions, in comparison to some of the most prominent existing work.
Title: Adversarial nets with perceptual losses for text-to-image synthesis
Published in: MLSP 2017, pp. 1-6
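The three perceptual terms can be sketched as follows. The Gram-matrix form of the texture loss is a standard choice assumed here rather than taken from the paper, and the feature shapes are toy stand-ins for activations of a fixed pretrained network:

```python
import numpy as np

def pixel_loss(gen, real):
    """Mean squared difference in pixel space."""
    return np.mean((gen - real) ** 2)

def feature_loss(gen_feat, real_feat):
    """Mean squared difference between feature activations taken from
    a fixed network layer (toy C x N arrays here)."""
    return np.mean((gen_feat - real_feat) ** 2)

def gram(feat):
    """Gram matrix of a C x N feature map: channel co-activation
    statistics that describe texture while discarding spatial layout."""
    c, n = feat.shape
    return feat @ feat.T / n

def texture_loss(gen_feat, real_feat):
    return np.mean((gram(gen_feat) - gram(real_feat)) ** 2)

def perceptual_loss(gen, real, gen_feat, real_feat, w=(1.0, 1.0, 1.0)):
    """Weighted sum of the three terms; the weights are assumptions."""
    return (w[0] * pixel_loss(gen, real)
            + w[1] * feature_loss(gen_feat, real_feat)
            + w[2] * texture_loss(gen_feat, real_feat))

# The texture term ignores where a texture sits in the image:
rng = np.random.default_rng(1)
f = rng.standard_normal((4, 50))
print(texture_loss(f[:, rng.permutation(50)], f))  # ~0: layout is ignored
```

This position-invariance of the Gram statistics is exactly why a texture term complements the pixel and feature terms instead of duplicating them.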
Pub Date: 2017-08-30 | DOI: 10.1109/MLSP.2017.8168141
Amarjot Singh, N. Kingsbury
The paper proposes the ScatterNet Hybrid Deep Learning (SHDL) network, which extracts invariant and discriminative image representations for object recognition. The SHDL framework is constructed from a multi-layer ScatterNet front-end, an unsupervised-learning middle section, and a supervised-learning back-end module. Each layer of the SHDL network is designed automatically as an explicit optimization problem, leading to an optimal deep learning architecture with improved computational performance compared to more typical deep network architectures. The SHDL network achieves state-of-the-art classification performance against unsupervised and semi-supervised learning methods (GANs) on two image datasets. Advantages of the SHDL network over supervised methods (NIN, VGG) are also demonstrated by experiments on training datasets of reduced size.
Title: ScatterNet hybrid deep learning (SHDL) network for object classification
Published in: MLSP 2017, pp. 1-6
Pub Date: 2017-08-16 | DOI: 10.1109/MLSP.2017.8168129
Shinichi Mogami, Daichi Kitamura, Yoshiki Mitsui, Norihiro Takamune, H. Saruwatari, Nobutaka Ono
In this paper, we generalize the source generative model of a state-of-the-art blind source separation (BSS) method, independent low-rank matrix analysis (ILRMA). ILRMA unifies frequency-domain independent component analysis and nonnegative matrix factorization, and can provide better performance on audio BSS tasks. To further improve the performance and stability of the separation, we introduce an isotropic complex Student's t-distribution as the source generative model, which includes the isotropic complex Gaussian distribution used in conventional ILRMA as a special case. Experiments are conducted on both music and speech BSS tasks, and the results show the validity of the proposed method.
Title: Independent low-rank matrix analysis based on complex Student's t-distribution for blind audio source separation
Published in: MLSP 2017, pp. 1-6
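The practical difference between the two source models is the tail behaviour of the negative log-likelihood. Up to normalization constants, which follow the paper's exact parameterization and are omitted in this sketch:

```python
import numpy as np

def gauss_penalty(x, v):
    """Negative log-density, up to constants, of an isotropic complex
    Gaussian source with variance v: quadratic in |x|."""
    return np.abs(x) ** 2 / v

def student_t_penalty(x, v, nu):
    """Negative log-density, up to constants, of an isotropic complex
    Student's t source with nu degrees of freedom: only logarithmic in
    |x|, so outliers perturb the model far less.  As nu grows it
    approaches the Gaussian penalty."""
    return (nu / 2 + 1) * np.log1p(2 * np.abs(x) ** 2 / (nu * v))

x = 10.0 + 0.0j                        # a strong outlier
print(gauss_penalty(x, 1.0))           # 100.0
print(student_t_penalty(x, 1.0, 2.0))  # about 9.2: far more tolerant
```

The heavier tails are what buy the extra stability: an occasional large time-frequency bin no longer dominates the parameter updates.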
Pub Date: 2017-08-16 | DOI: 10.1109/MLSP.2017.8168155
Asim Munawar, Phongtharin Vinayavekhin, Giovanni De Magistris
Generative models are widely used for unsupervised learning, with applications including data compression and signal restoration. Training methods for such systems focus on the generality of the network given a limited amount of training data. A less-researched class of techniques concerns generating only a single type of input. This is useful for applications such as constraint handling, noise reduction, and anomaly detection. In this paper we present a technique that limits the generative capability of the network using negative learning. The proposed method searches for the solution in the gradient direction for desired inputs and in the opposite direction for undesired inputs. One application is anomaly detection, where the undesired inputs are the anomalous data. We demonstrate the features of the algorithm on the MNIST handwritten digit dataset and later apply the technique to a real-world obstacle detection problem. The results clearly show that the proposed learning technique can significantly improve anomaly detection performance.
Title: Limiting the reconstruction capability of generative neural network using negative learning
Published in: MLSP 2017, pp. 1-6
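The update rule can be sketched abstractly: descend the reconstruction-loss gradient for desired inputs and ascend it for undesired ones. The scalar toy model below is purely illustrative, with assumed targets and weights:

```python
def negative_learning_step(w, grad_desired, grad_undesired, lr=0.05):
    """Descend the loss gradient for the desired input while ascending
    it for the undesired input, so the model keeps reconstructing the
    former and loses the ability to reconstruct the latter."""
    return w - lr * grad_desired + lr * grad_undesired

# Toy scalar "model": desired reconstruction target 1.0, undesired
# target 3.0, with a weaker weight on the undesired term.
w = 0.0
for _ in range(1000):
    grad_d = 2.0 * (w - 1.0)   # d/dw of (w - 1)^2
    grad_u = 0.2 * (w - 3.0)   # d/dw of 0.1 * (w - 3)^2
    w = negative_learning_step(w, grad_d, grad_u)
print(w)  # about 0.778: pulled toward 1.0 and pushed away from 3.0
```

After training, reconstruction error stays low only for the desired class, so a high error can be read directly as an anomaly score.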
Pub Date: 2017-07-19 | DOI: 10.1109/MLSP.2017.8168144
Tim O'Shea, Kiran Karra, T. Clancy
Estimation is a critical component of synchronization in wireless and signal processing systems. There is a rich body of work on estimator derivation, optimization, and statistical characterization from analytic system models, which is used pervasively today. We explore an alternative approach to building estimators that relies principally on approximate regression, using large datasets and large, computationally efficient artificial neural network models capable of learning non-linear function mappings that provide compact and accurate estimates. For single-carrier PSK modulation, we explore the accuracy and computational complexity of such estimators compared with the current gold-standard analytically derived alternatives. We compare performance in various wireless operating conditions and consider the trade-offs between the two classes of systems. Our results show that the learned estimators can provide improvements in areas such as short-time estimation and estimation under non-trivial real-world channel conditions such as fading or other non-linear hardware or propagation effects.
Title: Learning approximate neural estimators for wireless channel state information
Published in: MLSP 2017, pp. 1-7
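The regression viewpoint can be illustrated with a deliberately small stand-in: a linear-in-features estimator of noise variance learned from simulated bursts, in place of the paper's neural networks. All signal parameters here are assumptions made for the toy:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_burst(v, n=256):
    """Unit-power QPSK-like symbols in complex noise of variance v;
    the feature vector is [1, mean received power]."""
    s = (rng.choice([-1.0, 1.0], n) + 1j * rng.choice([-1.0, 1.0], n)) / np.sqrt(2)
    noise = np.sqrt(v / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    x = s + noise
    return np.array([1.0, np.mean(np.abs(x) ** 2)]), v

# "Large dataset" of simulated bursts with known noise variance.
data = [make_burst(v) for v in rng.uniform(0.05, 1.0, 2000)]
X = np.array([f for f, _ in data])
y = np.array([v for _, v in data])

# Least squares over features stands in for training the network: the
# estimator is learned from examples, not derived from the system model.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

feat, true_v = make_burst(0.5)
print(feat @ w)  # close to the true noise variance 0.5
```

The learned weights approximate the analytic relation E[|x|^2] = 1 + v; a neural network plays the same role for estimands where no such closed form is available.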
Pub Date: 2017-06-01 | DOI: 10.1109/MLSP.2017.8168167
Mohammad Golbabaee, Zhouye Chen, Y. Wiaux, M. Davies
We adopt a data structure in the form of cover trees and iteratively apply approximate nearest-neighbour (ANN) searches for fast compressed sensing reconstruction of signals living on discrete smooth manifolds. Leveraging recent stability results for the inexact Iterative Projected Gradient (IPG) algorithm, and using the cover tree's ANN searches, we reduce the projection cost of the IPG algorithm so that it grows logarithmically with the data population for low-dimensional smooth manifolds. We apply our results to quantitative MRI compressed sensing, in particular within the Magnetic Resonance Fingerprinting (MRF) framework. For similar (and sometimes better) reconstruction accuracy, we report a 2-3 order-of-magnitude reduction in computation compared to the standard iterative method, which uses brute-force searches.
Title: Cover tree compressed sensing for fast MR fingerprint recovery
Published in: MLSP 2017, pp. 1-6
Pub Date: 2017-05-27 | DOI: 10.1109/MLSP.2017.8168163
David J. Miller, Xinyi Hu, Zhicong Qiu, G. Kesidis
This paper consists of two parts. The first is a critical review of prior art on adversarial learning, (i) identifying significant limitations of previous works, which have focused mainly on attack exploits, and (ii) proposing novel defenses against adversarial attacks. The second part is an experimental study of the adversarial active learning scenario, investigating the efficacy of a mixed sample selection strategy for combating an adversary who attempts to disrupt classifier learning.
Title: Adversarial learning: A critical review and active learning study
Published in: MLSP 2017, pp. 1-6