{"title":"Robustness Analysis of Gaussian Process Convolutional Neural Network with Uncertainty Quantification","authors":"Mahed Javed, L. Mihaylova, N. Bouaynaya","doi":"10.18178/ijmlc.2022.12.5.1097","DOIUrl":null,"url":null,"abstract":" Abstract —This paper presents a novel framework for image classification which comprises a convolutional neural network (CNN) feature map extractor combined with a Gaussian process (GP) classifier. Learning within the CNN-GP involves forward propagating the predicted class labels, then followed by backpropagation of the maximum likelihood function of the GP with a regularization term added. The regularization term takes the form of one of the three loss functions: the Kullback-Leibler divergence, Wasserstein distance, and maximum correntropy. The training and testing are performed in mini batches of images. The forward step (before the regularization) involves replacing the original images in the mini batch with their close neighboring images and then providing these to the CNN-GP to get the new predictive labels. The network performance is evaluated on MNIST, Fashion-MNIST, CIFAR10, and CIFAR100 datasets. Precision-recall and receiver operating characteristics curves are used to evaluate the performance of the GP classifier. The proposed CNN-GP performance is validated with different levels of noise, motion blur, and adversarial attacks. Results are explained using uncertainty analysis and further tests on quantifying the impact on uncertainty with attack strength are carried out. The results show that the testing accuracy improves for networks that backpropagate the maximum likelihood with regularized losses when compared with methods that do not. Moreover, a comparison with a state-of-art CNN Monte Carlo dropout method is presented. The outperformance of the CNN-GP framework with respect to reliability and computational efficiency is","PeriodicalId":91709,"journal":{"name":"International journal of machine learning and computing","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of machine learning and computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18178/ijmlc.2022.12.5.1097","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This paper presents a novel framework for image classification that combines a convolutional neural network (CNN) feature-map extractor with a Gaussian process (GP) classifier. Learning in the CNN-GP involves forward propagation of the predicted class labels, followed by backpropagation of the GP maximum likelihood function with an added regularization term. The regularization term takes the form of one of three loss functions: the Kullback-Leibler divergence, the Wasserstein distance, or maximum correntropy. Training and testing are performed in mini-batches of images. The forward step (before regularization) replaces the original images in the mini-batch with close neighboring images, which are then passed to the CNN-GP to obtain new predictive labels. Network performance is evaluated on the MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets. Precision-recall and receiver operating characteristic (ROC) curves are used to evaluate the performance of the GP classifier. The proposed CNN-GP is validated under different levels of noise, motion blur, and adversarial attack. Results are explained using uncertainty analysis, and further tests quantify how uncertainty changes with attack strength. The results show that test accuracy improves for networks that backpropagate the maximum likelihood with regularized losses, compared with methods that do not. Moreover, a comparison with a state-of-the-art CNN Monte Carlo dropout method is presented. The outperformance of the CNN-GP framework with respect to reliability and computational efficiency is demonstrated.
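To make the training loop described in the abstract concrete, the following PyTorch sketch illustrates one plausible reading of it. This is not the authors' implementation: the small CNN architecture is a placeholder, the GP classifier is simplified to exact GP regression on one-hot labels within each mini-batch, the kernel hyperparameters are fixed rather than learned, the "close neighboring images" are stood in for by small random perturbations, and the correntropy penalty uses an assumed Gaussian-kernel form. The Kullback-Leibler regularizer is shown in the loss; a Wasserstein variant would slot into the same place.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SmallCNN(nn.Module):
        # Placeholder feature-map extractor; the paper's CNN architecture
        # is not specified in the abstract.
        def __init__(self, in_ch=1, feat_dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.proj = nn.Linear(64, feat_dim)

        def forward(self, x):
            return self.proj(self.net(x).flatten(1))

    def rbf_kernel(z, lengthscale=1.0, outputscale=1.0):
        # Squared-exponential kernel on the mini-batch features.
        d2 = (z.unsqueeze(1) - z.unsqueeze(0)).pow(2).sum(-1)
        return outputscale * torch.exp(-0.5 * d2 / lengthscale ** 2)

    def gp_nll_and_probs(feats, onehot, noise=0.1):
        # Exact GP regression to one-hot labels inside the mini-batch: a
        # simplification of a true GP classifier. Returns the negative log
        # marginal likelihood (additive constants dropped) and softmax
        # "class probabilities" derived from the posterior mean.
        n, c = onehot.shape
        K = rbf_kernel(feats) + noise * torch.eye(n, device=feats.device)
        L = torch.linalg.cholesky(K)
        alpha = torch.cholesky_solve(onehot, L)      # (K + noise*I)^{-1} Y
        nll = 0.5 * (onehot * alpha).sum() + c * L.diagonal().log().sum()
        mean = K @ alpha                             # posterior mean at batch points
        return nll, F.softmax(mean, dim=1)

    def kl_reg(p, q, eps=1e-8):
        # KL(p || q) between rows of class probabilities.
        return (p * ((p + eps).log() - (q + eps).log())).sum(dim=1).mean()

    def correntropy_reg(p, q, sigma=0.5):
        # Correntropy-style penalty with a Gaussian kernel (assumed form).
        return (1.0 - torch.exp(-(p - q).pow(2) / (2 * sigma ** 2))).mean()

    def train_step(cnn, images, neighbor_images, labels, optimizer,
                   lam=0.1, num_classes=10):
        # One mini-batch update: GP marginal-likelihood loss plus a
        # regularizer comparing predictions on the original images with
        # predictions after the neighbor-replacement forward step.
        onehot = F.one_hot(labels, num_classes).float()
        nll, probs = gp_nll_and_probs(cnn(images), onehot)
        _, probs_nb = gp_nll_and_probs(cnn(neighbor_images), onehot)
        loss = nll + lam * kl_reg(probs, probs_nb)   # or correntropy_reg(...)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Usage sketch with MNIST-sized inputs; the noise perturbation is only
    # a stand-in for the paper's "close neighboring images".
    cnn = SmallCNN()
    opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
    x = torch.randn(32, 1, 28, 28)
    x_nb = x + 0.05 * torch.randn_like(x)
    y = torch.randint(0, 10, (32,))
    print(train_step(cnn, x, x_nb, y, opt))

Here the regularizer pulls the predictive distribution on the original mini-batch toward the predictions obtained after the neighbor-replacement forward step, with lam weighting the regularization term against the GP likelihood, which is one plausible reading of the procedure the abstract outlines.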