Michael R. Clark, Peter Swartz, Andrew Alten, Raed M. Salih
Toward Black-box Image Extraction Attacks on RBF SVM Classification Model
Published in: 2020 IEEE/ACM Symposium on Edge Computing (SEC), November 2020
DOI: 10.1109/SEC50012.2020.00058
Citations: 1
Abstract
Image extraction attacks on machine learning models seek to recover semantically meaningful training imagery from a trained classifier. Such attacks are concerning because training data can include sensitive information. Research has shown that extracting training images is generally much harder than model inversion, which attempts to duplicate the functionality of the model. In this paper, we use the RBF SVM classifier to show that we can extract individual training images from models trained on thousands of images, refuting the notion that such attacks can only recover an "average" of each class. We also correct common misperceptions about black-box image extraction attacks and develop a deep understanding of why some trained models are vulnerable to our attack while others are not. Our work is the first to show semantically meaningful images extracted from the RBF SVM classifier.

CCS Concepts: • Computing methodologies → Machine learning → Machine learning approaches → Logical and relational learning • Security and privacy → Systems security → Vulnerability management
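One intuition for why an RBF SVM in particular can leak training imagery (this is not the paper's black-box attack, which queries the model without internal access): a kernel SVM stores a subset of training points verbatim as its support vectors, so any path that recovers them recovers actual training samples. A minimal sketch using scikit-learn's `SVC` with toy data standing in for images:

```python
# Hedged illustration: an RBF SVM's support vectors are exact copies of
# training samples, which is why recovering them means recovering training
# data. Toy random vectors stand in for flattened images.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 64))                     # 200 toy "images", 8x8 pixels each
y = (X.mean(axis=1) > 0.5).astype(int)        # arbitrary binary labels

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Every stored support vector matches some training sample exactly.
for sv in clf.support_vectors_:
    assert any(np.array_equal(sv, x) for x in X)
```

In the white-box setting the leak is immediate via `clf.support_vectors_`; the harder black-box question the paper studies is whether comparable information can be squeezed out through prediction queries alone.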