Song Gao;Xiaoxuan Wang;Bingbing Song;Renyang Liu;Shaowen Yao;Wei Zhou;Shui Yu
Title: Exploiting Type I Adversarial Examples to Hide Data Information: A New Privacy-Preserving Approach
DOI: 10.1109/TETCI.2024.3367812
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence (Impact Factor 5.3, JCR Q1, Computer Science, Artificial Intelligence)
Publication date: 2024-03-04 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10458271/
Citations: 0
Abstract
Deep neural networks (DNNs) are sensitive to adversarial examples, which are generated either by corrupting benign examples with imperceptible perturbations or by changing them significantly while still preserving the original prediction results. The latter case is termed the Type I adversarial example and has received limited attention in the literature. In this paper, we introduce two methods, termed HRG and GAG, to generate Type I adversarial examples and apply them to privacy-preserving Machine Learning as a Service (MLaaS). Existing methods for privacy-preserving MLaaS are mostly based on cryptographic techniques, which often incur additional communication and computation overhead; using Type I adversarial examples to hide users' private data is a brand-new exploration. Specifically, HRG utilizes the high-level representations of DNNs to guide generators, and GAG leverages a generative adversarial network to transform original images. Our solution does not involve any model modifications and allows DNNs to run directly on the transformed data, thus incurring no additional communication or computation overhead. Extensive experiments on MNIST, CIFAR-10, and ImageNet show that HRG can perfectly hide images in noise while achieving accuracy similar to the original, and GAG can generate natural images that are completely different from the originals with only a small loss of accuracy.
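The core property behind a Type I adversarial example, as the abstract describes it, is that the input changes substantially while the model's prediction stays fixed. The paper's HRG and GAG methods achieve this for deep networks with learned generators; the toy sketch below (an assumption for illustration, not the authors' algorithm) conveys only the underlying idea on a linear classifier, where moving along the null space of the weight matrix changes the input arbitrarily without changing the logits.

```python
import numpy as np

# Toy illustration of the Type I idea on a linear "classifier":
# alter the input substantially while leaving the prediction intact.
# (This linear sketch is NOT the paper's HRG/GAG method; it only
# demonstrates the "large input change, same output" property.)

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 10))   # 3-class linear model, 10-dim input
x = rng.standard_normal(10)        # a "benign" input

# Directions in the null space of W do not affect the logits W @ x.
# From the SVD, the rows of Vt beyond rank(W) span that null space.
_, _, Vt = np.linalg.svd(W)
null_dir = Vt[3]                   # W has rank 3, so rows 3..9 are null directions

x_adv = x + 100.0 * null_dir       # a large, visible change to the input

print(np.allclose(W @ x, W @ x_adv))   # same logits, hence same prediction
print(np.linalg.norm(x_adv - x) > 50)  # yet the input moved far from the original
```

For a deep nonlinear network there is no exact null space to exploit, which is why the paper instead trains generators guided by high-level representations (HRG) or a GAN (GAG) to find such prediction-preserving transformations.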
Journal overview:
The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys.
TETCI is an electronic-only publication and publishes six issues per year.
Authors are encouraged to submit manuscripts in any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. A few such illustrative examples are glial cell networks, computational neuroscience, Brain Computer Interface, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, computational intelligence for the IoT and Smart-X technologies.