DefenseFea: An Input Transformation Feature Searching Algorithm Based Latent Space for Adversarial Defense
Zhang Pan, Yangjie Cao, Chenxi Zhu, Zhuang Yan, Wang Haobo, Li Jie
DOI: 10.2478/fcds-2024-0002 (https://doi.org/10.2478/fcds-2024-0002)
Published: 2024-02-01
Citations: 0
Abstract
Deep neural network based image classification systems can suffer from adversarial attacks, which generate input examples by adding deliberately crafted yet imperceptible noise to the original input images. These crafted examples can fool classifiers and thereby threaten their security. In this paper, we propose to use the latent space to protect image classification. Specifically, we train a feature searching network to close the gap between adversarial examples and clean examples using a label-guided loss function. We name the method DefenseFea (input transformation based defense with a label-guided loss function). Experimental results show that DefenseFea achieves a defense success rate of about 99% on a specific set of 5000 images from ILSVRC 2012. This study plays a positive role in the further investigation of the relationship between adversarial examples and clean examples.
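The abstract describes training a transformation network so that a frozen classifier assigns the true label to the transformed adversarial input while its features are pulled back toward those of the clean example. The sketch below is only an illustration of such a label-guided objective, not the paper's actual implementation; the function names, the mean-squared feature term, and the weighting `alpha` are all assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def label_guided_loss(logits, true_label, feat_transformed, feat_clean, alpha=1.0):
    """Hypothetical label-guided objective: cross-entropy that pushes the
    classifier's prediction on the transformed input toward the true label,
    plus a feature-reconstruction term that pulls the transformed features
    toward the clean example's features."""
    ce = -np.log(softmax(logits)[true_label] + 1e-12)
    recon = np.mean((feat_transformed - feat_clean) ** 2)
    return ce + alpha * recon

# Toy check: a transformed input that the classifier labels correctly and
# whose features match the clean features incurs a lower loss.
good = label_guided_loss(np.array([5.0, 0.0]), 0, np.zeros(4), np.zeros(4))
bad = label_guided_loss(np.array([0.0, 5.0]), 0, np.ones(4), np.zeros(4))
```

In this sketch, minimizing the loss over the transformation network's parameters (with the classifier frozen) is what would drive adversarial features back toward the clean-example manifold.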