{"title":"Real World Robustness from Systematic Noise","authors":"Yan Wang, Yuhang Li, Ruihao Gong","doi":"10.1145/3475724.3483607","DOIUrl":null,"url":null,"abstract":"Systematic error, which is not determined by chance, often refers to the inaccuracy (involving either the observation or measurement process) inherent to a system. In this paper, we exhibit some long-neglected but frequent-happening adversarial examples caused by systematic error. More specifically, we find the trained neural network classifier can be fooled by inconsistent implementations of image decoding and resize. This tiny difference between these implementations often causes an accuracy drop from training to deployment. To benchmark these real-world adversarial examples, we propose ImageNet-S dataset, which enables researchers to measure a classifier's robustness to systematic error. For example, we find a normal ResNet-50 trained on ImageNet can have 1%$\\sim$5% accuracy difference due to the systematic error. Together our evaluation and dataset may aid future work toward real-world robustness and practical generalization.","PeriodicalId":279202,"journal":{"name":"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3475724.3483607","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
Systematic error, which is not determined by chance, often refers to inaccuracy (in either the observation or the measurement process) inherent to a system. In this paper, we exhibit a class of long-neglected but frequently occurring adversarial examples caused by systematic error. More specifically, we find that a trained neural network classifier can be fooled by inconsistent implementations of image decoding and resizing. The tiny differences between these implementations often cause an accuracy drop from training to deployment. To benchmark these real-world adversarial examples, we propose the ImageNet-S dataset, which enables researchers to measure a classifier's robustness to systematic error. For example, we find that a standard ResNet-50 trained on ImageNet can show a 1%$\sim$5% accuracy difference due to systematic error. Together, our evaluation and dataset may aid future work toward real-world robustness and practical generalization.
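To make the decode-and-resize inconsistency concrete, the following minimal sketch (not taken from the paper) loads the same JPEG through two common preprocessing pipelines, Pillow and OpenCV, and compares the resulting arrays; the file name is a placeholder.

```python
# Sketch: the same image, decoded and resized by two widely used libraries,
# yields different pixel values, i.e., the "systematic noise" the paper studies.
import numpy as np
import cv2
from PIL import Image

path = "example.jpg"  # hypothetical input image

# Pipeline A: Pillow decode + bilinear resize (typical training-side loading)
pil_img = Image.open(path).convert("RGB")
pil_arr = np.asarray(pil_img.resize((224, 224), Image.BILINEAR), dtype=np.float32)

# Pipeline B: OpenCV decode + bilinear resize (typical deployment-side loading)
cv_img = cv2.cvtColor(cv2.imread(path, cv2.IMREAD_COLOR), cv2.COLOR_BGR2RGB)
cv_arr = cv2.resize(cv_img, (224, 224), interpolation=cv2.INTER_LINEAR).astype(np.float32)

# Even though both request "bilinear" resizing, the decoded pixels and the
# resize kernels differ slightly, so the two tensors are not identical.
diff = np.abs(pil_arr - cv_arr)
print(f"pixels that differ: {(diff > 0).mean():.1%}, max abs diff: {diff.max():.1f}")
```

Small per-pixel residuals of this kind are exactly what the abstract attributes the 1%$\sim$5% accuracy gap to: the same trained model receives slightly different inputs at deployment than it saw during training.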