{"title":"Towards Understanding The Space of Unrobust Features of Neural Networks","authors":"Bingli Liao, Takahiro Kanzaki, Danilo Vasconcellos Vargas","doi":"10.1109/CYBCONF51991.2021.9464137","DOIUrl":null,"url":null,"abstract":"Despite the convolutional neural network has achieved tremendous monumental success on a variety of computer vision-related tasks, it is still extremely challenging to build a neural network with doubtless reliability. Previous works have demonstrated that the deep neural network can be efficiently fooled by human imperceptible perturbation to the input, which actually revealed the instability for interpolation. Like human-beings, an ideally trained neural network should be constrained within desired inference space and maintain correctness for both interpolation and extrapolation. In this paper, we develop a technique to verify the correctness when convolutional neural networks extrapolate beyond training data distribution by generating legitimated feature broken images, and we show that the decision boundary for convolutional neural network is not well formulated based on image features for extrapolating.","PeriodicalId":231194,"journal":{"name":"2021 5th IEEE International Conference on Cybernetics (CYBCONF)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 5th IEEE International Conference on Cybernetics (CYBCONF)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CYBCONF51991.2021.9464137","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Although convolutional neural networks have achieved monumental success on a variety of computer vision tasks, it is still extremely challenging to build a neural network with unquestionable reliability. Previous works have demonstrated that deep neural networks can be efficiently fooled by human-imperceptible perturbations to the input, which reveals their instability under interpolation. Like human beings, an ideally trained neural network should be constrained within the desired inference space and maintain correctness for both interpolation and extrapolation. In this paper, we develop a technique to verify correctness when convolutional neural networks extrapolate beyond the training data distribution by generating legitimate feature-broken images, and we show that the decision boundary of a convolutional neural network is not well formed with respect to image features when extrapolating.
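
To make the idea of probing extrapolation concrete, below is a minimal, hypothetical sketch of one way to generate "feature-broken" inputs and observe a classifier's response. The abstract does not specify the generation procedure; the patch-shuffling transform, the ResNet-18 backbone, and the file name `example.jpg` are all illustrative assumptions, not the authors' method. The intuition is that an image whose object-level features are destroyed lies outside the training distribution, so a well-constrained network should not remain confidently committed to any class on it.

```python
# Illustrative sketch (assumption): break object-level features by shuffling
# image patches, then check whether a pretrained CNN still predicts confidently.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image


def shuffle_patches(img_tensor, patch=32):
    """Randomly permute non-overlapping patches, destroying object-level
    structure while keeping local texture statistics roughly intact."""
    c, h, w = img_tensor.shape
    gh, gw = h // patch, w // patch
    # Crop to a multiple of the patch size, then cut into a (gh x gw) grid.
    patches = (img_tensor[:, :gh * patch, :gw * patch]
               .unfold(1, patch, patch)
               .unfold(2, patch, patch)        # (c, gh, gw, patch, patch)
               .reshape(c, gh * gw, patch, patch))
    perm = torch.randperm(gh * gw)
    patches = patches[:, perm]
    # Stitch the permuted patches back into a single image.
    rows = [torch.cat([patches[:, r * gw + col] for col in range(gw)], dim=2)
            for r in range(gh)]
    return torch.cat(rows, dim=1)


model = models.resnet18(weights="IMAGENET1K_V1").eval()
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

img = preprocess(Image.open("example.jpg"))    # placeholder input image
broken = shuffle_patches(img)

with torch.no_grad():
    probs = torch.softmax(model(broken.unsqueeze(0)), dim=1)
conf, cls = probs.max(dim=1)

# A network constrained to the desired inference space should become uncertain
# on feature-broken input; a high-confidence prediction suggests the decision
# boundary relies on cues that survive the feature destruction.
print(f"predicted class {cls.item()} with confidence {conf.item():.3f}")
```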