Neural Network Model Extraction Based on Adversarial Examples
Huiwen Fang, Chunhua Wu
Proceedings of the 3rd International Conference on Advanced Information Science and System, 2021-11-26
DOI: 10.1145/3503047.3503085
Neural network models are now deployed across many industries. By probing the internal information of a black-box model, an attacker can extract the model's potential commercial value; moreover, knowledge of the model's architecture helps the attacker tailor subsequent attacks against it. We improve a model-detection method based on input-output pairs to infer the internal information of a trained black-box neural network. On the one hand, our work shows that adversarial examples are very likely to carry architectural information about the model; on the other hand, we add adversarial examples to the model pre-detection module and experimentally verify their positive effect on model detection, improving the accuracy of the meta-model and reducing the cost of detection.
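As a minimal, hypothetical illustration of the pipeline the abstract describes — query a black-box model, craft adversarial examples, and collect input-output pairs that could later train a meta-model — the sketch below uses FGSM-style perturbation against a toy logistic "victim" model. The victim model, its weights, and the `query` interface are all assumptions for illustration, not the paper's actual setup; a real black-box attacker would also have to estimate the gradient from queries rather than use it directly.

```python
import numpy as np

# Hypothetical victim: a tiny logistic model whose weights W the attacker
# cannot see. The attacker interacts only through query().
rng = np.random.default_rng(0)
W = rng.normal(size=10)  # hidden weights (unknown to the attacker)

def query(x):
    """Black-box API: return the model's output probability for input x."""
    return 1.0 / (1.0 + np.exp(-x @ W))

def fgsm(x, eps=0.1):
    """FGSM-style perturbation. For illustration we use the analytic
    gradient of the logistic output w.r.t. x; a true black-box attack
    would approximate this gradient from extra queries."""
    p = query(x)
    grad = p * (1.0 - p) * W        # d(output)/dx for the logistic model
    return x + eps * np.sign(grad)  # step in the sign of the gradient

# Collect input-output pairs (clean and adversarial) — the kind of data
# a meta-model could be trained on to predict architecture attributes.
x = rng.normal(size=10)
x_adv = fgsm(x)
pairs = [(x, query(x)), (x_adv, query(x_adv))]
print(f"clean output: {pairs[0][1]:.4f}, adversarial output: {pairs[1][1]:.4f}")
```

Because the perturbation follows the gradient sign, the adversarial query's output is pushed upward relative to the clean one, so the two pairs differ in a way that reflects the model's internal decision surface.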