Neural Network Model Extraction Based on Adversarial Examples

Huiwen Fang, Chunhua Wu
DOI: 10.1145/3503047.3503085
Published: 2021-11-26, in Proceedings of the 3rd International Conference on Advanced Information Science and System
Citations: 0

Abstract

Neural network models are now deployed across many industries. By probing the internal information of a black-box model, an attacker can extract the model's potential commercial value; moreover, knowledge of the model's architecture helps the attacker tailor further attacks against it. We improve a detection method based on input-output pairs to infer the internal information of a trained black-box neural network. On the one hand, our work shows that adversarial examples are very likely to carry architecture information about the neural network model. On the other hand, we add adversarial examples to the model pre-detection module and experimentally verify their positive effect on model detection, improving the accuracy of the meta-model and reducing the cost of detection.
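The abstract does not spell out the paper's algorithm, but the general idea it describes (query a black-box model with ordinary and adversarial inputs, then train a meta-model on the collected outputs to predict architecture attributes) can be illustrated with a toy sketch. Everything below is an assumption for illustration: the random tanh networks standing in for black boxes, the finite-difference surrogate for FGSM (a gradient-free stand-in, since the box is opaque), and the nearest-centroid meta-model are not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def make_black_box(depth):
    """Toy stand-in for a trained network: `depth` random tanh layers."""
    Ws = [rng.normal(size=(8, 8)) / np.sqrt(8) for _ in range(depth)]
    def f(x):
        h = x
        for W in Ws:
            h = np.tanh(h @ W)
        return softmax(h)
    return f

def adv_probe(f, x, eps=0.3):
    """One-step sign-gradient perturbation, with the gradient of the top
    score estimated by finite differences (query access only)."""
    grad = np.zeros_like(x)
    base = f(x)[0].max()
    for i in range(x.shape[1]):
        d = np.zeros_like(x)
        d[0, i] = 1e-3
        grad[0, i] = (f(x + d)[0].max() - base) / 1e-3
    return x + eps * np.sign(grad)

def fingerprint(f, n_queries=16):
    """Query the box with random and adversarial probes; concatenate
    the output vectors into one feature vector."""
    feats = []
    for _ in range(n_queries):
        x = rng.normal(size=(1, 8))
        feats.append(f(x)[0])
        feats.append(f(adv_probe(f, x))[0])
    return np.concatenate(feats)

# Build (fingerprint, depth-label) pairs from boxes of known depth and
# fit a nearest-centroid meta-model that guesses the hidden depth.
depths = [1, 3]
X, y = [], []
for d in depths:
    for _ in range(10):
        X.append(fingerprint(make_black_box(d)))
        y.append(d)
X, y = np.array(X), np.array(y)
centroids = {d: X[y == d].mean(axis=0) for d in depths}

def predict_depth(f):
    v = fingerprint(f)
    return min(depths, key=lambda d: np.linalg.norm(v - centroids[d]))
```

The sketch only shows the query/fingerprint/meta-model pipeline shape; a real attack would use richer probes, a stronger meta-model, and actual trained targets.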