NPUFort: a secure architecture of DNN accelerator against model inversion attack
Xingbin Wang, Rui Hou, Yifan Zhu, Jun Zhang, Dan Meng
Proceedings of the 16th ACM International Conference on Computing Frontiers, 2019-04-30
DOI: 10.1145/3310273.3323070
Citations: 26
Abstract
Deep neural network (DNN) models are widely used for inference in many application scenarios. DNN accelerators, however, are designed for higher performance and lower energy consumption rather than with security in mind, and are therefore exposed to attack. Design flaws in existing DNN accelerators can be exploited to recover the structure of a DNN model from its plaintext instructions, and the runtime environment can then be controlled to obtain the model's weights. Furthermore, the structure of a DNN model running on the accelerator can be acquired from side-channel information and the interrupt status register. To protect general DNN accelerators from model inversion attacks, this paper proposes a secure and general architecture called NPUFort, which guarantees the confidentiality of DNN model parameters and mitigates side-channel information leakage. Experimental results demonstrate the feasibility and effectiveness of the secure architecture with negligible performance overhead.
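To illustrate why plaintext instructions leak model structure, the following is a minimal hypothetical sketch (the opcodes and instruction format are invented for illustration and are not NPUFort's or any real accelerator's ISA): if each instruction's opcode identifies a layer operation, an attacker who can read unencrypted instruction memory can reconstruct the layer sequence directly.

```python
# Toy illustration of structure recovery from a plaintext instruction
# trace. Opcode encoding and the example trace are hypothetical.
TOY_OPCODES = {0x1: "conv", 0x2: "pool", 0x3: "fc", 0x4: "relu"}

def recover_structure(trace):
    """Map each (opcode, operand) pair in the trace to a layer label."""
    layers = []
    for opcode, operand in trace:
        name = TOY_OPCODES.get(opcode, "unknown")
        layers.append(f"{name}({operand})")
    return layers

# A trace an attacker might dump from unprotected instruction memory.
trace = [(0x1, 64), (0x4, 0), (0x2, 2), (0x3, 10)]
print(recover_structure(trace))  # ['conv(64)', 'relu(0)', 'pool(2)', 'fc(10)']
```

Encrypting the sensitive fields of such instructions, as NPUFort proposes, denies the attacker this straightforward mapping.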