{"title":"Ginver: Generative Model Inversion Attacks Against Collaborative Inference","authors":"Yupeng Yin, Xianglong Zhang, Huanle Zhang, Feng Li, Yue Yu, Xiuzhen Cheng, Pengfei Hu","doi":"10.1145/3543507.3583306","DOIUrl":null,"url":null,"abstract":"Deep Learning (DL) has been widely adopted in almost all domains, from threat recognition to medical diagnosis. Albeit its supreme model accuracy, DL imposes a heavy burden on devices as it incurs overwhelming system overhead to execute DL models, especially on Internet-of-Things (IoT) and edge devices. Collaborative inference is a promising approach to supporting DL models, by which the data owner (the victim) runs the first layers of the model on her local device and then a cloud provider (the adversary) runs the remaining layers of the model. Compared to offloading the entire model to the cloud, the collaborative inference approach is more data privacy-preserving as the owner’s model input is not exposed to outsiders. However, we show in this paper that the adversary can restore the victim’s model input by exploiting the output of the victim’s local model. Our attack is dubbed Ginver 1: Generative model inversion attacks against collaborative inference. Once trained, Ginver can infer the victim’s unseen model inputs without remaking the inversion attack model and thus has the generative capability. We extensively evaluate Ginver under different settings (e.g., white-box and black-box of the victim’s local model) and applications (e.g., CIFAR10 and FaceScrub datasets). The experimental results show that Ginver recovers high-quality images from the victims.","PeriodicalId":296351,"journal":{"name":"Proceedings of the ACM Web Conference 2023","volume":"51 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ACM Web Conference 2023","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3543507.3583306","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Deep Learning (DL) has been widely adopted in almost all domains, from threat recognition to medical diagnosis. Despite its superior model accuracy, DL imposes a heavy burden on devices, as executing DL models incurs overwhelming system overhead, especially on Internet-of-Things (IoT) and edge devices. Collaborative inference is a promising approach to supporting DL models: the data owner (the victim) runs the first layers of the model on her local device, and a cloud provider (the adversary) runs the remaining layers. Compared to offloading the entire model to the cloud, collaborative inference better preserves data privacy because the owner's model input is not exposed to outsiders. However, we show in this paper that the adversary can reconstruct the victim's model input by exploiting the output of the victim's local model. Our attack is dubbed Ginver: Generative model inversion attacks against collaborative inference. Once trained, Ginver can invert the victim's unseen model inputs without retraining the inversion attack model, and thus has a generative capability. We extensively evaluate Ginver under different settings (e.g., white-box and black-box access to the victim's local model) and applications (e.g., the CIFAR10 and FaceScrub datasets). The experimental results show that Ginver recovers high-quality images from the victims.
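To make the setting concrete, the PyTorch snippet below is a minimal, illustrative sketch of the collaborative-inference split and of a generative inversion attack in the spirit described above. The toy convolutional head, the toy inversion network, and the feature-matching training objective are assumptions for illustration, not the paper's actual architectures or loss; the sketch also corresponds to the white-box setting, where the adversary can backpropagate through the victim's local model (a black-box variant would have to estimate those gradients from queries instead).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Victim-side "head": the first layers of the model, run on the local device.
# (Toy architecture for illustration; the real victim model is arbitrary.)
victim_head = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
).requires_grad_(False)  # the adversary never updates the victim's weights

# Adversary-side inversion network: maps intermediate features back to an image.
inversion_net = nn.Sequential(
    nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(inversion_net.parameters(), lr=1e-3)

# Training (illustrative feature-matching objective): the adversary collects
# intermediate features, reconstructs candidate images, and trains the
# inversion network so that re-encoding the reconstruction reproduces the
# observed features. No labels and no original inputs are needed.
for step in range(1000):
    x_aux = torch.rand(8, 3, 32, 32)     # stand-in for observed inputs (CIFAR10-sized)
    with torch.no_grad():
        feats = victim_head(x_aux)       # the features the cloud provider receives
    x_rec = inversion_net(feats)         # candidate reconstruction of the input
    loss = F.mse_loss(victim_head(x_rec), feats)
    optimizer.zero_grad()
    loss.backward()                      # white-box: gradients flow through victim_head
    optimizer.step()

# Generative property: once trained, the inversion network reconstructs an
# unseen input from its features in a single forward pass, with no retraining.
new_feats = victim_head(torch.rand(1, 3, 32, 32))
recovered_image = inversion_net(new_feats)
```

The sketch highlights the property the abstract emphasizes: the attack model is trained once on intermediate features and then inverts any new, unseen input directly from its features, rather than solving a fresh per-input optimization problem.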