Lasagna: Accelerating Secure Deep Learning Inference in SGX-enabled Edge Cloud
Yuepeng Li, Deze Zeng, Lin Gu, Quan Chen, Song Guo, Albert Y. Zomaya, M. Guo
Proceedings of the ACM Symposium on Cloud Computing (SoCC), November 2021
DOI: 10.1145/3472883.3486988
Citations: 10
Abstract
Edge intelligence is widely regarded as a key enabling technology across a variety of domains. Alongside this prosperity, growing concern has been raised about the security and privacy of intelligent applications. Because these applications are usually deployed on shared and untrusted edge servers, malicious co-located attackers, or even untrustworthy infrastructure providers, may acquire highly security-sensitive data and code (e.g., the pre-trained model). Software Guard Extensions (SGX) provides an isolated Trusted Execution Environment (TEE) that guarantees task security. However, we observe that DNN inference performance in SGX is severely degraded by the limited enclave memory space, which leads to frequent page swapping operations and high enclave call overhead. To tackle this problem, we propose Lasagna, an SGX-oriented DNN inference acceleration framework that does not compromise task security. Lasagna consists of a local task scheduler and a global task balancer that optimize system performance by exploiting the layered structure of DNN models. Our experimental results show that layer-aware Lasagna effectively speeds up inference of well-known DNN models in SGX by 1.31x-1.97x.
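The core idea sketched in the abstract — exploiting the layered structure of a DNN so that the working set executing inside the enclave stays within the limited enclave page cache (EPC), avoiding costly page swapping — can be illustrated with a toy greedy partitioner. This is a minimal sketch of the general layer-aware idea, not Lasagna's actual scheduling algorithm; the `Layer`, `partition_layers`, and memory-budget names are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Layer:
    name: str
    mem_mb: float  # estimated in-enclave working-set size of the layer


def partition_layers(layers: List[Layer], epc_budget_mb: float) -> List[List[Layer]]:
    """Greedily group consecutive DNN layers so each group's total working
    set fits within the EPC budget; each group can then run inside the
    enclave without triggering page swapping for its own data."""
    groups: List[List[Layer]] = []
    current: List[Layer] = []
    used = 0.0
    for layer in layers:
        if current and used + layer.mem_mb > epc_budget_mb:
            groups.append(current)        # close the full group
            current, used = [], 0.0
        current.append(layer)             # a single oversized layer still
        used += layer.mem_mb              # gets its own group
    if current:
        groups.append(current)
    return groups


# Hypothetical per-layer footprints for a small model, 90 MB EPC budget.
model = [Layer("conv1", 40), Layer("conv2", 60), Layer("fc1", 30), Layer("fc2", 20)]
groups = partition_layers(model, epc_budget_mb=90)
for g in groups:
    print([l.name for l in g], sum(l.mem_mb for l in g))
```

In practice a scheduler would also weigh enclave-call (ECALL/OCALL) overhead against swap cost when choosing group boundaries, which is the kind of trade-off the paper's local scheduler and global balancer navigate.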