{"title":"Why Deep Learning Makes it Difficult to Keep Secrets in FPGAs","authors":"Yang Yu, M. Moraitis, E. Dubrova","doi":"10.1145/3477997.3478001","DOIUrl":null,"url":null,"abstract":"With the growth of popularity of Field-Programmable Gate Arrays (FPGAs) in cloud environments, new paradigms such as FPGA-as-a-Service (FaaS) emerge. This challenges the conventional FPGA security models which assume trust between the user and the hardware owner. In an FaaS scenario, the user may want to keep data or FPGA configuration bitstream confidential in order to protect privacy or intellectual property. However, securing FaaS use cases is hard due to the difficulty of protecting encryption keys and other secrets from the hardware owner. In this paper we demonstrate that even advanced key provisioning and remote attestation methods based on Physical Unclonable Functions (PUFs) can be broken by profiling side-channel attacks employing deep learning. Using power traces from two profiling FPGA boards implementing an arbiter PUF, we train a Convolutional Neural Network (CNN) model to learn features corresponding to “0” and “1” PUF’s responses. Then, we use the resulting model to classify responses of PUFs implemented in FPGA boards under attack (different from the profiling boards). We show that the presented attack can overcome countermeasures based on encrypting challenges and responses of a PUF.","PeriodicalId":130265,"journal":{"name":"Proceedings of the 2020 Workshop on DYnamic and Novel Advances in Machine Learning and Intelligent Cyber Security","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 Workshop on DYnamic and Novel Advances in Machine Learning and Intelligent Cyber Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3477997.3478001","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
With the growing popularity of Field-Programmable Gate Arrays (FPGAs) in cloud environments, new paradigms such as FPGA-as-a-Service (FaaS) are emerging. This challenges conventional FPGA security models, which assume trust between the user and the hardware owner. In a FaaS scenario, the user may want to keep data or the FPGA configuration bitstream confidential in order to protect privacy or intellectual property. However, securing FaaS use cases is hard because encryption keys and other secrets are difficult to protect from the hardware owner. In this paper we demonstrate that even advanced key-provisioning and remote-attestation methods based on Physical Unclonable Functions (PUFs) can be broken by profiling side-channel attacks that employ deep learning. Using power traces from two profiling FPGA boards implementing an arbiter PUF, we train a Convolutional Neural Network (CNN) model to learn features corresponding to the PUF’s “0” and “1” responses. We then use the resulting model to classify the responses of PUFs implemented on FPGA boards under attack (different from the profiling boards). We show that the presented attack can overcome countermeasures based on encrypting the challenges and responses of a PUF.
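The abstract does not give the network architecture or trace format, but the attack flow it describes (train a CNN on labelled power traces from profiling boards, then classify traces from a different victim board) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the 1-D CNN layout, the trace length `TRACE_LEN`, and the placeholder random data standing in for measured power traces are hypothetical and not taken from the paper.

```python
# Minimal sketch (not the authors' exact model): a 1-D CNN that classifies
# power traces by the PUF response bit they leak ("0" vs "1").
import numpy as np
import torch
import torch.nn as nn

TRACE_LEN = 1000  # assumed number of samples per power trace


class ResponseCNN(nn.Module):
    """Binary classifier: power trace -> predicted PUF response bit."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=11, padding=5), nn.ReLU(),
            nn.AvgPool1d(2),
            nn.Conv1d(8, 16, kernel_size=11, padding=5), nn.ReLU(),
            nn.AvgPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * (TRACE_LEN // 4), 64), nn.ReLU(),
            nn.Linear(64, 2),  # two classes: response "0" or "1"
        )

    def forward(self, x):
        return self.classifier(self.features(x))


def train(model, traces, labels, epochs=20):
    """Profiling phase: fit the CNN on labelled traces from the profiling boards."""
    x = torch.tensor(traces, dtype=torch.float32).unsqueeze(1)  # (N, 1, L)
    y = torch.tensor(labels, dtype=torch.long)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model


def predict_responses(model, traces):
    """Attack phase: classify traces captured from a different (victim) board."""
    x = torch.tensor(traces, dtype=torch.float32).unsqueeze(1)
    with torch.no_grad():
        return model(x).argmax(dim=1).numpy()  # predicted response bits


if __name__ == "__main__":
    # Placeholder random data standing in for measured power traces.
    rng = np.random.default_rng(0)
    profiling_traces = rng.normal(size=(256, TRACE_LEN))
    profiling_labels = rng.integers(0, 2, size=256)
    model = train(ResponseCNN(), profiling_traces, profiling_labels)
    victim_traces = rng.normal(size=(32, TRACE_LEN))
    print(predict_responses(model, victim_traces))
```

The key point of the profiling setting is that the model is trained on boards the attacker controls and only applied to the victim board, so no labelled data from the attacked device is needed; whether such a sketch transfers across boards in practice depends on measurement setup and preprocessing, which the paper addresses and this illustration does not.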