Why Deep Learning Makes it Difficult to Keep Secrets in FPGAs

Yang Yu, M. Moraitis, E. Dubrova
DOI: 10.1145/3477997.3478001
Published in: Proceedings of the 2020 Workshop on DYnamic and Novel Advances in Machine Learning and Intelligent Cyber Security
Publication date: 2020-12-07
Citations: 4

Abstract

With the growing popularity of Field-Programmable Gate Arrays (FPGAs) in cloud environments, new paradigms such as FPGA-as-a-Service (FaaS) emerge. This challenges conventional FPGA security models, which assume trust between the user and the hardware owner. In an FaaS scenario, the user may want to keep data or the FPGA configuration bitstream confidential in order to protect privacy or intellectual property. However, securing FaaS use cases is hard due to the difficulty of protecting encryption keys and other secrets from the hardware owner. In this paper we demonstrate that even advanced key provisioning and remote attestation methods based on Physical Unclonable Functions (PUFs) can be broken by profiling side-channel attacks employing deep learning. Using power traces from two profiling FPGA boards implementing an arbiter PUF, we train a Convolutional Neural Network (CNN) model to learn features corresponding to "0" and "1" PUF responses. Then, we use the resulting model to classify responses of PUFs implemented in FPGA boards under attack (different from the profiling boards). We show that the presented attack can overcome countermeasures based on encrypting the challenges and responses of a PUF.
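The abstract does not include the authors' implementation, but the arbiter PUF it attacks is commonly described by the standard additive delay model: the response is the sign of a weighted sum over a parity transform of the challenge bits. The sketch below is an illustrative simulation of that textbook model, not the paper's FPGA design; the function names and the Gaussian choice of delay weights are assumptions for demonstration.

```python
import numpy as np

def feature_vector(challenge):
    # Parity transform Phi of the additive delay model:
    # phi_i = prod_{j >= i} (1 - 2*c_j), plus a constant bias term.
    c = 1 - 2 * np.asarray(challenge)        # map bits {0,1} -> {+1,-1}
    phi = np.cumprod(c[::-1])[::-1]          # suffix products
    return np.append(phi, 1.0)               # bias for the arbiter offset

def arbiter_puf_response(weights, challenge):
    # The arbiter outputs "1" iff the accumulated delay difference
    # (a linear function of the feature vector) is positive.
    return int(np.dot(weights, feature_vector(challenge)) > 0)

# Hypothetical usage: an n-bit PUF instance with random stage delays.
rng = np.random.default_rng(seed=42)
n = 64
weights = rng.normal(size=n + 1)             # one weight per stage + bias
challenge = rng.integers(0, 2, size=n)
print(arbiter_puf_response(weights, challenge))
```

Because the response is a linear threshold function of the parity features, an arbiter PUF is learnable from challenge-response pairs; the attack in the paper goes further by classifying responses directly from power traces, which is why encrypting the challenge/response channel does not help.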