Deep Learning Side-Channel Collision Attack

M. Staib, A. Moradi
{"title":"深度学习侧信道碰撞攻击","authors":"M. Staib, A. Moradi","doi":"10.46586/tches.v2023.i3.422-444","DOIUrl":null,"url":null,"abstract":"With the breakthrough of Deep Neural Networks, many fields benefited from its enormously increasing performance. Although there is an increasing trend to utilize Deep Learning (DL) for Side-Channel Analysis (SCA) attacks, previous works made specific assumptions for the attack to work. Especially the concept of template attacks is widely adapted while not much attention was paid to other attack strategies. In this work, we present a new methodology, that is able to exploit side-channel collisions in a black-box setting. In particular, our attack is performed in a non-profiled setting and requires neither a hypothetical power model (or let’s say a many-to-one function) nor details about the underlying implementation. While the existing non-profiled DL attacks utilize training metrics to distinguish the correct key, our attack is more efficient by training a model that can be applied to recover multiple key portions, e.g., bytes. In order to perform our attack on raw traces instead of pre-selected samples, we further introduce a DL-based technique that can localize input-dependent leakages in masked implementations, e.g., the leakages associated to one byte of the cipher state in case of AES. We validated our approach by targeting several publicly available power consumption datasets measured from implementations protected by different masking schemes. As a concrete example, we demonstrate how to successfully recover the key bytes of the ASCAD dataset with only a single trained model in a non-profiled setting.","PeriodicalId":13186,"journal":{"name":"IACR Trans. Cryptogr. Hardw. Embed. Syst.","volume":"20 1","pages":"422-444"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Deep Learning Side-Channel Collision Attack\",\"authors\":\"M. Staib, A. Moradi\",\"doi\":\"10.46586/tches.v2023.i3.422-444\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the breakthrough of Deep Neural Networks, many fields benefited from its enormously increasing performance. Although there is an increasing trend to utilize Deep Learning (DL) for Side-Channel Analysis (SCA) attacks, previous works made specific assumptions for the attack to work. Especially the concept of template attacks is widely adapted while not much attention was paid to other attack strategies. In this work, we present a new methodology, that is able to exploit side-channel collisions in a black-box setting. In particular, our attack is performed in a non-profiled setting and requires neither a hypothetical power model (or let’s say a many-to-one function) nor details about the underlying implementation. While the existing non-profiled DL attacks utilize training metrics to distinguish the correct key, our attack is more efficient by training a model that can be applied to recover multiple key portions, e.g., bytes. In order to perform our attack on raw traces instead of pre-selected samples, we further introduce a DL-based technique that can localize input-dependent leakages in masked implementations, e.g., the leakages associated to one byte of the cipher state in case of AES. We validated our approach by targeting several publicly available power consumption datasets measured from implementations protected by different masking schemes. 
As a concrete example, we demonstrate how to successfully recover the key bytes of the ASCAD dataset with only a single trained model in a non-profiled setting.\",\"PeriodicalId\":13186,\"journal\":{\"name\":\"IACR Trans. Cryptogr. Hardw. Embed. Syst.\",\"volume\":\"20 1\",\"pages\":\"422-444\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IACR Trans. Cryptogr. Hardw. Embed. Syst.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.46586/tches.v2023.i3.422-444\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IACR Trans. Cryptogr. Hardw. Embed. Syst.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.46586/tches.v2023.i3.422-444","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

With the breakthrough of Deep Neural Networks, many fields have benefited from their enormously increased performance. Although there is an increasing trend to utilize Deep Learning (DL) for Side-Channel Analysis (SCA) attacks, previous works made specific assumptions for the attack to work. Above all, the concept of template attacks has been widely adopted, while other attack strategies have received little attention. In this work, we present a new methodology that is able to exploit side-channel collisions in a black-box setting. In particular, our attack is performed in a non-profiled setting and requires neither a hypothetical power model (or, say, a many-to-one function) nor details about the underlying implementation. While existing non-profiled DL attacks utilize training metrics to distinguish the correct key, our attack is more efficient: we train a single model that can be applied to recover multiple key portions, e.g., bytes. In order to perform our attack on raw traces instead of pre-selected samples, we further introduce a DL-based technique that can localize input-dependent leakages in masked implementations, e.g., the leakages associated with one byte of the cipher state in the case of AES. We validated our approach by targeting several publicly available power-consumption datasets measured from implementations protected by different masking schemes. As a concrete example, we demonstrate how to successfully recover the key bytes of the ASCAD dataset with only a single trained model in a non-profiled setting.
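For readers unfamiliar with collision attacks, the core relation the abstract builds on is as follows: in AES, each key byte k_i is XORed with a plaintext byte p_i before the S-box, so if the S-box computations of bytes i and j process the same value (a collision), then k_i ⊕ k_j = p_i ⊕ p_j. The snippet below is a minimal sketch of this bookkeeping step only; it assumes a hypothetical collision detector detect_collision(trace, i, j) (playing the role of the paper's trained model, whose actual interface is not described in the abstract) and is not the authors' implementation.

```python
# Minimal sketch: turning detected S-box collisions into key-byte relations.
# Assumes a hypothetical detector detect_collision(trace, i, j) that returns
# True when the S-box computations of state bytes i and j process the same
# input value (e.g., the decision of a trained DL model).

def key_deltas_from_collisions(traces, plaintexts, detect_collision, n_bytes=16):
    """Collect relations k_0 ^ k_j = p_0 ^ p_j from detected collisions."""
    deltas = {}  # byte index j -> k_0 ^ k_j
    for trace, pt in zip(traces, plaintexts):
        for j in range(1, n_bytes):
            if j not in deltas and detect_collision(trace, 0, j):
                # A collision between bytes 0 and j implies k_0 ^ k_j = p_0 ^ p_j.
                deltas[j] = pt[0] ^ pt[j]
    return deltas

def enumerate_keys(deltas, n_bytes=16):
    """Once deltas for all bytes are known, only k_0 remains unknown:
    brute-forcing its 256 values yields 256 full-key candidates."""
    for k0 in range(256):
        yield bytes([k0] + [k0 ^ deltas[j] for j in range(1, n_bytes)])
```

With such relations, the full 16-byte key collapses to a single unknown byte that can be brute-forced, which illustrates why a collision-based attack needs neither a power model nor profiling: the only task left to the model is deciding whether two leakage segments stem from the same intermediate value.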