{"title":"Robust profiled attacks: should the adversary trust the dataset?","authors":"Liran Lerman, Zdenek Martinasek, O. Markowitch","doi":"10.1049/iet-ifs.2015.0574","DOIUrl":null,"url":null,"abstract":"Side-channel attacks provide tools to analyse the degree of resilience of a cryptographic device against adversaries measuring leakages (e.g. power traces) on the target device executing cryptographic algorithms. In 2002, Chari et al. introduced template attacks (TA) as the strongest parametric profiled attacks in an information theoretic sense. Few years later, Schindler et al. proposed stochastic attacks (representing other parametric profiled attacks) as improved attacks (with respect to TA) when the adversary has information on the data-dependent part of the leakage. Less than ten years later, the machine learning field provided non-parametric profiled attacks especially useful in high dimensionality contexts. In this study, the authors provide new contexts in which profiled attacks based on machine learning outperform conventional parametric profiled attacks: when the set of leakages contains errors or distortions. More precisely, the authors found that (i) profiled attacks based on machine learning remain effective in a wide range of scenarios, and (ii) TA are more sensitive to distortions and errors in the profiling and attacking sets.","PeriodicalId":13305,"journal":{"name":"IET Inf. Secur.","volume":"37 1","pages":"188-194"},"PeriodicalIF":0.0000,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Inf. Secur.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1049/iet-ifs.2015.0574","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10
Abstract
Side-channel attacks provide tools to analyse the degree of resilience of a cryptographic device against adversaries measuring leakages (e.g. power traces) on the target device while it executes cryptographic algorithms. In 2002, Chari et al. introduced template attacks (TA) as the strongest parametric profiled attacks in an information-theoretic sense. A few years later, Schindler et al. proposed stochastic attacks (representing other parametric profiled attacks) as improved attacks (with respect to TA) when the adversary has information on the data-dependent part of the leakage. Less than ten years later, the machine learning field provided non-parametric profiled attacks that are especially useful in high-dimensionality contexts. In this study, the authors provide new contexts in which profiled attacks based on machine learning outperform conventional parametric profiled attacks: when the set of leakages contains errors or distortions. More precisely, the authors found that (i) profiled attacks based on machine learning remain effective in a wide range of scenarios, and (ii) TA are more sensitive to distortions and errors in the profiling and attacking sets.
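To make the two families of profiled attacks compared in the abstract concrete, the sketch below contrasts a Gaussian template attack with a machine-learning classifier on synthetic leakages. It is a minimal illustration, not the authors' code: the Hamming-weight leakage model, the noise level, and the choice of a random forest as the machine-learning attack are all assumptions introduced here for demonstration.

```python
# Minimal sketch (not the paper's implementation): Gaussian template attack vs.
# a machine-learning profiled attack on synthetic, Hamming-weight-style leakages.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
N_CLASSES, N_PROFILE, N_ATTACK, N_SAMPLES, NOISE = 16, 200, 50, 5, 2.0  # illustrative values

def simulate(label, n):
    """Leak the Hamming weight of the class label in every sample, plus Gaussian noise."""
    signal = bin(label).count("1") * np.ones((n, N_SAMPLES))
    return signal + rng.normal(0.0, NOISE, size=(n, N_SAMPLES))

# Profiling phase: collect labelled traces for every candidate intermediate value.
X_prof = np.vstack([simulate(k, N_PROFILE) for k in range(N_CLASSES)])
y_prof = np.repeat(np.arange(N_CLASSES), N_PROFILE)

# Template attack: one multivariate Gaussian (mean, covariance) per class.
means = np.array([X_prof[y_prof == k].mean(axis=0) for k in range(N_CLASSES)])
covs = np.array([np.cov(X_prof[y_prof == k], rowvar=False) for k in range(N_CLASSES)])

def template_log_likelihood(trace):
    """Log-likelihood of a single attack trace under each class template."""
    scores = np.empty(N_CLASSES)
    for k in range(N_CLASSES):
        diff = trace - means[k]
        _, logdet = np.linalg.slogdet(covs[k])
        scores[k] = -0.5 * (diff @ np.linalg.inv(covs[k]) @ diff + logdet)
    return scores

# Machine-learning profiled attack: a non-parametric classifier trained on the same profiling set.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_prof, y_prof)

# Attack phase: accumulate per-trace scores for traces of one (unknown) secret class.
secret = 11
X_att = simulate(secret, N_ATTACK)
ta_scores = sum(template_log_likelihood(t) for t in X_att)
ml_scores = np.log(clf.predict_proba(X_att) + 1e-12).sum(axis=0)
print("TA guess:", ta_scores.argmax(), "ML guess:", ml_scores.argmax(), "secret:", secret)
```

On clean, well-modelled leakages such as these, both attacks typically recover the secret class; the paper's point is about what happens when the profiling or attacking traces contain errors or distortions, a setting this toy example does not reproduce.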