Title: Exploration into the Explainability of Neural Network Models for Power Side-Channel Analysis
Authors: Anupam Golder, Ashwin Bhat, A. Raychowdhury
DOI: 10.1145/3526241.3530346
Published in: Proceedings of the Great Lakes Symposium on VLSI 2022 (2022-06-06)
Citations: 4
Abstract
In this work, we present a comprehensive analysis of the explainability of Neural Network (NN) models in the context of power Side-Channel Analysis (SCA), to gain insight into which features, or Points of Interest (PoI), contribute the most to the classification decision. Although many existing works claim state-of-the-art accuracy in recovering secret keys from cryptographic implementations, it remains to be seen whether the models actually learn representations from the leakage points. In this work, we evaluate the reasoning behind the success of an NN model by comparing the relevance scores of features derived from the network with the PoI identified by traditional statistical selection methods. This justifies using explainability techniques as a standard validation step for NN models.
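The abstract's core idea, checking whether a model's feature-relevance scores agree with statistically selected PoI, can be sketched minimally. The sketch below is an illustration under stated assumptions, not the paper's actual pipeline: it uses synthetic traces with one injected leakage point, a per-feature signal-to-noise ratio (SNR) as the statistical PoI criterion, and the absolute weights of a hand-rolled logistic classifier as a stand-in for NN relevance scores. All names and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic side-channel traces: 200 time samples each; only sample index
# `leak` carries the binary label (an assumed, simplified leakage model).
n, d, leak = 600, 200, 50
labels = rng.integers(0, 2, n)
traces = rng.normal(0.0, 1.0, (n, d))
traces[:, leak] += 2.0 * labels  # injected leakage point

# (1) Traditional statistical PoI selection: per-feature SNR between
#     the two label classes (squared mean difference over pooled variance).
m0, m1 = traces[labels == 0].mean(0), traces[labels == 1].mean(0)
noise = 0.5 * (traces[labels == 0].var(0) + traces[labels == 1].var(0))
snr = (m0 - m1) ** 2 / noise

# (2) Model-derived relevance: train a logistic classifier by plain
#     gradient descent; |weights| serve as a simple relevance score.
w, b = np.zeros(d), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(traces @ w + b)))   # predicted P(label=1)
    w -= 0.5 * (traces.T @ (p - labels) / n)       # gradient step on weights
    b -= 0.5 * (p - labels).mean()                 # gradient step on bias
relevance = np.abs(w)

# Agreement check: both rankings should peak at the injected leakage point.
print(int(np.argmax(snr)), int(np.argmax(relevance)))
```

On this toy data both criteria single out the same sample, which is the kind of cross-validation between statistical PoI selection and model-derived relevance the abstract argues for; for a real NN one would substitute a proper attribution method (e.g. gradient-based saliency) for the logistic weights.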