Explanations of Deep Networks on EEG Data via Interpretable Approaches
Chen Cui, Y. Zhang, Shenghua Zhong
2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS), July 2022
DOI: 10.1109/CBMS55023.2022.00037
Citations: 0
Abstract
Despite achieving success in many domains, deep learning models remain mostly black boxes. However, understanding the reasons behind predictions is crucial for assessing trust, which is fundamental in EEG analysis tasks. In this work, we propose to use two representative explanation approaches, LIME and Grad-CAM, to explain the predictions of a simple convolutional neural network on an EEG-based emotional brain-computer interface. Our results demonstrate that these interpretability approaches identify which features best discriminate the target emotions and provide insights into the neural processes underlying the model's learned behavior.
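The paper's LIME component follows the standard recipe: perturb the input, query the black-box model on the perturbations, weight each perturbed sample by its proximity to the original input, and fit a weighted linear surrogate whose coefficients serve as feature importances. A minimal sketch of that procedure is below; the `black_box` scoring function and the 4-feature input are hypothetical stand-ins (the paper's actual model is a CNN over EEG signals), and `lime_explain` is an illustrative name, not the paper's code.

```python
import numpy as np

# Hypothetical black box standing in for the paper's CNN: it scores a
# 4-dimensional feature vector. Feature 0 dominates by construction.
def black_box(x):
    w = np.array([3.0, 0.1, -0.5, 0.05])
    return x @ w

def lime_explain(model, x, n_samples=500, kernel_width=0.75, seed=0):
    """LIME-style local explanation: mask features at random, weight
    samples by proximity to the original input, and fit a weighted
    linear surrogate. Returns one importance score per feature."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Binary on/off masks; a 0 entry zeroes out ("removes") that feature.
    masks = rng.integers(0, 2, size=(n_samples, d))
    perturbed = masks * x
    preds = np.array([model(p) for p in perturbed])
    # Proximity kernel: samples with fewer features removed get more weight.
    dist = 1.0 - masks.mean(axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Weighted least squares via rescaling: solve (sqrt(W) A) beta = sqrt(W) y,
    # with an intercept column appended to the mask matrix.
    sw = np.sqrt(weights)[:, None]
    A = np.hstack([masks.astype(float), np.ones((n_samples, 1))]) * sw
    b = preds[:, None] * sw
    beta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return beta[:d, 0]  # surrogate coefficients = per-feature importances

x = np.array([1.0, 1.0, 1.0, 1.0])
importances = lime_explain(black_box, x)
```

Because the toy model is linear, the surrogate recovers its weights almost exactly, so feature 0 emerges as the most important; with a CNN the surrogate is only a local approximation around `x`, which is precisely the trade-off LIME makes.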