{"title":"评估人工神经网络的分层相关传播可解释性图","authors":"E. Ranguelova, E. Pauwels, J. Berkhout","doi":"10.1109/eScience.2018.00107","DOIUrl":null,"url":null,"abstract":"Layer-wise relevance propagation (LRP) heatmaps aim to provide graphical explanation for decisions of a classifier. This could be of great benefit to scientists for trusting complex black-box models and getting insights from their data. The LRP heatmaps tested on benchmark datasets are reported to correlate significantly with interpretable image features. In this work, we investigate these claims and propose to refine them.","PeriodicalId":6476,"journal":{"name":"2018 IEEE 14th International Conference on e-Science (e-Science)","volume":"26 1","pages":"377-378"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Evaluating Layer-Wise Relevance Propagation Explainability Maps for Artificial Neural Networks\",\"authors\":\"E. Ranguelova, E. Pauwels, J. Berkhout\",\"doi\":\"10.1109/eScience.2018.00107\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Layer-wise relevance propagation (LRP) heatmaps aim to provide graphical explanation for decisions of a classifier. This could be of great benefit to scientists for trusting complex black-box models and getting insights from their data. The LRP heatmaps tested on benchmark datasets are reported to correlate significantly with interpretable image features. In this work, we investigate these claims and propose to refine them.\",\"PeriodicalId\":6476,\"journal\":{\"name\":\"2018 IEEE 14th International Conference on e-Science (e-Science)\",\"volume\":\"26 1\",\"pages\":\"377-378\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE 14th International Conference on e-Science (e-Science)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/eScience.2018.00107\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE 14th International Conference on e-Science (e-Science)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/eScience.2018.00107","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Evaluating Layer-Wise Relevance Propagation Explainability Maps for Artificial Neural Networks
Layer-wise relevance propagation (LRP) heatmaps aim to provide a graphical explanation for a classifier's decisions. This could greatly benefit scientists in trusting complex black-box models and in gaining insight from their data. LRP heatmaps computed on benchmark datasets have been reported to correlate significantly with interpretable image features. In this work, we investigate these claims and propose refinements to them.
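
For readers unfamiliar with the technique, below is a minimal sketch of how relevance could be redistributed through a single dense layer using the LRP-epsilon rule, one common formulation in the LRP literature. The function name, layer shapes, and toy data are illustrative assumptions only and do not reflect the authors' implementation or experiments.

import numpy as np

def lrp_epsilon_dense(a, w, b, relevance_out, eps=1e-6):
    # a:  activations entering the layer, shape (n_in,)
    # w:  weight matrix, shape (n_in, n_out)
    # b:  bias vector, shape (n_out,)
    # relevance_out: relevance of the layer's outputs, shape (n_out,)
    z = a @ w + b                               # pre-activations of the upper layer
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # epsilon stabiliser avoids division by ~0
    s = relevance_out / z                       # relevance per unit of pre-activation
    c = w @ s                                   # redistribute back along the weights
    return a * c                                # relevance assigned to the layer's inputs

# Toy usage: propagate relevance from a 3-unit output back to a 4-unit input.
rng = np.random.default_rng(0)
a = rng.random(4)
w = rng.normal(size=(4, 3))
b = np.zeros(3)
r_out = np.array([0.0, 1.0, 0.0])   # all relevance placed on the predicted class
r_in = lrp_epsilon_dense(a, w, b, r_out)
print(r_in)

Applying such a rule layer by layer, from the output back to the input, yields a pixel-level heatmap of the kind the abstract refers to; the quality of such heatmaps is precisely what the paper sets out to evaluate.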