An AI Framework for Modelling and Evaluating Attribution Methods in Enhanced Machine Learning Interpretability
A. Cuzzocrea, Q. E. A. Ratul, Islam Belmerabet, Edoardo Serra
2023 IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC), June 2023
DOI: 10.1109/COMPSAC57700.2023.00158
Abstract
In this paper, we propose a general methodology for estimating the degree of precision and generality of attribution methods in machine learning interpretability. Additionally, we propose a technique to measure the attribution consistency between two attribution methods. In our experiments, we focus on two well-known model-agnostic attribution methods, SHAP and LIME, and evaluate them on two real applications in the attack detection field. Our proposed methodology highlights the fact that both LIME and SHAP lack precision, generality, and consistency. Therefore, further investigation is needed in the attribution research field.
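The paper's precision, generality, and consistency measures are not detailed in the abstract, so the sketch below is only an illustration of the kind of comparison involved: it computes SHAP and LIME attributions for the same instance of an assumed scikit-learn classifier on a stock dataset and uses Spearman rank correlation as a stand-in consistency score. The dataset, model, and correlation-based metric are assumptions for illustration, not the authors' methodology.

```python
# Illustrative sketch only: compares SHAP and LIME attributions for one
# instance and reports their rank correlation as a rough consistency proxy.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)


def predict_pos(data):
    """Probability of the positive class, so both explainers attribute the same output."""
    return model.predict_proba(data)[:, 1]


# Model-agnostic SHAP attributions via KernelExplainer on a background sample.
background = shap.sample(X, 50, random_state=0)
shap_explainer = shap.KernelExplainer(predict_pos, background)
shap_vals = shap_explainer.shap_values(X[0])  # shape: (n_features,)

# LIME attributions for the same instance, over all features.
lime_explainer = LimeTabularExplainer(X, mode="classification")
lime_exp = lime_explainer.explain_instance(
    X[0], model.predict_proba, num_features=X.shape[1])
lime_vals = np.zeros(X.shape[1])
for feat_idx, weight in lime_exp.as_map()[1]:  # label 1 = positive class
    lime_vals[feat_idx] = weight

# Higher rank correlation -> the two methods rank features more similarly.
rho, _ = spearmanr(shap_vals, lime_vals)
print(f"SHAP vs. LIME attribution consistency (Spearman rho): {rho:.3f}")
```

A dataset-level score could average such per-instance correlations, but the paper itself should be consulted for the precise definitions of precision, generality, and consistency used in the experiments.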