An AI Framework for Modelling and Evaluating Attribution Methods in Enhanced Machine Learning Interpretability

A. Cuzzocrea, Q. E. A. Ratul, Islam Belmerabet, Edoardo Serra
DOI: 10.1109/COMPSAC57700.2023.00158
Published in: 2023 IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC), June 2023
Citations: 0

Abstract

In this paper, we propose a general methodology for estimating the degree of precision and generality of attribution methods in machine learning interpretability. Additionally, we propose a technique to measure the consistency between two attribution methods. In our experiments, we focus on two well-known model-agnostic attribution methods, SHAP and LIME, and evaluate them on two real applications in the attack-detection field. Our proposed methodology highlights that both LIME and SHAP lack precision, generality, and consistency; therefore, further investigation is needed in the attribution research field.
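The abstract does not specify how consistency between two attribution methods is computed. As a minimal illustrative sketch (not the paper's actual metric), one common choice is Spearman rank correlation over feature-importance ranks: each method produces an attribution vector for the same instance, features are ranked by absolute attribution magnitude, and the correlation of the two rankings measures agreement. The attribution values below are hypothetical placeholders standing in for SHAP values and LIME weights.

```python
def importance_ranks(values):
    """Rank features by absolute attribution magnitude (1 = most important),
    averaging ranks for ties."""
    order = sorted(range(len(values)), key=lambda i: -abs(values[i]))
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied magnitudes.
        while j + 1 < len(order) and abs(values[order[j + 1]]) == abs(values[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_consistency(attr_a, attr_b):
    """Spearman correlation of the two methods' feature rankings
    (Pearson correlation computed on the rank vectors)."""
    ra, rb = importance_ranks(attr_a), importance_ranks(attr_b)
    n = len(ra)
    mean = (n + 1) / 2
    num = sum((x - mean) * (y - mean) for x, y in zip(ra, rb))
    da = sum((x - mean) ** 2 for x in ra) ** 0.5
    db = sum((y - mean) ** 2 for y in rb) ** 0.5
    return num / (da * db) if da and db else 0.0

# Hypothetical attribution vectors for one instance with four features.
shap_attr = [0.42, -0.10, 0.05, 0.30]  # stand-in for SHAP values
lime_attr = [0.38, -0.02, 0.12, 0.25]  # stand-in for LIME weights
print(round(spearman_consistency(shap_attr, lime_attr), 3))  # → 0.8
```

A value near 1 would indicate the two methods rank features similarly; values near 0 or below indicate the kind of disagreement the paper reports between LIME and SHAP.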