Explaining Machine Learning Predictions: A Case Study

Prarthana Dutta, Naresh Babu Muppalaneni
{"title":"解释机器学习预测:一个案例研究","authors":"Prarthana Dutta, Naresh Babu Muppalaneni","doi":"10.1109/TEECCON54414.2022.9854821","DOIUrl":null,"url":null,"abstract":"The growing trends and demands for Artificial Intelligence in various domains due to their excellent performance and generalization ability are known to all. These decisions affect the population in general as they usually deal with sensitive tasks in various fields such as healthcare, education, transportation, etc. Hence, understanding these learned representations would add more descriptive knowledge to better interpret the decisions with the ground truth. The European General Data Protection Regulation reserves the right to receive an explanation against a model producing an automated decision. Understanding the decisions would validate the model behavior, ensure trust, and deal with the risk associated with the model. Upon analyzing the relevant features, we can decide whether the model predictions could be trusted or not in the future. We can further try to reduce the misclassification rate by rectifying the features (of the misclassified instances) if needed. In this way, we can peek into the black-box and gain insight into a model’s prediction, thus understanding the learned representations. In pursuit of this objective, a common approach would be to devise an explanatory model that would explain the predictions made by a model and further analyze those predictions with the ground truth information. We initiated a case study on a diabetes risk prediction dataset by understanding local predictions made by five different Machine Learning models and trying to provide explanations for the misclassified instances.","PeriodicalId":251455,"journal":{"name":"2022 Trends in Electrical, Electronics, Computer Engineering Conference (TEECCON)","volume":"11 2","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Explaining Machine Learning Predictions: A Case Study\",\"authors\":\"Prarthana Dutta, Naresh Babu Muppalaneni\",\"doi\":\"10.1109/TEECCON54414.2022.9854821\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The growing trends and demands for Artificial Intelligence in various domains due to their excellent performance and generalization ability are known to all. These decisions affect the population in general as they usually deal with sensitive tasks in various fields such as healthcare, education, transportation, etc. Hence, understanding these learned representations would add more descriptive knowledge to better interpret the decisions with the ground truth. The European General Data Protection Regulation reserves the right to receive an explanation against a model producing an automated decision. Understanding the decisions would validate the model behavior, ensure trust, and deal with the risk associated with the model. Upon analyzing the relevant features, we can decide whether the model predictions could be trusted or not in the future. We can further try to reduce the misclassification rate by rectifying the features (of the misclassified instances) if needed. In this way, we can peek into the black-box and gain insight into a model’s prediction, thus understanding the learned representations. 
In pursuit of this objective, a common approach would be to devise an explanatory model that would explain the predictions made by a model and further analyze those predictions with the ground truth information. We initiated a case study on a diabetes risk prediction dataset by understanding local predictions made by five different Machine Learning models and trying to provide explanations for the misclassified instances.\",\"PeriodicalId\":251455,\"journal\":{\"name\":\"2022 Trends in Electrical, Electronics, Computer Engineering Conference (TEECCON)\",\"volume\":\"11 2\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 Trends in Electrical, Electronics, Computer Engineering Conference (TEECCON)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/TEECCON54414.2022.9854821\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Trends in Electrical, Electronics, Computer Engineering Conference (TEECCON)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TEECCON54414.2022.9854821","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

The growing trend toward, and demand for, Artificial Intelligence across domains, driven by its strong performance and generalization ability, is well known. The resulting decisions affect the population at large, since they often concern sensitive tasks in fields such as healthcare, education, and transportation. Understanding the learned representations therefore adds descriptive knowledge that helps interpret a model's decisions against the ground truth. The European General Data Protection Regulation provides the right to receive an explanation for a decision produced by an automated model. Understanding the decisions helps validate model behavior, establish trust, and manage the risk associated with the model. By analyzing the relevant features, we can judge whether the model's predictions can be trusted in the future, and, if needed, further reduce the misclassification rate by rectifying the features of the misclassified instances. In this way we can peek into the black box and gain insight into a model's predictions, thereby understanding the learned representations. A common approach to this objective is to devise an explanatory model that explains the predictions made by a model and then analyzes those predictions against the ground truth. We conducted a case study on a diabetes risk prediction dataset, examining the local predictions made by five different Machine Learning models and attempting to explain the misclassified instances.
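The abstract does not name a specific explanation technique, so the sketch below is only one plausible illustration of the workflow it describes: fit several black-box classifiers, find misclassified test instances, and produce a local, per-instance feature attribution. It uses a LIME-style weighted linear surrogate written with scikit-learn; the synthetic data, the feature names, and the two models shown are placeholders, not the paper's actual diabetes risk dataset or its five models.

```python
# LIME-style local surrogate explanation for a misclassified instance.
# NOTE: illustrative sketch only -- dataset, feature names, models, and the
# kernel width are assumptions, not the authors' exact setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for a diabetes risk dataset: 8 numeric risk-factor features.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]  # hypothetical names
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Two example classifiers; the paper's five models are not reproduced here.
models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

def local_explanation(model, x0, n_samples=2000, kernel_width=0.75):
    """Fit a proximity-weighted linear surrogate around x0 and rank features."""
    # Perturb the instance with Gaussian noise scaled to the training spread.
    scale = X_tr.std(axis=0)
    Z = x0 + rng.normal(scale=scale, size=(n_samples, x0.size))
    # Proximity weights: closer perturbations influence the surrogate more.
    dist = np.linalg.norm((Z - x0) / scale, axis=1)
    w = np.exp(-(dist ** 2) / (kernel_width ** 2 * x0.size))
    # Query the black-box model and fit an interpretable linear model.
    p = model.predict_proba(Z)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return sorted(zip(feature_names, surrogate.coef_),
                  key=lambda t: abs(t[1]), reverse=True)

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    wrong = np.flatnonzero(pred != y_te)  # misclassified test instances
    print(f"{name}: accuracy={model.score(X_te, y_te):.3f}, "
          f"misclassified={len(wrong)}")
    if len(wrong):
        i = wrong[0]
        print(f"  instance {i}: true={y_te[i]}, predicted={pred[i]}")
        for feat, weight in local_explanation(model, X_te[i])[:3]:
            print(f"    {feat}: weight {weight:+.3f}")
```

The proximity-weighted Ridge fit plays the role of the interpretable surrogate; substituting the real diabetes features and the five trained models would follow the same pattern, with the ranked feature weights then compared against the ground truth for each misclassified instance.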