A meta-heuristic approach to estimate and explain classifier uncertainty

IF 3.4 | CAS Tier 2 (Computer Science) | JCR Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Applied Intelligence | Pub Date: 2025-01-14 | DOI: 10.1007/s10489-024-06127-0
Andrew Houston, Georgina Cosma
{"title":"一种估计和解释分类器不确定性的元启发式方法","authors":"Andrew Houston,&nbsp;Georgina Cosma","doi":"10.1007/s10489-024-06127-0","DOIUrl":null,"url":null,"abstract":"<div><p>Trust is a crucial factor affecting the adoption of machine learning (ML) models. Qualitative studies have revealed that end-users, particularly in the medical domain, need models that can express their uncertainty in decision-making allowing users to know when to ignore the model’s recommendations. However, existing approaches for quantifying decision-making uncertainty are not model-agnostic, or they rely on complex mathematical derivations that are not easily understood by laypersons or end-users, making them less useful for explaining the model’s decision-making process. This work proposes a set of class-independent meta-heuristics that can characterise the complexity of an instance in terms of factors that are mutually relevant to both human and ML decision-making. The measures are integrated into a meta-learning framework that estimates the risk of misclassification. The proposed framework outperformed predicted probabilities and entropy-based methods of identifying instances at risk of being misclassified. Furthermore, the proposed approach resulted in uncertainty estimates that proves more independent of model accuracy and calibration than existing approaches. The proposed measures and framework demonstrate promise for improving model development for more complex instances and provides a new means of model abstention and explanation.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 5","pages":""},"PeriodicalIF":3.4000,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10489-024-06127-0.pdf","citationCount":"0","resultStr":"{\"title\":\"A meta-heuristic approach to estimate and explain classifier uncertainty\",\"authors\":\"Andrew Houston,&nbsp;Georgina Cosma\",\"doi\":\"10.1007/s10489-024-06127-0\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Trust is a crucial factor affecting the adoption of machine learning (ML) models. Qualitative studies have revealed that end-users, particularly in the medical domain, need models that can express their uncertainty in decision-making allowing users to know when to ignore the model’s recommendations. However, existing approaches for quantifying decision-making uncertainty are not model-agnostic, or they rely on complex mathematical derivations that are not easily understood by laypersons or end-users, making them less useful for explaining the model’s decision-making process. This work proposes a set of class-independent meta-heuristics that can characterise the complexity of an instance in terms of factors that are mutually relevant to both human and ML decision-making. The measures are integrated into a meta-learning framework that estimates the risk of misclassification. The proposed framework outperformed predicted probabilities and entropy-based methods of identifying instances at risk of being misclassified. Furthermore, the proposed approach resulted in uncertainty estimates that proves more independent of model accuracy and calibration than existing approaches. 
The proposed measures and framework demonstrate promise for improving model development for more complex instances and provides a new means of model abstention and explanation.</p></div>\",\"PeriodicalId\":8041,\"journal\":{\"name\":\"Applied Intelligence\",\"volume\":\"55 5\",\"pages\":\"\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2025-01-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s10489-024-06127-0.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10489-024-06127-0\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Intelligence","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10489-024-06127-0","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract


Trust is a crucial factor affecting the adoption of machine learning (ML) models. Qualitative studies have revealed that end-users, particularly in the medical domain, need models that can express their uncertainty in decision-making, allowing users to know when to ignore the model’s recommendations. However, existing approaches for quantifying decision-making uncertainty are not model-agnostic, or they rely on complex mathematical derivations that are not easily understood by laypersons or end-users, making them less useful for explaining the model’s decision-making process. This work proposes a set of class-independent meta-heuristics that can characterise the complexity of an instance in terms of factors that are mutually relevant to both human and ML decision-making. The measures are integrated into a meta-learning framework that estimates the risk of misclassification. The proposed framework outperformed predicted-probability and entropy-based methods of identifying instances at risk of being misclassified. Furthermore, the proposed approach resulted in uncertainty estimates that prove more independent of model accuracy and calibration than those of existing approaches. The proposed measures and framework demonstrate promise for improving model development for more complex instances and provide a new means of model abstention and explanation.
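To make the general shape of such a framework concrete, here is a minimal, hypothetical sketch: a base classifier is trained, two simple class-independent instance-complexity features (neighbourhood label impurity and mean neighbour distance) are computed, a meta-learner is fitted to predict the base model's errors from those features, and the resulting risk scores are compared with a predictive-entropy baseline at spotting misclassified instances. The feature definitions, the helper complexity_features, the scikit-learn models, and the synthetic data are all assumptions made for illustration; this is not the set of meta-heuristics or the framework proposed in the paper.

```python
# Hypothetical sketch (not the authors' method): train a meta-learner on simple
# instance-complexity features to predict when a base classifier errs, and
# compare it with a predictive-entropy baseline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

# Synthetic binary task with some label noise so the base model makes mistakes.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=10,
                           flip_y=0.05, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_meta, X_test, y_meta, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Base classifier whose misclassification risk we want to estimate.
base = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

def complexity_features(X_ref, y_ref, X_query, k=10):
    """Illustrative class-independent instance-hardness features:
    neighbourhood label impurity and mean distance to the k nearest neighbours."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_ref)
    dist, idx = nn.kneighbors(X_query)
    neigh_labels = y_ref[idx]                               # shape (n_query, k)
    majority = (neigh_labels.mean(axis=1) >= 0.5).astype(int)
    impurity = (neigh_labels != majority[:, None]).mean(axis=1)
    return np.column_stack([impurity, dist.mean(axis=1)])

# Meta-learner: predict whether the base model misclassifies an instance.
F_meta = complexity_features(X_train, y_train, X_meta)
err_meta = (base.predict(X_meta) != y_meta).astype(int)
meta = GradientBoostingClassifier(random_state=0).fit(F_meta, err_meta)

# Evaluate both uncertainty signals at flagging misclassified test instances.
F_test = complexity_features(X_train, y_train, X_test)
err_test = (base.predict(X_test) != y_test).astype(int)
risk_meta = meta.predict_proba(F_test)[:, 1]                # meta-learned risk
proba = base.predict_proba(X_test)
entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)    # entropy baseline

print("AUC for spotting errors, meta-learned risk:", roc_auc_score(err_test, risk_meta))
print("AUC for spotting errors, entropy baseline: ", roc_auc_score(err_test, entropy))
```

The evaluation mirrors the abstract's framing: an uncertainty signal is judged by how well it ranks instances at risk of being misclassified, summarised here with AUC.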

Source journal
Applied Intelligence
Category: Engineering & Technology, Computer Science: Artificial Intelligence
CiteScore: 6.60
Self-citation rate: 20.80%
Articles published: 1361
Review time: 5.9 months
Journal introduction: With a focus on research in artificial intelligence and neural networks, this journal addresses issues involving solutions of real-life manufacturing, defense, management, government and industrial problems which are too complex to be solved through conventional approaches and require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance. The journal presents new and original research and technological developments, addressing real and complex issues applicable to difficult problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.
Latest articles in this journal
Insulator defect detection from aerial images in adverse weather conditions
A review of the emotion recognition model of robots
Knowledge guided relation enhancement for human-object interaction detection
A modified dueling DQN algorithm for robot path planning incorporating priority experience replay and artificial potential fields
A non-parameter oversampling approach for imbalanced data classification based on hybrid natural neighbors