A QUEST for Model Assessment: Identifying Difficult Subgroups via Epistemic Uncertainty Quantification

AMIA ... Annual Symposium Proceedings. Pub Date: 2024-01-11; eCollection Date: 2023-01-01
Katherine E Brown, Steve Talbert, Douglas A Talbert
Volume 2023, pages 854-863. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785870/pdf/
Citations: 0

Abstract


Uncertainty quantification in machine learning can provide powerful insight into a model's capabilities and enhance human trust in opaque models. Well-calibrated uncertainty quantification reveals a connection between high uncertainty and an increased likelihood of an incorrect classification. We hypothesize that if we can explain the model's uncertainty by generating rules that define subgroups of data with high and low levels of classification uncertainty, then those same rules will identify subgroups of data on which the model performs well and subgroups on which it does not. If true, then the utility of uncertainty quantification is not limited to understanding the certainty of individual predictions; it can also provide a more global understanding of the model's grasp of patient subpopulations. We evaluate our proposed technique and hypotheses on deep neural networks and tree-based gradient boosting ensembles across benchmark and real-world medical datasets.
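The core idea can be illustrated with a minimal sketch (assumed details, not the authors' exact QUEST implementation): estimate epistemic uncertainty from disagreement across a bootstrap ensemble of gradient-boosting models, fit a shallow decision tree whose rules describe the high- versus low-uncertainty subgroups, then compare the model's accuracy on each subgroup.

```python
# Hedged sketch of the abstract's hypothesis; dataset, ensemble size,
# uncertainty proxy, and rule-learner depth are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Bootstrap ensemble: the variance of members' predicted probabilities
# serves as a simple proxy for epistemic uncertainty.
probs = []
for seed in range(10):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    m = GradientBoostingClassifier(n_estimators=50, random_state=seed)
    m.fit(X_tr[idx], y_tr[idx])
    probs.append(m.predict_proba(X_te)[:, 1])
probs = np.array(probs)
uncertainty = probs.std(axis=0)            # ensemble disagreement
pred = (probs.mean(axis=0) > 0.5).astype(int)

# Label the top-quartile-uncertainty points as the "hard" subgroup and
# explain that label with a shallow, human-readable tree over the inputs.
in_hard = uncertainty > np.quantile(uncertainty, 0.75)
explainer = DecisionTreeClassifier(max_depth=2, random_state=0)
explainer.fit(X_te, in_hard.astype(int))
print(export_text(explainer))              # the subgroup-defining rules

# Hypothesis check: accuracy on the high-uncertainty subgroup is expected
# (though not guaranteed on every dataset) to be lower.
acc_hard = (pred[in_hard] == y_te[in_hard]).mean()
acc_easy = (pred[~in_hard] == y_te[~in_hard]).mean()
print(f"accuracy hard={acc_hard:.3f} easy={acc_easy:.3f}")
```

The printed tree rules play the role the abstract assigns to subgroup-defining rules: they describe, in terms of the original features, which patients the model is uncertain about, and the two accuracies test whether those rules also separate well-served from poorly-served subpopulations.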
