A Unifying Framework for Probabilistic Validation Metrics

Journal of Verification, Validation and Uncertainty Quantification · IF 0.5 · Q4 (Engineering, Mechanical) · Pub Date: 2019-09-01 · DOI: 10.1115/1.4045296
P. Gardner, C. Lord, R. Barthorpe
Citations: 8

Abstract

Probabilistic modeling methods are increasingly being employed in engineering applications. These approaches make inferences about the distribution for output quantities of interest. A challenge in applying probabilistic computer models (simulators) is validating output distributions against samples from observational data. An ideal validation metric is one that intuitively provides information on key differences between the simulator output and observational distributions, such as statistical distances/divergences. Within the literature, only a small set of statistical distances/divergences have been utilized for this task; often selected based on user experience and without reference to the wider variety available. As a result, this paper offers a unifying framework of statistical distances/divergences, categorizing those implemented within the literature, providing a greater understanding of their benefits, and offering new potential measures as validation metrics. In this paper, two families of measures for quantifying differences between distributions, that encompass the existing statistical distances/divergences within the literature, are analyzed: f-divergence and integral probability metrics (IPMs). Specific measures from these families are highlighted, providing an assessment of current and new validation metrics, with a discussion of their merits in determining simulator adequacy, offering validation metrics with greater sensitivity in quantifying differences across the range of probability mass.
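The two families named in the abstract can be illustrated with simple empirical estimators. The sketch below (an illustrative example, not code from the paper; function names and the histogram/sorted-sample estimators are choices made here) computes one member of each family from samples: the Kullback-Leibler divergence, an f-divergence, via shared-bin histograms, and the 1-Wasserstein distance, an integral probability metric, via sorted order statistics for 1-D samples of equal size.

```python
import math
import random

def kl_divergence(p_samples, q_samples, bins=20):
    """Histogram estimate of KL(P || Q), an f-divergence.
    Shared bins; empty bins get a small floor so the log stays finite."""
    lo = min(min(p_samples), min(q_samples))
    hi = max(max(p_samples), max(q_samples))
    width = (hi - lo) / bins or 1.0

    def hist(samples):
        counts = [0] * bins
        for x in samples:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        n = len(samples)
        return [max(c / n, 1e-10) for c in counts]

    p, q = hist(p_samples), hist(q_samples)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def wasserstein_1d(p_samples, q_samples):
    """Empirical 1-Wasserstein distance, an integral probability metric.
    For equal-size 1-D samples it reduces to the mean absolute
    difference of the sorted order statistics."""
    xs, ys = sorted(p_samples), sorted(q_samples)
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

# Hypothetical validation scenario: compare simulator output against
# observational data that is shifted by 0.5.
random.seed(0)
sim = [random.gauss(0.0, 1.0) for _ in range(2000)]  # simulator output
obs = [random.gauss(0.5, 1.0) for _ in range(2000)]  # observational data
print(kl_divergence(sim, obs), wasserstein_1d(sim, obs))
```

Under these assumptions the Wasserstein estimate tracks the mean shift directly, while the KL estimate is dominated by regions where the histograms disagree, which is one face of the sensitivity differences the paper discusses.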
Source journal: CiteScore 1.60 · Self-citation rate 16.70% · Articles published: 12