Probabilities of Agreement for Computational Model Validation

Matthew C. Ledwith, R. Hill, L. Champagne, Edward D. White

Journal of Verification, Validation and Uncertainty Quantification, published 2023-02-08. DOI: 10.1115/1.4056862

Citations: 0
Abstract
Determining whether a computational model is valid for its intended use requires a rigorous assessment of agreement between the observed system responses of the computational model and those of the corresponding real-world system or process of interest. In this article, a new method for assessing the validity of computational models is proposed based upon the probability of agreement (PoA) approach. The proposed method quantifies the probability that observed differences between simulation and system responses are small enough to be considered acceptable, and hence that the two systems can be used interchangeably. Rather than relying on Boolean-based statistical tests and procedures, the distance-based probability of agreement validation metric (PoAVM) assesses the similarity of the system responses used to predict system behaviors by comparing the distributions of output behavior. The corresponding PoA plot serves as a useful tool for summarizing agreement transparently and directly while accounting for potentially complicated bias and variability structures. A general procedure for employing the proposed computational model validation method is provided that leverages bootstrapping to overcome the fact that, in most situations where computational models are employed, one's ability to collect real-world data is limited. The new method is demonstrated and contextualized through an illustrative application based upon empirical data from a transient-phase assembly line manufacturing process and a discussion of its desirability based upon an established validation framework.
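To make the idea concrete, the sketch below estimates a probability of agreement of the kind the abstract describes: the probability that the difference between simulation and real-world responses falls within an acceptable tolerance, with bootstrapping used to account for the limited real-world sample. The tolerance `delta`, the use of bootstrap means as the distance summary, and the function name are illustrative assumptions, not the authors' exact PoAVM formulation.

```python
import numpy as np

def probability_of_agreement(sim, real, delta, n_boot=2000, seed=0):
    """Estimate P(|summary(sim) - summary(real)| <= delta) by bootstrapping.

    sim    : array of simulation output responses (typically plentiful)
    real   : array of real-world responses (typically scarce)
    delta  : largest difference considered practically acceptable
    n_boot : number of bootstrap resamples

    NOTE: uses bootstrap sample means as a simple distance-based summary;
    the published metric may compare richer features of the distributions.
    """
    rng = np.random.default_rng(seed)
    sim = np.asarray(sim, dtype=float)
    real = np.asarray(real, dtype=float)
    agree = np.empty(n_boot)
    for b in range(n_boot):
        # Resample both response sets with replacement to propagate
        # sampling uncertainty, especially from the small real-world set.
        s = rng.choice(sim, size=sim.size, replace=True)
        r = rng.choice(real, size=real.size, replace=True)
        agree[b] = abs(s.mean() - r.mean()) <= delta
    # Fraction of resamples in which the difference was acceptable.
    return agree.mean()
```

A PoA plot, as described in the abstract, could then be produced by evaluating this probability over a grid of `delta` values and plotting PoA against the tolerance, letting the analyst read off how strict an acceptability criterion the model can satisfy.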