{"title":"衍生结果的间接建模:微小的预测差异是否值得关注?","authors":"John P. Prybylski","doi":"10.1002/psp4.13219","DOIUrl":null,"url":null,"abstract":"<p>It is often a goal of model development to predict data from which a variety of outcomes can be derived, such as threshold-based categorization or change from baseline (CFB) transformations. This approach can improve power or support multiple decisions. Because these derivations are indirectly predicted from the model, they are valuable tests for misspecification when used in visual or numeric predictive checks (V/NPCs). However, derived outcome V/NPCs (especially if primary or key secondary) are often overly scrutinized and held to an uncommon standard when comparing model predictions to point estimates, even if by conventional standards both the directly and indirectly modeled data are captured well. Here, simulations of directly modeled data were used to determine where apparent issues in V/NPCs of derived outcomes are expected. Two types of datasets were simulated: (1) a simple pre–post study and (2) pharmacokinetic/pharmacodynamic data from a dose-ranging study. A psoriasis exposure–response model case study was also assessed. V/NPCs were generated on the raw data, CFB data, and placebo-corrected CFB (dCFB) data, and binned summary statistics of the observed data for each trial were graded as being strongly or weakly supportive of a predictive model (within the interquartile range or the 95% central distribution of all simulated trials, respectively). In all cases, the strength of support in direct data V/NPCs was minimally related to that in derived outcome V/NPCs. There are myriad benefits to modeling the underlying data of a derived measure, and these results support caution in discarding adequate models based on overly strict derived measure predictive checks.</p>","PeriodicalId":10774,"journal":{"name":"CPT: Pharmacometrics & Systems Pharmacology","volume":"13 10","pages":"1762-1770"},"PeriodicalIF":3.1000,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11494822/pdf/","citationCount":"0","resultStr":"{\"title\":\"Indirect modeling of derived outcomes: Are minor prediction discrepancies a cause for concern?\",\"authors\":\"John P. Prybylski\",\"doi\":\"10.1002/psp4.13219\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>It is often a goal of model development to predict data from which a variety of outcomes can be derived, such as threshold-based categorization or change from baseline (CFB) transformations. This approach can improve power or support multiple decisions. Because these derivations are indirectly predicted from the model, they are valuable tests for misspecification when used in visual or numeric predictive checks (V/NPCs). However, derived outcome V/NPCs (especially if primary or key secondary) are often overly scrutinized and held to an uncommon standard when comparing model predictions to point estimates, even if by conventional standards both the directly and indirectly modeled data are captured well. Here, simulations of directly modeled data were used to determine where apparent issues in V/NPCs of derived outcomes are expected. Two types of datasets were simulated: (1) a simple pre–post study and (2) pharmacokinetic/pharmacodynamic data from a dose-ranging study. A psoriasis exposure–response model case study was also assessed. 
V/NPCs were generated on the raw data, CFB data, and placebo-corrected CFB (dCFB) data, and binned summary statistics of the observed data for each trial were graded as being strongly or weakly supportive of a predictive model (within the interquartile range or the 95% central distribution of all simulated trials, respectively). In all cases, the strength of support in direct data V/NPCs was minimally related to that in derived outcome V/NPCs. There are myriad benefits to modeling the underlying data of a derived measure, and these results support caution in discarding adequate models based on overly strict derived measure predictive checks.</p>\",\"PeriodicalId\":10774,\"journal\":{\"name\":\"CPT: Pharmacometrics & Systems Pharmacology\",\"volume\":\"13 10\",\"pages\":\"1762-1770\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2024-08-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11494822/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"CPT: Pharmacometrics & Systems Pharmacology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/psp4.13219\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"PHARMACOLOGY & PHARMACY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"CPT: Pharmacometrics & Systems Pharmacology","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/psp4.13219","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PHARMACOLOGY & PHARMACY","Score":null,"Total":0}
Indirect modeling of derived outcomes: Are minor prediction discrepancies a cause for concern?
It is often a goal of model development to predict data from which a variety of outcomes can be derived, such as threshold-based categorizations or change-from-baseline (CFB) transformations. This approach can improve power or support multiple decisions. Because these derived outcomes are predicted indirectly from the model, they are valuable tests for misspecification when used in visual or numeric predictive checks (V/NPCs). However, derived-outcome V/NPCs (especially for primary or key secondary endpoints) are often overly scrutinized and held to an uncommonly strict standard when model predictions are compared with point estimates, even when, by conventional standards, both the directly and indirectly modeled data are captured well. Here, simulations of directly modeled data were used to determine where apparent issues in V/NPCs of derived outcomes are expected. Two types of datasets were simulated: (1) a simple pre–post study and (2) pharmacokinetic/pharmacodynamic data from a dose-ranging study. A psoriasis exposure–response model case study was also assessed. V/NPCs were generated on the raw data, CFB data, and placebo-corrected CFB (dCFB) data, and binned summary statistics of the observed data for each trial were graded as strongly or weakly supportive of a predictive model (within the interquartile range or the 95% central distribution of all simulated trials, respectively). In all cases, the strength of support in the direct-data V/NPCs was only minimally related to that in the derived-outcome V/NPCs. There are myriad benefits to modeling the underlying data of a derived measure, and these results support caution before discarding adequate models on the basis of overly strict derived-measure predictive checks.
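To make the grading scheme described in the abstract concrete, the sketch below illustrates the general idea of a numeric predictive check on a toy pre–post study: the model directly predicts baseline and post-treatment values, CFB is derived from them, and an observed summary statistic is graded against the distribution of the same statistic across many simulated replicate trials (strong support if within the interquartile range, weak support if within the 95% central interval). This is a minimal illustration written for this summary, not the paper's actual code; the distributions, sample sizes, and summary statistic are assumptions chosen only to mirror the grading logic described above.

```python
# Toy numeric predictive check (NPC) on a simple pre-post study.
# Illustrative only; all settings below are assumptions, not the paper's.

import numpy as np

rng = np.random.default_rng(2024)

N_SUBJECTS = 60      # subjects per trial (assumed)
N_SIM_TRIALS = 1000  # replicate trials simulated from the model (assumed)


def simulate_trial(rng, n=N_SUBJECTS):
    """Directly modeled data: baseline and post-treatment observations."""
    pre = rng.normal(loc=10.0, scale=2.0, size=n)            # baseline
    effect = rng.normal(loc=-3.0, scale=1.0, size=n)          # treatment effect
    post = pre + effect + rng.normal(0.0, 1.0, size=n)        # residual error
    return pre, post


def trial_statistics(pre, post):
    """Summaries of the direct outcome (post) and the derived outcome (CFB)."""
    cfb = post - pre
    return {"post_median": np.median(post), "cfb_median": np.median(cfb)}


def grade(observed, simulated):
    """Grade an observed statistic against the simulated-trial distribution."""
    lo25, hi75 = np.percentile(simulated, [25, 75])
    lo025, hi975 = np.percentile(simulated, [2.5, 97.5])
    if lo25 <= observed <= hi75:
        return "strong support (within IQR)"
    if lo025 <= observed <= hi975:
        return "weak support (within 95% central interval)"
    return "apparent misspecification"


# Reference distributions of each statistic under the (adequate) model
sims = [trial_statistics(*simulate_trial(rng)) for _ in range(N_SIM_TRIALS)]
post_sims = np.array([s["post_median"] for s in sims])
cfb_sims = np.array([s["cfb_median"] for s in sims])

# One "observed" trial generated from the same model, i.e. no misspecification
obs = trial_statistics(*simulate_trial(rng))

print("direct outcome (post):", grade(obs["post_median"], post_sims))
print("derived outcome (CFB):", grade(obs["cfb_median"], cfb_sims))
```

Because the derived CFB statistic has its own sampling variability, a trial generated from a perfectly specified model can still land in the "weak support" band for the derived outcome while sitting comfortably inside the IQR for the direct outcome, which is the kind of benign discrepancy the abstract cautions against over-interpreting.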