{"title":"在灵敏度测试/分析中使用偏差校正参数自启动来构建具有准确覆盖水平的置信界限","authors":"E. V. Thomas","doi":"10.1080/00224065.2023.2185558","DOIUrl":null,"url":null,"abstract":"Abstract Sensitivity testing often involves sequential design strategies in small-sample settings that provide binary data which are then used to develop generalized linear models. Model parameters are usually estimated via maximum likelihood methods. Often, confidence bounds relating to model parameters and quantiles are based on the likelihood ratio. In this paper, it is demonstrated how the bias-corrected parametric bootstrap used in conjunction with approximate pivotal quantities can be used to provide an alternative means for constructing bounds when using a location-scale model. In small-sample settings, the coverage of bounds based on the likelihood ratio is often anticonservative due to bias in estimating the scale parameter. In contrast, bounds produced by the bias-corrected parametric bootstrap can provide accurate levels of coverage in such settings when both the sequential strategy and method for parameter estimation effectively adapt (are approximately equivariant) to the location and scale. A series of simulations illustrate this contrasting behavior in a small-sample setting when assuming a normal/probit model in conjunction with a popular sequential design strategy. In addition, it is shown how a high-fidelity assessment of performance can be attained with reduced computational effort by using the nonparametric bootstrap to resample pivotal quantities obtained from a small-scale set of parametric bootstrap simulations.","PeriodicalId":54769,"journal":{"name":"Journal of Quality Technology","volume":null,"pages":null},"PeriodicalIF":2.6000,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Use of the bias-corrected parametric bootstrap in sensitivity testing/analysis to construct confidence bounds with accurate levels of coverage\",\"authors\":\"E. V. Thomas\",\"doi\":\"10.1080/00224065.2023.2185558\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract Sensitivity testing often involves sequential design strategies in small-sample settings that provide binary data which are then used to develop generalized linear models. Model parameters are usually estimated via maximum likelihood methods. Often, confidence bounds relating to model parameters and quantiles are based on the likelihood ratio. In this paper, it is demonstrated how the bias-corrected parametric bootstrap used in conjunction with approximate pivotal quantities can be used to provide an alternative means for constructing bounds when using a location-scale model. In small-sample settings, the coverage of bounds based on the likelihood ratio is often anticonservative due to bias in estimating the scale parameter. In contrast, bounds produced by the bias-corrected parametric bootstrap can provide accurate levels of coverage in such settings when both the sequential strategy and method for parameter estimation effectively adapt (are approximately equivariant) to the location and scale. A series of simulations illustrate this contrasting behavior in a small-sample setting when assuming a normal/probit model in conjunction with a popular sequential design strategy. 
In addition, it is shown how a high-fidelity assessment of performance can be attained with reduced computational effort by using the nonparametric bootstrap to resample pivotal quantities obtained from a small-scale set of parametric bootstrap simulations.\",\"PeriodicalId\":54769,\"journal\":{\"name\":\"Journal of Quality Technology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2023-04-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Quality Technology\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1080/00224065.2023.2185558\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, INDUSTRIAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Quality Technology","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1080/00224065.2023.2185558","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, INDUSTRIAL","Score":null,"Total":0}
Abstract: Sensitivity testing often involves sequential design strategies in small-sample settings that provide binary data, which are then used to develop generalized linear models. Model parameters are usually estimated via maximum likelihood methods, and confidence bounds relating to model parameters and quantiles are often based on the likelihood ratio. This paper demonstrates how the bias-corrected parametric bootstrap, used in conjunction with approximate pivotal quantities, provides an alternative means of constructing bounds when a location-scale model is assumed. In small-sample settings, the coverage of bounds based on the likelihood ratio is often anticonservative due to bias in estimating the scale parameter. In contrast, bounds produced by the bias-corrected parametric bootstrap can provide accurate levels of coverage in such settings when both the sequential strategy and the method of parameter estimation adapt effectively (are approximately equivariant) to the location and scale. A series of simulations illustrates this contrasting behavior in a small-sample setting under a normal/probit model combined with a popular sequential design strategy. In addition, it is shown how a high-fidelity assessment of performance can be attained with reduced computational effort by using the nonparametric bootstrap to resample pivotal quantities obtained from a small-scale set of parametric bootstrap simulations.
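To make the idea in the abstract concrete, the following Python sketch shows one generic way a parametric bootstrap of an approximate pivotal (studentized) quantity can yield a confidence bound on a quantile of a normal/probit location-scale model. This is a minimal illustration under stated assumptions, not the author's implementation: the function names (fit_probit, upper_bound_quantile), the fixed-design resampling stand-in for the sequential test strategy, the bootstrap-t style inversion, and all tuning constants are assumptions introduced here for illustration only.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def fit_probit(x, y):
    # Maximum-likelihood fit of the location-scale probit model
    # P(response at stress x) = Phi((x - mu) / sigma).
    # (Degenerate small-sample cases, e.g. complete separation, are ignored here.)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    def nll(theta):
        mu, log_sigma = theta
        p = norm.cdf((x - mu) / np.exp(log_sigma))
        p = np.clip(p, 1e-12, 1.0 - 1e-12)
        return -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    start = [np.mean(x), np.log(np.std(x) + 1e-6)]
    res = minimize(nll, start, method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])   # mu_hat, sigma_hat

def upper_bound_quantile(x, y, p=0.90, conf=0.95, n_boot=2000, seed=0):
    # Upper confidence bound on the p-quantile x_p = mu + sigma * Phi^{-1}(p),
    # built from the parametric-bootstrap distribution of a studentized
    # (approximately pivotal) quantity.
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    mu_hat, sigma_hat = fit_probit(x, y)
    xp_hat = mu_hat + sigma_hat * norm.ppf(p)

    pivots = []
    for _ in range(n_boot):
        # Simulate binary responses from the fitted model at the observed
        # stress levels -- a fixed-design stand-in for re-running the
        # sequential design strategy considered in the paper's simulations.
        y_b = (rng.random(len(x)) < norm.cdf((x - mu_hat) / sigma_hat)).astype(float)
        mu_b, sigma_b = fit_probit(x, y_b)
        xp_b = mu_b + sigma_b * norm.ppf(p)
        # Studentizing by the bootstrap scale estimate mimics the pivotal
        # quantity (quantile estimate standardized by the scale estimate).
        pivots.append((xp_b - xp_hat) / sigma_b)

    # Invert the bootstrap pivot distribution (bootstrap-t style) so that
    # approximately P(x_p <= bound) = conf.
    return xp_hat - sigma_hat * np.quantile(pivots, 1.0 - conf)

With x a vector of tested stress levels and y the corresponding 0/1 responses, upper_bound_quantile(x, y) would return an approximate 95% upper bound on the 0.90 quantile under these simplifying assumptions; assessing the actual coverage of such bounds under a sequential design and small samples is the kind of simulation study the paper reports.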
Journal Introduction:
The objective of the Journal of Quality Technology is to contribute to the technical advancement of the field of quality technology by publishing papers that emphasize the practical applicability of new techniques, instructive examples of the operation of existing techniques, and results of historical research. Expository, review, and tutorial papers are also acceptable if they are written in a style suitable for practicing engineers.