Empirical bias-reducing adjustments to estimating functions

Journal of the Royal Statistical Society Series B: Statistical Methodology · Impact Factor 3.1 · CAS Tier 1 (Mathematics) · JCR Q1, Statistics & Probability · Publication date: 2023-09-16 · DOI: 10.1093/jrsssb/qkad083
Ioannis Kosmidis, Nicola Lunardon
{"title":"Empirical bias-reducing adjustments to estimating functions","authors":"Ioannis Kosmidis, Nicola Lunardon","doi":"10.1093/jrsssb/qkad083","DOIUrl":null,"url":null,"abstract":"Abstract We develop a novel, general framework for reduced-bias M-estimation from asymptotically unbiased estimating functions. The framework relies on an empirical approximation of the bias by a function of derivatives of estimating function contributions. Reduced-bias M-estimation operates either implicitly, solving empirically adjusted estimating equations, or explicitly, subtracting the estimated bias from the original M-estimates, and applies to partially or fully specified models with likelihoods or surrogate objectives. Automatic differentiation can abstract away the algebra required to implement reduced-bias M-estimation. As a result, the bias-reduction methods, we introduce have broader applicability, straightforward implementation, and less algebraic or computational effort than other established bias-reduction methods that require resampling or expectations of products of log-likelihood derivatives. If M-estimation is by maximising an objective, then there always exists a bias-reducing penalised objective. That penalised objective relates to information criteria for model selection and can be enhanced with plug-in penalties to deliver reduced-bias M-estimates with extra properties, like finiteness for categorical data models. Inferential procedures and model selection procedures for M-estimators apply unaltered with the reduced-bias M-estimates. We demonstrate and assess the properties of reduced-bias M-estimation in well-used, prominent modelling settings of varying complexity.","PeriodicalId":49982,"journal":{"name":"Journal of the Royal Statistical Society Series B-Statistical Methodology","volume":"32 1","pages":"0"},"PeriodicalIF":3.1000,"publicationDate":"2023-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of the Royal Statistical Society Series B-Statistical Methodology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/jrsssb/qkad083","RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"STATISTICS & PROBABILITY","Score":null,"Total":0}
Citations: 1

Abstract

We develop a novel, general framework for reduced-bias M-estimation from asymptotically unbiased estimating functions. The framework relies on an empirical approximation of the bias by a function of derivatives of estimating function contributions. Reduced-bias M-estimation operates either implicitly, solving empirically adjusted estimating equations, or explicitly, subtracting the estimated bias from the original M-estimates, and applies to partially or fully specified models with likelihoods or surrogate objectives. Automatic differentiation can abstract away the algebra required to implement reduced-bias M-estimation. As a result, the bias-reduction methods we introduce have broader applicability, straightforward implementation, and less algebraic or computational effort than other established bias-reduction methods that require resampling or expectations of products of log-likelihood derivatives. If M-estimation is by maximising an objective, then there always exists a bias-reducing penalised objective. That penalised objective relates to information criteria for model selection and can be enhanced with plug-in penalties to deliver reduced-bias M-estimates with extra properties, like finiteness for categorical data models. Inferential procedures and model selection procedures for M-estimators apply unaltered with the reduced-bias M-estimates. We demonstrate and assess the properties of reduced-bias M-estimation in well-used, prominent modelling settings of varying complexity.
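As a rough illustration of the explicit route described in the abstract (subtract an estimated bias built from derivatives of the estimating-function contributions), the sketch below treats a single-parameter case: the rate of an exponential distribution, estimated from its score contributions. The scalar bias approximation used here is the standard first-order expansion with expectations replaced by observed sums; it is only in the spirit of the paper's framework and is not the authors' general multiparameter adjustment. All derivatives are obtained by automatic differentiation with JAX, and the function names (psi, empirical_bias, reduced_bias_estimate) are illustrative, not the paper's notation.

# Minimal scalar sketch: explicit reduced-bias estimation of an exponential rate,
# with every derivative of the estimating-function contributions obtained by
# automatic differentiation rather than by hand.
import jax
import jax.numpy as jnp

def psi(theta, x):
    # Per-observation contribution: score of an Exponential(rate = theta) model,
    # d/dtheta (log theta - theta * x).
    return 1.0 / theta - x

dpsi = jax.grad(psi, argnums=0)    # psi_i'
d2psi = jax.grad(dpsi, argnums=0)  # psi_i''

def empirical_bias(theta, x):
    # Scalar first-order bias approximation with expectations replaced by sums:
    #   B(theta) ~ sum_i psi_i' psi_i / j^2 + (sum_i psi_i'')(sum_i psi_i^2) / (2 j^3),
    # where j = -sum_i psi_i'.
    p  = jax.vmap(lambda xi: psi(theta, xi))(x)
    p1 = jax.vmap(lambda xi: dpsi(theta, xi))(x)
    p2 = jax.vmap(lambda xi: d2psi(theta, xi))(x)
    j = -jnp.sum(p1)
    return jnp.sum(p1 * p) / j**2 + jnp.sum(p2) * jnp.sum(p**2) / (2.0 * j**3)

def reduced_bias_estimate(x):
    theta_hat = 1.0 / jnp.mean(x)                     # original M-estimate (the MLE)
    return theta_hat - empirical_bias(theta_hat, x)   # explicit bias subtraction

x = jax.random.exponential(jax.random.PRNGKey(0), (20,)) / 2.0  # rate-2 data
print(reduced_bias_estimate(x))

For this model the first-order bias of the M-estimate 1/mean(x) is theta/n, and the corrected estimate above recovers, up to sampling noise, the familiar (n - 1)/(n * mean(x)) adjustment. For the same scalar approximation, one way to express the implicit route would be to solve sum_i psi_i(theta) + A(theta) = 0 with A(theta) = -j(theta) * B(theta).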
Source journal metrics
CiteScore: 8.80
Self-citation rate: 0.00%
Articles published: 83
Review time: >12 weeks
Journal description: Series B (Statistical Methodology) aims to publish high quality papers on the methodological aspects of statistics and data science more broadly. The objective of papers should be to contribute to the understanding of statistical methodology and/or to develop and improve statistical methods; any mathematical theory should be directed towards these aims. The kinds of contribution considered include descriptions of new methods of collecting or analysing data, with the underlying theory, an indication of the scope of application and preferably a real example. Also considered are comparisons, critical evaluations and new applications of existing methods, contributions to probability theory which have a clear practical bearing (including the formulation and analysis of stochastic models), statistical computation or simulation where original methodology is involved and original contributions to the foundations of statistical science. Reviews of methodological techniques are also considered. A paper, even if correct and well presented, is likely to be rejected if it only presents straightforward special cases of previously published work, if it is of mathematical interest only, if it is too long in relation to the importance of the new material that it contains or if it is dominated by computations or simulations of a routine nature.
Latest articles in this journal:
Model-assisted sensitivity analysis for treatment effects under unmeasured confounding via regularized calibrated estimation.
Interpretable discriminant analysis for functional data supported on random nonlinear domains with an application to Alzheimer's disease.
GENIUS-MAWII: for robust Mendelian randomization with many weak invalid instruments.
Doubly robust calibration of prediction sets under covariate shift.
Gradient synchronization for multivariate functional data, with application to brain connectivity.