Robust subtractive stability measures for fast and exhaustive feature importance ranking and selection in generalised linear models

Australian & New Zealand Journal of Statistics · Impact Factor 0.8 · JCR Q3 (Statistics & Probability) · CAS Region 4 (Mathematics) · Published: 2022-09-02 · DOI: 10.1111/anzs.12375
Connor Smith, Boris Guennewig, Samuel Muller
{"title":"Robust subtractive stability measures for fast and exhaustive feature importance ranking and selection in generalised linear models","authors":"Connor Smith,&nbsp;Boris Guennewig,&nbsp;Samuel Muller","doi":"10.1111/anzs.12375","DOIUrl":null,"url":null,"abstract":"<p>We introduce the relatively new concept of subtractive lack-of-fit measures in the context of robust regression, in particular in generalised linear models. We devise a fast and robust feature selection framework for regression that empirically enjoys better performance than other selection methods while remaining computationally feasible when fully exhaustive methods are not. Our method builds on the concepts of model stability, subtractive lack-of-fit measures and repeated model identification. We demonstrate how the multiple implementations add value in a robust regression type context, in particular through utilizing a combination of robust regression coefficient and scale estimates. Through resampling, we construct a robust stability matrix, which contains multiple measures of feature importance for each variable. By constructing this stability matrix and using it to rank features based on importance, we are able to reduce the candidate model space and then perform an exhaustive search on the remaining models. We also introduce two different visualisations to better convey information held within the stability matrix; a subtractive Mosaic Probability Plot and a subtractive Variable Inclusion Plot. We demonstrate how these graphics allow for a better understanding of how variable importance changes under small alterations to the underlying data. Our framework is made available in <span>R</span> through the <span>RobStabR</span> package.</p>","PeriodicalId":55428,"journal":{"name":"Australian & New Zealand Journal of Statistics","volume":"64 3","pages":"339-355"},"PeriodicalIF":0.8000,"publicationDate":"2022-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Australian & New Zealand Journal of Statistics","FirstCategoryId":"100","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/anzs.12375","RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"STATISTICS & PROBABILITY","Score":null,"Total":0}
引用次数: 1

Abstract

We introduce the relatively new concept of subtractive lack-of-fit measures in the context of robust regression, in particular in generalised linear models. We devise a fast and robust feature selection framework for regression that empirically enjoys better performance than other selection methods while remaining computationally feasible when fully exhaustive methods are not. Our method builds on the concepts of model stability, subtractive lack-of-fit measures and repeated model identification. We demonstrate how the multiple implementations add value in a robust regression context, in particular through utilising a combination of robust regression coefficient and scale estimates. Through resampling, we construct a robust stability matrix, which contains multiple measures of feature importance for each variable. By constructing this stability matrix and using it to rank features based on importance, we are able to reduce the candidate model space and then perform an exhaustive search on the remaining models. We also introduce two different visualisations to better convey the information held within the stability matrix: a subtractive Mosaic Probability Plot and a subtractive Variable Inclusion Plot. We demonstrate how these graphics allow for a better understanding of how variable importance changes under small alterations to the underlying data. Our framework is made available in R through the RobStabR package.
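As a rough illustration of the pipeline the abstract describes (a robust fit, a subtractive lack-of-fit measure obtained by dropping one feature at a time, and a resampling-based stability matrix used to rank features before an exhaustive search over a reduced model space), the following minimal R sketch may help. It is not the RobStabR implementation: the function name subtractive_lof, the simulated data, and the use of the change in robust scale estimate as the lack-of-fit measure are all illustrative assumptions, and MASS::rlm() stands in for the robust GLM estimators the paper actually combines.

```r
## Conceptual sketch only -- NOT the RobStabR implementation.
## Illustrative names and choices throughout; MASS::rlm() is used
## as a simple stand-in for a robust (generalised) regression fit.

library(MASS)  # rlm(): robust M-estimation for linear regression

set.seed(1)

## Simulated data: 8 candidate features, only x1 and x2 are truly active.
n <- 200
p <- 8
X <- matrix(rnorm(n * p), n, p, dimnames = list(NULL, paste0("x", 1:p)))
y <- 1 + 0.8 * X[, 1] - 0.6 * X[, 2] + rnorm(n)
dat <- data.frame(y = y, X)
vars <- paste0("x", 1:p)

## Subtractive lack-of-fit (illustrative): how much does the robust scale
## estimate grow when a single feature is dropped from the full model?
## Larger values indicate a more important feature.
subtractive_lof <- function(d) {
  full <- rlm(y ~ ., data = d, maxit = 50)
  sapply(vars, function(v) {
    reduced <- rlm(reformulate(setdiff(vars, v), response = "y"),
                   data = d, maxit = 50)
    reduced$s - full$s  # change in robust scale when v is removed
  })
}

## Stability matrix: one row per bootstrap resample, one column per feature.
B <- 50
stab <- t(replicate(B, subtractive_lof(dat[sample(n, replace = TRUE), ])))

## Rank features by average importance across resamples; an exhaustive
## search can then be restricted to subsets of the top-ranked features.
sort(colMeans(stab), decreasing = TRUE)
```

In the paper's framework the stability matrix is built from robust GLM coefficient and scale estimates and also underlies the subtractive Mosaic Probability and Variable Inclusion Plots; the sketch above only mimics the resample-rank-then-search idea.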

Source journal: Australian & New Zealand Journal of Statistics (Mathematics – Statistics & Probability)
CiteScore: 1.30 · Self-citation rate: 9.10% · Annual articles: 31 · Review time: >12 weeks
About the journal: The Australian & New Zealand Journal of Statistics is an international journal managed jointly by the Statistical Society of Australia and the New Zealand Statistical Association. Its purpose is to report significant and novel contributions in statistics, ranging across articles on statistical theory, methodology, applications and computing. The journal has a particular focus on statistical techniques that can be readily applied to real-world problems, and on application papers with an Australasian emphasis. Outstanding articles submitted to the journal may be selected as Discussion Papers, to be read at a meeting of either the Statistical Society of Australia or the New Zealand Statistical Association.

The main body of the journal is divided into three sections. The Theory and Methods Section publishes papers containing original contributions to the theory and methodology of statistics, econometrics and probability, and seeks papers motivated by a real problem and which demonstrate the proposed theory or methodology in that situation. There is a strong preference for papers motivated by, and illustrated with, real data.

The Applications Section publishes papers demonstrating applications of statistical techniques to problems faced by users of statistics in the sciences, government and industry. A particular focus is the application of newly developed statistical methodology to real data and the demonstration of better use of established statistical methodology in an area of application. It seeks to aid teachers of statistics by placing statistical methods in context.

The Statistical Computing Section publishes papers containing new algorithms, code snippets, or software descriptions (for open source software only) which enhance the field through the application of computing. Preference is given to papers featuring publicly available code and/or data, and to those motivated by statistical methods for practical problems.
Latest articles from this journal:
- Issue Information
- PanIC: Consistent information criteria for general model selection problems
- Prediction de-correlated inference: A safe approach for post-prediction inference
- Telling Stories with Data: With Application in R. By Rohan Alexander. CRC Press. 2023. 622 pages. AU$129.60 (hardback). ISBN: 978-1-0321-3477-2.
- Full Bayesian analysis of triple seasonal autoregressive models