Min Lu, Hemant Ishwaran
arXiv - STAT - Machine Learning, published 2024-09-13
DOI: arxiv-2409.09003
Model-independent variable selection via the rule-based variable priority
While achieving high prediction accuracy is a fundamental goal in machine
learning, an equally important task is finding a small number of features with
high explanatory power. One popular selection technique is permutation
importance, which assesses a variable's impact by measuring the change in
prediction error after permuting the variable. However, permuting creates
artificial data, a drawback shared by several other selection methods. A further
limitation is that many variable selection methods are model-specific. We
introduce a new model-independent approach, Variable
Priority (VarPro), which works by utilizing rules without the need to generate
artificial data or evaluate prediction error. The method is relatively easy to
use, requiring only the calculation of sample averages of simple statistics,
and can be applied to many data settings, including regression, classification,
and survival. We investigate the asymptotic properties of VarPro and show,
among other things, that VarPro has a consistent filtering property for noise
variables. Empirical studies using synthetic and real-world data show the
method achieves a balanced performance and compares favorably to many
state-of-the-art procedures currently used for variable selection.
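For reference, the permutation-importance baseline the abstract contrasts against can be sketched as follows. This is a minimal illustration using scikit-learn, not code from the paper; the dataset, model, and seed are arbitrary choices for demonstration. Note how the procedure relies on creating artificial (permuted) data and re-evaluating prediction error, the two steps VarPro is designed to avoid.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# Toy data: with shuffle=False, the first 2 columns are the informative ones.
X, y = make_regression(n_samples=300, n_features=5, n_informative=2,
                       noise=1.0, shuffle=False, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X, y)
baseline = mean_squared_error(y, model.predict(X))

rng = np.random.default_rng(0)
importances = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    # Permuting column j breaks its association with y; the resulting
    # increase in prediction error is that variable's importance score.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importances.append(mean_squared_error(y, model.predict(X_perm)) - baseline)
```

On this toy problem, permuting an informative column inflates the error substantially, while permuting a noise column leaves it nearly unchanged, so the informative features receive the largest scores.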