Embedded feature selection for neural networks via learnable drop layer

Pub Date: 2024-06-07 · DOI: 10.1093/jigpal/jzae062
M. Jiménez-Navarro, M. Martínez-Ballesteros, I. S. Brito, F. Martínez-Álvarez, G. Asencio-Cortés

Abstract

Feature selection is a widely studied technique whose goal is to reduce the dimensionality of a problem by removing irrelevant features. It has multiple benefits, such as improved efficacy, efficiency and interpretability for almost any type of machine learning model. Feature selection techniques may be divided into three main categories, depending on the process used to remove the features: Filter, Wrapper and Embedded. Embedded methods are usually preferred, as they efficiently obtain a selection of the features most relevant to the model. However, not all models support embedded feature selection, which forces the use of a different method and reduces the efficiency and reliability of the selection. Neural networks are an example of a model that does not support embedded feature selection. As neural networks have been shown to provide remarkable results in multiple scenarios such as classification and regression, sometimes in an ensemble with a model that includes embedded feature selection, we attempt to embed a feature selection process using a general-purpose methodology. In this work, we propose a novel general-purpose layer for neural networks that removes the influence of irrelevant features. The Feature-Aware Drop Layer is placed at the top of the neural network and trained during the backpropagation process without any additional parameters. Our methodology is tested on 17 datasets for classification and regression tasks, including data from fields such as Health, Economics and the Environment, among others. The results show remarkable improvements over three different feature selection approaches, with reliable, efficient and effective results.
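The abstract describes a layer placed at the input of the network whose per-feature gates are learned by backpropagation, so that irrelevant features are driven toward zero. As a rough illustration of that idea (not the authors' implementation; the class name, gate parameterization, and threshold below are assumptions), one can sketch a feature-gating layer where each feature is scaled by a sigmoid of a trainable logit:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical sketch of a learnable feature "drop" layer: each input
# feature i has a trainable logit, and the gate sigmoid(logit[i])
# scales that feature. During training, gradients can push the logits
# of irrelevant features toward large negative values, effectively
# dropping those features.
class FeatureDropLayer:
    def __init__(self, n_features, init_logit=2.0):
        # Start gates near 1 (sigmoid(2) ~ 0.88) so all features
        # initially pass through.
        self.logits = [init_logit] * n_features

    def forward(self, x):
        # Element-wise gating of the input vector.
        return [xi * sigmoid(g) for xi, g in zip(x, self.logits)]

    def selected(self, threshold=0.5):
        # Features whose gate stays above the threshold are "kept";
        # the 0.5 cutoff is an illustrative choice.
        return [i for i, g in enumerate(self.logits) if sigmoid(g) > threshold]

layer = FeatureDropLayer(3)
layer.logits[1] = -6.0  # pretend training drove feature 1's gate to ~0
out = layer.forward([1.0, 1.0, 1.0])
print(layer.selected())  # indices of features still considered relevant
```

Because the gates are plain multiplicative factors, the layer is differentiable end to end and needs no selection step outside the usual training loop, which matches the embedded-selection framing in the abstract.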