An Improved Feature Removal Approach for Classification of High Dimensional Feature Dataset

Pub Date: 2024-05-13 · DOI: 10.52783/jes.3653
Hardikkumar Harishbhai Maheta

Abstract

Extracting and evaluating pertinent information from a high-dimensional feature set is extremely difficult, and classification methods require additional training time to build a model over such data. Not every feature in a high-dimensional collection is equally relevant, so feature selection is a productive way to identify vital features and eliminate unnecessary ones. As a pre-processing step before classification, feature selection reduces the dimensionality of the dataset and thereby shortens the time needed to train a classifier. This study proposes a novel feature subset selection method that establishes the relative importance of each feature using several criteria. The proposed approach ranks the available features from high to low using a variety of feature-ranking algorithms. Because different ranking algorithms perform differently on the same dataset, it is difficult to obtain robust performance from any single one. To overcome this problem, we use the Schulze rank aggregation method, which combines multiple feature rankings to assign a single rank to each feature in the dataset. On top of this aggregated ranking, the study presents a heuristic search optimization strategy based on backward feature removal: features are eliminated according to the rank determined by the Schulze aggregation. Finally, we evaluate the proposed method against current state-of-the-art feature ranking techniques for high-dimensional feature set classification.
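The Schulze rank aggregation step described above can be sketched as follows. Each feature-ranking algorithm acts as a "voter" that orders the features; pairwise preferences are counted and the strongest paths computed, as in the standard Schulze voting rule. This is a minimal illustrative sketch, not the paper's implementation; the feature names and the ranking algorithms in the comments are assumptions.

```python
# Sketch of Schulze rank aggregation over several feature rankings.
# Feature names and the ranking methods named in comments are hypothetical.
from itertools import permutations

def schulze_aggregate(rankings):
    """Combine several best-first feature rankings into one.

    Each ranking is one 'voter'. Pairwise preference counts d[a][b]
    feed a widest-path computation (Floyd-Warshall variant), and
    features are ordered by how many others they beat.
    """
    features = rankings[0]
    positions = [{f: i for i, f in enumerate(r)} for r in rankings]

    # d[a][b]: number of rankings that place feature a above feature b
    d = {a: {b: 0 for b in features} for a in features}
    for pos in positions:
        for a, b in permutations(features, 2):
            if pos[a] < pos[b]:
                d[a][b] += 1

    # Strongest path strengths between every pair of features
    s = {a: {b: (d[a][b] if d[a][b] > d[b][a] else 0) for b in features}
         for a in features}
    for k in features:
        for a in features:
            if a == k:
                continue
            for b in features:
                if b in (a, k):
                    continue
                s[a][b] = max(s[a][b], min(s[a][k], s[k][b]))

    # Score each feature by how many others it beats via strongest paths
    wins = {a: sum(s[a][b] > s[b][a] for b in features if b != a)
            for a in features}
    return sorted(features, key=lambda f: -wins[f])

rankings = [
    ["f1", "f3", "f2", "f4"],   # e.g., a chi-square ranking (assumed)
    ["f3", "f1", "f2", "f4"],   # e.g., an information-gain ranking (assumed)
    ["f1", "f2", "f3", "f4"],   # e.g., a ReliefF ranking (assumed)
]
print(schulze_aggregate(rankings))  # -> ['f1', 'f3', 'f2', 'f4']
```

Here f1 wins every pairwise comparison by strongest path, so it heads the aggregate ranking even though the three input rankings disagree.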
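The rank-guided backward feature removal step can likewise be sketched: starting from the full feature set ordered by the aggregated rank, the lowest-ranked feature is dropped as long as cross-validated accuracy does not degrade. The toy dataset, the k-NN classifier, and the stopping rule here are assumptions standing in for the paper's actual experimental setup.

```python
# Sketch of backward feature elimination guided by an aggregated rank.
# Dataset, classifier, and stopping criterion are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def backward_elimination(X, y, ranked_idx, min_features=2):
    """Greedily drop the lowest-ranked feature while accuracy holds.

    ranked_idx lists column indices from most to least important
    (e.g., the output of a rank aggregation step).
    """
    kept = list(ranked_idx)
    clf = KNeighborsClassifier(n_neighbors=5)
    best = cross_val_score(clf, X[:, kept], y, cv=5).mean()
    while len(kept) > min_features:
        candidate = kept[:-1]              # drop the lowest-ranked feature
        score = cross_val_score(clf, X[:, candidate], y, cv=5).mean()
        if score >= best:                  # smaller subset is no worse: keep it
            best, kept = score, candidate
        else:                              # accuracy dropped: stop removing
            break
    return kept, best

X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=4, random_state=0)
ranked = list(range(10))  # hypothetical aggregated ranking, best first
subset, acc = backward_elimination(X, y, ranked)
print(len(subset), round(acc, 3))
```

The greedy stop-on-degradation rule keeps the search linear in the number of features, which matters for the high-dimensional datasets the paper targets.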