A feature extraction method for small sample data based on optimal ensemble random forest

Q3 Engineering · 西北工业大学学报 (Journal of Northwestern Polytechnical University) · Pub Date: 2022-12-01 · DOI: 10.1051/jnwpu/20224061261
Wei Zhang, H. Zhang
{"title":"A feature extraction method for small sample data based on optimal ensemble random forest","authors":"Wei Zhang, H. Zhang","doi":"10.1051/jnwpu/20224061261","DOIUrl":null,"url":null,"abstract":"High dimensional small sample data is the difficulty of data mining. When using the traditional random forest algorithm for feature selection, it is to have the poor stability and low accuracy of feature importance ranking caused by over fitting of classification results. Aiming at the difficulties of random forest in the dimensionality reduction of small sample data, a feature extraction algorithm ote-gwrffs is proposed based on small sample data. Firstly, the algorithm expands the samples based on the generated countermeasure network Gan to avoid the over fitting phenomenon of traditional random forest in the small sample classification. Then, on the basis of data expansion, the optimal tree set algorithm based on weight is adopted to reduce the impact of data distribution error on feature extraction accuracy and improve the overall stability of decision tree set. Finally, the weighted average of the weight and feature importance measure of a single decision tree is used to obtain the feature importance ranking, which solves the problem of low accuracy and poor stability in the feature selection process of small sample data. Through the UCI data set, the present algorithm is compared with the traditional random forest algorithm and the weight based random forest algorithm. The ote-gwrffs algorithm has higher stability and accuracy for processing high-dimensional and small sample data.","PeriodicalId":39691,"journal":{"name":"西北工业大学学报","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"西北工业大学学报","FirstCategoryId":"1093","ListUrlMain":"https://doi.org/10.1051/jnwpu/20224061261","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Engineering","Score":null,"Total":0}
Citations: 1

Abstract

High-dimensional small-sample data is a difficult case for data mining. When the traditional random forest algorithm is used for feature selection, overfitting of the classification results leads to poor stability and low accuracy of the feature importance ranking. To address the difficulties random forest faces in dimensionality reduction of small-sample data, a feature extraction algorithm, OTE-GWRFFS, is proposed for small-sample data. First, the algorithm expands the samples with a generative adversarial network (GAN) to avoid the overfitting that the traditional random forest exhibits in small-sample classification. Then, on the basis of the expanded data, a weight-based optimal tree ensemble algorithm is adopted to reduce the impact of data distribution error on feature extraction accuracy and to improve the overall stability of the decision tree ensemble. Finally, the feature importance ranking is obtained as the weighted average of each decision tree's weight and feature importance measure, which addresses the low accuracy and poor stability of feature selection on small-sample data. On UCI data sets, the proposed algorithm is compared with the traditional random forest algorithm and the weight-based random forest algorithm; the OTE-GWRFFS algorithm achieves higher stability and accuracy when processing high-dimensional small-sample data.
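The abstract outlines three steps: GAN-based sample expansion, weight-based selection of an optimal tree set, and a weighted average of per-tree weights and feature importances to produce the final ranking. The sketch below illustrates only the last two steps on a stand-in scikit-learn data set; the per-tree weight (hold-out accuracy), the selection threshold, and the data set are assumptions for illustration, not the authors' exact formulation, and the GAN expansion step is omitted.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in UCI-style data set; the paper's GAN-based sample expansion step is
# omitted and an ordinary train/validation split is used instead.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Assumed weighting scheme: each tree's accuracy on the validation split.
weights = np.array([tree.score(X_val, y_val) for tree in forest.estimators_])

# "Optimal tree set": keep trees whose weight is at least the ensemble average
# (an assumed selection rule; the paper's criterion is not given in the abstract).
keep = weights >= weights.mean()

# Feature importance ranking = weight-weighted average of the selected trees'
# per-tree importance vectors.
importances = np.array([tree.feature_importances_ for tree in forest.estimators_])
weighted_importance = np.average(importances[keep], axis=0, weights=weights[keep])
ranking = np.argsort(weighted_importance)[::-1]
print("Top 5 features by weighted importance:", ranking[:5])
```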