Semi-supervised Min-Max Modular SVM

Yan-Ping Wu, Yun Li
DOI: 10.1109/IJCNN.2015.7280505
Published in: 2015 International Joint Conference on Neural Networks (IJCNN), pp. 1-8, 2015-07-12
Citations: 0

Abstract

Min-Max Modular Support Vector Machine (M3-SVM) is a powerful supervised ensemble pattern classification method that can efficiently handle large-scale labeled data. However, labeling a large-scale data set is very expensive, and sometimes infeasible. To extend M3-SVM to handle unlabeled data, this paper proposes a Semi-Supervised M3-SVM learning algorithm (SS-M3-SVM). SS-M3-SVM performs task decomposition on the labeled and unlabeled data, then combines each unlabeled sample subset with a labeled sample subset and explores hidden concepts that exist in the combined subset. Once the hidden concepts have been discovered, the posterior probability of each concept with respect to a labeled sample is treated as a new feature for that sample; discriminant information derived from the unlabeled data is thereby embedded in these new features. Each base SVM classifier is then trained on a labeled data subset augmented with the new features. Finally, the base classifiers are combined using the Min-Max rule to obtain the SS-M3-SVM. Experiments on several data sets indicate that the proposed semi-supervised learning strategy enhances the classification performance of the traditional M3-SVM.