Multi-view evidential K-NN classification

Information Fusion · IF 15.5 · CAS Tier 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-03-17 · DOI: 10.1016/j.inffus.2025.103113
Chaoyu Gong, Zhi-gang Su, Thierry Denoeux
{"title":"Multi-view evidential K-NN classification","authors":"Chaoyu Gong ,&nbsp;Zhi-gang Su ,&nbsp;Thierry Denoeux","doi":"10.1016/j.inffus.2025.103113","DOIUrl":null,"url":null,"abstract":"<div><div>Multi-view classification, aiming to classify samples represented by multiple feature vectors, has become a hot topic in pattern recognition. Although many methods with promising performances have been proposed, their practicality is still limited by the lack of interpretability in some situations. Besides, an appropriate description for the soft labels of multi-view samples is missing, which may degrade the classification performance, especially for those samples located in highly-overlapping areas of multiple vector spaces. To address these issues, we extend the <em>K</em>-nearest neighbor (K-NN) classification algorithm to multi-view learning, under the theoretical framework of evidence theory. The learning process is formalized, firstly, as an optimization problem, where the weights of different views, an adaptive <em>K</em> value of every sample and the distance matrix are determined jointly based on training error. Then, the final classification result is derived according to the philosophy of the evidential K-NN classification algorithm. Detailed ablation studies demonstrate the benefits of the joint learning for adaptive neighborhoods and view weights in a supervised way. Comparative experiments on real-world datasets show that our algorithm performs better than other state-of-the-art methods. A real-world industrial application for condition monitoring shown in Appendix F exemplifies the need to use the evidence theory and the benefits from the unique interpretability of K-NN in detail.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"120 ","pages":"Article 103113"},"PeriodicalIF":15.5000,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525001861","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Multi-view classification, which aims to classify samples represented by multiple feature vectors, has become a hot topic in pattern recognition. Although many methods with promising performance have been proposed, their practicality is still limited by a lack of interpretability in some situations. Moreover, an appropriate description of the soft labels of multi-view samples is missing, which may degrade classification performance, especially for samples located in highly overlapping areas of the multiple vector spaces. To address these issues, we extend the K-nearest neighbor (K-NN) classification algorithm to multi-view learning under the theoretical framework of evidence theory. The learning process is first formalized as an optimization problem in which the weights of the different views, an adaptive K value for every sample, and the distance matrix are determined jointly from the training error. The final classification result is then derived according to the philosophy of the evidential K-NN classification algorithm. Detailed ablation studies demonstrate the benefits of jointly learning adaptive neighborhoods and view weights in a supervised way. Comparative experiments on real-world datasets show that our algorithm performs better than other state-of-the-art methods. A real-world industrial application to condition monitoring, presented in Appendix F, illustrates in detail the need for evidence theory and the benefits of the unique interpretability of K-NN.
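The abstract does not give the learning equations, so the following is only a minimal, hypothetical sketch of how an evidential K-NN rule (in the spirit of Denoeux's classical formulation) could be applied to multi-view data once view weights are available. The function name, the fixed alpha/gamma parameters, and the simple weighted-distance fusion are illustrative assumptions, not the authors' learned procedure.

```python
# Illustrative sketch only: a weighted multi-view variant of the evidential K-NN rule.
# The view weights, the per-sample adaptive K, and the learned distance matrix described
# in the paper are treated here as given inputs; nothing below is learned from data.
import numpy as np

def evidential_knn_multiview(train_views, y_train, query_views,
                             view_weights, K=5, alpha=0.95, gamma=1.0):
    """Classify one query sample from its multi-view representation.

    train_views : list of (n, d_v) arrays, one per view
    y_train     : (n,) integer labels in {0, ..., C-1}
    query_views : list of (d_v,) arrays, one per view
    view_weights: (V,) nonnegative weights (assumed given, e.g. from a prior optimization)
    """
    y_train = np.asarray(y_train)
    n = len(y_train)
    C = int(y_train.max()) + 1

    # Fuse per-view squared Euclidean distances using the view weights.
    d2 = np.zeros(n)
    for w, Xv, qv in zip(view_weights, train_views, query_views):
        d2 += w * np.sum((np.asarray(Xv) - np.asarray(qv)) ** 2, axis=1)

    # K nearest neighbours under the fused distance.
    nn = np.argsort(d2)[:K]

    # Each neighbour induces a simple mass function:
    #   m({class of neighbour}) = alpha * exp(-gamma * d^2),   m(Omega) = remainder.
    # The K mass functions are combined with Dempster's rule.
    m_classes = np.zeros(C)   # mass assigned to each singleton class
    m_omega = 1.0             # mass assigned to the whole frame Omega (ignorance)
    for i in nn:
        s = alpha * np.exp(-gamma * d2[i])
        mi = np.zeros(C)
        mi[y_train[i]] = s
        mi_omega = 1.0 - s
        # Conjunctive combination followed by normalization of the conflict.
        new_classes = m_classes * mi + m_classes * mi_omega + m_omega * mi
        new_omega = m_omega * mi_omega
        conflict = np.sum(m_classes) * np.sum(mi) - np.sum(m_classes * mi)
        norm = 1.0 - conflict
        m_classes, m_omega = new_classes / norm, new_omega / norm

    # Evidential output: singleton masses plus residual mass on Omega.
    # A crisp decision can be taken as np.argmax(m_classes).
    return m_classes, m_omega
```

In the paper itself, the view weights, the per-sample K, and the distance metric are learned jointly from the training error rather than fixed as above; the evidential output (class masses together with the mass left on the whole frame) is what quantifies ignorance for samples in overlapping regions and underpins the interpretability the abstract refers to.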
Source journal: Information Fusion
Category: Engineering & Technology - Computer Science: Theory & Methods
CiteScore: 33.20
Self-citation rate: 4.30%
Articles per year: 161
Review time: 7.9 months
About the journal: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.
Latest articles in this journal:
PCFNet: Period–channel fusion network for multivariate time series forecasting — towards multi-period dependency modeling
Learning Spatio-Temporal Affine Representation Subspace for Video-based Person Re-Identification
From Unimodal to Flexible: A Survey of Generalized Biometric Systems
Trustworthy Text-to-Image Diffusion Models: A Timely and Focused Survey
Consensus Learning Framework Boosting Co-clustering