GCCNet: A Novel Network Leveraging Gated Cross-Correlation for Multi-View Classification

IF 9.7 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · IEEE Transactions on Multimedia · Vol. 27, pp. 1086-1099 · Pub Date: 2024-12-24 · DOI: 10.1109/TMM.2024.3521733
Yuanpeng Zeng;Ru Zhang;Hao Zhang;Shaojie Qiao;Faliang Huang;Qing Tian;Yuzhong Peng
{"title":"GCCNet: A Novel Network Leveraging Gated Cross-Correlation for Multi-View Classification","authors":"Yuanpeng Zeng;Ru Zhang;Hao Zhang;Shaojie Qiao;Faliang Huang;Qing Tian;Yuzhong Peng","doi":"10.1109/TMM.2024.3521733","DOIUrl":null,"url":null,"abstract":"Multi-view learning is a machine learning paradigm that utilizes multiple feature sets or data sources to improve learning performance and generalization. However, existing multi-view learning methods often do not capture and utilize information from different views very well, especially when the relationships between views are complex and of varying quality. In this paper, we propose a novel multi-view learning framework for the multi-view classification task, called Gated Cross-Correlation Network (GCCNet), which addresses these challenges by integrating the three key operational levels in multi-view learning: representation, fusion, and decision. Specifically, GCCNet contains a novel component called the Multi-View Gated Information Distributor (MVGID) to enhance noise filtering and optimize the retention of critical information. In addition, GCCNet uses cross-correlation analysis to reveal dependencies and interactions between different views, as well as integrates an adaptive weighted joint decision strategy to mitigate the interference of low-quality views. Thus, GCCNet can not only comprehensively capture and utilize information from different views, but also facilitate information exchange and synergy between views, ultimately improving the overall performance of the model. Extensive experimental results on ten benchmark datasets show GCCNet's outperforms state-of-the-art methods on eight out of ten datasets, validating its effectiveness and superiority in multi-view learning.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"1086-1099"},"PeriodicalIF":9.7000,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Multimedia","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10814649/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Multi-view learning is a machine learning paradigm that utilizes multiple feature sets or data sources to improve learning performance and generalization. However, existing multi-view learning methods often fail to capture and utilize information from different views effectively, especially when the relationships between views are complex and the views vary in quality. In this paper, we propose a novel multi-view learning framework for the multi-view classification task, called the Gated Cross-Correlation Network (GCCNet), which addresses these challenges by integrating the three key operational levels in multi-view learning: representation, fusion, and decision. Specifically, GCCNet contains a novel component called the Multi-View Gated Information Distributor (MVGID) to enhance noise filtering and optimize the retention of critical information. In addition, GCCNet uses cross-correlation analysis to reveal dependencies and interactions between different views, and integrates an adaptive weighted joint decision strategy to mitigate the interference of low-quality views. Thus, GCCNet not only comprehensively captures and utilizes information from different views, but also facilitates information exchange and synergy between views, ultimately improving the overall performance of the model. Extensive experimental results on ten benchmark datasets show that GCCNet outperforms state-of-the-art methods on eight of the ten datasets, validating its effectiveness and superiority in multi-view learning.
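To make the abstract's three operational levels concrete, the following is a minimal, hypothetical sketch of a gated multi-view classifier in PyTorch: per-view encoders (representation), sigmoid gates that filter each view's features before a pairwise cross-correlation penalty couples the views (fusion), and softmax-weighted per-view heads (decision). None of this code is taken from the paper; the actual MVGID, cross-correlation analysis, and adaptive weighting scheme are defined in the full text, and every name and dimension below is an illustrative stand-in.

```python
# Hypothetical sketch, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedMultiViewClassifier(nn.Module):
    def __init__(self, view_dims, hidden_dim, num_classes):
        super().__init__()
        # Representation level: one encoder per view.
        self.encoders = nn.ModuleList(nn.Linear(d, hidden_dim) for d in view_dims)
        # Fusion level: a sigmoid gate per view filters noisy features.
        self.gates = nn.ModuleList(nn.Linear(hidden_dim, hidden_dim) for _ in view_dims)
        # Decision level: per-view heads plus learnable view weights.
        self.heads = nn.ModuleList(nn.Linear(hidden_dim, num_classes) for _ in view_dims)
        self.view_logits = nn.Parameter(torch.zeros(len(view_dims)))

    def forward(self, views):
        # Gate each encoded view; the gate decides what information passes on.
        gated = []
        for x, enc, gate in zip(views, self.encoders, self.gates):
            h = torch.relu(enc(x))
            gated.append(torch.sigmoid(gate(h)) * h)

        # A simple pairwise cross-correlation term: penalize disagreement
        # between L2-normalized view representations (illustrative only).
        corr_loss = 0.0
        for i in range(len(gated)):
            for j in range(i + 1, len(gated)):
                zi = F.normalize(gated[i], dim=1)
                zj = F.normalize(gated[j], dim=1)
                corr_loss = corr_loss + (1 - (zi * zj).sum(dim=1)).mean()

        # Adaptive weighted joint decision: softmax weights can learn to
        # down-weight low-quality views.
        w = torch.softmax(self.view_logits, dim=0)
        logits = sum(w[k] * head(h) for k, (h, head) in enumerate(zip(gated, self.heads)))
        return logits, corr_loss

if __name__ == "__main__":
    model = GatedMultiViewClassifier(view_dims=[128, 64], hidden_dim=256, num_classes=10)
    x1, x2 = torch.randn(32, 128), torch.randn(32, 64)
    logits, corr = model([x1, x2])
    labels = torch.randint(0, 10, (32,))
    loss = F.cross_entropy(logits, labels) + 0.1 * corr  # 0.1 is an arbitrary weight
    loss.backward()
```

The cosine form of the correlation penalty and its 0.1 weighting are arbitrary choices for this sketch; the point is only to show how gating, cross-view coupling, and weighted per-view decisions can be composed in a single model, as the abstract describes at a high level.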
Source Journal

IEEE Transactions on Multimedia (Engineering & Technology: Telecommunications)
CiteScore: 11.70
Self-citation rate: 11.00%
Articles per year: 576
Review time: 5.5 months
Journal introduction: The IEEE Transactions on Multimedia delves into diverse aspects of multimedia technology and applications, covering circuits, networking, signal processing, systems, software, and systems integration. The scope aligns with the Fields of Interest of the sponsors, ensuring a comprehensive exploration of research in multimedia.
Latest Articles in This Journal

Screen Detection from Egocentric Image Streams Leveraging Multi-View Vision Language Model
TMT: Tri-Modal Translation Between Speech, Image, and Text by Processing Different Modalities as Different Languages
HMS2Net: Heterogeneous Multimodal State Space Network via CLIP for Dynamic Scene Classification in Livestreaming
2025 Reviewers List
Light CNN-Transformer Dual-Branch Network for Real-Time Semantic Segmentation