{"title":"Contrastive independent subspace analysis network for multi-view spatial information extraction.","authors":"Tengyu Zhang, Deyu Zeng, Wei Liu, Zongze Wu, Chris Ding, Xiaopin Zhong","doi":"10.1016/j.neunet.2024.107105","DOIUrl":null,"url":null,"abstract":"<p><p>Multi-view classification integrates features from different views to optimize classification performance. Most of the existing works typically utilize semantic information to achieve view fusion but neglect the spatial information of data itself, which accommodates data representation with correlation information and is proven to be an essential aspect. Thus robust independent subspace analysis network, optimized by sparse and soft orthogonal optimization, is first proposed to extract the latent spatial information of multi-view data with subspace bases. Building on this, a novel contrastive independent subspace analysis framework for multi-view classification is developed to further optimize from spatial perspective. Specifically, contrastive subspace optimization separates the subspaces, thereby enhancing their representational capacity. Whilst contrastive fusion optimization aims at building cross-view subspace correlations and forms a non overlapping data representation. In k-fold validation experiments, MvCISA achieved state-of-the-art accuracies of 76.95%, 98.50%, 93.33% and 88.24% on four benchmark multi-view datasets, significantly outperforming the second-best method by 8.57%, 0.25%, 1.66% and 5.96% in accuracy. And visualization experiments demonstrate the effectiveness of the subspace and feature space optimization, also indicating their promising potential for other downstream tasks. Our code is available at https://github.com/raRn0y/MvCISA.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"107105"},"PeriodicalIF":6.0000,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1016/j.neunet.2024.107105","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Multi-view classification integrates features from different views to optimize classification performance. Most existing works utilize semantic information to achieve view fusion but neglect the spatial information of the data itself, which equips the data representation with correlation information and has proven to be an essential aspect. Thus, a robust independent subspace analysis network, optimized by sparse and soft orthogonal optimization, is first proposed to extract the latent spatial information of multi-view data with subspace bases. Building on this, a novel contrastive independent subspace analysis framework for multi-view classification (MvCISA) is developed to further optimize from the spatial perspective. Specifically, contrastive subspace optimization separates the subspaces, thereby enhancing their representational capacity, while contrastive fusion optimization builds cross-view subspace correlations and forms a non-overlapping data representation. In k-fold validation experiments, MvCISA achieved state-of-the-art accuracies of 76.95%, 98.50%, 93.33% and 88.24% on four benchmark multi-view datasets, outperforming the second-best method by 8.57%, 0.25%, 1.66% and 5.96% in accuracy. Visualization experiments demonstrate the effectiveness of the subspace and feature space optimization, also indicating promising potential for other downstream tasks. Our code is available at https://github.com/raRn0y/MvCISA.
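To make the two ingredients named in the abstract concrete, below is a minimal sketch (not the authors' MvCISA implementation; see their GitHub repository for the official code) of a subspace projection regularized toward sparsity and soft orthogonality, combined with an InfoNCE-style contrastive loss that pulls matched cross-view subspace representations together. All class names, layer sizes, and loss weights here are illustrative assumptions.

```python
# Hypothetical sketch of sparse/soft-orthogonal subspace projection plus a
# cross-view contrastive loss. This is NOT the MvCISA code from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SubspaceProjector(nn.Module):
    """Projects one view onto a set of learned subspace bases."""

    def __init__(self, in_dim: int, sub_dim: int):
        super().__init__()
        self.bases = nn.Parameter(torch.randn(in_dim, sub_dim) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.bases  # (batch, sub_dim)

    def soft_orthogonality(self) -> torch.Tensor:
        # Penalize deviation of B^T B from the identity (soft orthogonality).
        gram = self.bases.t() @ self.bases
        eye = torch.eye(gram.size(0), device=gram.device)
        return ((gram - eye) ** 2).sum()

    def sparsity(self) -> torch.Tensor:
        # L1 penalty encourages sparse subspace bases.
        return self.bases.abs().sum()


def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """InfoNCE loss treating the same sample's two views as positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau            # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Two views of the same 32-sample batch with different feature dimensions.
    view1, view2 = torch.randn(32, 128), torch.randn(32, 64)
    proj1, proj2 = SubspaceProjector(128, 16), SubspaceProjector(64, 16)
    z1, z2 = proj1(view1), proj2(view2)
    loss = (contrastive_loss(z1, z2)
            + 1e-3 * (proj1.soft_orthogonality() + proj2.soft_orthogonality())
            + 1e-4 * (proj1.sparsity() + proj2.sparsity()))
    print(float(loss))
```

In this toy setup the contrastive term aligns corresponding samples across views while the penalties keep each view's subspace bases near-orthogonal and sparse; how the paper fuses the resulting subspace representations for classification is described in the full text.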
About the journal:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.