NeuralOOD: Improving out-of-distribution generalization performance with brain-machine fusion learning framework
Shuangchen Zhao, Changde Du, Jingze Li, Hui Li, Huiguang He
Information Fusion, Volume 119, Article 103021 (published 2025-02-14). DOI: 10.1016/j.inffus.2025.103021
Citations: 0
Abstract
Deep Neural Networks (DNNs) have demonstrated exceptional recognition capabilities in traditional computer vision (CV) tasks. However, existing CV models often suffer a significant drop in accuracy when confronted with out-of-distribution (OOD) data. In contrast to these DNN models, humans maintain a consistently low error rate when facing OOD scenes, partly owing to the rich prior cognitive knowledge stored in the human brain. Previous OOD generalization research has focused only on a single modality, overlooking the advantages of multimodal learning. In this paper, we use multimodal learning to improve OOD generalization and propose a novel Brain-machine Fusion Learning (BMFL) framework. We adopt a cross-attention mechanism to fuse visual knowledge from the CV model with prior cognitive knowledge from the human brain. Specifically, we employ a pre-trained visual neural encoding model to predict functional Magnetic Resonance Imaging (fMRI) signals from visual features, which eliminates the need for fMRI data collection and pre-processing and effectively reduces the workload associated with conventional BMFL methods. Furthermore, we construct a brain transformer to facilitate the extraction of knowledge from the fMRI data. Moreover, we introduce a Pearson correlation coefficient maximization regularization method into the training process, which imposes better constraints and improves fusion capability. Our model outperforms DINOv2 and baseline models on the ImageNet-1k validation set as well as on carefully curated OOD datasets, demonstrating superior performance across diverse scenarios.
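The abstract names two concrete mechanisms: a cross-attention module that fuses CV-backbone features with brain-derived features, and a Pearson correlation maximization regularizer. Below is a minimal PyTorch sketch of both; the class name CrossAttentionFusion, the token shapes, the residual-plus-norm layout, and the exact form of the regularizer are illustrative assumptions, not the paper's published implementation.

import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    # Visual tokens act as queries; brain tokens (e.g. from a brain transformer
    # over predicted fMRI signals) act as keys and values.
    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual_tokens: torch.Tensor, brain_tokens: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (B, Nv, D); brain_tokens: (B, Nb, D) -- shapes assumed
        fused, _ = self.attn(query=visual_tokens, key=brain_tokens, value=brain_tokens)
        return self.norm(visual_tokens + fused)  # residual connection + layer norm

def pearson_regularizer(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Negative mean Pearson correlation over the last dimension; adding this
    # term to the loss maximizes the correlation, per the abstract.
    p = pred - pred.mean(dim=-1, keepdim=True)
    t = target - target.mean(dim=-1, keepdim=True)
    corr = (p * t).sum(dim=-1) / (p.norm(dim=-1) * t.norm(dim=-1) + eps)
    return -corr.mean()

In training, the total objective would presumably combine the task loss with a weighted pearson_regularizer term; the abstract does not specify the weighting or which feature pairs the correlation is computed over.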
About the Journal:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, and it fosters collaboration among the diverse disciplines driving progress in the field. It is the leading outlet for sharing research and development in this area, with a focus on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating applications to real-world problems, are welcome.