NeuralOOD: Improving out-of-distribution generalization performance with brain-machine fusion learning framework

Information Fusion | IF 15.5 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-14 | DOI: 10.1016/j.inffus.2025.103021
Shuangchen Zhao, Changde Du, Jingze Li, Hui Li, Huiguang He
{"title":"NeuralOOD: Improving out-of-distribution generalization performance with brain-machine fusion learning framework","authors":"Shuangchen Zhao ,&nbsp;Changde Du ,&nbsp;Jingze Li ,&nbsp;Hui Li ,&nbsp;Huiguang He","doi":"10.1016/j.inffus.2025.103021","DOIUrl":null,"url":null,"abstract":"<div><div>Deep Neural Networks (DNNs) have demonstrated exceptional recognition capabilities in traditional computer vision (CV) tasks. However, existing CV models often suffer a significant decrease in accuracy when confronted with out-of-distribution (OOD) data. In contrast to these DNN models, human can maintain a consistently low error rate when facing OOD scenes, partly attributed to the rich prior cognitive knowledge stored in the human brain. Previous OOD generalization researches only focus on the single modal, overlooking the advantages of multimodal learning method. In this paper, we utilize the multimodal learning method to improve the OOD generalization and propose a novel Brain-machine Fusion Learning (BMFL) framework. We adopt the cross-attention mechanism to fuse the visual knowledge from CV model and prior cognitive knowledge from the human brain. Specially, we employ a pre-trained visual neural encoding model to predict the functional Magnetic Resonance Imaging (fMRI) from visual features which eliminates the need for the fMRI data collection and pre-processing, effectively reduces the workload associated with conventional BMFL methods. Furthermore, we construct a brain transformer to facilitate the extraction of knowledge inside the fMRI data. Moreover, we introduce the Pearson correlation coefficient maximization regularization method into the training process, which improves the fusion capability with better constrains. Our model outperforms the DINOv2 and baseline models on the ImageNet-1k validation dataset as well as on carefully curated OOD datasets, showcasing its superior performance in diverse scenarios.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"119 ","pages":"Article 103021"},"PeriodicalIF":15.5000,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525000946","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Deep Neural Networks (DNNs) have demonstrated exceptional recognition capabilities in traditional computer vision (CV) tasks. However, existing CV models often suffer a significant drop in accuracy when confronted with out-of-distribution (OOD) data. In contrast to these DNN models, humans can maintain a consistently low error rate when facing OOD scenes, partly owing to the rich prior cognitive knowledge stored in the human brain. Previous OOD generalization research has focused only on a single modality, overlooking the advantages of multimodal learning. In this paper, we exploit multimodal learning to improve OOD generalization and propose a novel Brain-machine Fusion Learning (BMFL) framework. We adopt a cross-attention mechanism to fuse the visual knowledge from a CV model with prior cognitive knowledge from the human brain. Specifically, we employ a pre-trained visual neural encoding model to predict functional Magnetic Resonance Imaging (fMRI) signals from visual features, which eliminates the need for fMRI data collection and pre-processing and effectively reduces the workload associated with conventional BMFL methods. Furthermore, we construct a brain transformer to facilitate the extraction of knowledge from the fMRI data. Moreover, we introduce a Pearson correlation coefficient maximization regularization method into the training process, which improves the fusion capability through better constraints. Our model outperforms DINOv2 and baseline models on the ImageNet-1k validation set as well as on carefully curated OOD datasets, showcasing its superior performance in diverse scenarios.
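The abstract outlines a concrete pipeline: a frozen neural encoding model maps CV features to predicted fMRI, a brain transformer extracts knowledge from the predicted responses, cross-attention fuses the two streams, and a Pearson-correlation term regularizes training. The sketch below shows one plausible way to wire these pieces together in PyTorch. It is a minimal illustration, not the authors' released code: the names (`BrainMachineFusion`, `pearson_regularizer`), all dimensions, the two-layer brain transformer, and the choice of which signals the Pearson term correlates are assumptions.

```python
import torch
import torch.nn as nn


class BrainMachineFusion(nn.Module):
    """Cross-attention fusion of CV features with predicted fMRI responses."""

    def __init__(self, feat_dim: int = 768, fmri_dim: int = 1024, num_heads: int = 8):
        super().__init__()
        # Stand-in for the pre-trained visual neural encoding model that maps
        # visual features to fMRI responses; frozen, so no real fMRI needs to
        # be collected at training time (hypothetical single linear layer).
        self.neural_encoder = nn.Linear(feat_dim, fmri_dim)
        for p in self.neural_encoder.parameters():
            p.requires_grad = False
        # Project predicted fMRI back to the token dimension, then run the
        # "brain transformer" to extract knowledge from the predicted signals.
        self.fmri_proj = nn.Linear(fmri_dim, feat_dim)
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=num_heads,
                                           batch_first=True)
        self.brain_transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Cross-attention: visual tokens query the brain tokens.
        self.cross_attn = nn.MultiheadAttention(embed_dim=feat_dim,
                                                num_heads=num_heads,
                                                batch_first=True)

    def forward(self, visual_tokens: torch.Tensor):
        # visual_tokens: (batch, seq, feat_dim), e.g. from a DINOv2 backbone.
        pred_fmri = self.neural_encoder(visual_tokens)        # (B, S, fmri_dim)
        brain_tokens = self.brain_transformer(self.fmri_proj(pred_fmri))
        fused, _ = self.cross_attn(query=visual_tokens,
                                   key=brain_tokens, value=brain_tokens)
        return fused, pred_fmri


def pearson_regularizer(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-8):
    """Returns 1 - Pearson r between two batches of signals, so minimizing
    this loss maximizes the correlation, matching the stated objective."""
    x = x.flatten(1)
    y = y.flatten(1)
    x = x - x.mean(dim=1, keepdim=True)
    y = y - y.mean(dim=1, keepdim=True)
    r = (x * y).sum(dim=1) / (x.norm(dim=1) * y.norm(dim=1) + eps)
    return (1.0 - r).mean()
```

In training, a term such as `pearson_regularizer(pred_fmri, reference_fmri)` would be added to the classification loss with a weighting coefficient; the abstract does not specify which quantities the Pearson term correlates, so that pairing is an assumption here.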
Source journal
Information Fusion (Engineering & Technology - Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Annual publications: 161
Review time: 7.9 months
About the journal: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses as well as those demonstrating their application to real-world problems are welcome.
Latest articles from this journal
- An Interpretable Deep Unfolding Framework for Multi-view Representation Learning
- The Effect of Data Poisoning on Counterfactual Explanations
- FusionBev: LiDAR and 4D Radar Fusion for 3D Object Detection
- MMFN: A Novel Multi-View Multimodal Fusion Network for Pediatric Intestinal Obstruction Recognition
- FDA-CAPMA: Federated Domain Adaptation with Co-Activation Pattern and Multimodal Mamba for fMRI Depression Detection