A large-scale examination of inductive biases shaping high-level visual representation in brains and machines

Nature Communications | IF 14.7 | CAS Tier 1, Multidisciplinary Journal | JCR Q1, Multidisciplinary Sciences | Publication date: 2024-10-30 | DOI: 10.1038/s41467-024-53147-y
Colin Conwell, Jacob S. Prince, Kendrick N. Kay, George A. Alvarez, Talia Konkle
{"title":"A large-scale examination of inductive biases shaping high-level visual representation in brains and machines","authors":"Colin Conwell, Jacob S. Prince, Kendrick N. Kay, George A. Alvarez, Talia Konkle","doi":"10.1038/s41467-024-53147-y","DOIUrl":null,"url":null,"abstract":"<p>The rapid release of high-performing computer vision models offers new potential to study the impact of different inductive biases on the emergent brain alignment of learned representations. Here, we perform controlled comparisons among a curated set of 224 diverse models to test the impact of specific model properties on visual brain predictivity – a process requiring over 1.8 billion regressions and 50.3 thousand representational similarity analyses. We find that models with qualitatively different architectures (e.g. CNNs versus Transformers) and task objectives (e.g. purely visual contrastive learning versus vision- language alignment) achieve near equivalent brain predictivity, when other factors are held constant. Instead, variation across visual training diets yields the largest, most consistent effect on brain predictivity. Many models achieve similarly high brain predictivity, despite clear variation in their underlying representations – suggesting that standard methods used to link models to brains may be too flexible. Broadly, these findings challenge common assumptions about the factors underlying emergent brain alignment, and outline how we can leverage controlled model comparison to probe the common computational principles underlying biological and artificial visual systems.</p>","PeriodicalId":19066,"journal":{"name":"Nature Communications","volume":null,"pages":null},"PeriodicalIF":14.7000,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Nature Communications","FirstCategoryId":"103","ListUrlMain":"https://doi.org/10.1038/s41467-024-53147-y","RegionNum":1,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract

The rapid release of high-performing computer vision models offers new potential to study the impact of different inductive biases on the emergent brain alignment of learned representations. Here, we perform controlled comparisons among a curated set of 224 diverse models to test the impact of specific model properties on visual brain predictivity – a process requiring over 1.8 billion regressions and 50.3 thousand representational similarity analyses. We find that models with qualitatively different architectures (e.g. CNNs versus Transformers) and task objectives (e.g. purely visual contrastive learning versus vision-language alignment) achieve near-equivalent brain predictivity when other factors are held constant. Instead, variation across visual training diets yields the largest, most consistent effect on brain predictivity. Many models achieve similarly high brain predictivity, despite clear variation in their underlying representations – suggesting that standard methods used to link models to brains may be too flexible. Broadly, these findings challenge common assumptions about the factors underlying emergent brain alignment, and outline how we can leverage controlled model comparison to probe the common computational principles underlying biological and artificial visual systems.
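For readers unfamiliar with the two model-to-brain linking methods named in the abstract, the sketch below illustrates them at toy scale. It is not the authors' actual pipeline: the array shapes, the single train/test split, the ridge penalty, and Pearson-r scoring are all illustrative assumptions, and random data stand in for real model activations and fMRI responses.

```python
# Minimal sketch (assumptions, not the authors' method) of regression-based
# "brain predictivity" and representational similarity analysis (RSA).
import numpy as np

rng = np.random.default_rng(0)
n_images, n_features, n_voxels = 200, 512, 100
model_feats = rng.standard_normal((n_images, n_features))  # model activations per image
brain_resp = rng.standard_normal((n_images, n_voxels))     # voxel responses per image

def columnwise_pearson(a, b):
    """Pearson correlation between matching columns of a and b."""
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

# Regression-based predictivity: fit ridge weights on a training split of
# images, predict held-out voxel responses, and score with Pearson r.
train, test = np.arange(150), np.arange(150, 200)
lam = 1.0  # assumed regularization strength
X, Y = model_feats[train], brain_resp[train]
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)
predictivity = columnwise_pearson(model_feats[test] @ W, brain_resp[test]).mean()

# RSA: build each system's image-by-image representational dissimilarity
# matrix (RDM), then correlate their upper triangles.
def rdm(responses):
    return 1.0 - np.corrcoef(responses)  # 1 - Pearson r between image pairs

iu = np.triu_indices(n_images, k=1)
rsa_score = np.corrcoef(rdm(model_feats)[iu], rdm(brain_resp)[iu])[0, 1]

print(f"mean voxel predictivity: {predictivity:.3f}  RSA score: {rsa_score:.3f}")
```

In the paper, comparisons of this kind are repeated across hundreds of models and many brain regions, which is what produces the regression and RSA counts quoted in the abstract.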

Source Journal
Nature Communications
CiteScore: 24.90
Self-citation rate: 2.40%
Annual publication volume: 6928
Average review time: 3.7 months
About the journal: Nature Communications, an open-access journal, publishes high-quality research spanning all areas of the natural sciences. Papers featured in the journal showcase significant advances relevant to specialists in each respective field. With a 2-year impact factor of 16.6 (2022) and a median time of 8 days from submission to the first editorial decision, Nature Communications is committed to rapid dissemination of research findings. As a multidisciplinary journal, it welcomes contributions from biological, health, physical, chemical, Earth, social, mathematical, applied, and engineering sciences, aiming to highlight important breakthroughs within each domain.