Universal dimensions of visual representation

Zirui Chen, Michael F. Bonner
DOI: arxiv-2408.12804
Journal: arXiv - QuanBio - Neurons and Cognition
Publication date: 2024-08-23
Publication type: Journal Article
Citation count: 0

Abstract

Do neural network models of vision learn brain-aligned representations because they share architectural constraints and task objectives with biological vision or because they learn universal features of natural image processing? We characterized the universality of hundreds of thousands of representational dimensions from visual neural networks with varied construction. We found that networks with varied architectures and task objectives learn to represent natural images using a shared set of latent dimensions, despite appearing highly distinct at a surface level. Next, by comparing these networks with human brain representations measured with fMRI, we found that the most brain-aligned representations in neural networks are those that are universal and independent of a network's specific characteristics. Remarkably, each network can be reduced to fewer than ten of its most universal dimensions with little impact on its representational similarity to the human brain. These results suggest that the underlying similarities between artificial and biological vision are primarily governed by a core set of universal image representations that are convergently learned by diverse systems.