Breaking Neural Network Scaling Laws with Modularity

Akhilan Boopathy, Sunshine Jiang, William Yue, Jaedong Hwang, Abhiram Iyer, Ila Fiete
arXiv:2409.05780 · arXiv - STAT - Machine Learning · 2024-09-09
Citations: 0

Abstract

Modular neural networks outperform nonmodular neural networks on tasks ranging from visual question answering to robotics. These performance improvements are thought to be due to modular networks' superior ability to model the compositional and combinatorial structure of real-world problems. However, a theoretical explanation of how modularity improves generalizability, and how to leverage task modularity while training networks, remains elusive. Using recent theoretical progress in explaining neural network generalization, we investigate how the amount of training data required to generalize on a task varies with the intrinsic dimensionality of a task's input. We show theoretically that when applied to modularly structured tasks, while nonmodular networks require a number of samples exponential in task dimensionality, modular networks' sample complexity is independent of task dimensionality: modular networks can generalize in high dimensions. We then develop a novel learning rule for modular networks to exploit this advantage and empirically show the improved generalization of the rule, both in- and out-of-distribution, on high-dimensional, modular tasks.
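The abstract's central claim — that sample complexity grows exponentially with input dimension for generic learners but stays dimension-independent when the task's modular (additive) structure is exploited — can be illustrated with a toy experiment. This sketch is ours, not the paper's construction: the target is a sum of 1-D modules, the "nonmodular" learner is a 1-nearest-neighbour regressor over the full d-dimensional input, and the "modular" learner fits each coordinate's contribution by marginal kernel smoothing (valid here because the coordinates are independent and each module is mean-zero). All function names and parameter values are illustrative assumptions.

```python
import math
import random

def target(x):
    # Modular ground truth: a sum of independent 1-D modules sin(x_k).
    return sum(math.sin(xi) for xi in x)

def make_data(n, d, rng):
    xs = [[rng.uniform(-math.pi, math.pi) for _ in range(d)] for _ in range(n)]
    return xs, [target(x) for x in xs]

def knn_predict(xs, ys, q):
    # Generic (nonmodular) learner: 1-NN in the full d-dim input space.
    # Its error is governed by nearest-neighbour distances, which shrink
    # only as n**(-1/d) -- the curse of dimensionality.
    i = min(range(len(xs)),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(xs[j], q)))
    return ys[i]

def additive_predict(xs, ys, q, bw=0.5):
    # Modular learner: Nadaraya-Watson smoothing of y against each
    # coordinate separately, then summed. Because the coordinates are
    # independent and E[sin(x_j)] = 0, E[y | x_k] = sin(x_k), so each
    # 1-D regression recovers one module: d easy 1-D problems instead
    # of one hard d-dimensional problem.
    pred = 0.0
    for k in range(len(q)):
        w = [math.exp(-((x[k] - q[k]) / bw) ** 2) for x in xs]
        pred += sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    return pred

def mse(predict, xs, ys, queries):
    return sum((predict(xs, ys, q) - target(q)) ** 2 for q in queries) / len(queries)

rng = random.Random(0)
d, n = 8, 500
xs, ys = make_data(n, d, rng)
queries, _ = make_data(200, d, rng)
nn_mse = mse(knn_predict, xs, ys, queries)
add_mse = mse(additive_predict, xs, ys, queries)
print("1-NN (nonmodular) MSE:", nn_mse)
print("additive (modular) MSE:", add_mse)
```

With the same 500 samples in 8 dimensions, the additive learner attains a substantially lower test MSE than 1-NN, because its effective problem dimension is 1 regardless of d. This only gestures at the paper's result; the paper proves the scaling separation for neural networks and proposes a learning rule to exploit it, neither of which this sketch implements.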