A unified and constructive framework for the universality of neural networks

IF 1.4 | CAS Division 4 (Mathematics) | JCR Q2 (MATHEMATICS, APPLIED) | IMA Journal of Applied Mathematics | Pub Date: 2023-11-11 | DOI: 10.1093/imamat/hxad032
Tan Bui-Thanh
{"title":"为神经网络的普遍性提供了一个统一的、建设性的框架","authors":"Tan Bui-Thanh","doi":"10.1093/imamat/hxad032","DOIUrl":null,"url":null,"abstract":"Abstract One of the reasons why many neural networks are capable of replicating complicated tasks or functions is their universal approximation property. Though the past few decades have seen tremendous advances in theories of neural networks, a single constructive and elementary framework for neural network universality remains unavailable. This paper is an effort to provide a unified and constructive framework for the universality of a large class of activation functions including most of the existing ones. At the heart of the framework is the concept of neural network approximate identity (nAI). The main result is: any nAI activation function is universal in the space of continuous functions on compacta. It turns out that most of the existing activation functions are nAI, and thus universal. The framework induces several advantages over the contemporary counterparts. First, it is constructive with elementary means from functional analysis, probability theory, and numerical analysis. Second, it is one of the first unified and constructive attempts that is valid for most of the existing activation functions. Third, it provides new proofs for most activation functions. Fourth, for a given activation and error tolerance, the framework provides precisely the architecture of the corresponding one-hidden neural network with a predetermined number of neurons and the values of weights/biases. Fifth, the framework allows us to abstractly present the first universal approximation with a favorable non-asymptotic rate. Sixth, our framework also provides insights into the developments, and hence providing constructive derivations, of some of the existing approaches.","PeriodicalId":56297,"journal":{"name":"IMA Journal of Applied Mathematics","volume":null,"pages":null},"PeriodicalIF":1.4000,"publicationDate":"2023-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A unified and constructive framework for the universality of neural networks\",\"authors\":\"Tan Bui-Thanh\",\"doi\":\"10.1093/imamat/hxad032\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract One of the reasons why many neural networks are capable of replicating complicated tasks or functions is their universal approximation property. Though the past few decades have seen tremendous advances in theories of neural networks, a single constructive and elementary framework for neural network universality remains unavailable. This paper is an effort to provide a unified and constructive framework for the universality of a large class of activation functions including most of the existing ones. At the heart of the framework is the concept of neural network approximate identity (nAI). The main result is: any nAI activation function is universal in the space of continuous functions on compacta. It turns out that most of the existing activation functions are nAI, and thus universal. The framework induces several advantages over the contemporary counterparts. First, it is constructive with elementary means from functional analysis, probability theory, and numerical analysis. Second, it is one of the first unified and constructive attempts that is valid for most of the existing activation functions. Third, it provides new proofs for most activation functions. 
Fourth, for a given activation and error tolerance, the framework provides precisely the architecture of the corresponding one-hidden neural network with a predetermined number of neurons and the values of weights/biases. Fifth, the framework allows us to abstractly present the first universal approximation with a favorable non-asymptotic rate. Sixth, our framework also provides insights into the developments, and hence providing constructive derivations, of some of the existing approaches.\",\"PeriodicalId\":56297,\"journal\":{\"name\":\"IMA Journal of Applied Mathematics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2023-11-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IMA Journal of Applied Mathematics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1093/imamat/hxad032\",\"RegionNum\":4,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IMA Journal of Applied Mathematics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/imamat/hxad032","RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0

Abstract

One of the reasons why many neural networks are capable of replicating complicated tasks or functions is their universal approximation property. Though the past few decades have seen tremendous advances in theories of neural networks, a single constructive and elementary framework for neural network universality remains unavailable. This paper is an effort to provide a unified and constructive framework for the universality of a large class of activation functions, including most of the existing ones. At the heart of the framework is the concept of the neural network approximate identity (nAI). The main result is that any nAI activation function is universal in the space of continuous functions on compacta. It turns out that most of the existing activation functions are nAI, and thus universal. The framework has several advantages over contemporary counterparts. First, it is constructive, using elementary means from functional analysis, probability theory, and numerical analysis. Second, it is one of the first unified and constructive attempts valid for most of the existing activation functions. Third, it provides new proofs for most activation functions. Fourth, for a given activation function and error tolerance, the framework yields precisely the architecture of the corresponding one-hidden-layer network, with a predetermined number of neurons and the values of the weights and biases. Fifth, the framework allows us to abstractly present the first universal approximation with a favorable non-asymptotic rate. Sixth, the framework also provides insights into, and hence constructive derivations of, some of the existing approaches.
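This page does not reproduce the paper's definition of an nAI, but the classical approximate-identity route to universality, which the abstract's constructive claims echo, can be sketched: convolving a continuous target $f$ with a shrinking unit-mass kernel recovers $f$ uniformly on compacta, and discretizing the convolution turns each quadrature node into hidden neurons. A textbook-style sketch, not necessarily the paper's exact construction:

$$
(\phi_\delta * f)(x) \;=\; \int_{\mathbb{R}} f(y)\,\phi_\delta(x-y)\,dy \;\approx\; \sum_{j=1}^{n} f(y_j)\,\Delta y\;\phi_\delta(x-y_j),
\qquad \phi_\delta(u) = \frac{1}{\delta}\,\phi\!\left(\frac{u}{\delta}\right), \quad \int_{\mathbb{R}}\phi(u)\,du = 1.
$$

If the kernel is itself built from the activation, for instance $\phi(u) = \tfrac{1}{2}\left[\sigma(u+1)-\sigma(u-1)\right]$ for the logistic sigmoid $\sigma$ (this difference integrates to one over $\mathbb{R}$), then each summand is a pair of hidden neurons, so the network's width, weights, and biases are all explicit, in the spirit of the abstract's fourth claim. A minimal numerical sketch of this construction follows, assuming a one-dimensional target on $[0,1]$; every function name and parameter value below is an illustrative choice, not taken from the paper:

```python
import numpy as np

def f(x):
    """Example continuous target on [0, 1] (illustrative choice)."""
    return np.sin(2 * np.pi * x) + 0.5 * x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def build_network(n_nodes=200, delta=0.02):
    """Explicit one-hidden-layer sigmoid network N(x) = a . sigmoid(W*x + b),
    obtained by discretizing the convolution f * phi_delta; no training."""
    centers = np.linspace(0.0, 1.0, n_nodes)      # quadrature nodes y_j
    dy = centers[1] - centers[0]                  # quadrature weight
    # phi(u) = (sigmoid(u+1) - sigmoid(u-1)) / 2 has unit integral, so each
    # node j contributes two neurons, both with input weight 1/delta.
    W = np.full(2 * n_nodes, 1.0 / delta)
    b = np.concatenate([-centers / delta + 1.0,   # biases of the "+1" neurons
                        -centers / delta - 1.0])  # biases of the "-1" neurons
    coef = f(centers) * dy / (2.0 * delta)        # f(y_j) * dy * kernel scale
    a = np.concatenate([coef, -coef])             # output-layer weights
    return W, b, a

def network(x, W, b, a):
    return sigmoid(np.outer(x, W) + b) @ a        # shape: (len(x),)

W, b, a = build_network()
xs = np.linspace(0.1, 0.9, 500)                   # stay away from the boundary
print("max error:", np.max(np.abs(network(xs, W, b, a) - f(xs))))
```

Tightening the error in such a construction means balancing three explicit terms: the mollification bias (controlled by $\delta$), the quadrature error (controlled by $\Delta y / \delta$), and kernel truncation near the domain boundary. Per the abstract, the paper's framework packages this kind of bookkeeping abstractly, for any nAI activation, into a non-asymptotic rate.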
Source journal: IMA Journal of Applied Mathematics
CiteScore: 2.30
Self-citation rate: 8.30%
Articles per year: 32
Review time: 24 months
About the journal: The IMA Journal of Applied Mathematics is a direct successor of the Journal of the Institute of Mathematics and its Applications, which was started in 1965. It is an interdisciplinary journal that publishes research on mathematics arising in the physical sciences and engineering, as well as suitable articles in the life sciences, social sciences, and finance. Submissions should address interesting and challenging mathematical problems arising in applications. A good balance between the development of the application(s) and the analysis is expected. Papers that either use established methods to address solved problems or present analysis in the absence of applications will not be considered. The journal welcomes submissions in many research areas. Examples are: continuum mechanics, materials science and elasticity, including boundary layer theory, combustion, complex flows and soft matter, electrohydrodynamics and magnetohydrodynamics, geophysical flows, granular flows, interfacial and free surface flows, vortex dynamics; elasticity theory; linear and nonlinear wave propagation, nonlinear optics and photonics; inverse problems; applied dynamical systems and nonlinear systems; mathematical physics; stochastic differential equations and stochastic dynamics; network science; industrial applications.
Latest articles in this journal:
The impact of confinement on the deformation of an elastic particle under axisymmetric tube flow
On the P-Irreducibility of Quintic Positive Polynomials
An explicit Maclaurin series solution to non-autonomous and non-homogeneous evolution equation, Omega Calculus, and associated applications
Stability analysis of viscous multi-layer shear flows with interfacial slip
An iteration method to study nonlinear wave propagation for a non-Green elastic 1D bar