Title: A unified and constructive framework for the universality of neural networks
Author: Tan Bui-Thanh
Journal: IMA Journal of Applied Mathematics (JCR Q2, Mathematics, Applied; Impact Factor 1.4)
DOI: 10.1093/imamat/hxad032 (https://doi.org/10.1093/imamat/hxad032)
Publication date: 2023-11-11
Publication type: Journal Article
Citations: 0
Abstract
One of the reasons why many neural networks are capable of replicating complicated tasks or functions is their universal approximation property. Though the past few decades have seen tremendous advances in the theory of neural networks, a single constructive and elementary framework for neural network universality has remained unavailable. This paper is an effort to provide a unified and constructive framework for the universality of a large class of activation functions, including most of the existing ones. At the heart of the framework is the concept of neural network approximate identity (nAI). The main result is: any nAI activation function is universal in the space of continuous functions on compacta. It turns out that most of the existing activation functions are nAI, and thus universal. The framework offers several advantages over contemporary counterparts. First, it is constructive, using elementary means from functional analysis, probability theory, and numerical analysis. Second, it is one of the first unified and constructive attempts that is valid for most of the existing activation functions. Third, it provides new proofs for most activation functions. Fourth, for a given activation function and error tolerance, the framework specifies precisely the architecture of the corresponding one-hidden-layer neural network, with a predetermined number of neurons and the values of the weights and biases. Fifth, the framework allows us to abstractly present the first universal approximation with a favorable non-asymptotic rate. Sixth, the framework provides insights into, and hence constructive derivations of, some of the existing approaches.
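The abstract does not spell out the nAI construction itself, but the flavor of a constructive universality argument can be illustrated with a standard, much simpler scheme: a one-hidden-layer ReLU network whose number of neurons and whose weight/bias values are all written down in closed form so that the network interpolates the target function at uniform nodes. This is a generic sketch under that assumption, not the paper's nAI-based construction; the function names and the choice of ReLU are illustrative.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def build_relu_network(f, n):
    """Constructively build a one-hidden-layer ReLU network with n neurons
    that coincides with the piecewise-linear interpolant of f at the n+1
    uniform nodes on [0, 1]. Returns (hidden biases, output weights, output bias)."""
    nodes = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    slopes = (f(nodes[1:]) - f(nodes[:-1])) / h    # slope of f's interpolant on each subinterval
    coeffs = np.empty(n)
    coeffs[0] = slopes[0]                          # initial slope from the first neuron
    coeffs[1:] = slopes[1:] - slopes[:-1]          # each interior neuron adds a kink (slope change)
    return nodes[:-1], coeffs, f(nodes[0])

def evaluate(network, x):
    """Evaluate the network: output_bias + sum_i w_i * relu(x - b_i)."""
    biases, weights, bias0 = network
    return bias0 + relu(x[:, None] - biases[None, :]) @ weights

# Given a tolerance, n can be chosen a priori from the modulus of continuity of f;
# for smooth f the uniform error decays like O(1/n^2).
net = build_relu_network(np.sin, 64)
x = np.linspace(0.0, 1.0, 1000)
print(np.max(np.abs(evaluate(net, x) - np.sin(x))))
```

The point the sketch shares with the paper's fourth advantage is that nothing is trained: the architecture (n neurons), the biases, and the output weights are all predetermined from the target function and the desired tolerance.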
Journal description:
The IMA Journal of Applied Mathematics is a direct successor of the Journal of the Institute of Mathematics and its Applications, which was started in 1965. It is an interdisciplinary journal that publishes research on mathematics arising in the physical sciences and engineering, as well as suitable articles in the life sciences, social sciences, and finance. Submissions should address interesting and challenging mathematical problems arising in applications. A good balance between the development of the application(s) and the analysis is expected. Papers that either use established methods to address solved problems or that present analysis in the absence of applications will not be considered.
The journal welcomes submissions in many research areas. Examples are: continuum mechanics, materials science and elasticity (including boundary layer theory, combustion, complex flows and soft matter, electrohydrodynamics and magnetohydrodynamics, geophysical flows, granular flows, interfacial and free-surface flows, and vortex dynamics); elasticity theory; linear and nonlinear wave propagation; nonlinear optics and photonics; inverse problems; applied dynamical systems and nonlinear systems; mathematical physics; stochastic differential equations and stochastic dynamics; network science; and industrial applications.