{"title":"nmODE: neural memory ordinary differential equation","authors":"Zhang Yi","doi":"10.1007/s10462-023-10496-2","DOIUrl":null,"url":null,"abstract":"<div><p>Brain neural networks are regarded as dynamical systems in neural science, in which memories are interpreted as attractors of the systems. Mathematically, ordinary differential equations (ODEs) can be utilized to describe dynamical systems. Any ODE that is employed to describe the dynamics of a neural network can be called a neuralODE. Inspired by rethinking the nonlinear representation ability of existing artificial neural networks together with the functions of columns in the neocortex, this paper proposes a theory of memory-based neuralODE, which is composed of two novel artificial neural network models: nmODE and <span>\\(\\epsilon\\)</span>-net, and two learning algorithms: nmLA and <span>\\(\\epsilon\\)</span>-LA. The nmODE (neural memory Ordinary Differential Equation) is designed with a special structure that separates learning neurons from memory neurons, making its dynamics clear. Given any external input, the nmODE possesses the global attractor property and is thus embedded with a memory mechanism. The nmODE establishes a nonlinear mapping from the external input to its associated attractor and does not have the problem of learning features homeomorphic to the input data space, as occurs frequently in most existing neuralODEs. The nmLA (neural memory Learning Algorithm) is developed by proposing an interesting three-dimensional inverse ODE (invODE) and has advantages in memory and parameter efficiency. The proposed <span>\\(\\epsilon\\)</span>-net is a discrete version of the nmODE, which is particularly feasible for digital computing. The proposed <span>\\(\\epsilon\\)</span>-LA (<span>\\(\\epsilon\\)</span> learning algorithm) requires no prior knowledge of the number of network layers. Both nmLA and <span>\\(\\epsilon\\)</span>-LA have no problem with gradient vanishing. Experimental results show that the proposed theory is comparable to state-of-the-art methods.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"56 12","pages":"14403 - 14438"},"PeriodicalIF":10.7000,"publicationDate":"2023-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-023-10496-2.pdf","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-023-10496-2","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 1
Abstract
Brain neural networks are regarded in neuroscience as dynamical systems, in which memories are interpreted as attractors of the systems. Mathematically, dynamical systems can be described by ordinary differential equations (ODEs), and any ODE employed to describe the dynamics of a neural network can be called a neuralODE. Inspired by rethinking the nonlinear representation ability of existing artificial neural networks together with the functions of columns in the neocortex, this paper proposes a theory of memory-based neuralODEs, composed of two novel artificial neural network models, nmODE and \(\epsilon\)-net, and two learning algorithms, nmLA and \(\epsilon\)-LA. The nmODE (neural memory Ordinary Differential Equation) has a special structure that separates learning neurons from memory neurons, making its dynamics transparent. For any external input, the nmODE possesses the global attractor property and thus has a built-in memory mechanism. It establishes a nonlinear mapping from each external input to its associated attractor, and it avoids the problem, common in most existing neuralODEs, of only learning features homeomorphic to the input data space. The nmLA (neural memory Learning Algorithm) is built on a three-dimensional inverse ODE (invODE) and offers advantages in memory and parameter efficiency. The proposed \(\epsilon\)-net is a discrete version of the nmODE that is well suited to digital computation, and the proposed \(\epsilon\)-LA (\(\epsilon\) learning algorithm) requires no prior knowledge of the number of network layers. Neither nmLA nor \(\epsilon\)-LA suffers from vanishing gradients. Experimental results show that the proposed theory performs comparably to state-of-the-art methods.
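The abstract does not state the governing equation, but the separation of learning neurons from memory neurons can be illustrated with the dynamics commonly associated with nmODE, \(\dot{y} = -y + \sin^2(y + Wx + b)\), where the external input x enters only through the learned projection \(\gamma = Wx + b\) and the memory state y relaxes to an input-dependent attractor. The sketch below is a minimal illustration under that assumption: the specific equation, the forward-Euler integration, and the helper names (nmode_step, nmode_attractor) are choices made here for illustration, not details quoted from the abstract.

```python
import numpy as np

def nmode_step(y, gamma, dt=0.01):
    # One Euler step of the assumed memory-neuron dynamics:
    #   dy/dt = -y + sin^2(y + gamma)
    # gamma = W @ x + b is the learned projection of the external input x.
    return y + dt * (-y + np.sin(y + gamma) ** 2)

def nmode_attractor(x, W, b, steps=5000, dt=0.01):
    """Integrate the memory neurons y from y(0) = 0 toward the attractor
    associated with the external input x (hypothetical helper)."""
    gamma = W @ x + b          # learning neurons: x enters only via gamma
    y = np.zeros_like(b)       # memory neurons start at the origin
    for _ in range(steps):
        y = nmode_step(y, gamma, dt)
    return y                   # approximate attractor for this input

# Toy usage: map a 3-dimensional input to a 5-dimensional memory state.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))
b = np.zeros(5)
x = rng.normal(size=3)
y_star = nmode_attractor(x, W, b)
print(y_star)
```

Because the mapping from x to the settled state y is defined by an attractor rather than by a fixed stack of layers, the number of integration steps can be chosen at run time, which is consistent with the abstract's claim that \(\epsilon\)-LA requires no prior knowledge of the number of network layers.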
About the journal:
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.