Pub Date: 2023-10-16    DOI: 10.1007/s10472-023-09901-x
Aysu Ismayilova, Muhammad Ismayilov
In this paper we consider a new class of RBF (Radial Basis Function) neural networks, in which smoothing factors are replaced with shifts. We prove, under certain conditions on the activation function, that these networks are capable of approximating any continuous multivariate function on any compact subset of the d-dimensional Euclidean space. For RBF networks with finitely many fixed centroids, we describe conditions guaranteeing approximation with arbitrary precision.
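To make the construction concrete, below is a minimal NumPy sketch of a single-hidden-layer RBF-type network in which each unit applies the activation to a shifted distance rather than a smoothed one. The specific functional form sigma(||x - c_i|| - theta_i), and all function and variable names, are illustrative assumptions and are not taken from the paper itself.

```python
import numpy as np

def shifted_rbf_network(x, centroids, shifts, weights, activation=np.tanh):
    """Evaluate an (assumed) RBF-type network with shifts:
        N(x) = sum_i w_i * sigma(||x - c_i|| - theta_i),
    i.e., the usual smoothing factor inside sigma(||x - c_i|| / eps_i)
    is replaced by an additive shift theta_i. Illustrative sketch only.

    x         : (d,) input point
    centroids : (n, d) array of centroids c_i
    shifts    : (n,) array of shifts theta_i
    weights   : (n,) array of outer weights w_i
    """
    distances = np.linalg.norm(centroids - x, axis=1)  # ||x - c_i||
    hidden = activation(distances - shifts)            # sigma(||x - c_i|| - theta_i)
    return float(weights @ hidden)

# Example: evaluate a small network with 5 fixed centroids on a 2-D input.
rng = np.random.default_rng(0)
c = rng.normal(size=(5, 2))     # fixed centroids in R^2
theta = rng.normal(size=5)      # shifts (in place of smoothing factors)
w = rng.normal(size=5)          # outer weights
print(shifted_rbf_network(np.array([0.3, -0.7]), c, theta, w))
```

In this sketch the shifts theta_i and weights w_i would be the trainable parameters; the paper's universal approximation result concerns what functions such networks can represent in the limit, not how they are trained.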
{"title":"On the universal approximation property of radial basis function neural networks","authors":"Aysu Ismayilova, Muhammad Ismayilov","doi":"10.1007/s10472-023-09901-x","DOIUrl":"10.1007/s10472-023-09901-x","url":null,"abstract":"<div><p>In this paper we consider a new class of RBF (Radial Basis Function) neural networks, in which smoothing factors are replaced with shifts. We prove under certain conditions on the activation function that these networks are capable of approximating any continuous multivariate function on any compact subset of the <i>d</i>-dimensional Euclidean space. For RBF networks with finitely many fixed centroids we describe conditions guaranteeing approximation with arbitrary precision.</p></div>","PeriodicalId":7971,"journal":{"name":"Annals of Mathematics and Artificial Intelligence","volume":"92 3","pages":"691 - 701"},"PeriodicalIF":1.2,"publicationDate":"2023-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136113626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-10    DOI: 10.1007/s10472-023-09894-7
Bruno Buchberger
In this note, I present my personal view on the interaction of three areas: Automated Programming, Symbolic Computation, and Machine Learning. Programming is the activity of finding a (hopefully) correct program (algorithm) for a given problem. Programming is central to automation in all areas and is considered one of the most creative human activities. However, already very early in the history of programming, people started to "jump to the meta-level" of programming, i.e., to develop procedures that automate, or semi-automate, various aspects or parts of the programming process. This area goes by various names, such as "Automated Programming" and "Automated Algorithm Synthesis". Developing compilers can be considered an early example of a problem in automated programming. Automated reasoners that prove the correctness of programs with respect to a specification are a more advanced example. ChatGPT producing (amazingly good) programs from natural-language problem specifications is a recent example. Programming tends to become the most important activity as the level of technological sophistication increases; automating programming is therefore perhaps the most exciting and relevant technological endeavor today, and it will have an enormous impact on the global job market in the software industry. Roughly, I see two main approaches to automated programming: