{"title":"小波神经网络是单变量函数的渐近最优逼近器","authors":"V. Kreinovich, O. Sirisaengtaksin, S. Cabrera","doi":"10.1109/ICNN.1994.374179","DOIUrl":null,"url":null,"abstract":"Neural networks are universal approximators. For example, it has been proved (K. Hornik et al., 1989) that for every /spl epsiv/>0 an arbitrary continuous function on a compact set can be /spl epsiv/-approximated by a 3-layer neural network. This and other results prove that in principle, any function (e.g., any control) can be implemented by an appropriate neural network. But why neural networks? In addition to neural networks, an arbitrary continuous function can be also approximated by polynomials, etc. What is so special about neural networks that make them preferable approximators? To compare different approximators, one can compare the number of bits that we must store in order to be able to reconstruct a function with a given precision /spl epsiv/. For neural networks, we must store weights and thresholds. For polynomials, we must store coefficients, etc. We consider functions of one variable, and show that for some special neurons (corresponding to wavelets), neural networks are optimal approximators in the sense that they require (asymptotically) the smallest possible number of bits.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1994-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"41","resultStr":"{\"title\":\"Wavelet neural networks are asymptotically optimal approximators for functions of one variable\",\"authors\":\"V. Kreinovich, O. Sirisaengtaksin, S. Cabrera\",\"doi\":\"10.1109/ICNN.1994.374179\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Neural networks are universal approximators. For example, it has been proved (K. 
Hornik et al., 1989) that for every /spl epsiv/>0 an arbitrary continuous function on a compact set can be /spl epsiv/-approximated by a 3-layer neural network. This and other results prove that in principle, any function (e.g., any control) can be implemented by an appropriate neural network. But why neural networks? In addition to neural networks, an arbitrary continuous function can be also approximated by polynomials, etc. What is so special about neural networks that make them preferable approximators? To compare different approximators, one can compare the number of bits that we must store in order to be able to reconstruct a function with a given precision /spl epsiv/. For neural networks, we must store weights and thresholds. For polynomials, we must store coefficients, etc. We consider functions of one variable, and show that for some special neurons (corresponding to wavelets), neural networks are optimal approximators in the sense that they require (asymptotically) the smallest possible number of bits.<<ETX>>\",\"PeriodicalId\":209128,\"journal\":{\"name\":\"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)\",\"volume\":\"23 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1994-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"41\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICNN.1994.374179\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of 1994 IEEE International Conference on Neural 
Networks (ICNN'94)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICNN.1994.374179","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 41
摘要
神经网络是通用逼近器。例如,已经证明(K. Hornik et al., 1989)对于每一个/spl epsiv/>0,紧集上的任意连续函数都可以用三层神经网络逼近/spl epsiv/-。这一结果和其他结果证明,原则上,任何函数(例如,任何控制)都可以由适当的神经网络实现。但为什么是神经网络呢?除了神经网络,任意连续函数也可以用多项式等逼近。神经网络有什么特别之处使其成为更好的近似器?为了比较不同的近似值,可以比较我们必须存储的比特数,以便能够以给定的精度/spl epsiv/重建函数。对于神经网络,我们必须存储权值和阈值。对于多项式,我们必须存储系数,等等。我们考虑单变量的函数,并表明对于一些特殊的神经元(对应于小波),神经网络是最优逼近器,因为它们(渐近地)需要尽可能少的比特数。
Wavelet neural networks are asymptotically optimal approximators for functions of one variable
Neural networks are universal approximators. For example, it has been proved (K. Hornik et al., 1989) that for every ε > 0, an arbitrary continuous function on a compact set can be ε-approximated by a 3-layer neural network. This and other results prove that, in principle, any function (e.g., any control) can be implemented by an appropriate neural network. But why neural networks? Besides neural networks, an arbitrary continuous function can also be approximated by polynomials, etc. What is so special about neural networks that makes them preferable approximators? To compare different approximators, one can compare the number of bits that must be stored in order to reconstruct a function with a given precision ε. For neural networks, we must store weights and thresholds; for polynomials, coefficients; and so on. We consider functions of one variable and show that, for some special neurons (corresponding to wavelets), neural networks are optimal approximators in the sense that they require (asymptotically) the smallest possible number of bits.
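The comparison criterion in the abstract — count how much data each approximation scheme must store to reconstruct a function to precision ε — can be illustrated with a toy sketch. The snippet below is not from the paper; it merely shows the counting idea for one family of approximators (truncated Chebyshev polynomial expansions of a smooth one-variable function), where the stored data are the coefficients. The test function `f` and the tolerances are arbitrary choices for illustration.

```python
import numpy as np

def coeffs_needed(f, eps, max_deg=60, n_samples=2001):
    """Smallest number of Chebyshev coefficients whose truncated
    expansion approximates f on [-1, 1] to within eps (sup norm,
    estimated on a dense grid). Returns None if max_deg is too small."""
    # Dense grid of Chebyshev-type points in [-1, 1] for fitting/checking.
    x = np.cos(np.pi * np.arange(n_samples) / (n_samples - 1))
    fx = f(x)
    for deg in range(max_deg + 1):
        c = np.polynomial.chebyshev.Chebyshev.fit(x, fx, deg)
        if np.max(np.abs(c(x) - fx)) <= eps:
            return deg + 1  # a degree-deg expansion stores deg + 1 numbers
    return None

f = lambda x: np.exp(np.sin(3 * x))     # an arbitrary smooth test function
n_coarse = coeffs_needed(f, 1e-2)       # storage cost at coarse precision
n_fine = coeffs_needed(f, 1e-6)         # storage cost at fine precision
print(n_coarse, n_fine)
```

The paper's result concerns how this storage cost grows as ε → 0: for wavelet-type neurons, the growth rate is asymptotically the smallest achievable, whereas for other schemes (such as the polynomial one sketched here) it can be larger.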