{"title":"High-dimensional function approximation using local linear embedding","authors":"Péter András","doi":"10.1109/IJCNN.2015.7280370","DOIUrl":null,"url":null,"abstract":"Neural network approximation of high-dimensional nonlinear functions is difficult due to the sparsity of the data in the high-dimensional data space and the need for good coverage of the data space by the `receptive fields' of the neurons. However, high-dimensional data often resides around a much lower dimensional supporting manifold. Given that a low dimensional approximation of the target function is likely to be more precise than a high-dimensional approximation, if we can find a mapping of the data points onto a lower-dimensional space corresponding to the supporting manifold, we expect to be able to build neural network approximations of the target function with improved precision and generalization ability. Here we use the local linear embedding (LLE) method to find the low-dimensional manifold and show that the neural networks trained on the transformed data achieve much better function approximation performance than neural networks trained on the original data.","PeriodicalId":6539,"journal":{"name":"2015 International Joint Conference on Neural Networks (IJCNN)","volume":"68 1","pages":"1-8"},"PeriodicalIF":0.0000,"publicationDate":"2015-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.2015.7280370","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
Neural network approximation of high-dimensional nonlinear functions is difficult due to the sparsity of the data in the high-dimensional data space and the need for good coverage of the data space by the 'receptive fields' of the neurons. However, high-dimensional data often reside around a much lower-dimensional supporting manifold. A low-dimensional approximation of the target function is likely to be more precise than a high-dimensional one, so if we can map the data points onto a lower-dimensional space corresponding to the supporting manifold, we expect to build neural network approximations of the target function with improved precision and generalization ability. Here we use the local linear embedding (LLE) method to find the low-dimensional manifold and show that neural networks trained on the LLE-transformed data achieve much better function approximation performance than neural networks trained on the original data.
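To make the described pipeline concrete, below is a minimal sketch that follows the two-step recipe from the abstract: embed the data with LLE, then train a network on the embedded coordinates and compare against the same network trained on the raw high-dimensional inputs. It uses scikit-learn's LocallyLinearEmbedding and MLPRegressor as stand-ins; the paper's actual network architecture, LLE neighbourhood size, and benchmark target functions are not given here, so the synthetic data, n_neighbors=12, and the (32,) hidden layer are illustrative assumptions, not the authors' settings.

```python
# Sketch: LLE dimensionality reduction before neural-network function
# approximation. All parameter choices below are illustrative assumptions.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic data on a 2-D manifold ("swiss roll" parameterized by t, h)
# linearly embedded in a 20-D ambient space with small noise.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 3 * np.pi, size=1000)
h = rng.uniform(0.0, 5.0, size=1000)
coords = np.column_stack([t * np.cos(t), h, t * np.sin(t)])
A = rng.normal(size=(3, 20))
X = coords @ A + 0.01 * rng.normal(size=(1000, 20))
y = np.sin(t) * h  # target function defined on the manifold

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: recover low-dimensional coordinates of the supporting manifold
# with LLE; held-out points are mapped in via the learned embedding.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
Z_train = lle.fit_transform(X_train)
Z_test = lle.transform(X_test)

# Step 2: train the approximator on the embedded (low-dimensional) data.
net_lle = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net_lle.fit(Z_train, y_train)

# Baseline: identical network trained on the raw high-dimensional data.
net_raw = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net_raw.fit(X_train, y_train)

print("R^2, network on LLE-embedded data:", net_lle.score(Z_test, y_test))
print("R^2, network on raw 20-D data:    ", net_raw.score(X_test, y_test))
```

On data of this kind, the network trained on the 2-D embedding typically generalizes better than the one trained on the 20-D inputs, which is the effect the paper investigates; the exact gap depends on the noise level and the LLE neighbourhood size.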