{"title":"Preserving Data Manifold Structure in Latent Space for Exploration through Network Geodesics","authors":"Sanjukta Krishnagopal, J. Bedrossian","doi":"10.1109/IJCNN55064.2022.9891993","DOIUrl":null,"url":null,"abstract":"While variational autoencoders have been successful in several tasks, the use of conventional priors are limited in their ability to encode the underlying structure of input data. We introduce an Encoded Prior Sliced Wasserstein AutoEncoder wherein an additional prior-encoder network learns a geometry and topology preserving embedding of any data manifold, thus improving the structure of latent space. The autoencoder and prior-encoder networks are iteratively trained using the Sliced Wasserstein distance, which facilitates the learning of nonstandard complex priors. We then introduce a graph-based algorithm to explore the learned manifold by traversing latent space through network-geodesics that lie along the manifold and hence are more realistic compared to conventional Euclidean interpolation. Specifically, we identify network-geodesics by maximizing the density of samples along the path while minimizing total energy. We use the 3D-spiral data to show that the prior encodes the geometry underlying the data unlike conventional autoencoders, and to demonstrate the exploration of the embedded data manifold through the network algorithm. We apply our framework to artificial as well as image datasets to demonstrate the advantages of learning improved latent structure, outlier generation, and geodesic interpolation.","PeriodicalId":106974,"journal":{"name":"2022 International Joint Conference on Neural Networks (IJCNN)","volume":"586 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN55064.2022.9891993","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
While variational autoencoders have been successful in several tasks, conventional priors are limited in their ability to encode the underlying structure of the input data. We introduce an Encoded Prior Sliced Wasserstein AutoEncoder, wherein an additional prior-encoder network learns a geometry- and topology-preserving embedding of any data manifold, thus improving the structure of the latent space. The autoencoder and prior-encoder networks are iteratively trained using the Sliced Wasserstein distance, which facilitates the learning of nonstandard, complex priors. We then introduce a graph-based algorithm to explore the learned manifold by traversing latent space through network-geodesics that lie along the manifold and hence yield more realistic paths than conventional Euclidean interpolation. Specifically, we identify network-geodesics by maximizing the density of samples along the path while minimizing total energy. We use 3D-spiral data to show that, unlike conventional autoencoders, the prior encodes the geometry underlying the data, and to demonstrate exploration of the embedded data manifold through the network algorithm. We apply our framework to artificial as well as image datasets to demonstrate the advantages of improved latent structure, outlier generation, and geodesic interpolation.
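The Sliced Wasserstein distance used to train the two networks admits a simple Monte-Carlo estimator: project both sample sets onto random one-dimensional directions, where the Wasserstein distance reduces to a comparison of sorted projections. Below is a minimal NumPy sketch of that estimator, not the paper's training code; the function name and default parameters are ours, and in actual training one would use a differentiable framework so gradients can flow through the projections.

```python
import numpy as np

def sliced_wasserstein_distance(x, y, n_projections=50, p=2, rng=None):
    """Monte-Carlo estimate of the sliced Wasserstein-p distance between
    two empirical distributions, given as (n_samples, dim) arrays with
    the same number of samples in each.
    """
    rng = np.random.default_rng(rng)
    dim = x.shape[1]
    # Random directions drawn uniformly on the unit sphere.
    theta = rng.normal(size=(n_projections, dim))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets: shape (n_projections, n_samples).
    x_proj = theta @ x.T
    y_proj = theta @ y.T
    # In 1-D, the Wasserstein distance compares sorted samples.
    x_proj.sort(axis=1)
    y_proj.sort(axis=1)
    return np.mean(np.abs(x_proj - y_proj) ** p) ** (1.0 / p)
```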
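The network-geodesic search can likewise be sketched as a shortest-path problem on a k-nearest-neighbor graph over latent samples, with edge weights trading off path length (energy) against local sample density. The inverse-density-discounted Euclidean weighting below is an illustrative stand-in for the paper's exact objective; `network_geodesic`, `k`, and `density_weight` are hypothetical names.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra
from scipy.spatial import cKDTree

def network_geodesic(z, start, end, k=10, density_weight=1.0):
    """Shortest path between two latent samples through a k-NN graph.

    Edge cost is Euclidean length discounted by the local sample
    density, so paths prefer short hops through dense regions of the
    latent manifold. Hypothetical weighting; the paper's exact
    density/energy trade-off may differ.
    """
    n = len(z)
    tree = cKDTree(z)
    dists, idx = tree.query(z, k=k + 1)  # column 0 is the point itself
    # Crude local density estimate: inverse mean neighbor distance.
    density = 1.0 / (dists[:, 1:].mean(axis=1) + 1e-12)

    rows, cols, weights = [], [], []
    for i in range(n):
        for col in range(1, k + 1):
            j = idx[i, col]
            pair_density = 0.5 * (density[i] + density[j])
            rows.append(i)
            cols.append(j)
            weights.append(dists[i, col] / pair_density ** density_weight)
    graph = csr_matrix((weights, (rows, cols)), shape=(n, n))

    _, predecessors = dijkstra(
        graph, directed=False, indices=start, return_predecessors=True
    )
    # Walk the predecessor array back from the endpoint.
    path, node = [end], end
    while node != start:
        node = predecessors[node]
        if node < 0:  # -9999 marks an unreachable node
            return None
        path.append(int(node))
    return path[::-1]
```

Dense regions of latent space correspond to well-supported parts of the learned manifold, so a path found this way stays near realistic samples rather than cutting straight across low-density gaps as Euclidean interpolation does.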