Inference in Social Networks from Ultra-Sparse Distance Measurements via Pretrained Hadamard Autoencoders
G. Mahindre, Rasika Karkare, R. Paffenroth, A. Jayasumana
2020 IEEE 45th Conference on Local Computer Networks (LCN), 16 November 2020
DOI: 10.1109/LCN48667.2020.9314769
Analysis of large-scale networks is hampered by limited data, as complete network measurements are expensive or impossible to collect. We present an autoencoder-based technique, paired with pretraining, to predict missing topology information in ultra-sparsely sampled social networks. Randomly generated variations of Barabási-Albert and power-law cluster graphs are used to pretrain a Hadamard autoencoder. The pretrained neural network is then used to infer distances in social networks where only a very small fraction of inter-node distances is available. The model is evaluated on variations of Barabási-Albert and power-law cluster graphs as well as on a real-world Facebook network. Results are compared with a deterministic Low-rank Matrix Completion (LMC) method as well as with an autoencoder trained on partially observed data from the test network. The pretrained autoencoder far outperforms LMC when the fraction of distance samples available is below 1%, while remaining competitive for higher sampling fractions.
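The pretraining idea described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical Python example, not the authors' implementation: it assumes the "Hadamard" in Hadamard autoencoder refers to masking the reconstruction loss element-wise (Hadamard product) with the observation mask, so that only the sampled distances drive training. The graph size, layer widths, training schedule, and the 1% sampling fraction are illustrative choices.

```python
# Minimal sketch of pretraining a masked ("Hadamard") autoencoder on a synthetic
# Barabási-Albert graph, under the assumptions stated above.
import networkx as nx
import numpy as np
import torch
import torch.nn as nn

def hop_distance_matrix(g: nx.Graph) -> np.ndarray:
    """All-pairs shortest-path (hop-count) distance matrix of a graph."""
    n = g.number_of_nodes()
    d = np.zeros((n, n), dtype=np.float32)
    for src, lengths in nx.all_pairs_shortest_path_length(g):
        for dst, hops in lengths.items():
            d[src, dst] = hops
    return d

n_nodes, sample_frac = 200, 0.01                     # ~1% of distances observed (illustrative)
g = nx.barabasi_albert_graph(n_nodes, m=2, seed=0)   # synthetic pretraining graph
dist = torch.from_numpy(hop_distance_matrix(g))

mask = (torch.rand(n_nodes, n_nodes) < sample_frac).float()  # 1 where a distance is sampled
observed = dist * mask                                       # unobserved entries zeroed out

model = nn.Sequential(               # plain fully connected autoencoder over matrix rows
    nn.Linear(n_nodes, 64), nn.ReLU(),
    nn.Linear(64, n_nodes),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(500):
    opt.zero_grad()
    recon = model(observed)
    # Hadamard-masked loss: penalize reconstruction error only on sampled entries.
    loss = ((recon - dist) * mask).pow(2).sum() / mask.sum()
    loss.backward()
    opt.step()

# At inference time, the pretrained model is applied to the masked distance matrix of a
# new, sparsely sampled network, and its output is read as the completed distance matrix.
```

In practice, pretraining would draw many random Barabási-Albert and power-law cluster graphs (e.g. networkx's powerlaw_cluster_graph) rather than a single instance, but the masked-loss mechanism is the same.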