{"title":"网络拓扑结构变化对信息源定位的影响","authors":"Piotr Machura, Robert Paluch","doi":"10.1016/j.comcom.2024.107958","DOIUrl":null,"url":null,"abstract":"<div><div>Well-established methods of locating the source of information in a complex network are usually derived with the assumption of complete and exact knowledge of network topology. We study the performance of three such algorithms (Limited Pinto–Thiran–Vetterli Algorithm — LPTVA, Gradient Maximum Likelihood Algorithm — GMLA and Pearson Correlation Algorithm — PCA) in scenarios that do not fulfill this assumption by modifying the network before localization. This is done by adding superfluous new links, hiding existing ones, or reattaching links following the network’s structural Hamiltonian. Our results show that GMLA is highly resilient to adding superfluous edges, as its precision falls by more than statistical uncertainty only when the number of links is approximately doubled. On the other hand, if the edge set is underestimated or reattachment has taken place, the performance of GMLA drops significantly. In such a scenario, PCA is preferable, retaining most of its performance when other simulation parameters favor successful localization (high density of observers, highly deterministic propagation). It is also generally more accurate than LPTVA and orders of magnitude faster. The differences between localization algorithms can be intuitively explained, although further theoretical research is needed.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107958"},"PeriodicalIF":4.5000,"publicationDate":"2024-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Impact of network topology changes on information source localization\",\"authors\":\"Piotr Machura, Robert Paluch\",\"doi\":\"10.1016/j.comcom.2024.107958\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Well-established methods of locating the source of information in a complex network are usually derived with the assumption of complete and exact knowledge of network topology. We study the performance of three such algorithms (Limited Pinto–Thiran–Vetterli Algorithm — LPTVA, Gradient Maximum Likelihood Algorithm — GMLA and Pearson Correlation Algorithm — PCA) in scenarios that do not fulfill this assumption by modifying the network before localization. This is done by adding superfluous new links, hiding existing ones, or reattaching links following the network’s structural Hamiltonian. Our results show that GMLA is highly resilient to adding superfluous edges, as its precision falls by more than statistical uncertainty only when the number of links is approximately doubled. On the other hand, if the edge set is underestimated or reattachment has taken place, the performance of GMLA drops significantly. In such a scenario, PCA is preferable, retaining most of its performance when other simulation parameters favor successful localization (high density of observers, highly deterministic propagation). It is also generally more accurate than LPTVA and orders of magnitude faster. 
The differences between localization algorithms can be intuitively explained, although further theoretical research is needed.</div></div>\",\"PeriodicalId\":55224,\"journal\":{\"name\":\"Computer Communications\",\"volume\":\"228 \",\"pages\":\"Article 107958\"},\"PeriodicalIF\":4.5000,\"publicationDate\":\"2024-09-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Communications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0140366424003050\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Communications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0140366424003050","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Impact of network topology changes on information source localization
Well-established methods of locating the source of information in a complex network are usually derived under the assumption of complete and exact knowledge of the network topology. We study the performance of three such algorithms (the Limited Pinto–Thiran–Vetterli Algorithm, LPTVA; the Gradient Maximum Likelihood Algorithm, GMLA; and the Pearson Correlation Algorithm, PCA) in scenarios that do not fulfill this assumption, by modifying the network before localization. This is done by adding superfluous new links, hiding existing ones, or reattaching links according to the network's structural Hamiltonian. Our results show that GMLA is highly resilient to superfluous edges: its precision falls by more than the statistical uncertainty only when the number of links is approximately doubled. On the other hand, if the edge set is underestimated or reattachment has taken place, the performance of GMLA drops significantly. In such scenarios PCA is preferable, retaining most of its performance when the other simulation parameters favor successful localization (a high density of observers, highly deterministic propagation). It is also generally more accurate than LPTVA and orders of magnitude faster. The differences between the localization algorithms can be explained intuitively, although further theoretical research is needed.
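To make the setup concrete, the sketch below perturbs a test network in the three ways the abstract describes and scores candidate sources with a Pearson-correlation heuristic. This is a minimal sketch under stated assumptions, not the authors' implementation: all function names are invented, uniform random rewiring stands in for reattachment driven by the structural Hamiltonian, and `pca_localize` assumes that PCA ranks candidates by the correlation between observer arrival times and shortest-path distances.

```python
# Hedged illustration of topology perturbation before source localization.
# Assumptions (not from the paper): function names, uniform rewiring as a
# placeholder for the structural-Hamiltonian reattachment, and the
# correlation-based scoring inside pca_localize().
import random

import networkx as nx
from scipy.stats import pearsonr


def add_superfluous_edges(g: nx.Graph, n_extra: int, seed: int = 0) -> nx.Graph:
    """Overestimate the topology: add n_extra edges between non-adjacent nodes."""
    rng = random.Random(seed)
    h = g.copy()
    nodes = list(h.nodes)
    while n_extra > 0:
        u, v = rng.sample(nodes, 2)
        if not h.has_edge(u, v):
            h.add_edge(u, v)
            n_extra -= 1
    return h


def hide_edges(g: nx.Graph, n_hidden: int, seed: int = 0) -> nx.Graph:
    """Underestimate the topology: remove n_hidden randomly chosen edges."""
    rng = random.Random(seed)
    h = g.copy()
    h.remove_edges_from(rng.sample(list(h.edges), n_hidden))
    return h


def rewire_edges(g: nx.Graph, n_rewired: int, seed: int = 0) -> nx.Graph:
    """Reattach one endpoint of n_rewired edges (uniform placeholder rewiring)."""
    rng = random.Random(seed)
    h = g.copy()
    for u, v in rng.sample(list(h.edges), n_rewired):
        candidates = [w for w in h.nodes if w != u and not h.has_edge(u, w)]
        if candidates:
            h.remove_edge(u, v)
            h.add_edge(u, rng.choice(candidates))
    return h


def pca_localize(g: nx.Graph, observer_times: dict):
    """Return the node whose shortest-path distances to the observers
    correlate best with the observed arrival times (assumed PCA scoring)."""
    observers = list(observer_times)
    times = [observer_times[o] for o in observers]
    best, best_r = None, float("-inf")
    for s in g.nodes:
        lengths = nx.single_source_shortest_path_length(g, s)
        if any(o not in lengths for o in observers):
            continue  # candidate cannot reach every observer
        dists = [lengths[o] for o in observers]
        if len(set(dists)) < 2:
            continue  # Pearson correlation is undefined for constant input
        r, _ = pearsonr(dists, times)
        if r > best_r:
            best, best_r = s, r
    return best


# The spread happens on the true network; localization sees a distorted one.
g = nx.barabasi_albert_graph(200, 3, seed=42)
g_doubled = add_superfluous_edges(g, n_extra=g.number_of_edges(), seed=1)
g_sparse = hide_edges(g, n_hidden=60, seed=1)
g_rewired = rewire_edges(g, n_rewired=60, seed=1)

# Synthetic observations: distances from a true source plus small noise,
# standing in for arrival times of a nearly deterministic propagation.
true_source = 0
rng = random.Random(7)
observers = rng.sample(list(g.nodes), 20)
dist = nx.single_source_shortest_path_length(g, true_source)
observer_times = {o: dist[o] + rng.gauss(0, 0.3) for o in observers}
print(pca_localize(g_sparse, observer_times))  # estimate on the distorted topology
```

The split between the true graph (used to generate observations) and the perturbed graph (fed to the localizer) mirrors the paper's central question: how much does an incorrect topology estimate degrade localization.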
Journal Introduction:
Computer and communication networks are key infrastructures of the information society, with high socio-economic value, as they underpin the correct operation of many critical services (from healthcare to finance and transportation). The Internet is the core of today's computer-communication infrastructures. It has been transformed from a robust network for data transfer between computers into a global, content-rich communication and information system in which content is increasingly generated by users and distributed according to human social relations. Next-generation network technologies, architectures, and protocols are therefore required to overcome the limitations of the legacy Internet and to add new capabilities and services. The future Internet should be ubiquitous, secure, resilient, and closer to human communication paradigms.
Computer Communications is a peer-reviewed international journal that publishes high-quality scientific articles (covering both theory and practice) and survey papers on all aspects of future computer communication networks (on all layers except the physical layer), with special attention to the evolution of the Internet architecture, protocols, services, and applications.