{"title":"分布式网络应用的效率最优马尔可夫链","authors":"Chul-Ho Lee, Do Young Eun","doi":"10.1109/INFOCOM.2015.7218566","DOIUrl":null,"url":null,"abstract":"The Metropolis-Hastings (MH) algorithm, in addition to its application for Markov Chain Monte Carlo sampling or simulation, has been popularly used for constructing a random walk that achieves a given, desired stationary distribution over a graph. Applications include crawling-based sampling of large graphs or online social networks, statistical estimation or inference from massive scale of networked data, efficient searching algorithms in unstructured peer-to-peer networks, randomized routing and movement strategies in wireless sensor networks, to list a few. Despite its versatility, the MH algorithm often causes self-transitions of its resulting random walk at some nodes, which is not efficient in the sense of the Peskun ordering - a partial order between off-diagonal elements of transition matrices of two different Markov chains, and in turn results in deficient performance in terms of asymptotic variance of time averages and expected hitting times with slower speed of convergence. To alleviate this problem, we present simple yet effective distributed algorithms that are guaranteed to improve the MH algorithm over time when running on a graph, and eventually reach `efficiency-optimality', while ensuring the same desired stationary distribution throughout.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"On the efficiency-optimal Markov chains for distributed networking applications\",\"authors\":\"Chul-Ho Lee, Do Young Eun\",\"doi\":\"10.1109/INFOCOM.2015.7218566\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The Metropolis-Hastings (MH) algorithm, in addition to its application for Markov Chain Monte Carlo sampling or simulation, has been popularly used for constructing a random walk that achieves a given, desired stationary distribution over a graph. Applications include crawling-based sampling of large graphs or online social networks, statistical estimation or inference from massive scale of networked data, efficient searching algorithms in unstructured peer-to-peer networks, randomized routing and movement strategies in wireless sensor networks, to list a few. Despite its versatility, the MH algorithm often causes self-transitions of its resulting random walk at some nodes, which is not efficient in the sense of the Peskun ordering - a partial order between off-diagonal elements of transition matrices of two different Markov chains, and in turn results in deficient performance in terms of asymptotic variance of time averages and expected hitting times with slower speed of convergence. 
To alleviate this problem, we present simple yet effective distributed algorithms that are guaranteed to improve the MH algorithm over time when running on a graph, and eventually reach `efficiency-optimality', while ensuring the same desired stationary distribution throughout.\",\"PeriodicalId\":342583,\"journal\":{\"name\":\"2015 IEEE Conference on Computer Communications (INFOCOM)\",\"volume\":\"20 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-08-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 IEEE Conference on Computer Communications (INFOCOM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/INFOCOM.2015.7218566\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE Conference on Computer Communications (INFOCOM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOCOM.2015.7218566","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
On the efficiency-optimal Markov chains for distributed networking applications
The Metropolis-Hastings (MH) algorithm, in addition to its use for Markov chain Monte Carlo sampling and simulation, has been widely used to construct a random walk that achieves a given, desired stationary distribution over a graph. Applications include crawling-based sampling of large graphs and online social networks, statistical estimation and inference from massive-scale networked data, efficient search in unstructured peer-to-peer networks, and randomized routing and movement strategies in wireless sensor networks, to name a few. Despite its versatility, the MH algorithm often induces self-transitions of the resulting random walk at some nodes, which is inefficient in the sense of the Peskun ordering (a partial order on the off-diagonal elements of the transition matrices of two Markov chains) and in turn degrades performance in terms of the asymptotic variance of time averages and expected hitting times, along with a slower speed of convergence. To alleviate this problem, we present simple yet effective distributed algorithms that are guaranteed to improve the MH algorithm over time when running on a graph and to eventually reach 'efficiency-optimality', while preserving the same desired stationary distribution throughout.
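To make the construction discussed above concrete, here is a minimal sketch (not the authors' proposed algorithm) of the standard MH random walk on a graph: the proposal is a simple random walk to a uniformly chosen neighbor, and rejected proposals become self-transitions, which is exactly the source of inefficiency the abstract describes. The function names, the toy graph, and the uniform target distribution are illustrative assumptions.

```python
import random

def mh_transition_probs(node, neighbors, degree, pi):
    """Metropolis-Hastings transition probabilities for a random walk on a graph.

    Proposal: simple random walk (pick a neighbor uniformly at random).
    Acceptance: min(1, pi[j] * d_i / (pi[i] * d_j)), so that pi is the
    stationary distribution. Rejected proposals stay at the current node,
    i.e., become a self-transition.
    """
    probs = {}
    for j in neighbors[node]:
        # P(i, j) = (1 / d_i) * min(1, pi_j * d_i / (pi_i * d_j))
        probs[j] = (1.0 / degree[node]) * min(
            1.0, pi[j] * degree[node] / (pi[node] * degree[j])
        )
    # Remaining probability mass is the self-transition P(i, i).
    probs[node] = max(0.0, 1.0 - sum(probs.values()))
    return probs

def mh_walk(start, neighbors, degree, pi, steps, rng=random.Random(0)):
    """Run the MH random walk for a number of steps and return the visited nodes."""
    path = [start]
    node = start
    for _ in range(steps):
        probs = mh_transition_probs(node, neighbors, degree, pi)
        nodes, weights = zip(*probs.items())
        node = rng.choices(nodes, weights=weights)[0]
        path.append(node)
    return path

# Example: target the uniform distribution over a small toy graph.
neighbors = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
degree = {v: len(ns) for v, ns in neighbors.items()}
pi = {v: 1.0 / len(neighbors) for v in neighbors}  # uniform target distribution
print(mh_walk(0, neighbors, degree, pi, steps=10))
```

With a uniform target, the off-diagonal entries reduce to min(1/d_i, 1/d_j), so a high-degree node adjacent to low-degree neighbors accumulates substantial self-transition probability, illustrating why the resulting chain can be suboptimal under the Peskun ordering.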