João Paulo Bezerra, Luciano Freitas, Petr Kuznetsov
{"title":"异步延迟和快速原子快照","authors":"João Paulo Bezerra, Luciano Freitas, Petr Kuznetsov","doi":"arxiv-2408.02562","DOIUrl":null,"url":null,"abstract":"The original goal of this paper was a novel, fast atomic-snapshot protocol\nfor asynchronous message-passing systems. In the process of defining what fast\nmeans exactly, we faced a number of interesting issues that arise when\nconventional time metrics are applied to asynchronous implementations. We\ndiscovered some gaps in latency claims made in earlier work on snapshot\nalgorithms, which hampers their comparative time-complexity analysis. We then\ncame up with a new unifying time-complexity analysis that captures the latency\nof an operation in an asynchronous, long-lived implementation, which allowed us\nto formally grasp latency improvements of our solution with respect to the\nstate-of-the-art protocols: optimal latency in fault-free runs without\ncontention, short constant latency in fault-free runs with contention, the\nworst-case latency proportional to the number of failures, and constant, close\nto optimal amortized latency.","PeriodicalId":501422,"journal":{"name":"arXiv - CS - Distributed, Parallel, and Cluster Computing","volume":"75 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Asynchronous Latency and Fast Atomic Snapshot\",\"authors\":\"João Paulo Bezerra, Luciano Freitas, Petr Kuznetsov\",\"doi\":\"arxiv-2408.02562\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The original goal of this paper was a novel, fast atomic-snapshot protocol\\nfor asynchronous message-passing systems. In the process of defining what fast\\nmeans exactly, we faced a number of interesting issues that arise when\\nconventional time metrics are applied to asynchronous implementations. 
We\\ndiscovered some gaps in latency claims made in earlier work on snapshot\\nalgorithms, which hampers their comparative time-complexity analysis. We then\\ncame up with a new unifying time-complexity analysis that captures the latency\\nof an operation in an asynchronous, long-lived implementation, which allowed us\\nto formally grasp latency improvements of our solution with respect to the\\nstate-of-the-art protocols: optimal latency in fault-free runs without\\ncontention, short constant latency in fault-free runs with contention, the\\nworst-case latency proportional to the number of failures, and constant, close\\nto optimal amortized latency.\",\"PeriodicalId\":501422,\"journal\":{\"name\":\"arXiv - CS - Distributed, Parallel, and Cluster Computing\",\"volume\":\"75 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Distributed, Parallel, and Cluster Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.02562\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Distributed, Parallel, and Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.02562","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The original goal of this paper was a novel, fast atomic-snapshot protocol
for asynchronous message-passing systems. In the process of defining what fast
means exactly, we faced a number of interesting issues that arise when
conventional time metrics are applied to asynchronous implementations. We
discovered gaps in the latency claims made in earlier work on snapshot
algorithms, which hamper their comparative time-complexity analysis. We then
came up with a new, unifying time-complexity analysis that captures the latency
of an operation in an asynchronous, long-lived implementation. This allowed us
to formally quantify the latency improvements of our solution over
state-of-the-art protocols: optimal latency in fault-free runs without
contention, short constant latency in fault-free runs with contention,
worst-case latency proportional to the number of failures, and constant,
close-to-optimal amortized latency.
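To fix intuition for the object being implemented: an atomic snapshot lets each of n processes update its own component, while any process can scan all n components and obtain a view that looks instantaneous. The sketch below is a minimal shared-memory stand-in for that interface using a lock; it only illustrates the abstraction's sequential specification, not the paper's asynchronous message-passing protocol or its latency guarantees. The class and method names are chosen here for illustration.

```python
import threading

class AtomicSnapshot:
    """Illustrative stand-in for the atomic-snapshot abstraction:
    update(i, v) installs v as process i's component, and scan()
    returns an instantaneous view of all components. A single lock
    makes atomicity trivial in shared memory; the hard part, which
    this sketch does not attempt, is achieving the same semantics
    with low latency in an asynchronous message-passing system."""

    def __init__(self, n):
        self._values = [None] * n
        self._lock = threading.Lock()

    def update(self, i, v):
        # Process i overwrites its own component with v.
        with self._lock:
            self._values[i] = v

    def scan(self):
        # Return a consistent copy of all n components.
        with self._lock:
            return list(self._values)
```

For example, after `update(0, 'a')` and `update(2, 'c')` on a 3-component object, `scan()` returns `['a', None, 'c']`; the correctness question addressed by snapshot protocols is ensuring every such view corresponds to a single point in time even when updates and scans run concurrently.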