Early adapting to trends: self-stabilizing information spread using passive communication
Pub Date: 2024-02-22 | DOI: 10.1007/s00446-024-00462-8
Amos Korman, Robin Vacus
How to efficiently and reliably spread information in a system is one of the most fundamental problems in distributed computing. Recently, inspired by biological scenarios, several works have focused on identifying the minimal communication resources necessary to spread information under faulty conditions. Here we study the self-stabilizing bit-dissemination problem, introduced by Boczkowski, Korman, and Natale in [SODA 2017]. The problem considers a fully-connected network of $n$ agents, with a binary world of opinions, one of which is called correct. At any given time, each agent holds an opinion bit as its public output. The population contains a source agent which knows which opinion is correct. This agent adopts the correct opinion and remains with it throughout the execution. We consider the basic $\mathcal{PULL}$ model of communication, in which each agent observes relatively few randomly chosen agents in each round. The goal of the non-source agents is to quickly converge on the correct opinion, despite having an arbitrary initial configuration, i.e., in a self-stabilizing manner. Once the population converges on the correct opinion, it should remain with it forever. Motivated by biological scenarios in which animals observe and react to the behavior of others, we focus on the extremely constrained model of passive communication, which assumes that when observing another agent the only information that can be extracted is the opinion bit of that agent. We prove that this problem can be solved with high probability in a number of rounds that is poly-logarithmic in $n$, while sampling a logarithmic number of agents in each round. Previous works solved this problem faster and with fewer samples, but they did so by decoupling the messages sent by agents from their output opinions, and hence do not fit the framework of passive communication. Moreover, these works use complex recursive algorithms with refined clocks that are unlikely to be used by biological entities. In contrast, our proposed algorithm has a natural appeal, as it is based on letting agents estimate the current tendency direction of the dynamics and then adapt to the emerging trend.
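The abstract does not spell out the update rule, so the following Python sketch is only a hypothetical illustration of the setting it describes: a $\mathcal{PULL}$-model population with one source agent, passive communication (only opinion bits are observable), and a logarithmic number of samples per round. The majority rule used here is a naive stand-in for "adapting to the trend"; it does not solve the self-stabilizing problem (it can stall when almost everyone starts with the wrong opinion), which is exactly why the paper's more careful trend-estimation rule is needed.

```python
import math
import random

def simulate(n=1000, rounds=500, seed=0):
    """Toy PULL-model population: agent 0 is the source and always outputs the
    correct opinion (1); every other agent samples ~log2(n) random agents per
    round and adopts the majority bit of its sample (naive stand-in rule)."""
    rng = random.Random(seed)
    opinions = [rng.randint(0, 1) for _ in range(n)]   # arbitrary initial configuration
    opinions[0] = 1                                    # the source holds the correct opinion
    k = max(1, round(math.log2(n)))                    # logarithmic sample size per round
    for t in range(1, rounds + 1):
        new = opinions[:]
        for i in range(1, n):                          # non-source agents only
            sample = [opinions[rng.randrange(n)] for _ in range(k)]
            new[i] = int(2 * sum(sample) > k)          # majority of the observed bits
        opinions = new
        if all(opinions):
            return t                                   # rounds until convergence, if any
    return None

print(simulate())
```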
Component stability in low-space massively parallel computation
Pub Date: 2024-02-08 | DOI: 10.1007/s00446-024-00461-9
Artur Czumaj, Peter Davies-Peck, Merav Parter
In this paper, we study the power and limitations of component-stable algorithms in the low-space model of massively parallel computation (MPC). Recently, Ghaffari, Kuhn and Uitto (FOCS 2019) introduced the class of component-stable low-space MPC algorithms, which are, informally, those algorithms for which the outputs reported by the nodes in different connected components are required to be independent. This very natural notion was introduced to capture most (if not all) of the known efficient MPC algorithms to date, and it was the first general class of MPC algorithms for which one can show non-trivial conditional lower bounds. In this paper we enhance the framework of component-stable algorithms and investigate its effect on the complexity of randomized and deterministic low-space MPC. Our key contributions include: 1. We revise and formalize the lifting approach of Ghaffari, Kuhn and Uitto. This requires a very delicate amendment of the notion of component stability, which allows us to fill in gaps in the earlier arguments. 2. We also extend the framework to obtain conditional lower bounds for deterministic algorithms and fine-grained lower bounds that depend on the maximum degree $\Delta$. 3. We demonstrate a collection of natural graph problems for which deterministic component-unstable algorithms break the conditional lower bound obtained for component-stable algorithms. This implies that, in the context of deterministic algorithms, component-stable algorithms are conditionally weaker than the component-unstable ones. 4. We also show that the restriction to component-stable algorithms has an impact in the randomized setting. We present a natural problem which can be solved in $O(1)$ rounds by a component-unstable MPC algorithm, but requires $\Omega(\log\log^* n)$ rounds for any component-stable algorithm, conditioned on the connectivity conjecture. Altogether our results imply that component-stability might limit the computational power of the low-space MPC model, at least in certain contexts, paving the way for improved upper bounds that escape the conditional lower bound setting of Ghaffari, Kuhn, and Uitto.
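Component stability is only defined informally above, so the following self-contained Python sketch (toy graph, toy output rules, all hypothetical) illustrates the intuition: re-run an output rule on each connected component in isolation and check whether any node's output changes. The paper's actual definition is considerably more delicate (it must account for node IDs, shared randomness, and the dependence on the total number of nodes), which is precisely what contribution 1 above addresses.

```python
def components(adj):
    """Connected components of an undirected graph given as {node: set(neighbors)}."""
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def greedy_mis(adj, nodes):
    """Greedy MIS by node id: depends only on the induced component, hence 'stable'."""
    mis = set()
    for v in sorted(nodes):
        if not (adj[v] & mis):
            mis.add(v)
    return {v: (v in mis) for v in nodes}

def global_degree_flag(adj, nodes):
    """Toy 'unstable' rule: every node reports whether ANY node in the whole
    graph has degree >= 2, so outputs depend on other components."""
    flag = any(len(adj[v]) >= 2 for v in adj)
    return {v: flag for v in nodes}

def looks_component_stable(algo, adj):
    full = algo(adj, set(adj))
    for comp in components(adj):
        sub = {v: adj[v] & comp for v in comp}     # component in isolation
        if any(full[v] != algo(sub, comp)[v] for v in comp):
            return False
    return True

# two components: a triangle {0,1,2} and an edge {3,4}
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4}, 4: {3}}
print(looks_component_stable(greedy_mis, adj))         # True
print(looks_component_stable(global_degree_flag, adj)) # False
```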
Distributed computing with the cloud
Pub Date: 2024-02-01 | DOI: 10.1007/s00446-024-00460-w
Yehuda Afek, Gal Giladi, Boaz Patt-Shamir
We investigate the effect of omnipresent cloud storage on distributed computing. To this end, we specify a network model with links of prescribed bandwidth that connect standard processing nodes and, in addition, passive storage nodes. Each passive node represents a cloud storage system, such as Dropbox or Google Drive. We study a few tasks in this model, assuming a single cloud node connected to all other nodes, which are connected to each other arbitrarily. We give implementations for basic tasks of collaboratively writing to and reading from the cloud, and for more advanced applications such as matrix multiplication and federated learning. Our results show that utilizing node-cloud links as well as node-node links can considerably speed up computations, compared to the case where processors communicate either only through the cloud or only through the network links. We first show how to optimally read and write large files to and from the cloud in general graphs using flow techniques. We use these primitives to derive algorithms for combining, where every processor node has an input value and the task is to compute a combined value under some given associative operator. In the special but common case of “fat links,” where we assume that links between processors are bidirectional and have high bandwidth, we provide near-optimal algorithms for any commutative combining operator (such as vector addition). For the task of matrix multiplication (or other non-commutative combining operators), where the inputs are ordered, we present tight results in the simple “wheel” network, where processing nodes are arranged in a ring and are all connected to a single cloud node.
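To make the combining task concrete: every processor holds an input, and the goal is to fold all inputs under an associative operator. The minimal centralized sketch below shows a tree-style combining schedule finishing in about $\lceil\log_2 n\rceil$ rounds; it deliberately ignores the per-link bandwidth constraints (node-node and node-cloud), which are exactly what the paper's flow-based algorithms optimize for, so it is an illustration of the task rather than of the paper's method.

```python
def combine(values, op):
    """Naive tree schedule for 'combining': fold all inputs with an associative
    operator op, halving the number of live values in each parallel round."""
    vals = list(values)
    rounds = 0
    while len(vals) > 1:
        rounds += 1
        vals = [op(vals[i], vals[i + 1]) if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
    return vals[0], rounds

# commutative example: vector addition over 8 processors
vecs = [[i, 2 * i] for i in range(8)]
total, r = combine(vecs, lambda a, b: [x + y for x, y in zip(a, b)])
print(total, r)   # [28, 56] after 3 rounds
```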
Expected linear round synchronization: the missing link for linear Byzantine SMR
Pub Date: 2024-01-08 | DOI: 10.1007/s00446-023-00459-9
Oded Naor, Idit Keidar
State Machine Replication (SMR) solutions often divide time into rounds, with a designated leader driving decisions in each round. Progress is guaranteed once all correct processes synchronize to the same round and the leader of that round is correct. Recently suggested Byzantine SMR solutions, such as HotStuff and LibraBFT, achieve progress with a linear message complexity and a constant time complexity once such round synchronization occurs. But round synchronization itself incurs an additional cost. By Dolev and Reischuk’s lower bound, any deterministic solution must have $\Omega(n^2)$ communication complexity. Yet the question of randomized round synchronization with an expected linear message complexity remained open. We present an algorithm that, for the first time, achieves round synchronization with expected linear message complexity and expected constant latency. Existing protocols can use our round synchronization algorithm to solve Byzantine SMR with the same asymptotic performance.
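A minimal sketch, under assumed names and a trivial round-robin leader rule, of the round-synchronization abstraction that leader-based SMR protocols consume: the synchronizer decides when the protocol moves to the next round and who leads it. The paper's contribution is an implementation of exactly this abstraction with expected linear message complexity and expected constant latency; the logic below only illustrates the interface, not that algorithm.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class RoundSynchronizer:
    """Interface sketch: an SMR protocol registers a callback and calls advance()
    whenever the current round is deemed complete or stuck."""
    n: int                                     # number of processes
    on_new_round: Callable[[int, int], None]   # callback(round, leader)
    round: int = 0

    def leader(self, r: int) -> int:
        return r % self.n                      # hypothetical round-robin rotation

    def advance(self) -> None:
        self.round += 1
        self.on_new_round(self.round, self.leader(self.round))

sync = RoundSynchronizer(n=4, on_new_round=lambda r, l: print(f"round {r}, leader {l}"))
sync.advance()   # round 1, leader 1
sync.advance()   # round 2, leader 2
```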
Byzantine consensus is $\Theta(n^2)$: the Dolev-Reischuk bound is tight even in partial synchrony!
Pub Date: 2023-12-11 | DOI: 10.1007/s00446-023-00458-w
Pierre Civit, Muhammad Ayaz Dzulfikar, Seth Gilbert, Vincent Gramoli, Rachid Guerraoui, Jovan Komatovic, Manuel Vidigueira
The Dolev-Reischuk bound says that any deterministic Byzantine consensus protocol has (at least) quadratic (in the number of processes) communication complexity in the worst case: given a system with $n$ processes and at most $f < n/3$ failures, any solution to Byzantine consensus exchanges $\Omega(n^2)$ words, where a word contains a constant number of values and signatures. While it has been shown that the bound is tight in synchronous environments, it is still unknown whether a consensus protocol with quadratic communication complexity can be obtained in partial synchrony, where the network alternates between (1) asynchronous periods, with unbounded message delays, and (2) synchronous periods, with $\delta$-bounded message delays. Until now, the most efficient known solutions for Byzantine consensus in partially synchronous settings had cubic communication complexity (e.g., HotStuff, binary DBFT). This paper closes the existing gap by introducing SQuad, a partially synchronous Byzantine consensus protocol with $O(n^2)$ worst-case communication complexity. In addition, SQuad is optimally-resilient (tolerating up to $f < n/3$ failures) and achieves $O(f \cdot \delta)$ worst-case latency complexity. The key technical contribution underlying SQuad lies in the way we solve view synchronization, the problem of bringing all correct processes to the same view with a correct leader for sufficiently long. Concretely, we present RareSync, a view synchronization protocol with $O(n^2)$ communication complexity and $O(f \cdot \delta)$ latency complexity, which we utilize in order to obtain SQuad.
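A back-of-the-envelope accounting, under the simplifying assumption that view synchronization is done by an all-to-all exchange in every view, shows where the cubic complexity of prior protocols comes from; this is an illustration of the gap, not the paper's analysis.

```latex
\begin{align*}
  \text{words per all-to-all view synchronization} &= \Theta(n^2)
    && \text{each of $n$ processes sends $O(1)$ words to all others}\\
  \text{views explored before a correct leader}    &= O(f) = O(n)
    && \text{up to $f$ consecutive leaders may be faulty}\\
  \text{total with a naive synchronizer}           &= O(n^3)
    && \text{matching the cubic cost of prior solutions}\\
  \text{total with RareSync/SQuad}                 &= O(n^2)
    && \text{quadratic exchanges are triggered only rarely}
\end{align*}
```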
Correction to: Distributed computations in fully-defective networks
Pub Date: 2023-09-30 | DOI: 10.1007/s00446-023-00457-x
Keren Censor-Hillel, Shir Cohen, Ran Gelles, Gal Sela
Almost universally optimal distributed Laplacian solvers via low-congestion shortcuts
Pub Date: 2023-07-31 | DOI: 10.1007/s00446-023-00454-0
Ioannis Anagnostides, Christoph Lenzen, Bernhard Haeupler, Goran Zuzic, Themis Gouleakis
In this paper, we refine the (almost) existentially optimal distributed Laplacian solver of Forster, Goranci, Liu, Peng, Sun, and Ye (FOCS ’21) into an (almost) universally optimal distributed Laplacian solver. Specifically, when the topology is known (i.e., the Supported-CONGEST model), we show that any Laplacian system on an $n$-node graph with shortcut quality $\textrm{SQ}(G)$ can be solved after $n^{o(1)} \textrm{SQ}(G) \log(1/\epsilon)$ rounds, where $\epsilon > 0$ is the required accuracy. This almost matches our lower bound, which guarantees that any correct algorithm on $G$ requires $\widetilde{\Omega}(\textrm{SQ}(G))$ rounds, even for a crude solution with $\epsilon \le 1/2$. Several important implications hold in the unknown-topology (i.e., standard CONGEST) case: for excluded-minor graphs we get an almost universally optimal algorithm that terminates in $D \cdot n^{o(1)} \log(1/\epsilon)$ rounds, where $D$ is the hop-diameter of the network; as well as $n^{o(1)} \log(1/\epsilon)$-round algorithms for the case $\textrm{SQ}(G) \le n^{o(1)}$, which holds for most networks of interest. Moreover, following a recent line of work in distributed algorithms, we consider a hybrid communication model which enhances CONGEST with limited global power in the form of the node-capacitated clique model. In this model, we show the existence of a Laplacian solver with round complexity $n^{o(1)} \log(1/\epsilon)$. The unifying thread of these results, and our main technical contribution, is the development of near-optimal algorithms for a novel $\rho$-congested generalization of the standard part-wise aggregation problem, which may be of independent interest.
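For context on the object being solved: a Laplacian system is $Lx = b$, where $L = D - A$ is the graph Laplacian of the network (degree matrix minus adjacency matrix) and $b$ sums to zero. The tiny centralized NumPy example below only fixes this notation; it is unrelated to the distributed solver itself.

```python
import numpy as np

# Path graph on 4 nodes: 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian L = D - A

b = np.array([1.0, 0.0, 0.0, -1.0])     # demands must sum to 0 (L is singular)
x = np.linalg.pinv(L) @ b               # pseudoinverse gives a valid solution
print(np.allclose(L @ x, b))            # True: x solves the Laplacian system
```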
Near-optimal distributed computation of small vertex cuts
Pub Date: 2023-07-14 | DOI: 10.1007/s00446-023-00455-z
Merav Parter, Asaf Petruschka
We present near-optimal algorithms for detecting small vertex cuts in the $\textsf{CONGEST}$ model of distributed computing. Despite extensive research in this area, our understanding of the vertex connectivity of a graph is still incomplete, especially in the distributed setting. To this date, all distributed algorithms for detecting cut vertices suffer from an inherent dependency on the maximum degree of the graph, $\Delta$. Hence, in particular, there is no truly sub-linear time algorithm for this problem, not even for detecting a single cut vertex. We take a new algorithmic approach for vertex connectivity which allows us to bypass the existing $\Delta$ barrier. As a warm-up to our approach, we show a simple $\widetilde{O}(D)$-round randomized algorithm for computing all cut vertices in a $D$-diameter $n$-vertex graph. This improves upon the $O(D + \Delta/\log n)$-round algorithm of [Pritchard and Thurimella, ICALP 2008]. Our key technical contribution is an $\widetilde{O}(D)$-round randomized algorithm for computing all cut pairs in the graph, improving upon the state-of-the-art $O(\Delta \cdot D)^4$-round algorithm of [Parter, DISC ’19]. Note that even for the considerably simpler setting of edge cuts, $\widetilde{O}(D)$-round algorithms are currently known only for detecting pairs of cut edges. Our approach is based on employing the well-known linear graph sketching technique [Ahn, Guha and McGregor, SODA 2012] along with the heavy-light tree decomposition of [Sleator and Tarjan, STOC 1981]. Combining this with a careful characterization of the survivable subgraphs allows us to determine the connectivity of $G \setminus \{x,y\}$ for every pair $x,y \in V$, using $\widetilde{O}(D)$ rounds. We believe that the tools provided in this paper are useful for omitting the $\Delta$-dependency even for larger cut values.
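As a centralized reference point for the objects being computed (nothing like the paper's sketching-based distributed approach), a brute-force check declares $v$ a cut vertex if $G \setminus \{v\}$ is disconnected, and $\{x,y\}$ a cut pair if removing both disconnects the graph while removing either alone does not; conventions for pairs that contain a cut vertex vary, and the sketch below uses this stricter reading.

```python
from itertools import combinations

def connected(adj, removed=frozenset()):
    """BFS/DFS connectivity of {node: set(neighbors)} after deleting `removed`."""
    nodes = [v for v in adj if v not in removed]
    if not nodes:
        return True
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        v = stack.pop()
        for u in adj[v]:
            if u not in removed and u not in seen:
                seen.add(u)
                stack.append(u)
    return len(seen) == len(nodes)

def cut_vertices(adj):
    return [v for v in adj if not connected(adj, {v})]

def cut_pairs(adj):
    return [(x, y) for x, y in combinations(adj, 2)
            if connected(adj, {x}) and connected(adj, {y})
            and not connected(adj, {x, y})]

# a 4-cycle 0-1-2-3-0 with a pendant vertex 4 attached to 0
adj = {0: {1, 3, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}, 4: {0}}
print(cut_vertices(adj))   # [0]
print(cut_pairs(adj))      # [(1, 3)]: removing 1 and 3 disconnects node 2
```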
Node and edge averaged complexities of local graph problems
Pub Date: 2023-07-05 | DOI: 10.1007/s00446-023-00453-1
Alkida Balliu, Mohsen Ghaffari, Fabian Kuhn, Dennis Olivetti
We continue the recently started line of work on the distributed node-averaged complexity of distributed graph algorithms. The node-averaged complexity of a distributed algorithm running on a graph $G=(V,E)$ is the average over the times at which the nodes $V$ of $G$ finish their computation and commit to their outputs. We study the node-averaged complexity for some of the central distributed symmetry breaking problems and provide the following results (among others). As our main result, we show that the randomized node-averaged complexity of computing a maximal independent set (MIS) in $n$-node graphs of maximum degree $\Delta$ is at least $\Omega\big(\min\big\{\frac{\log \Delta}{\log\log \Delta}, \sqrt{\frac{\log n}{\log\log n}}\big\}\big)$. This bound is obtained by a novel adaptation of the well-known lower bound by Kuhn, Moscibroda, and Wattenhofer [JACM’16]. As a side result, we obtain that the worst-case randomized round complexity for computing an MIS in trees is also $\Omega\big(\min\big\{\frac{\log \Delta}{\log\log \Delta}, \sqrt{\frac{\log n}{\log\log n}}\big\}\big)$; this essentially answers open problem 11.15 in the book by Barenboim and Elkin and resolves the complexity of MIS on trees up to an $O(\sqrt{\log\log n})$ factor. We also show that, perhaps surprisingly, a minimal relaxation of MIS, which is the same as (2, 1)-ruling set, to the (2, 2)-ruling set problem drops the randomized node-averaged complexity to $O(1)$. For maximal matching, we show that while the randomized node-averaged complexity is $\Omega\big(\min\big\{\frac{\log \Delta}{\log\log \Delta}, \sqrt{\frac{\log n}{\log\log n}}\big\}\big)$, the randomized edge-averaged complexity is $O(1)$. Further, we show that the deterministic edge-averaged complexity of maximal matching is $O(\log^2 \Delta + \log^* n)$ and the deterministic node-averaged complexity of maximal matching is $O(\log^3 \Delta + \log^* n)$. Finally, we consider the problem of computing a sinkless orientation of a graph. The deterministic worst-case complexity of the problem is known to be $\Theta(\log n)$, even on bounded-degree graphs. We show that the problem can be solved deterministically with node-averaged complexity $O(\log^* n)$, while keeping the worst-case complexity in $O(\log n)$.
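Since the node-averaged complexity defined above is simply the mean of the nodes' finishing times, a short computation with hypothetical finishing times shows how it can sit far below the worst case, which is the phenomenon behind the $O(1)$ and $O(\log^* n)$ node-averaged bounds mentioned in the abstract.

```python
import math

n = 2 ** 20
# hypothetical per-node finishing times: almost all nodes commit after 3 rounds,
# while a handful of stragglers take log2(n) rounds
stragglers = int(math.log2(n))
times = [3] * (n - stragglers) + [int(math.log2(n))] * stragglers

worst_case   = max(times)          # 20 rounds
node_average = sum(times) / n      # about 3.0003 rounds
print(worst_case, round(node_average, 4))
```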