We show that interaction in any zero-knowledge proof can be replaced by sharing a common, short, random string. We use this result to construct the first public-key cryptosystem secure against chosen ciphertext attack.
{"title":"Non-interactive zero-knowledge and its applications","authors":"M. Blum, Paul Feldman, S. Micali","doi":"10.1145/62212.62222","DOIUrl":"https://doi.org/10.1145/62212.62222","url":null,"abstract":"We show that interaction in <italic>any</italic> zero-knowledge proof can be replaced by sharing a common, short, random string. We use this result to construct the <italic>first</italic> public-key cryptosystem secure against chosen ciphertext attack.","PeriodicalId":191270,"journal":{"name":"Symposium on the Theory of Computing","volume":"295 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134273982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quite complex cryptographic machinery has been developed based on the assumption that one-way functions exist, yet we know of only a few possible such candidates. It is important at this time to find alternative foundations for the design of secure cryptography. We introduce a new model of generalized interactive proofs as a step in this direction. We prove that all NP languages have perfect zero-knowledge proof-systems in this model, without making any intractability assumptions.

The generalized interactive-proof model consists of two computationally unbounded and untrusted provers, rather than one, who jointly agree on a strategy to convince the verifier of the truth of an assertion and then engage in a polynomial number of message exchanges with the verifier in their attempt to do so. To believe the validity of the assertion, the verifier must make sure that the two provers cannot communicate with each other during the course of the proof process. Thus, the complexity assumptions made in previous work have been traded for a physical separation between the two provers.

We call this new model the multi-prover interactive-proof model, and examine its properties and applicability to cryptography.
{"title":"Multi-prover interactive proofs: how to remove intractability assumptions","authors":"M. Ben-Or, S. Goldwasser, J. Kilian, A. Wigderson","doi":"10.1145/62212.62223","DOIUrl":"https://doi.org/10.1145/62212.62223","url":null,"abstract":"Quite complex cryptographic machinery has been developed based on the assumption that one-way functions exist, yet we know of only a few possible such candidates. It is important at this time to find alternative foundations to the design of secure cryptography. We introduce a new model of generalized interactive proofs as a step in this direction. We prove that all NP languages have perfect zero-knowledge proof-systems in this model, without making any intractability assumptions.\u0000The generalized interactive-proof model consists of two computationally unbounded and untrusted provers, rather than one, who jointly agree on a strategy to convince the verifier of the truth of an assertion and then engage in a polynomial number of message exchanges with the verifier in their attempt to do so. To believe the validity of the assertion, the verifier must make sure that the two provers can not communicate with each other during the course of the proof process. Thus, the complexity assumptions made in previous work, have been traded for a physical separation between the two provers.\u0000We call this new model the multi-prover interactive-proof model, and examine its properties and applicability to cryptography.","PeriodicalId":191270,"journal":{"name":"Symposium on the Theory of Computing","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127168462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A parallel computing system becomes increasingly prone to failure as the number of processing elements in it increases. In this paper, we describe a completely general strategy that takes an arbitrary step of an ideal CRCW PRAM and automatically translates it to run efficiently and robustly on a PRAM in which processors are prone to failure. The strategy relies on efficient robust algorithms for solving a core problem, the Certified Write-All Problem. This problem characterizes the core of robustness because, as we show, its complexity is equal to that of any general strategy for realizing robustness in the model. We analyze the expected parallel time and work of various algorithms for solving this problem. Our results are a non-trivial generalization of Brent's Lemma: we consider the case where the number of available processors decreases dynamically over time, whereas Brent's Lemma is only applicable in the case where the processor availability pattern is static.
{"title":"Efficient robust parallel computations","authors":"Z. Kedem, K. Palem, P. Spirakis","doi":"10.1145/100216.100231","DOIUrl":"https://doi.org/10.1145/100216.100231","url":null,"abstract":"A parallel computing system becomes increasingly prone to failure as the number of processing elements in it increases. In this paper, we describe a completely general strategy that takes an arbitrary step of an ideal CRCW PRAM and automatically translates it to run efficiently and robustly on a PRAM in which processors are prone to failure. The strategy relies on efficient robust algorithms for solving a core problem, the Certified Write-All Problem. This problem characterizes the core of robustness, because , as we show, its complexity is equal to that of any general strategy for realizing robustness in the model. We analyze the expected parallel time and work of various algorithms for solving this problem. Our results are a non-trivial generalization of Brent's Permission to copy without fee all or part of this material is granted provided that the copies are not made or distn'buted for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. Lemma. We consider the case where the number of the available processors decreases dynamically over time, whereas Brent's Lemma is only applicable in the case where the processor availability pattern is static.","PeriodicalId":191270,"journal":{"name":"Symposium on the Theory of Computing","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132298976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The following problem is considered: given a linked list of length n, compute the distance of each element of the linked list from the end of the list. The problem has two standard deterministic algorithms: a linear-time serial algorithm, and an O((n log n)/p + log n) time parallel algorithm using p processors. A known conjecture states that it is impossible to design an O(log n) time deterministic parallel algorithm that uses only n/log n processors.

We present three randomized parallel algorithms for the problem. One of these algorithms runs almost surely in O(n/p + log n log* n) time using p processors on an exclusive-read exclusive-write parallel RAM.
{"title":"Randomized speed-ups in parallel computation","authors":"U. Vishkin","doi":"10.1145/800057.808686","DOIUrl":"https://doi.org/10.1145/800057.808686","url":null,"abstract":"The following problem is considered: given a linked list of length n, compute the distance of each element of the linked list from the end of the list. The problem has two standard deterministic algorithms: a linear time serial algorithm, and an O((nlog n)/p + log n) time parallel algorithm using p processors. A known conjecture states that it is impossible to design an O(log n) time deterministic parallel algorithm that uses only n/log n processors.\u0000 We present three randomized parallel algorithms for the problem. One of these algorithms runs almost-surely in time of O(n/p + log nlog*n) using p processors on an exclusive-read exclusive-write parallel RAM.","PeriodicalId":191270,"journal":{"name":"Symposium on the Theory of Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124288842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider several basic problems of algebraic topology, with connections to combinatorial and geometric questions, from the point of view of computational complexity.

The extension problem asks, given topological spaces X, Y, a subspace A ⊆ X, and a (continuous) map f: A -> Y, whether f can be extended to a map X -> Y. For computational purposes, we assume that X and Y are represented as finite simplicial complexes, A is a subcomplex of X, and f is given as a simplicial map. In this generality the problem is undecidable, as follows from Novikov's result from the 1950s on the uncomputability of the fundamental group π1(Y). We thus study the problem under the assumption that, for some k ≥ 2, Y is (k-1)-connected; informally, this means that Y has "no holes up to dimension k-1", i.e., the first k-1 homotopy groups of Y vanish (a basic example of such a Y is the sphere S^k).

We prove that, on the one hand, this problem is still undecidable for dim X = 2k. On the other hand, for every fixed k ≥ 2, we obtain an algorithm that solves the extension problem in polynomial time assuming that Y is (k-1)-connected and dim X ≤ 2k-1. For dim X ≤ 2k-2, the algorithm also provides a classification of all extensions up to homotopy (continuous deformation). This relies on results of our SODA 2012 paper, and the main new ingredient is a machinery of objects with polynomial-time homology, which is a polynomial-time analog of objects with effective homology developed earlier by Sergeraert et al.

We also consider the computation of the higher homotopy groups πk(Y), k ≥ 2, for a 1-connected Y. Their computability was established by Brown in 1957; we show that πk(Y) can be computed in polynomial time for every fixed k ≥ 2. On the other hand, Anick proved in 1989 that computing πk(Y) is #P-hard if k is a part of the input, where Y is a cell complex with a certain rather compact encoding. We strengthen his result to #P-hardness for Y given as a simplicial complex.
{"title":"Extending continuous maps: polynomiality and undecidability","authors":"M. Čadek, Marek Krcál, J. Matoušek, L. Vokrínek, Uli Wagner","doi":"10.1145/2488608.2488683","DOIUrl":"https://doi.org/10.1145/2488608.2488683","url":null,"abstract":"We consider several basic problems of algebraic topology, with connections to combinatorial and geometric questions, from the point of view of computational complexity.\u0000 The extension problem asks, given topological spaces X,Y, a subspace A ⊆ X, and a (continuous) map f:A -> Y, whether f can be extended to a map X -> Y. For computational purposes, we assume that X and Y are represented as finite simplicial complexes, A is a subcomplex of X, and f is given as a simplicial map. In this generality the problem is undecidable, as follows from Novikov's result from the 1950s on uncomputability of the fundamental group π1(Y). We thus study the problem under the assumption that, for some k ≥ 2, Y is (k-1)-connected; informally, this means that Y has \"no holes up to dimension k-1\" i.e., the first k-1 homotopy groups of Y vanish (a basic example of such a Y is the sphere Sk).\u0000 We prove that, on the one hand, this problem is still undecidable for dim X=2k. On the other hand, for every fixed k ≥ 2, we obtain an algorithm that solves the extension problem in polynomial time assuming Y (k-1)-connected and dim X ≤ 2k-1$. For dim X ≤ 2k-2, the algorithm also provides a classification of all extensions up to homotopy (continuous deformation). This relies on results of our SODA 2012 paper, and the main new ingredient is a machinery of objects with polynomial-time homology, which is a polynomial-time analog of objects with effective homology developed earlier by Sergeraert et al.\u0000 We also consider the computation of the higher homotopy groups πk(Y)$, k ≥ 2, for a 1-connected Y. Their computability was established by Brown in 1957; we show that πk(Y) can be computed in polynomial time for every fixed k ≥ 2. On the other hand, Anick proved in 1989 that computing πk(Y) is #P-hard if k is a part of input, where Y is a cell complex with certain rather compact encoding. We strengthen his result to #P-hardness for Y given as a simplicial complex.","PeriodicalId":191270,"journal":{"name":"Symposium on the Theory of Computing","volume":"132 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130892186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present size-space trade-offs for the polynomial calculus (PC) and polynomial calculus resolution (PCR) proof systems. These are the first true size-space trade-offs in any algebraic proof system, showing that size and space cannot be simultaneously optimized in these models. We achieve this by extending essentially all known size-space trade-offs for resolution to PC and PCR. As such, our results cover space complexity from constant all the way up to exponential and yield mostly superpolynomial or even exponential size blow-ups. Since the upper bounds in our trade-offs hold for resolution, our work shows that there are formulas for which adding algebraic reasoning on top of resolution does not improve the trade-off properties in any significant way. As byproducts of our analysis, we also obtain trade-offs between space and degree in PC and PCR exactly matching analogous results for space versus width in resolution, and strengthen the resolution trade-offs in [Beame, Beck, and Impagliazzo '12] to apply also to k-CNF formulas.
{"title":"Some trade-off results for polynomial calculus: extended abstract","authors":"Chris Beck, Jakob Nordström, Bangsheng Tang","doi":"10.1145/2488608.2488711","DOIUrl":"https://doi.org/10.1145/2488608.2488711","url":null,"abstract":"We present size-space trade-offs for the polynomial calculus (PC) and polynomial calculus resolution (PCR) proof systems. These are the first true size-space trade-offs in any algebraic proof system, showing that size and space cannot be simultaneously optimized in these models. We achieve this by extending essentially all known size-space trade-offs for resolution to PC and PCR. As such, our results cover space complexity from constant all the way up to exponential and yield mostly superpolynomial or even exponential size blow-ups. Since the upper bounds in our trade-offs hold for resolution, our work shows that there are formulas for which adding algebraic reasoning on top of resolution does not improve the trade-off properties in any significant way.\u0000 As byproducts of our analysis, we also obtain trade-offs between space and degree in PC and PCR exactly matching analogous results for space versus width in resolution, and strengthen the resolution trade-offs in [Beame, Beck, and Impagliazzo '12] to apply also to k-CNF formulas.","PeriodicalId":191270,"journal":{"name":"Symposium on the Theory of Computing","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128916413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present improved polynomial time algorithms for the max flow problem defined on sparse networks with n nodes and m arcs. We show how to solve the max flow problem in O(nm + m^{31/16} log^2 n) time. In the case that m = O(n^{1.06}), this improves upon the best previous algorithm due to King, Rao, and Tarjan, who solved the max flow problem in O(nm log_{m/(n log n)} n) time. This establishes that the max flow problem is solvable in O(nm) time for all values of n and m. In the case that m = O(n), we improve the running time to O(n^2 / log n).
{"title":"Max flows in O(nm) time, or better","authors":"J. Orlin","doi":"10.1145/2488608.2488705","DOIUrl":"https://doi.org/10.1145/2488608.2488705","url":null,"abstract":"In this paper, we present improved polynomial time algorithms for the max flow problem defined on sparse networks with n nodes and m arcs. We show how to solve the max flow problem in O(nm + m31/16 log2 n) time. In the case that m = O(n1.06), this improves upon the best previous algorithm due to King, Rao, and Tarjan, who solved the max flow problem in O(nm logm/(n log n)n) time. This establishes that the max flow problem is solvable in O(nm) time for all values of n and m. In the case that m = O(n), we improve the running time to O(n2/ log n).","PeriodicalId":191270,"journal":{"name":"Symposium on the Theory of Computing","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123993243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We provide a general framework for getting linear-time constant-factor approximations (and in many cases FPTASs) to a large number of well-known and well-studied problems in Computational Geometry, such as k-center clustering and furthest nearest neighbor. The new approach is robust to variations in the input problem, and yet it is simple, elegant and practical. In particular, many of these well-studied problems, which fit easily into our framework, either previously had no linear-time approximation algorithm or required rather involved algorithms and analysis. A short list of the problems we consider includes furthest nearest neighbor, k-center clustering, smallest disk enclosing k points, k-th largest distance, k-th smallest m-nearest neighbor distance, k-th heaviest edge in the MST and other spanning-forest-type problems, problems involving upward closed set systems, and more. Finally, we show how to extend our framework such that the linear running time bound holds with high probability.
{"title":"Net and prune: a linear time algorithm for euclidean distance problems","authors":"Sariel Har-Peled, Benjamin Raichel","doi":"10.1145/2488608.2488684","DOIUrl":"https://doi.org/10.1145/2488608.2488684","url":null,"abstract":"We provide a general framework for getting linear time constant factor approximations (and in many cases FPTAS's) to a copious amount of well known and well studied problems in Computational Geometry, such as k-center clustering and furthest nearest neighbor. The new approach is robust to variations in the input problem, and yet it is simple, elegant and practical. In particular, many of these well studied problems which fit easily into our framework, either previously had no linear time approximation algorithm, or required rather involved algorithms and analysis. A short list of the problems we consider include furthest nearest neighbor, k-center clustering, smallest disk enclosing k points, k-th largest distance, k-th smallest m-nearest neighbor distance, k-th heaviest edge in the MST and other spanning forest type problems, problems involving upward closed set systems, and more. Finally, we show how to extend our framework such that the linear running time bound holds with high probability.","PeriodicalId":191270,"journal":{"name":"Symposium on the Theory of Computing","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130574707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The diameter and the radius of a graph are fundamental topological parameters that have many important practical applications in real-world networks. The fastest combinatorial algorithm for both parameters works by solving the all-pairs shortest paths problem (APSP) and has a running time of Õ(mn) in m-edge, n-node graphs. In a seminal paper, Aingworth, Chekuri, Indyk and Motwani [SODA'96 and SICOMP'99] presented an algorithm that computes in Õ(m√n + n^2) time an estimate D̂ for the diameter D such that ⌊2D/3⌋ ≤ D̂ ≤ D. Their paper spawned a long line of research on approximate APSP. For the specific problem of diameter approximation, however, no improvement has been achieved in over 15 years.

Our paper presents the first improvement over the diameter approximation algorithm of Aingworth et al., producing an algorithm with the same estimate but with an expected running time of Õ(m√n). We thus show that for all sparse enough graphs, the diameter can be 3/2-approximated in o(n^2) time. Our algorithm is obtained using a surprisingly simple method of neighborhood depth estimation that is strong enough to also approximate, in the same running time, the radius and, more generally, all of the eccentricities, i.e. for every node the distance to its furthest node.

We also provide strong evidence that our diameter approximation result may be hard to improve. We show that if for some constant ε > 0 there is an O(m^{2-ε}) time (3/2-ε)-approximation algorithm for the diameter of undirected unweighted graphs, then there is an O*((2-δ)^n) time algorithm for CNF-SAT on n variables for some constant δ > 0, and the strong exponential time hypothesis of [Impagliazzo, Paturi, Zane JCSS'01] is false.

Motivated by this negative result, we give several improved diameter approximation algorithms for special cases. We show for instance that for unweighted graphs of constant diameter D not divisible by 3, there is an O(m^{2-ε}) time algorithm that gives a (3/2-ε)-approximation for constant ε > 0. This is interesting since the diameter approximation problem is hardest to solve for small D.
{"title":"Fast approximation algorithms for the diameter and radius of sparse graphs","authors":"L. Roditty, V. V. Williams","doi":"10.1145/2488608.2488673","DOIUrl":"https://doi.org/10.1145/2488608.2488673","url":null,"abstract":"The diameter and the radius of a graph are fundamental topological parameters that have many important practical applications in real world networks. The fastest combinatorial algorithm for both parameters works by solving the all-pairs shortest paths problem (APSP) and has a running time of ~O(mn) in m-edge, n-node graphs. In a seminal paper, Aingworth, Chekuri, Indyk and Motwani [SODA'96 and SICOMP'99] presented an algorithm that computes in ~O(m√ n + n2) time an estimate D for the diameter D, such that ⌊ 2/3 D ⌋ ≤ ^D ≤ D. Their paper spawned a long line of research on approximate APSP. For the specific problem of diameter approximation, however, no improvement has been achieved in over 15 years.\u0000 Our paper presents the first improvement over the diameter approximation algorithm of Aingworth et. al, producing an algorithm with the same estimate but with an expected running time of ~O(m√ n). We thus show that for all sparse enough graphs, the diameter can be 3/2-approximated in o(n2) time. Our algorithm is obtained using a surprisingly simple method of neighborhood depth estimation that is strong enough to also approximate, in the same running time, the radius and more generally, all of the eccentricities, i.e. for every node the distance to its furthest node.\u0000 We also provide strong evidence that our diameter approximation result may be hard to improve. We show that if for some constant ε>0 there is an O(m2-ε) time (3/2-ε)-approximation algorithm for the diameter of undirected unweighted graphs, then there is an O*( (2-δ)n) time algorithm for CNF-SAT on n variables for constant δ>0, and the strong exponential time hypothesis of [Impagliazzo, Paturi, Zane JCSS'01] is false.\u0000 Motivated by this negative result, we give several improved diameter approximation algorithms for special cases. We show for instance that for unweighted graphs of constant diameter D not divisible by 3, there is an O(m2-ε) time algorithm that gives a (3/2-ε) approximation for constant ε>0. This is interesting since the diameter approximation problem is hardest to solve for small D.","PeriodicalId":191270,"journal":{"name":"Symposium on the Theory of Computing","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116235393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present game-theoretic models of opinion formation in social networks where opinions themselves co-evolve with friendships. In these models, nodes form their opinions by maximizing agreement with friends weighted by the strength of the relationships, which in turn depend on the difference in opinion with the respective friends. We define a social cost of this process by generalizing recent work of Bindel et al., FOCS 2011. We tightly bound the price of anarchy of the resulting dynamics via local smoothness arguments, and characterize it as a function of how much nodes value their own (intrinsic) opinion, as well as how strongly they weigh links to friends with whom they agree more.
{"title":"Coevolutionary opinion formation games","authors":"Kshipra Bhawalkar, Sreenivas Gollapudi, Kamesh Munagala","doi":"10.1145/2488608.2488615","DOIUrl":"https://doi.org/10.1145/2488608.2488615","url":null,"abstract":"We present game-theoretic models of opinion formation in social networks where opinions themselves co-evolve with friendships. In these models, nodes form their opinions by maximizing agreements with friends weighted by the strength of the relationships, which in turn depend on difference in opinion with the respective friends. We define a social cost of this process by generalizing recent work of Bindel et al., FOCS 2011. We tightly bound the price of anarchy of the resulting dynamics via local smoothness arguments, and characterize it as a function of how much nodes value their own (intrinsic) opinion, as well as how strongly they weigh links to friends with whom they agree more.","PeriodicalId":191270,"journal":{"name":"Symposium on the Theory of Computing","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127392074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}