
Symposium on the Theory of Computing: latest publications

Non-interactive zero-knowledge and its applications
Pub Date : 2019-10-09 DOI: 10.1145/62212.62222
M. Blum, Paul Feldman, S. Micali
We show that interaction in any zero-knowledge proof can be replaced by sharing a common, short, random string. We use this result to construct the first public-key cryptosystem secure against chosen ciphertext attack.
Citations: 809
Multi-prover interactive proofs: how to remove intractability assumptions
Pub Date : 2019-10-09 DOI: 10.1145/62212.62223
M. Ben-Or, S. Goldwasser, J. Kilian, A. Wigderson
Quite complex cryptographic machinery has been developed based on the assumption that one-way functions exist, yet we know of only a few possible candidates. It is important at this time to find alternative foundations for the design of secure cryptography. We introduce a new model of generalized interactive proofs as a step in this direction. We prove that all NP languages have perfect zero-knowledge proof-systems in this model, without making any intractability assumptions. The generalized interactive-proof model consists of two computationally unbounded and untrusted provers, rather than one, who jointly agree on a strategy to convince the verifier of the truth of an assertion and then engage in a polynomial number of message exchanges with the verifier in their attempt to do so. To believe the validity of the assertion, the verifier must make sure that the two provers cannot communicate with each other during the course of the proof process. Thus, the complexity assumptions made in previous work have been traded for a physical separation between the two provers. We call this new model the multi-prover interactive-proof model, and examine its properties and applicability to cryptography.
Citations: 526
Efficient robust parallel computations
Pub Date : 2018-02-28 DOI: 10.1145/100216.100231
Z. Kedem, K. Palem, P. Spirakis
A parallel computing system becomes increasingly prone to failure as the number of processing elements in it increases. In this paper, we describe a completely general strategy that takes an arbitrary step of an ideal CRCW PRAM and automatically translates it to run efficiently and robustly on a PRAM in which processors are prone to failure. The strategy relies on efficient robust algorithms for solving a core problem, the Certified Write-All Problem. This problem characterizes the core of robustness because, as we show, its complexity is equal to that of any general strategy for realizing robustness in the model. We analyze the expected parallel time and work of various algorithms for solving this problem. Our results are a non-trivial generalization of Brent's Lemma. We consider the case where the number of available processors decreases dynamically over time, whereas Brent's Lemma is only applicable in the case where the processor availability pattern is static.
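The Certified Write-All problem at the heart of this abstract can be made concrete with a toy simulation. This sketches a naive rescanning strategy, not one of the paper's work-efficient algorithms, and every name and parameter here is illustrative:

```python
import random

# Certified Write-All: p processors must set every cell of an n-cell array
# to 1 and certify completion, even though processors may fail at any time.
# Naive robust strategy: each survivor repeatedly rescans for unwritten
# cells. (Illustration only; the paper studies far more work-efficient
# algorithms, and fail_prob/seed are assumed parameters.)
def certified_write_all(n, p, fail_prob=0.2, seed=0):
    rng = random.Random(seed)
    cells = [0] * n
    alive = list(range(p))
    while any(c == 0 for c in cells) and alive:
        # each surviving processor claims one unwritten cell per round
        todo = [i for i, c in enumerate(cells) if c == 0]
        for k, proc in enumerate(alive):
            if k < len(todo):
                cells[todo[k]] = 1
        # random failures between rounds (we keep at least one survivor)
        alive = [proc for proc in alive if rng.random() > fail_prob] or alive[:1]
    return all(c == 1 for c in cells)  # certification: every cell written

print(certified_write_all(n=32, p=8))  # True
```

Because at least one processor survives each round and writes a cell, the loop always terminates with every cell written; the interesting question the paper answers is how much total work such robustness must cost.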
Citations: 132
Randomized speed-ups in parallel computation
Pub Date : 2015-08-23 DOI: 10.1145/800057.808686
U. Vishkin
The following problem is considered: given a linked list of length n, compute the distance of each element of the linked list from the end of the list. The problem has two standard deterministic algorithms: a linear time serial algorithm, and an O((n log n)/p + log n) time parallel algorithm using p processors. A known conjecture states that it is impossible to design an O(log n) time deterministic parallel algorithm that uses only n/log n processors. We present three randomized parallel algorithms for the problem. One of these algorithms runs almost surely in O(n/p + log n log* n) time using p processors on an exclusive-read exclusive-write parallel RAM.
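The serial linear-time baseline mentioned above is easy to state in code. This is a minimal sketch of the problem, not Vishkin's randomized parallel algorithm, and the array-of-successors representation is an assumption:

```python
# Serial O(n) list ranking: distance of each node from the end of the list.
# (Sketch of the problem only; Vishkin's contribution is the randomized
# *parallel* solution, which this serial walk does not capture.)
def list_ranking(next_ptr, head):
    """next_ptr[i] is the successor of node i, or None at the tail."""
    # Walk the list once to record the order of nodes.
    order = []
    node = head
    while node is not None:
        order.append(node)
        node = next_ptr[node]
    n = len(order)
    # The k-th node from the front is (n-1-k) steps from the end.
    return {node: n - 1 - k for k, node in enumerate(order)}

# Example: list 2 -> 0 -> 1 (head 2, tail 1).
print(list_ranking({2: 0, 0: 1, 1: None}, 2))  # {2: 2, 0: 1, 1: 0}
```

The sequential walk is inherently serial, which is exactly why sublogarithmic-work parallel solutions for this problem are nontrivial.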
Citations: 70
Extending continuous maps: polynomiality and undecidability
Pub Date : 2013-06-01 DOI: 10.1145/2488608.2488683
M. Čadek, Marek Krcál, J. Matoušek, L. Vokrínek, Uli Wagner
We consider several basic problems of algebraic topology, with connections to combinatorial and geometric questions, from the point of view of computational complexity. The extension problem asks, given topological spaces X, Y, a subspace A ⊆ X, and a (continuous) map f: A -> Y, whether f can be extended to a map X -> Y. For computational purposes, we assume that X and Y are represented as finite simplicial complexes, A is a subcomplex of X, and f is given as a simplicial map. In this generality the problem is undecidable, as follows from Novikov's result from the 1950s on uncomputability of the fundamental group π1(Y). We thus study the problem under the assumption that, for some k ≥ 2, Y is (k-1)-connected; informally, this means that Y has "no holes up to dimension k-1", i.e., the first k-1 homotopy groups of Y vanish (a basic example of such a Y is the sphere S^k). We prove that, on the one hand, this problem is still undecidable for dim X = 2k. On the other hand, for every fixed k ≥ 2, we obtain an algorithm that solves the extension problem in polynomial time assuming Y is (k-1)-connected and dim X ≤ 2k-1. For dim X ≤ 2k-2, the algorithm also provides a classification of all extensions up to homotopy (continuous deformation). This relies on results of our SODA 2012 paper, and the main new ingredient is a machinery of objects with polynomial-time homology, which is a polynomial-time analog of objects with effective homology developed earlier by Sergeraert et al. We also consider the computation of the higher homotopy groups πk(Y), k ≥ 2, for a 1-connected Y. Their computability was established by Brown in 1957; we show that πk(Y) can be computed in polynomial time for every fixed k ≥ 2. On the other hand, Anick proved in 1989 that computing πk(Y) is #P-hard if k is part of the input, where Y is a cell complex with a certain rather compact encoding. We strengthen his result to #P-hardness for Y given as a simplicial complex.
Citations: 3
Some trade-off results for polynomial calculus: extended abstract
Pub Date : 2013-06-01 DOI: 10.1145/2488608.2488711
Chris Beck, Jakob Nordström, Bangsheng Tang
We present size-space trade-offs for the polynomial calculus (PC) and polynomial calculus resolution (PCR) proof systems. These are the first true size-space trade-offs in any algebraic proof system, showing that size and space cannot be simultaneously optimized in these models. We achieve this by extending essentially all known size-space trade-offs for resolution to PC and PCR. As such, our results cover space complexity from constant all the way up to exponential and yield mostly superpolynomial or even exponential size blow-ups. Since the upper bounds in our trade-offs hold for resolution, our work shows that there are formulas for which adding algebraic reasoning on top of resolution does not improve the trade-off properties in any significant way. As byproducts of our analysis, we also obtain trade-offs between space and degree in PC and PCR exactly matching analogous results for space versus width in resolution, and strengthen the resolution trade-offs in [Beame, Beck, and Impagliazzo '12] to apply also to k-CNF formulas.
Citations: 40
Max flows in O(nm) time, or better
Pub Date : 2013-06-01 DOI: 10.1145/2488608.2488705
J. Orlin
In this paper, we present improved polynomial time algorithms for the max flow problem defined on sparse networks with n nodes and m arcs. We show how to solve the max flow problem in O(nm + m^{31/16} log^2 n) time. In the case that m = O(n^{1.06}), this improves upon the best previous algorithm, due to King, Rao, and Tarjan, who solved the max flow problem in O(nm log_{m/(n log n)} n) time. This establishes that the max flow problem is solvable in O(nm) time for all values of n and m. In the case that m = O(n), we improve the running time to O(n^2 / log n).
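For contrast with these bounds, a short BFS-based augmenting-path routine makes the max flow problem itself concrete. This is Edmonds-Karp, which runs in O(nm^2) time, not Orlin's algorithm; the dict-of-dicts network encoding is an assumption:

```python
from collections import deque

# Edmonds-Karp max flow: repeatedly augment along a shortest path in the
# residual network. (Illustration of the problem; not Orlin's algorithm.)
def max_flow(cap, s, t):
    """cap: dict-of-dicts of arc capacities. Returns the max s-t flow value."""
    # residual capacities, with reverse arcs initialized to 0
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # recover the path, find its bottleneck, and augment
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# Example: two disjoint s-t paths of capacity 3 and 2.
g = {'s': {'a': 3, 'b': 2}, 'a': {'t': 3}, 'b': {'t': 2}, 't': {}}
print(max_flow(g, 's', 't'))  # 5
```

On a sparse graph this simple method is far from the O(nm) bound above, which is the point of the paper.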
Citations: 406
Net and prune: a linear time algorithm for euclidean distance problems
Pub Date : 2013-06-01 DOI: 10.1145/2488608.2488684
Sariel Har-Peled, Benjamin Raichel
We provide a general framework for getting linear time constant factor approximations (and in many cases FPTAS's) to a copious amount of well known and well studied problems in Computational Geometry, such as k-center clustering and furthest nearest neighbor. The new approach is robust to variations in the input problem, and yet it is simple, elegant and practical. In particular, many of these well studied problems which fit easily into our framework, either previously had no linear time approximation algorithm, or required rather involved algorithms and analysis. A short list of the problems we consider include furthest nearest neighbor, k-center clustering, smallest disk enclosing k points, k-th largest distance, k-th smallest m-nearest neighbor distance, k-th heaviest edge in the MST and other spanning forest type problems, problems involving upward closed set systems, and more. Finally, we show how to extend our framework such that the linear running time bound holds with high probability.
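One of the listed problems, k-center clustering, has a classical greedy 2-approximation (Gonzalez's farthest-point traversal) that illustrates what is being approximated. This is not the paper's net-and-prune framework, just a well-known baseline:

```python
import math

# Gonzalez's farthest-point traversal: the classical 2-approximation for
# k-center. (Baseline illustration; the paper's linear-time net-and-prune
# framework is a different technique.)
def k_center(points, k):
    """Greedily pick k centers; returns (centers, radius of the clustering)."""
    centers = [points[0]]            # arbitrary first center
    dist = [math.dist(p, centers[0]) for p in points]
    for _ in range(k - 1):
        i = max(range(len(points)), key=lambda j: dist[j])
        centers.append(points[i])    # farthest point becomes a new center
        dist = [min(d, math.dist(p, points[i])) for d, p in zip(dist, points)]
    return centers, max(dist)

pts = [(0, 0), (0, 1), (10, 0), (10, 1)]
centers, r = k_center(pts, 2)
print(centers, r)  # one center per tight pair; radius 1.0
```

Each iteration costs O(nk) distance evaluations, so even this simple baseline is superlinear for large k, which is part of what makes linear-time constant-factor approximations interesting.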
Citations: 7
Fast approximation algorithms for the diameter and radius of sparse graphs
Pub Date : 2013-06-01 DOI: 10.1145/2488608.2488673
L. Roditty, V. V. Williams
The diameter and the radius of a graph are fundamental topological parameters that have many important practical applications in real world networks. The fastest combinatorial algorithm for both parameters works by solving the all-pairs shortest paths problem (APSP) and has a running time of ~O(mn) in m-edge, n-node graphs. In a seminal paper, Aingworth, Chekuri, Indyk and Motwani [SODA'96 and SICOMP'99] presented an algorithm that computes in ~O(m√n + n^2) time an estimate D̂ for the diameter D, such that ⌊2/3 D⌋ ≤ D̂ ≤ D. Their paper spawned a long line of research on approximate APSP. For the specific problem of diameter approximation, however, no improvement has been achieved in over 15 years. Our paper presents the first improvement over the diameter approximation algorithm of Aingworth et al., producing an algorithm with the same estimate but with an expected running time of ~O(m√n). We thus show that for all sparse enough graphs, the diameter can be 3/2-approximated in o(n^2) time. Our algorithm is obtained using a surprisingly simple method of neighborhood depth estimation that is strong enough to also approximate, in the same running time, the radius and, more generally, all of the eccentricities, i.e., for every node the distance to its furthest node. We also provide strong evidence that our diameter approximation result may be hard to improve. We show that if for some constant ε > 0 there is an O(m^{2-ε}) time (3/2-ε)-approximation algorithm for the diameter of undirected unweighted graphs, then there is an O*((2-δ)^n) time algorithm for CNF-SAT on n variables for some constant δ > 0, and the strong exponential time hypothesis of [Impagliazzo, Paturi, Zane JCSS'01] is false. Motivated by this negative result, we give several improved diameter approximation algorithms for special cases. We show for instance that for unweighted graphs of constant diameter D not divisible by 3, there is an O(m^{2-ε}) time algorithm that gives a (3/2-ε) approximation for constant ε > 0. This is interesting since the diameter approximation problem is hardest to solve for small D.
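The approximation ratios above sit between a folklore 2-approximation and exact APSP. The 2-approximation comes from a single BFS, since the eccentricity of any node v satisfies ecc(v) ≤ D ≤ 2 ecc(v) in a connected unweighted graph. A minimal sketch of that baseline, not the paper's 3/2-approximation:

```python
from collections import deque

# Single-source BFS eccentricity: a folklore 2-approximation of the
# diameter of a connected unweighted graph, since ecc(v) <= D <= 2*ecc(v).
# (Baseline only; the paper's ~O(m sqrt(n)) algorithm achieves ratio 3/2.)
def bfs_eccentricity(adj, root):
    dist = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

# Path graph 0-1-2-3-4: diameter 4; BFS from the middle underestimates.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(bfs_eccentricity(path, 2))  # 2 (true diameter is 4)
print(bfs_eccentricity(path, 0))  # 4
```

The path-graph example shows the ratio-2 worst case is real: a BFS rooted at the center reports exactly half the true diameter.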
Citation count: 253
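The abstract's Õ(m√n) 3/2-approximation rests on neighborhood depth estimation and is too involved to reproduce here. As a much simpler illustration of BFS-based diameter estimation — not the paper's algorithm — here is a sketch of the folklore "double sweep" heuristic: one BFS finds a far node u, and a second BFS from u returns ecc(u), which on any connected graph lies between D/2 and D. The function names and adjacency-dict representation are illustrative assumptions.

```python
from collections import deque

def bfs_farthest(adj, src):
    """BFS from src over an adjacency dict {node: [neighbors]}.
    Returns (eccentricity of src, a node at that maximum distance)."""
    dist = {src: 0}
    far = src
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                far = v  # BFS discovers nodes in nondecreasing distance order
                q.append(v)
    return dist[far], far

def double_sweep_lower_bound(adj, start=0):
    """Two BFS passes: from `start`, then from the farthest node u found.
    Returns ecc(u), a diameter estimate with D/2 <= ecc(u) <= D on
    connected graphs (exact on trees), in O(m + n) time."""
    _, u = bfs_farthest(adj, start)
    ecc_u, _ = bfs_farthest(adj, u)
    return ecc_u
```

On a 5-node path the sweep recovers the exact diameter 4; on a 6-cycle it returns the exact diameter 3. In general only the D/2 guarantee holds, which is exactly why the ⌊2D/3⌋ guarantee of Aingworth et al. and its faster realization in this paper are nontrivial.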
Coevolutionary opinion formation games
Pub Date : 2013-06-01 DOI: 10.1145/2488608.2488615
Kshipra Bhawalkar, Sreenivas Gollapudi, Kamesh Munagala
We present game-theoretic models of opinion formation in social networks where opinions themselves co-evolve with friendships. In these models, nodes form their opinions by maximizing agreements with friends weighted by the strength of the relationships, which in turn depend on difference in opinion with the respective friends. We define a social cost of this process by generalizing recent work of Bindel et al., FOCS 2011. We tightly bound the price of anarchy of the resulting dynamics via local smoothness arguments, and characterize it as a function of how much nodes value their own (intrinsic) opinion, as well as how strongly they weigh links to friends with whom they agree more.
{"title":"Coevolutionary opinion formation games","authors":"Kshipra Bhawalkar, Sreenivas Gollapudi, Kamesh Munagala","doi":"10.1145/2488608.2488615","DOIUrl":"https://doi.org/10.1145/2488608.2488615","url":null,"abstract":"We present game-theoretic models of opinion formation in social networks where opinions themselves co-evolve with friendships. In these models, nodes form their opinions by maximizing agreements with friends weighted by the strength of the relationships, which in turn depend on difference in opinion with the respective friends. We define a social cost of this process by generalizing recent work of Bindel et al., FOCS 2011. We tightly bound the price of anarchy of the resulting dynamics via local smoothness arguments, and characterize it as a function of how much nodes value their own (intrinsic) opinion, as well as how strongly they weigh links to friends with whom they agree more.","PeriodicalId":191270,"journal":{"name":"Symposium on the Theory of Computing","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127392074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citation count: 73
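The best-response step the abstract describes — each node maximizing weighted agreement with friends — can be illustrated with the fixed-weight quadratic model of Bindel et al. that this paper generalizes. The sketch below holds the friendship weights fixed (the paper's coevolving variant would recompute w_ij from |z_i − z_j| each round); all names and the dict-of-dicts representation are illustrative assumptions.

```python
def best_response_opinions(intrinsic, weights, iters=100):
    """Synchronous best-response dynamics for the fixed-weight quadratic
    opinion-formation cost of Bindel et al.:
        cost_i(z) = (z_i - s_i)^2 + sum_j w_ij * (z_i - z_j)^2
    Each round, node i moves to its cost-minimizing opinion
        z_i = (s_i + sum_j w_ij * z_j) / (1 + sum_j w_ij).
    intrinsic[i] is node i's internal opinion s_i; weights[i][j] is the
    friendship strength w_ij, held fixed across rounds in this sketch."""
    z = dict(intrinsic)
    for _ in range(iters):
        new_z = {}
        for i, s in intrinsic.items():
            nbrs = weights.get(i, {})
            num = s + sum(w * z[j] for j, w in nbrs.items())
            den = 1.0 + sum(nbrs.values())
            new_z[i] = num / den
        z = new_z
    return z
```

With fixed weights the update is a contraction, so the dynamics converge to the unique Nash equilibrium; e.g. two unit-weight friends with intrinsic opinions 0 and 1 settle at 1/3 and 2/3. Letting the weights depend on opinion differences, as in this paper, is what makes the equilibrium analysis and price-of-anarchy bounds harder.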