
Latest publications: Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA)

Tight Bounds for Vertex Connectivity in Dynamic Streams
Pub Date : 2022-11-09 DOI: 10.48550/arXiv.2211.04685
Sepehr Assadi, Vihan Shah
We present a streaming algorithm for the vertex connectivity problem in dynamic streams with a (nearly) optimal space bound: for any $n$-vertex graph $G$ and any integer $k \geq 1$, our algorithm with high probability outputs whether or not $G$ is $k$-vertex-connected in a single pass using $\widetilde{O}(kn)$ space. Our upper bound matches the known $\Omega(kn)$ lower bound for this problem even in insertion-only streams -- which we extend to multi-pass algorithms in this paper -- and closes one of the last remaining gaps in our understanding of dynamic versus insertion-only streams. Our result is obtained via a novel analysis of the previous best dynamic streaming algorithm of Guha, McGregor, and Tench [PODS 2015], who obtained an $\widetilde{O}(k^2 n)$ space algorithm for this problem. This also gives a model-independent algorithm for computing a "certificate" of $k$-vertex-connectivity as a union of $O(k^2 \log n)$ spanning forests, each on a random subset of $O(n/k)$ vertices, which may be of independent interest.
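The certificate construction admits a short illustration. The sketch below is ours, not the paper's streaming implementation; the function names and the `reps` parameter are assumptions, with `reps` standing in for the $O(k^2 \log n)$ repetitions. It unions spanning forests, each computed on a random vertex subset of expected size $n/k$:

```python
import random

def spanning_forest(vertices, edges):
    """Spanning forest of the subgraph induced on `vertices`, via union-find."""
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    vs, forest = set(vertices), []
    for (u, v) in edges:
        if u in vs and v in vs and find(u) != find(v):
            parent[find(u)] = find(v)      # merge components, keep the edge
            forest.append((u, v))
    return forest

def connectivity_certificate(n, edges, k, reps):
    """Union of spanning forests, each on a random subset of ~n/k vertices."""
    cert = set()
    for _ in range(reps):
        subset = [v for v in range(n) if random.random() < 1.0 / k]
        cert.update(spanning_forest(subset, edges))
    return cert
```

The certificate is always a subgraph of the input; the paper's analysis is what shows enough repetitions preserve $k$-vertex-connectivity with high probability.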
Citations: 0
Faster Walsh-Hadamard Transform and Matrix Multiplication over Finite Fields using Lookup Tables
Pub Date : 2022-11-09 DOI: 10.48550/arXiv.2211.04643
Josh Alman
We use lookup tables to design faster algorithms for important algebraic problems over finite fields. These faster algorithms, which only use arithmetic operations and lookup table operations, may help to explain the difficulty of determining the complexities of these important problems. Our results over a constant-sized finite field are as follows. The Walsh-Hadamard transform of a vector of length $N$ can be computed using $O(N \log N / \log \log N)$ bit operations. This generalizes to any transform defined as a Kronecker power of a fixed matrix. By comparison, the Fast Walsh-Hadamard transform (similar to the Fast Fourier transform) uses $O(N \log N)$ arithmetic operations, which is believed to be optimal up to constant factors. Any algebraic algorithm for multiplying two $N \times N$ matrices using $O(N^\omega)$ operations can be converted into an algorithm using $O(N^\omega / (\log N)^{\omega/2 - 1})$ bit operations. For example, Strassen's algorithm can be converted into an algorithm using $O(N^{2.81} / (\log N)^{0.4})$ bit operations. It remains an open problem with practical implications to determine the smallest constant $c$ such that Strassen's algorithm can be implemented to use $c \cdot N^{2.81} + o(N^{2.81})$ arithmetic operations; using a lookup table allows one to save a super-constant factor in bit operations.
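For orientation, here is the standard $O(N \log N)$ Fast Walsh-Hadamard transform that the paper's lookup-table algorithm improves upon in the bit-operation model; a minimal sketch, assuming an integer vector of power-of-two length:

```python
def fwht(a):
    """In-place fast Walsh-Hadamard transform of a list whose length is a
    power of two, using O(N log N) additions and subtractions."""
    n, h = len(a), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                # butterfly for the Kronecker power of [[1, 1], [1, -1]]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a
```

The paper's $O(N \log N / \log \log N)$-bit-operation version additionally packs blocks of coordinates into indices of a precomputed lookup table, which this plain version does not attempt.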
Citations: 0
Fully-dynamic-to-incremental reductions with known deletion order (e.g. sliding window)
Pub Date : 2022-11-09 DOI: 10.48550/arXiv.2211.05178
Binghui Peng, A. Rubinstein
Dynamic algorithms come in three main flavors: $\mathit{incremental}$ (insertions-only), $\mathit{decremental}$ (deletions-only), or $\mathit{fully}$ $\mathit{dynamic}$ (both insertions and deletions). Fully dynamic is the holy grail of dynamic algorithm design; it is obviously more general than the other two, but is it strictly harder? Several works managed to reduce fully dynamic to the incremental or decremental models by taking advantage of either specific structure of the incremental/decremental algorithms (e.g. [HK99, HLT01, BKS12, ADKKP16, BS80, OL81, OvL81]) or specific order of insertions/deletions (e.g. [AW14, HKNS15, KPP16]). Our goal in this work is to get a black-box fully-to-incremental reduction that is as general as possible. We find that the following conditions are necessary: $\bullet$ The incremental algorithm must have a worst-case (rather than amortized) running time guarantee. $\bullet$ The reduction must work in what we call the $\mathit{deletions}$-$\mathit{look}$-$\mathit{ahead}$ $\mathit{model}$, where the order of deletions among current elements is known in advance. A notable practical example is the "sliding window" (FIFO) order of updates. Under those conditions, we design: $\bullet$ A simple, practical, amortized-fully-dynamic to worst-case-incremental reduction with a $\log(T)$-factor overhead on the running time, where $T$ is the total number of updates. $\bullet$ A theoretical worst-case-fully-dynamic to worst-case-incremental reduction with a $\mathsf{polylog}(T)$-factor overhead on the running time.
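A familiar special case of exploiting known deletion order is the folklore two-stack trick for sliding windows: a FIFO-deletion structure built from two insertion-only stacks. The sketch below is our illustration of why deletions-look-ahead helps, not the paper's black-box reduction; it maintains the window minimum with amortized constant incremental work:

```python
class SlidingWindowMin:
    """Sliding-window minimum from two insertion-only stacks, each storing
    (value, running_min) pairs.  FIFO deletions are simulated by flushing
    the insertion stack into the deletion stack when the latter runs dry."""
    def __init__(self):
        self.front = []  # popped on delete_oldest
        self.back = []   # receives insertions
    def insert(self, x):
        m = x if not self.back else min(x, self.back[-1][1])
        self.back.append((x, m))
    def delete_oldest(self):
        if not self.front:
            # reverse the back stack, rebuilding running minima as we go
            while self.back:
                x, _ = self.back.pop()
                m = x if not self.front else min(x, self.front[-1][1])
                self.front.append((x, m))
        self.front.pop()
    def minimum(self):
        cands = [s[-1][1] for s in (self.front, self.back) if s]
        return min(cands)
```

Each element is moved at most once from `back` to `front`, which is exactly the kind of amortized accounting the paper's worst-case reduction has to avoid.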
Citations: 2
Sampling an Edge in Sublinear Time Exactly and Optimally
Pub Date : 2022-11-09 DOI: 10.48550/arXiv.2211.04981
T. Eden, Shyam Narayanan, Jakub Tětek
Sampling edges from a graph in sublinear time is a fundamental problem and a powerful subroutine for designing sublinear-time algorithms. Suppose we have access to the vertices of the graph and know a constant-factor approximation to the number of edges. An algorithm for pointwise $\varepsilon$-approximate edge sampling with complexity $O(n/\sqrt{\varepsilon m})$ has been given by Eden and Rosenbaum [SOSA 2018]. This was later improved by Tětek and Thorup [STOC 2022] to $O(n \log(\varepsilon^{-1})/\sqrt{m})$. At the same time, $\Omega(n/\sqrt{m})$ time is necessary. We close the problem by giving an algorithm with complexity $O(n/\sqrt{m})$ for the task of sampling an edge exactly uniformly.
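As a point of reference for what "exactly uniform" means here, the following rejection sampler is a simple baseline (ours, not the paper's $O(n/\sqrt{m})$ algorithm): it is exactly uniform over ordered edges but needs an expected $n \cdot d_{\max}/(2m)$ trials per sample:

```python
import random

def sample_edge_rejection(adj):
    """Return a uniformly random ordered edge (v, u): draw a uniform vertex v,
    accept it with probability deg(v)/d_max, then output a uniform neighbor.
    Every ordered edge then appears with probability exactly 1/(n * d_max)."""
    vertices = list(adj)
    d_max = max(len(nbrs) for nbrs in adj.values())
    while True:
        v = random.choice(vertices)
        if random.random() < len(adj[v]) / d_max:
            return v, random.choice(adj[v])
```

The sublinear-time algorithms in the papers above avoid the $d_{\max}$ dependence by treating low- and high-degree vertices differently.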
Citations: 0
A Local Search-Based Approach for Set Covering
Pub Date : 2022-11-08 DOI: 10.48550/arXiv.2211.04444
Anupam Gupta, Euiwoong Lee, Jason Li
In the Set Cover problem, we are given a set system with each set having a weight, and we want to find a collection of sets that cover the universe whilst having low total weight. There are several approaches known (based on greedy approaches, relax-and-round, and dual-fitting) that achieve an $H_k \approx \ln k + O(1)$ approximation for this problem, where the size of each set is bounded by $k$. Moreover, getting a $\ln k - O(\ln \ln k)$ approximation is hard. Where does the truth lie? Can we close the gap between the upper and lower bounds? An improvement would be particularly interesting for small values of $k$, which are often used in reductions between Set Cover and other combinatorial optimization problems. We consider a non-oblivious local-search approach: to the best of our knowledge, this gives the first $H_k$-approximation for Set Cover using an approach based on local search. Our proof fits in one page, and gives an integrality gap result as well. Refining our approach by considering larger moves and an optimized potential function gives an $(H_k - \Omega(\log^2 k)/k)$-approximation, improving on the previous bound of $(H_k - \Omega(1/k^8))$ (R. Hassin and A. Levin, SICOMP '05) based on a modified greedy algorithm.
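The $H_k$ guarantee that the paper's local search matches is classically achieved by the greedy algorithm. A minimal sketch of that greedy baseline (ours, for orientation; it assumes the instance is feasible):

```python
def greedy_set_cover(universe, sets, weights):
    """Classical greedy for weighted Set Cover: repeatedly pick the set that
    minimizes weight per newly covered element.  For sets of size at most k,
    this achieves an H_k = 1 + 1/2 + ... + 1/k approximation."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (i for i in range(len(sets)) if sets[i] & uncovered),
            key=lambda i: weights[i] / len(sets[i] & uncovered),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen
```

The paper's contribution is reproving this ratio via a non-oblivious local search, a quite different (and, with refinements, slightly better) route to the same bound.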
Citations: 3
A Simple Combinatorial Algorithm for Robust Matroid Center
Pub Date : 2022-11-07 DOI: 10.48550/arXiv.2211.03601
Georg Anegg, Laura Vargas Koch, R. Zenklusen
Recent progress on robust clustering led to constant-factor approximations for Robust Matroid Center. After a first combinatorial $7$-approximation that is based on a matroid intersection approach, two tight LP-based $3$-approximations were discovered, both relying on the Ellipsoid Method. In this paper, we show how a carefully designed, yet very simple, greedy selection algorithm gives a $5$-approximation. An important ingredient of our approach is a well-chosen use of Rado matroids. This enables us to capture with a single matroid a relaxed version of the original matroid, which, as we show, is amenable to straightforward greedy selections.
Citations: 0
Simple Set Sketching
Pub Date : 2022-11-07 DOI: 10.48550/arXiv.2211.03683
Jakob Baek Tejs Houen, R. Pagh, Stefan Walzer
Imagine handling collisions in a hash table by storing, in each cell, the bit-wise exclusive-or of the set of keys hashing there. This appears to be a terrible idea: For $\alpha n$ keys and $n$ buckets, where $\alpha$ is constant, we expect that a constant fraction of the keys will be unrecoverable due to collisions. We show that if this collision resolution strategy is repeated three times independently the situation reverses: If $\alpha$ is below a threshold of $\approx 0.81$ then we can recover the set of all inserted keys in linear time with high probability. Even though the description of our data structure is simple, its analysis is nontrivial. Our approach can be seen as a variant of the Invertible Bloom Filter (IBF) of Eppstein and Goodrich. While IBFs involve an explicit checksum per bucket to decide whether the bucket stores a single key, we exploit the idea of quotienting, namely that some bits of the key are implicit in the location where it is stored. We let those serve as an implicit checksum. These bits are not quite enough to ensure that no errors occur and the main technical challenge is to show that decoding can recover from these errors.
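A didactic variant of the idea can be coded directly. The sketch below is an IBF-style structure with an explicit per-cell counter; the paper's point is precisely that quotienting lets one drop such checksums, so treat this as a simplified stand-in (class and method names are ours):

```python
class XORSketch:
    """Each key is XORed into one cell in each of three tables; decoding
    repeatedly 'peels' cells that hold exactly one key.  The counters are
    an explicit checksum, which the paper replaces by implicit quotient
    bits.  Assumes distinct positive integer keys, inserted once each."""
    def __init__(self, n_cells):
        self.n = n_cells
        self.count = [0] * (3 * n_cells)   # number of keys in each cell
        self.xor = [0] * (3 * n_cells)     # XOR of those keys

    def _cells(self, key):
        # one cell per table; hashes of int tuples are deterministic
        return [i * self.n + hash((key, i)) % self.n for i in range(3)]

    def insert(self, key):
        for c in self._cells(key):
            self.count[c] += 1
            self.xor[c] ^= key

    def decode(self):
        keys, changed = set(), True
        while changed:
            changed = False
            for c in range(3 * self.n):
                if self.count[c] == 1:         # singleton: XOR is the key
                    k = self.xor[c]
                    keys.add(k)
                    for c2 in self._cells(k):  # peel the key out everywhere
                        self.count[c2] -= 1
                        self.xor[c2] ^= k
                    changed = True
        return keys
```

Below the load threshold the peeling process succeeds with high probability; the paper's analysis shows decoding still works when the counters are gone and occasional false singletons must be recovered from.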
Citations: 1
Simplified Prophet Inequalities for Combinatorial Auctions
Pub Date : 2022-11-01 DOI: 10.48550/arXiv.2211.00707
Alexander Braun, Thomas Kesselheim
We consider prophet inequalities for XOS and MPH-$k$ combinatorial auctions and give a simplified proof for the existence of static and anonymous item prices which recover the state-of-the-art competitive ratios. Our proofs make use of a linear programming formulation which has a non-negative objective value if there are prices which admit a given competitive ratio $\alpha \geq 1$. Changing our perspective to dual space by an application of strong LP duality, we use an interpretation of the dual variables as probabilities to directly obtain our result. In contrast to previous work, our proofs do not require arguing about buyers' specific values for bundles, but only about the presence or absence of items. As a side remark, for any $k \geq 2$, this simplification also leads to a tiny improvement in the best competitive ratio for MPH-$k$ combinatorial auctions, from $4k - 2$ to $2k + 2\sqrt{k(k-1)} - 1$.
Citations: 0
An Optimal Lower Bound for Simplex Range Reporting
Pub Date : 2022-10-26 DOI: 10.48550/arXiv.2210.14736
P. Afshani, P. Cheng
We give a simplified and improved lower bound for the simplex range reporting problem. We show that given a set $P$ of $n$ points in $\mathbb{R}^d$, any data structure that uses $S(n)$ space to answer such queries must have $Q(n) = \Omega((n^2/S(n))^{(d-1)/d} + k)$ query time, where $k$ is the output size. For near-linear space data structures, i.e., $S(n) = O(n \log^{O(1)} n)$, this improves the previous lower bounds by Chazelle and Rosenberg [CR96] and Afshani [A12], but perhaps more importantly, it is the first ever tight lower bound for any variant of simplex range searching for $d \ge 3$ dimensions. We obtain our lower bound by making a simple connection to well-studied problems in incidence geometry which allows us to use known constructions in the area. We observe that a small modification of a simple already existing construction can lead to our lower bound. We believe that our proof is accessible to a much wider audience, at least compared to the previous intricate probabilistic proofs based on measure arguments by Chazelle and Rosenberg [CR96] and Afshani [A12]. The lack of tight or almost-tight (up to polylogarithmic factor) lower bounds for near-linear space data structures is a major bottleneck in making progress on problems such as proving lower bounds for multilevel data structures. It is our hope that this new line of attack based on incidence geometry can lead to further progress in this area.
Citations: 2
A Simple Deterministic Distributed Low-Diameter Clustering
Pub Date : 2022-10-21 DOI: 10.48550/arXiv.2210.11784
Václav Rozhoň, Bernhard Haeupler, C. Grunau
We give a simple, local process for nodes in an undirected graph to form non-adjacent clusters that (1) have at most a polylogarithmic diameter and (2) contain at least half of all vertices. Efficient deterministic distributed clustering algorithms for computing strong-diameter network decompositions and other key tools follow immediately. Overall, our process is a direct and drastically simplified way for computing these fundamental objects.
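The low-diameter clustering idea can be illustrated with a generic sequential ball-carving sketch. This is not the authors' distributed process — the function name, the `radius` parameter, and the boundary-discarding step are assumptions of this sketch, and the fraction of vertices covered depends on the radius choice — but it shows how bounded-radius, non-adjacent clusters arise:

```python
def ball_carving(adj, radius):
    """Grow a BFS ball of at most `radius` hops around an arbitrary
    unclustered vertex, keep the ball as a cluster, and permanently discard
    the ball's boundary so that distinct clusters are non-adjacent.
    `adj` maps each vertex to a list of its neighbors."""
    unclustered = set(adj)
    clusters = []
    while unclustered:
        center = next(iter(unclustered))
        ball = {center}
        frontier = [center]
        for _ in range(radius):
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v in unclustered and v not in ball:
                        ball.add(v)
                        nxt.append(v)
            frontier = nxt
        # Boundary vertices are removed without joining any cluster,
        # which guarantees no edge between two different clusters.
        boundary = {v for u in ball for v in adj[u]
                    if v in unclustered and v not in ball}
        clusters.append(ball)
        unclustered -= ball | boundary
    return clusters
```

Each cluster has diameter at most `2 * radius`, and the discarded boundaries are exactly what makes the clusters non-adjacent.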
{"title":"A Simple Deterministic Distributed Low-Diameter Clustering","authors":"Václav Rozhoň, Bernhard Haeupler, C. Grunau","doi":"10.48550/arXiv.2210.11784","DOIUrl":"https://doi.org/10.48550/arXiv.2210.11784","url":null,"abstract":"We give a simple, local process for nodes in an undirected graph to form non-adjacent clusters that (1) have at most a polylogarithmic diameter and (2) contain at least half of all vertices. Efficient deterministic distributed clustering algorithms for computing strong-diameter network decompositions and other key tools follow immediately. Overall, our process is a direct and drastically simplified way for computing these fundamental objects.","PeriodicalId":93491,"journal":{"name":"Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA)","volume":"74 1","pages":"166-174"},"PeriodicalIF":0.0,"publicationDate":"2022-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73681785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0