Better Sum Estimation via Weighted Sampling
Lorenzo Beretta, Jakub Tětek
ACM Transactions on Algorithms (JCR Q3, Computer Science, Theory & Methods)
DOI: 10.1145/3650030
Published: 2024-03-30 (Journal Article)
Citations: 0
Abstract
Given a large set U where each item a ∈ U has weight w(a), we want to estimate the total weight W = ∑_{a ∈ U} w(a) to within a factor of 1 ± ε with some constant probability > 1/2. Since n = |U| is large, we want to do this without looking at the entire set U. In the traditional setting, in which we are allowed to sample elements from U uniformly, sampling Ω(n) items is necessary to provide any non-trivial guarantee on the estimate. Therefore, we investigate this problem in different settings: in the proportional setting we can sample items with probabilities proportional to their weights, and in the hybrid setting we can sample both proportionally and uniformly. These settings have applications, for example, in sublinear-time algorithms and distribution testing.
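To make the uniform setting concrete, here is a minimal Python sketch (not from the paper) of the natural uniform-sampling estimator n · mean(w(a)) over sampled items. It is unbiased, which the code verifies by exact enumeration, but on an instance where almost all of W sits on one heavy item, a small sample usually misses that item entirely; this is the intuition behind the Ω(n) lower bound for uniform sampling.

```python
import random

def uniform_estimate(weights, k, rng):
    """Estimate W = sum(weights) from k uniform samples as n * mean(sampled weight)."""
    n = len(weights)
    samples = [weights[rng.randrange(n)] for _ in range(k)]
    return n * sum(samples) / k

# Unbiasedness, checked by exact expectation rather than sampling:
# averaging n * w(a) over every a in U gives exactly W.
weights = [5.0, 1.0, 1.0, 1.0, 100.0]
W = sum(weights)
n = len(weights)
exact_expectation = sum(n * w for w in weights) / n
assert abs(exact_expectation - W) < 1e-9

# A hard instance: one heavy item hidden among many light ones. Until a
# uniform sample happens to hit the heavy item, the estimate misses most
# of W, which is why Ω(n) uniform samples are needed in the worst case.
hard = [1.0] * 999 + [10**6]
rng = random.Random(0)
print(uniform_estimate(hard, 10, rng))
```

The instance `hard` is a hypothetical example chosen only to illustrate the variance problem; proportional sampling sidesteps it because the heavy item is drawn with probability proportional to its weight.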
Sum estimation in the proportional and hybrid settings was considered before by Motwani, Panigrahy, and Xu [ICALP, 2007]. In their paper, they give both upper and lower bounds in terms of n. Their bounds are near-matching in terms of n, but not in terms of ε. In this paper, we improve both their upper and lower bounds. Our bounds match up to constant factors in both settings, in terms of both n and ε. No lower bounds with a dependency on ε were known previously. In the proportional setting, we improve their \(\tilde{O}(\sqrt {n}/\varepsilon ^{7/2}) \) algorithm to \(O(\sqrt {n}/\varepsilon) \). In the hybrid setting, we improve \(\tilde{O}(\sqrt [3]{n}/ \varepsilon ^{9/2}) \) to \(O(\sqrt [3]{n}/\varepsilon ^{4/3}) \). Our algorithms are also significantly simpler and do not have large constant factors.
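One standard identity behind proportional-setting estimators (this is background intuition, not the paper's algorithm) is that when a is drawn with probability w(a)/W, the expectation of 1/w(a) is exactly n/W; so with n known, W can be recovered as n divided by the average of the sampled inverse weights. The paper's contribution lies in controlling the variance of such estimates down to the tight O(√n/ε) sample complexity. The sketch below verifies the identity exactly with rational arithmetic.

```python
from fractions import Fraction

# Under proportional sampling, item a is drawn with probability w(a)/W.
# Then E[1/w(a)] = sum_a (w(a)/W) * (1/w(a)) = n/W, so n / E[1/w(a)] = W.
weights = [Fraction(w) for w in (3, 1, 4, 1, 5, 9, 2, 6)]
W = sum(weights)
n = len(weights)

# Compute the expectation exactly by enumerating the distribution.
expected_inverse = sum((w / W) * (1 / w) for w in weights)
assert expected_inverse == Fraction(n) / W
assert n / expected_inverse == W
```

In practice an algorithm only sees finitely many samples, so the estimate n / mean(1/w(a_i)) fluctuates around W; bounding how many samples suffice, in terms of both n and ε, is exactly what the upper and lower bounds above pin down.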
We then investigate the previously unexplored scenario in which n is not known to the algorithm. In this case, we obtain an \(O(\sqrt {n}/\varepsilon + \log n / \varepsilon ^2) \) algorithm for the proportional setting, and an \(O(\sqrt {n}/\varepsilon) \) algorithm for the hybrid setting. This means that in the proportional setting, we may remove the need for the advice (the value of n) without greatly increasing the complexity of the problem, while in the hybrid setting there is a major difference. We prove that this difference in the hybrid setting is necessary, by showing a matching lower bound.
Our algorithms have applications in the area of sublinear-time graph algorithms. Consider a large graph G = (V, E) and the task of (1 ± ε)-approximating |E|. We consider the (standard) settings where we can sample uniformly from E or from both E and V. This relates to sum estimation as follows: we set U = V and the weights to be equal to the degrees. Uniform sampling then corresponds to sampling vertices uniformly. Proportional sampling can be simulated by taking a random edge and picking one of its endpoints at random. If we can only sample uniformly from E, then our results immediately give an \(O(\sqrt {|V|} / \varepsilon) \) algorithm. When we may sample from both E and V, our results imply an algorithm with complexity \(O(\sqrt [3]{|V|}/\varepsilon ^{4/3}) \). Surprisingly, one of our subroutines provides a (1 ± ε)-approximation of |E| using \(\tilde{O}(d/\varepsilon ^2) \) expected samples, where d is the average degree, under the mild assumption that at least a constant fraction of vertices are non-isolated. This subroutine works in the setting where we can sample uniformly from both V and E. We find this remarkable since it is \(O(1/\varepsilon ^2) \) for sparse graphs.
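The reduction above rests on one fact: picking a uniform random edge and then a uniform endpoint returns vertex v with probability deg(v)/(2|E|), i.e. proportionally to its degree. The sketch below (an illustrative check on a small hypothetical graph, not code from the paper) verifies this exactly by enumerating all equally likely (edge, endpoint) outcomes instead of sampling.

```python
from collections import Counter

# Simulating proportional (degree-weighted) vertex sampling in a graph:
# pick a uniform random edge, then one of its two endpoints at random.
# Vertex v is then returned with probability deg(v) / (2|E|).
edges = [(0, 1), (0, 2), (0, 3), (2, 3)]  # small example graph

# Enumerate all 2|E| equally likely (edge, endpoint) outcomes, so the
# check is exact: vertex v appears deg(v) times out of 2|E| outcomes.
outcomes = Counter(v for e in edges for v in e)

degrees = Counter()
for u, v in edges:
    degrees[u] += 1
    degrees[v] += 1

assert outcomes == degrees
assert sum(outcomes.values()) == 2 * len(edges)
```

With this simulation in hand, edge sampling plays the role of proportional sampling and vertex sampling the role of uniform sampling, so the sum-estimation bounds transfer directly to edge counting.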
Journal description:
ACM Transactions on Algorithms welcomes submissions of original research of the highest quality dealing with algorithms that are inherently discrete and finite, and having mathematical content in a natural way, either in the objective or in the analysis. Most welcome are new algorithms and data structures, new and improved analyses, and complexity results. Specific areas of computation covered by the journal include
combinatorial searches and objects; counting; discrete optimization and approximation; randomization and quantum computation; parallel and distributed computation; algorithms for graphs, geometry, arithmetic, number theory, and strings; on-line analysis; cryptography; coding; data compression; learning algorithms; methods of algorithmic analysis; and discrete algorithms for application areas such as biology, economics, game theory, communication, computer systems and architecture, hardware design, and scientific computing.