
ACM Transactions on Algorithms (TALG): Latest Publications

An Optimal Algorithm for ℓ1-Heavy Hitters in Insertion Streams and Related Problems
Pub Date : 2018-10-22 DOI: 10.1145/3264427
Arnab Bhattacharyya, P. Dey, David P. Woodruff
We give the first optimal bounds for returning the ℓ1-heavy hitters in a data stream of insertions, together with their approximate frequencies, closing a long line of work on this problem. For a stream of m items in {1, 2, …, n} and parameters 0 < ε < φ ⩽ 1, let f_i denote the frequency of item i, i.e., the number of times item i occurs in the stream. With arbitrarily large constant probability, our algorithm returns all items i for which f_i ⩾ φm, returns no items j for which f_j ⩽ (φ − ε)m, and returns approximations f̃_i with |f̃_i − f_i| ⩽ εm for each item i that it returns. Our algorithm uses O(ε^{-1} log φ^{-1} + φ^{-1} log n + log log m) bits of space, processes each stream update in O(1) worst-case time, and can report its output in time linear in the output size. We also prove a lower bound, which implies that our algorithm is optimal up to a constant factor in its space complexity. A modification of our algorithm can be used to estimate the maximum frequency up to an additive εm error in the above amount of space, resolving Question 3 in the IITK 2006 Workshop on Algorithms for Data Streams for the case of ℓ1-heavy hitters. We also introduce several variants of the heavy hitters and maximum frequency problems, inspired by rank aggregation and voting schemes, and show how our techniques can be applied in such settings. Unlike the traditional heavy hitters problem, some of these variants look at comparisons between items rather than numerical values to determine the frequency of an item.
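As an illustration of the reporting guarantee (every item with f_i ⩾ φm is returned, no item with f_j below (φ − ε)m is returned, and each estimate is within εm of the truth), here is a sketch using the classical Misra-Gries summary. This is not the paper's algorithm: it satisfies the same reporting guarantee but uses more space than the optimal bound above, and the function names are illustrative.

```python
def misra_gries(stream, k):
    """One-pass Misra-Gries summary using at most k counters.

    For a stream of length m, every estimate satisfies
    f_i - m/(k+1) <= counters.get(i, 0) <= f_i.
    """
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k:
            counters[x] = 1
        else:
            # no free counter: decrement all, dropping counters that reach zero
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

def l1_heavy_hitters(stream, phi, eps):
    """Report items whose estimated frequency is at least (phi - eps) * m."""
    m = len(stream)
    summary = misra_gries(stream, k=round(1 / eps))  # error at most eps * m
    return {i: est for i, est in summary.items() if est >= (phi - eps) * m}
```

Since Misra-Gries estimates never overshoot, thresholding at (φ − ε)m reports every φ-heavy item and no item with frequency below (φ − ε)m.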
Citations: 12
Randomized Contractions Meet Lean Decompositions
Pub Date : 2018-10-16 DOI: 10.1145/3426738
Marek Cygan, Pawel Komosa, D. Lokshtanov, Michal Pilipczuk, Marcin Pilipczuk, Saket Saurabh
We show an algorithm that, given an n-vertex graph G and a parameter k, in time 2^{O(k log k)} · n^{O(1)} finds a tree decomposition of G with the following properties: (1) every adhesion of the tree decomposition is of size at most k, and (2) every bag of the tree decomposition is (i, i)-unbreakable in G for every 1 ⩽ i ⩽ k. Here, a set X ⊆ V(G) is (a, b)-unbreakable in G if for every separation (A, B) of order at most b in G, we have |A ∩ X| ⩽ a or |B ∩ X| ⩽ a. The resulting tree decomposition has arguably best possible adhesion size bounds and unbreakability guarantees. Furthermore, the parametric factor in the running time bound is significantly smaller than in previous similar constructions. These improvements allow us to present parameterized algorithms for MINIMUM BISECTION, STEINER CUT, and STEINER MULTICUT with an improved parametric factor in the running time bound. The main technical insight is to adapt the notion of lean decompositions of Thomas and the subsequent construction algorithm of Bellenbaum and Diestel to the parameterized setting.
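The (a, b)-unbreakability definition above can be checked directly on tiny graphs by enumerating all separators of order at most b and all ways of splitting the remaining components into two sides. This exponential brute force is only a sanity check of the definition, not the paper's construction; all names are illustrative.

```python
from itertools import combinations

def components(adj, verts):
    """Connected components of the subgraph induced by verts."""
    verts, seen, comps = set(verts), set(), []
    for v in verts:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(w for w in adj[u] if w in verts)
        seen |= comp
        comps.append(comp)
    return comps

def is_unbreakable(adj, X, a, b):
    """True iff X is (a, b)-unbreakable: every separation (A, B) of order
    at most b has |A ∩ X| <= a or |B ∩ X| <= a.  Brute force, tiny graphs only."""
    V = set(adj)
    for size in range(b + 1):
        for sep in combinations(sorted(V), size):
            S = set(sep)
            counts = [len(c & X) for c in components(adj, V - S)]
            base = len(S & X)  # separator vertices lie on both sides
            # distribute whole components over the two sides of the separation
            for mask in range(1 << len(counts)):
                left = base + sum(c for i, c in enumerate(counts) if mask >> i & 1)
                right = base + sum(c for i, c in enumerate(counts) if not mask >> i & 1)
                if left > a and right > a:
                    return False
    return True
```

On the 5-vertex path, the set X = {1, 2, 4, 5} is split 2/2 by the order-1 separation through the middle vertex, so it is (1, 1)-breakable but (2, 1)-unbreakable.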
Citations: 27
Entropy and Optimal Compression of Some General Plane Trees
Pub Date : 2018-10-01 DOI: 10.1145/3275444
Z. Golebiewski, A. Magner, W. Szpankowski
We continue developing the information theory of structured data. In this article, we study models generating d-ary trees (d ≥ 2) and trees with unrestricted degree. We first compute the entropy, which gives us the fundamental lower bound on compression of such trees. Then we present efficient compression algorithms based on arithmetic encoding that achieve the entropy within a constant number of bits. A naïve implementation of these algorithms has a prohibitive time complexity of O(n^d) elementary arithmetic operations (each corresponding to a number f(n, d) of bit operations), but our efficient algorithms run in O(n^2) of these operations, where n is the number of nodes. It turns out that extending source coding (i.e., compression) from sequences to advanced data structures such as degree-unconstrained trees is mathematically quite challenging and leads to recurrences that find ample applications in the information theory of general structures (e.g., to analyze the information content of degree-unconstrained non-plane trees).
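A minimal illustration of the entropy lower bound, for one concrete model rather than the paper's general sources: for a uniform distribution over binary plane trees with n internal nodes, the entropy is log2 of the n-th Catalan number, i.e., close to 2 bits per node, and no lossless code can do better on average.

```python
from math import comb, log2

def catalan(n):
    """Number of binary plane trees with n internal nodes."""
    return comb(2 * n, n) // (n + 1)

def entropy_per_node(n):
    """Entropy (bits per node) of the uniform distribution over such trees:
    log2(Catalan(n)) / n, which tends to 2 as n grows."""
    return log2(catalan(n)) / n
```

For example, entropy_per_node(1000) is already within a few hundredths of the limit 2.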
Citations: 1
Enumerating Minimal Dominating Sets in Kt-free Graphs and Variants
Pub Date : 2018-10-01 DOI: 10.1145/3386686
Marthe Bonamy, Oscar Defrain, Marc Heinrich, Michal Pilipczuk, Jean-Florent Raymond
It is a long-standing open problem whether the minimal dominating sets of a graph can be enumerated in output-polynomial time. In this article we investigate this problem in graph classes defined by forbidding an induced subgraph. In particular, we provide output-polynomial time algorithms for Kt-free graphs and for several related graph classes. This answers a question of Kanté et al. about enumeration in bipartite graphs.
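To make the enumeration problem concrete, here is the trivial exponential-time baseline that output-polynomial algorithms improve on: generate all dominating sets and keep the inclusion-minimal ones. This sketch is for illustration on tiny graphs only; the names are not from the paper.

```python
from itertools import combinations

def minimal_dominating_sets(adj):
    """Enumerate all inclusion-minimal dominating sets by brute force.

    adj maps each vertex to its set of neighbours.  Exponential in |V|.
    """
    V = list(adj)
    closed = {v: {v} | set(adj[v]) for v in V}  # closed neighbourhoods

    def dominates(D):
        covered = set()
        for v in D:
            covered |= closed[v]
        return covered == set(V)

    doms = [set(c) for r in range(1, len(V) + 1)
            for c in combinations(V, r) if dominates(c)]
    # keep only sets with no dominating proper subset
    return [D for D in doms if not any(E < D for E in doms)]
```

On the 4-cycle, every single vertex fails to dominate and every pair succeeds, so the six 2-element subsets are exactly the minimal dominating sets.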
Citations: 10
Stream Sampling Framework and Application for Frequency Cap Statistics
Pub Date : 2018-09-24 DOI: 10.1145/3234338
E. Cohen
Unaggregated data, in a streamed or distributed form, are prevalent and come from diverse sources such as interactions of users with web services and IP traffic. Data elements have keys (cookies, users, queries), and elements with different keys interleave. Analytics on such data typically utilizes statistics expressed as a sum over keys in a specified segment of a function f applied to the frequency (the total number of occurrences) of the key. In particular, Distinct is the number of active keys in the segment, Sum is the sum of their frequencies, and both are special cases of frequency cap statistics, which cap the frequency by a parameter T. Random samples can be very effective for quick and efficient estimation of statistics at query time. Ideally, to estimate statistics for a given function f, our sample would include a key with frequency w with probability roughly proportional to f(w). The challenge is that while such “gold-standard” samples can be easily computed after aggregating the data (computing the set of key-frequency pairs), this aggregation is costly: It requires structure of size proportional to the number of active keys, which can be very large. We present a sampling framework for unaggregated data that uses a single pass (for streams) or two passes (for distributed data) and structure size proportional to the desired sample size. Our design unifies classic solutions for Distinct and Sum. Specifically, our ℓ-capped samples provide nonnegative unbiased estimates of any monotone non-decreasing frequency statistics and statistical guarantees on quality that are close to the gold standard for cap statistics with T = Θ(ℓ). Furthermore, our multi-objective samples provide these statistical guarantees on quality for all concave sub-linear statistics (the nonnegative span of cap functions) while incurring only a logarithmic overhead on sample size.
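The cap statistics themselves are easy to state on already-aggregated key-frequency pairs; the paper's contribution is estimating them from unaggregated streams by sampling, which this small sketch does not attempt. It only shows how Distinct and Sum arise as the two extreme caps; the data are invented.

```python
def cap_statistic(frequencies, T):
    """Frequency cap statistic cap_T: sum over keys of min(f_key, T)."""
    return sum(min(f, T) for f in frequencies.values())

# toy aggregated key -> frequency data
freqs = {"alice": 7, "bob": 2, "carol": 1}

distinct = cap_statistic(freqs, 1)           # Distinct: number of active keys
total = cap_statistic(freqs, float("inf"))   # Sum: total of all frequencies
capped3 = cap_statistic(freqs, 3)            # cap_3: each key counts at most 3
```

With T = 1 each active key contributes exactly 1 (Distinct); with T = ∞ each key contributes its full frequency (Sum).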
Citations: 9
Online Submodular Maximization with Free Disposal
Pub Date : 2018-09-17 DOI: 10.1145/3242770
T-H. Hubert Chan, Zhiyi Huang, S. Jiang, N. Kang, Zhihao Gavin Tang
We study the online submodular maximization problem with free disposal under a matroid constraint. Elements from some ground set arrive one by one in rounds, and the algorithm maintains a feasible set that is independent in the underlying matroid. In each round when a new element arrives, the algorithm may accept the new element into its feasible set and possibly remove elements from it, provided that the resulting set is still independent. The goal is to maximize the value of the final feasible set under some monotone submodular function, to which the algorithm has oracle access. For k-uniform matroids, we give a deterministic algorithm with competitive ratio at least 0.2959, and the ratio approaches 1/α_∞ ≈ 0.3178 as k approaches infinity, improving the previous best ratio of 0.25 by Chakrabarti and Kale (IPCO 2014), Buchbinder et al. (SODA 2015), and Chekuri et al. (ICALP 2015). We also show that our algorithm is optimal among a class of deterministic monotone algorithms that accept a new arriving element only if the objective is strictly increased. Further, we prove that no deterministic monotone algorithm can be strictly better than 0.25-competitive even for partition matroids, the most modest generalization of k-uniform matroids, matching the competitive ratio by Chakrabarti and Kale (IPCO 2014) and Chekuri et al. (ICALP 2015). Interestingly, we show that randomized algorithms are strictly more powerful by giving a (non-monotone) randomized algorithm for partition matroids with ratio 1/α_∞ ≈ 0.3178.
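The free-disposal model can be illustrated with a simple exchange heuristic for a k-uniform matroid: keep at most k elements, and when full, perform the best single swap if it strictly increases the objective. This is a monotone heuristic of the kind analyzed above, not the paper's 0.2959-competitive algorithm; the coverage objective and all names are illustrative.

```python
def online_swap(elements, k, f):
    """Keep at most k elements; on arrival, try the best single swap and
    apply it only if the objective f (monotone submodular, oracle access)
    strictly increases -- discarded elements are never revisited."""
    S = set()
    for e in elements:
        if len(S) < k:
            S = S | {e}
            continue
        candidate = max((S - {x} | {e} for x in S), key=f)
        if f(candidate) > f(S):
            S = candidate
    return S

def coverage(sets):
    """Coverage function: size of the union -- monotone and submodular."""
    u = set()
    for s in sets:
        u |= s
    return len(u)
```

On the toy stream below with k = 2, the heuristic swaps out the redundant singleton when a disjoint set arrives and ends with a solution covering all four ground elements.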
Citations: 8
Fully Dynamic MIS in Uniformly Sparse Graphs
Pub Date : 2018-08-30 DOI: 10.1145/3378025
Krzysztof Onak, B. Schieber, Shay Solomon, Nicole Wein
We consider the problem of maintaining a maximal independent set in a dynamic graph subject to edge insertions and deletions. Recently, Assadi et al. (at STOC’18) showed that a maximal independent set can be maintained in sublinear (in the dynamically changing number of edges) amortized update time. In this article, we significantly improve the update time for uniformly sparse graphs. Specifically, for graphs with arboricity α, the amortized update time of our algorithm is O(α^2 · log^2 n), where n is the number of vertices. For low-arboricity graphs, which include, for example, minor-free graphs and some classes of “real-world” graphs, our update time is polylogarithmic. Our update time improves the result of Assadi et al. for all graphs with arboricity bounded by m^{3/8−ε}, for any constant ε > 0. This covers much of the range of possible values for arboricity, as the arboricity of a general graph cannot exceed m^{1/2}.
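The problem statement can be made concrete with a correctness-only baseline: repair the maximal independent set locally after each edge update. This sketch maintains the MIS invariants but offers none of the paper's arboricity-based update-time guarantees; the class and method names are illustrative.

```python
class DynamicMIS:
    """Maintain a maximal independent set (MIS) under edge insertions and
    deletions by local repair -- a correctness baseline, not the paper's
    algorithm (no sublinear amortized update-time guarantee)."""

    def __init__(self, vertices):
        self.adj = {v: set() for v in vertices}
        self.mis = set(vertices)  # with no edges, every vertex is in the MIS

    def _try_add(self, v):
        # v may enter the MIS only if none of its neighbours is in it
        if all(u not in self.mis for u in self.adj[v]):
            self.mis.add(v)

    def insert_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)
        if u in self.mis and v in self.mis:
            self.mis.remove(v)
            for w in self.adj[v]:  # v's removal may free its neighbours
                if w not in self.mis:
                    self._try_add(w)

    def delete_edge(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        self._try_add(u)  # an endpoint may have lost its only MIS neighbour
        self._try_add(v)
```

Building the path 1-2-3-4 edge by edge leaves {1, 3}, which is independent and maximal.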
Citations: 23
Packing Groups of Items into Multiple Knapsacks
Pub Date : 2018-08-21 DOI: 10.1145/3233524
Lin Chen, Guochuan Zhang
We consider a natural generalization of the classical multiple knapsack problem in which instead of packing single items we are packing groups of items. In this problem, we have multiple knapsacks and a set of items partitioned into groups. Each item has an individual weight, while the profit is associated with groups rather than items. The profit of a group can be attained if and only if every item of this group is packed. Such a general model finds applications in various practical problems, e.g., delivering bundles of goods. The tractability of this problem relies heavily on how large a group can be. Deciding if a group of items of total weight 2 can be packed into two knapsacks of unit capacity is already NP-hard, which rules out a constant-factor approximation algorithm for this problem in general. We then focus on the parameterized version, where the total weight of items in each group is bounded by a factor δ of the total capacity of all knapsacks. Both approximation and inapproximability results with respect to δ are derived. We also show that, depending on whether the number of knapsacks is a constant or part of the input, the approximation ratio for the problem, as a function of δ, changes substantially, a clear difference from the classical multiple knapsack problem.
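The all-or-nothing profit rule can be pinned down with an exhaustive baseline: try every assignment of items to knapsacks (or to "left out"), and credit a group's profit only when all of its items are packed. Exponential time, tiny instances only; the function name and data layout are illustrative.

```python
from itertools import product

def best_group_packing(groups, capacities):
    """Exhaustive search for the group packing problem.

    groups: list of (profit, item_weights); a group's profit counts only
    if every one of its items is packed into some knapsack.
    """
    items = [(g, w) for g, (_, weights) in enumerate(groups) for w in weights]
    k = len(capacities)
    best = 0
    # each item goes to a knapsack 0..k-1, or is left out (value k)
    for assign in product(range(k + 1), repeat=len(items)):
        load = [0] * k
        packed = [0] * len(groups)
        for (g, w), slot in zip(items, assign):
            if slot < k:
                load[slot] += w
                packed[g] += 1
        if any(l > c for l, c in zip(load, capacities)):
            continue
        profit = sum(p for (p, weights), n_in in zip(groups, packed)
                     if n_in == len(weights))
        best = max(best, profit)
    return best
```

In the test instance, splitting the weight-(2, 2) group across the two knapsacks and adding the weight-1 group beats packing the single heavy item, because partially packed groups earn nothing.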
Citations: 14
Approximation Guarantees for the Minimum Linear Arrangement Problem by Higher Eigenvalues
Pub Date : 2018-08-21 DOI: 10.1145/3228342
Suguru Tamaki, Yuichi Yoshida
Given an n-vertex undirected graph G = (V, E) and positive edge weights {w_e}_{e∈E}, a linear arrangement is a permutation π : V → {1, 2, …, n}. The value of the arrangement is val(G, π) := (1/n) ∑_{e={u,v}∈E} w_e |π(u) − π(v)|. In the minimum linear arrangement problem, the goal is to find a linear arrangement π* that achieves val(G, π*) = MLA(G) := min_π val(G, π). In this article, we show that for any ε > 0 and positive integer r, there is an n^{O(r/ε)}-time randomized algorithm that, given a graph G, returns a linear arrangement π such that val(G, π) ≤ (1 + 2/((1 − ε)λ_r(L))) MLA(G) + O((√(log n)/n) ∑_{e∈E} w_e) with high probability, where L is the normalized Laplacian of G and λ_r(L) is the r-th smallest eigenvalue of L. Our algorithm gives a constant factor approximation for regular graphs that are weak expanders.
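The objective val(G, π) is simple enough to compute directly, and on tiny graphs the optimum MLA(G) can be found by trying all n! permutations. A small sketch of both (illustrative names; brute force only, nothing like the paper's algorithm):

```python
from itertools import permutations

def val(n, edges, pi):
    """val(G, pi) = (1/n) * sum over edges e = {u, v} of w_e * |pi(u) - pi(v)|.

    edges is a list of (u, v, w) triples; pi maps vertices to 1..n.
    """
    return sum(w * abs(pi[u] - pi[v]) for u, v, w in edges) / n

def mla_brute_force(n, edges):
    """MLA(G) = min over all n! arrangements (tiny graphs only)."""
    return min(val(n, edges, dict(zip(range(n), perm)))
               for perm in permutations(range(1, n + 1)))
```

For the unit-weight path on three vertices the optimum places the vertices consecutively, giving (1 + 1)/3; for a star on four vertices the center sits at an inner position, giving value 1.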
Citations: 2
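The objective val(G, π) above is straightforward to evaluate directly. A minimal Python sketch — illustrative only, not the paper's randomized algorithm — computes the arrangement value and finds the exact MLA of a tiny graph by enumerating all arrangements:

```python
from itertools import permutations

def val(n, edges, pi):
    """Arrangement value: (1/n) * sum over edges {u, v} of w_e * |pi[u] - pi[v]|.

    `pi` maps each vertex to a distinct position in 1..n;
    `edges` is a list of (u, v, weight) triples.
    """
    return sum(w * abs(pi[u] - pi[v]) for u, v, w in edges) / n

def mla_brute_force(n, edges):
    """Exact MLA(G) by trying all n! arrangements (feasible only for tiny n)."""
    best = float("inf")
    for perm in permutations(range(1, n + 1)):
        pi = {v: perm[v] for v in range(n)}  # vertex v sits at position perm[v]
        best = min(best, val(n, edges, pi))
    return best

# 4-cycle with unit weights on vertices 0..3
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
print(mla_brute_force(4, edges))
```

On the unit-weight 4-cycle every arrangement incurs total stretch at least 6 across the four edges, so the optimum is 6/4 = 1.5.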
Graph Reconstruction and Verification
Pub Date : 2018-08-09 DOI: 10.1145/3199606
Sampath Kannan, Claire Mathieu, Hang Zhou
How efficiently can we find an unknown graph using distance or shortest path queries between its vertices? We assume that the unknown graph G is connected, unweighted, and has bounded degree. In the reconstruction problem, the goal is to find the graph G. In the verification problem, we are given a hypothetical graph Ĝ and want to check whether G is equal to Ĝ. We provide a randomized algorithm for reconstruction using Õ(n^{3/2}) distance queries, based on Voronoi cell decomposition. Next, we analyze natural greedy algorithms for reconstruction using a shortest path oracle and also for verification using either oracle, and show that their query complexity is n^{1+o(1)}. We further improve the query complexity when the graph is chordal or outerplanar. Finally, we show some lower bounds and consider an approximate version of the reconstruction problem.
Citations: 18
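For contrast with the paper's Õ(n^{3/2})-query Voronoi-based algorithm, a naive baseline reconstructs any unweighted graph with O(n²) distance queries, since {u, v} is an edge exactly when dist(u, v) = 1. The sketch below is this baseline only, not the paper's method; the distance oracle simulating the hidden graph is a stand-in for illustration:

```python
from collections import deque
from itertools import combinations

def make_distance_oracle(adj):
    """Distance oracle over a hidden unweighted graph (BFS per query)."""
    def dist(u, v):
        seen = {u: 0}
        q = deque([u])
        while q:
            x = q.popleft()
            if x == v:
                return seen[x]
            for y in adj[x]:
                if y not in seen:
                    seen[y] = seen[x] + 1
                    q.append(y)
        return float("inf")  # v unreachable from u
    return dist

def reconstruct_naive(n, dist):
    """Recover the edge set with O(n^2) queries: {u, v} is an edge iff dist(u, v) == 1."""
    return {(u, v) for u, v in combinations(range(n), 2) if dist(u, v) == 1}

hidden = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a path on 4 vertices
print(reconstruct_naive(4, make_distance_oracle(hidden)))
```

The paper's contribution is precisely to beat this quadratic query count for bounded-degree graphs.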