
Journal of the ACM (JACM): Latest Publications

Near-optimal Linear Decision Trees for k-SUM and Related Problems
Pub Date: 2019-04-12 DOI: 10.1145/3285953
D. Kane, Shachar Lovett, S. Moran
We construct near-optimal linear decision trees for a variety of decision problems in combinatorics and discrete geometry. For example, for any constant k, we construct linear decision trees that solve the k-SUM problem on n elements using O(n log^2 n) linear queries. Moreover, the queries we use are comparison queries, which compare the sums of two k-subsets; when viewed as linear queries, comparison queries are 2k-sparse and have only {−1, 0, 1} coefficients. We give similar constructions for sorting sumsets A+B and for solving the SUBSET-SUM problem, both with an optimal number of queries, up to poly-logarithmic terms. Our constructions are based on the notion of “inference dimension,” recently introduced by the authors in the context of active classification with comparison queries. This can be viewed as another contribution to the fruitful link between machine learning and discrete geometry, which goes back to the discovery of the VC dimension.
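As a concrete illustration of the query model (a toy sketch of ours, not the paper's construction), a comparison query compares the sums of two k-subsets of the input; written as a linear form, its coefficient vector is 2k-sparse with entries in {−1, 0, 1}:

```python
def comparison_query(x, S1, S2):
    """Return the sign of sum(x[i] for i in S1) - sum(x[j] for j in S2)."""
    diff = sum(x[i] for i in S1) - sum(x[j] for j in S2)
    return (diff > 0) - (diff < 0)

def coefficient_vector(n, S1, S2):
    """The same query viewed as a linear form <c, x>: the vector c is
    2k-sparse and has coefficients in {-1, 0, 1}."""
    c = [0] * n
    for i in S1:
        c[i] += 1
    for j in S2:
        c[j] -= 1
    return c

x = [3, -1, 4, -6, 2]
S1, S2 = {0, 1}, {2, 4}              # two 2-subsets (k = 2)
print(comparison_query(x, S1, S2))    # sign of (3 + -1) - (4 + 2) -> -1
print(coefficient_vector(5, S1, S2))  # [1, 1, -1, 0, -1]
```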
Citations: 12
Parallel Bayesian Search with No Coordination
Pub Date: 2019-04-05 DOI: 10.1145/3304111
P. Fraigniaud, Amos Korman, Yoav Rodeh
Coordinating the actions of agents (e.g., volunteers analyzing radio signals in SETI@home) yields efficient search algorithms. However, such efficiency often comes at the cost of implementing complex coordination mechanisms, which may be expensive in terms of communication and/or computation overheads. Instead, non-coordinating algorithms, in which each agent operates independently from the others, are typically very simple and easy to implement. They are also inherently robust to slight misbehaviors, or even crashes, of agents. In this article, we investigate the “price of non-coordinating” in terms of search performance, and we show that this price is actually quite small. Specifically, we consider a parallel version of a classical Bayesian search problem, where a set of k ≥ 1 searchers are looking for a treasure placed in one of the boxes indexed by positive integers, according to some distribution p. Each searcher can open a random box at each step, and the objective is to find the treasure in a minimum number of steps. We show that there is a very simple non-coordinating algorithm whose expected running time is at most 4(1 − 1/(k+1))^2 OPT + 10, where OPT is the expected running time of the best fully coordinated algorithm. Our algorithm does not even use the precise description of the distribution p, but only the relative likelihood of the boxes. We prove that, under this restriction, our algorithm has the best possible competitive ratio with respect to OPT. For the case where a complete description of the distribution p is given to the search algorithm, we describe an optimal non-coordinating algorithm for Bayesian search. This latter algorithm can be twice as fast as the former in practical scenarios such as uniform distributions. All these results provide a complete characterization of non-coordinating Bayesian search. The take-away message is that, given their simplicity and robustness, non-coordinating algorithms are viable alternatives to complex coordination mechanisms that are subject to significant overheads. Most of these results apply as well to linear search, in which the indices of the boxes reflect their relative importance and important boxes must be visited first.
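To make the setting tangible, here is a hypothetical simulation of ours (not the paper's algorithm): k non-coordinating searchers each sample boxes i.i.d. from the prior p, while a fully coordinated team scans boxes jointly in order of decreasing probability:

```python
import random

def independent_search(p, treasure, k, rng):
    """Steps until some searcher opens the treasure box, with each of the
    k searchers independently sampling a box from the prior p every step."""
    boxes = range(len(p))
    steps = 0
    while True:
        steps += 1
        opened = rng.choices(boxes, weights=p, k=k)
        if treasure in opened:
            return steps

def coordinated_search(p, treasure, k):
    """The k searchers jointly scan boxes by decreasing prior probability."""
    order = sorted(range(len(p)), key=lambda b: -p[b])
    return order.index(treasure) // k + 1

p = [0.05, 0.60, 0.25, 0.10]
print(coordinated_search(p, treasure=2, k=2))  # boxes 1 and 2 opened in step 1 -> 1
```

Comparing the two over many random trials gives a feel for the "price of non-coordinating" that the article quantifies exactly.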
Citations: 7
Approximate Counting, the Lovász Local Lemma, and Inference in Graphical Models
Pub Date: 2019-04-05 DOI: 10.1145/3268930
Ankur Moitra
In this article, we introduce a new approach to approximate counting in bounded degree systems with higher-order constraints. Our main result is an algorithm to approximately count the number of solutions to a CNF formula Φ when the width is logarithmic in the maximum degree. This closes an exponential gap between the known upper and lower bounds. Moreover, our algorithm extends straightforwardly to approximate sampling, which shows that under Lovász Local Lemma-like conditions it is not only possible to find a satisfying assignment, it is also possible to generate one approximately uniformly at random from the set of all satisfying assignments. Our approach is a significant departure from earlier techniques in approximate counting, and is based on a framework to bootstrap an oracle for computing marginal probabilities on individual variables. Finally, we give an application of our results to show that it is algorithmically possible to sample from the posterior distribution in an interesting class of graphical models.
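For contrast with the approximate counter described above, a brute-force exact counter (exponential in the number of variables; our own reference sketch, using a standard signed-literal CNF encoding) looks like this:

```python
from itertools import product

def count_sat(num_vars, clauses):
    """Count satisfying assignments of a CNF formula by enumeration.
    Clauses are lists of nonzero ints: +i for variable i, -i for its
    negation (1-indexed)."""
    count = 0
    for assignment in product([False, True], repeat=num_vars):
        def holds(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x3) over 3 variables
print(count_sat(3, [[1, 2], [-1, 3]]))  # 4
```

The article's point is that, when the clause width is logarithmic in the maximum degree, this count can be approximated without the 2^n enumeration.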
Citations: 16
Capacity Upper Bounds for Deletion-type Channels
Pub Date: 2019-03-19 DOI: 10.1145/3281275
Mahdi Cheraghchi
We develop a systematic approach, based on convex programming and real analysis, for obtaining upper bounds on the capacity of the binary deletion channel and, more generally, channels with i.i.d. insertions and deletions. Other than the classical deletion channel, we give special attention to the Poisson-repeat channel introduced by Mitzenmacher and Drinea (IEEE Transactions on Information Theory, 2006). Our framework can be applied to obtain capacity upper bounds for any repetition distribution (the deletion and Poisson-repeat channels corresponding to the special cases of Bernoulli and Poisson distributions). Our techniques essentially reduce the task of proving capacity upper bounds to maximizing a univariate, real-valued, and often concave function over a bounded interval. The corresponding univariate function is carefully designed according to the underlying distribution of repetitions, and the choices vary depending on the desired strength of the upper bounds as well as the desired simplicity of the function (e.g., being only efficiently computable versus having an explicit closed-form expression in terms of elementary, or common special, functions). Among our results, we show the following: (1) The capacity of the binary deletion channel with deletion probability d is at most (1 − d) φ for d ≥ 1/2 and, assuming that the capacity function is convex, is at most 1 − d log(4/φ) for d < 1/2, where φ = (1 + √5)/2 is the golden ratio. This is the first nontrivial capacity upper bound for any value of d outside the limiting case d → 0 that is fully explicit and proved without computer assistance. (2) We derive the first set of capacity upper bounds for the Poisson-repeat channel. Our results uncover further striking connections between this channel and the deletion channel and suggest, somewhat counter-intuitively, that the Poisson-repeat channel is actually analytically simpler than the deletion channel and may be of key importance to a complete understanding of the deletion channel. (3) We derive several novel upper bounds on the capacity of the deletion channel. All upper bounds are maximums of efficiently computable, and concave, univariate real functions over a bounded domain. In turn, we upper bound these functions in terms of explicit elementary and standard special functions, whose maximums can be found even more efficiently (and sometimes analytically, for example, for d = 1/2).
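The two channels are easy to simulate; the following sketch uses their standard definitions (assumed here, not taken from the article): the deletion channel drops each input bit independently with probability d, and the Poisson-repeat channel replaces each input bit with a Poisson-distributed number of copies (possibly zero):

```python
import math
import random

def deletion_channel(bits, d, rng):
    """Drop each bit independently with probability d."""
    return [b for b in bits if rng.random() >= d]

def poisson_repeat_channel(bits, lam, rng):
    """Replace each bit with Poisson(lam) copies of itself."""
    out = []
    for b in bits:
        # Knuth's method for sampling Poisson(lam); fine for small lam
        threshold, k, prod = math.exp(-lam), 0, 1.0
        while True:
            prod *= rng.random()
            if prod <= threshold:
                break
            k += 1
        out.extend([b] * k)
    return out

rng = random.Random(42)
print(deletion_channel([1, 0, 1, 1, 0], 0.0, rng))  # d = 0 deletes nothing
```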
Citations: 5
Polynomial Multiplication over Finite Fields in Time O(n log n)
Pub Date: 2019-03-18 DOI: 10.1145/3505584
David Harvey, J. van der Hoeven
Assuming a widely believed hypothesis concerning the least prime in an arithmetic progression, we show that polynomials of degree less than n over a finite field F_q with q elements can be multiplied in time O(n log q log(n log q)), uniformly in q. Under the same hypothesis, we show how to multiply two n-bit integers in time O(n log n); this algorithm is somewhat simpler than the unconditional algorithm from the companion paper [22]. Our results hold in the Turing machine model with a finite number of tapes.
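For reference, a naive O(n^2) multiplication over F_q (our own correctness sketch for small inputs, with coefficient lists ordered lowest degree first and q prime) computes the same product that the paper's algorithm obtains in time O(n log q log(n log q)):

```python
def polymul_mod_q(f, g, q):
    """Schoolbook product of two polynomials over F_q.
    f, g: coefficient lists, lowest degree first; q: prime modulus."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % q
    return h

# (1 + x) * (2 + x) = 2 + 3x + x^2 over F_5
print(polymul_mod_q([1, 1], [2, 1], 5))  # [2, 3, 1]
```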
Citations: 32
Exact Algorithms via Monotone Local Search
Pub Date: 2019-03-08 DOI: 10.1145/3284176
F. Fomin, Serge Gaspers, D. Lokshtanov, Saket Saurabh
We give a new general approach for designing exact exponential-time algorithms for subset problems. In a subset problem the input implicitly describes a family of sets over a universe of size n and the task is to determine whether the family contains at least one set. A typical example of a subset problem is WEIGHTED d-SAT. Here, the input is a CNF-formula with clauses of size at most d, and an integer W. The universe is the set of variables and the variables have integer weights. The family contains all the subsets S of variables such that the total weight of the variables in S does not exceed W and setting the variables in S to 1 and the remaining variables to 0 satisfies the formula. Our approach is based on “monotone local search,” where the goal is to extend a partial solution to a solution by adding as few elements as possible. More formally, in the extension problem, we are also given as input a subset X of the universe and an integer k. The task is to determine whether one can add at most k elements to X to obtain a set in the (implicitly defined) family. Our main result is that a c^k n^O(1) time algorithm for the extension problem immediately yields a randomized algorithm for finding a solution of any size with running time O((2 − 1/c)^n). In many cases, the extension problem can be reduced to simply finding a solution of size at most k. Furthermore, efficient algorithms for finding small solutions have been extensively studied in the field of parameterized algorithms. Directly applying these algorithms, our theorem yields in one stroke significant improvements over the best known exponential-time algorithms for several well-studied problems, including d-HITTING SET, FEEDBACK VERTEX SET, NODE UNIQUE LABEL COVER, and WEIGHTED d-SAT. Our results demonstrate an interesting and very concrete connection between parameterized algorithms and exact exponential-time algorithms. We also show how to derandomize our algorithms at the cost of a subexponential multiplicative factor in the running time. Our derandomization is based on an efficient construction of a new pseudo-random object that might be of independent interest. Finally, we extend our methods to establish new combinatorial upper bounds and develop enumeration algorithms.
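A toy rendition of the framework on a small HITTING SET instance (the brute-force extension oracle and the names here are ours, not the paper's): the oracle decides whether a partial solution X can be completed by adding at most k elements, and sampling random starting sets and calling the oracle gives a randomized search for a bounded-size solution:

```python
import random

def extends_to_hitting_set(sets, X, k):
    """Brute-force extension oracle: can X be grown by at most k elements
    so that every set in `sets` is hit?"""
    remaining = [S for S in sets if not (S & X)]
    if not remaining:
        return True
    if k == 0:
        return False
    return any(extends_to_hitting_set(sets, X | {v}, k - 1)
               for v in set().union(*remaining))

def monotone_local_search(sets, universe, t, k, trials, rng):
    """Sample a random (t - k)-subset X and try to extend it by <= k
    elements; repeat. Finds a size-<=t hitting set with good probability
    if one exists (toy version of the reduction in the abstract)."""
    elems = sorted(universe)
    return any(extends_to_hitting_set(sets, set(rng.sample(elems, t - k)), k)
               for _ in range(trials))

sets = [{1, 2}, {2, 3}, {4}]
print(extends_to_hitting_set(sets, set(), 2))  # {2, 4} hits everything -> True
print(extends_to_hitting_set(sets, set(), 1))  # no single element works -> False
```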
Citations: 12
Detecting an Odd Hole
Pub Date: 2019-03-01 DOI: 10.1145/3375720
M. Chudnovsky, A. Scott, P. Seymour, S. Spirkl
We give a polynomial-time algorithm to test whether a graph contains an induced cycle whose length is odd and greater than three.
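For intuition about what is being decided, here is an exponential brute-force reference check of ours (illustration only; the paper's whole point is that a polynomial-time test exists): a graph has an odd hole iff some odd subset of at least five vertices induces a cycle.

```python
from itertools import combinations

def induces_cycle(adj, S):
    """Does vertex set S induce a single cycle in the graph `adj`
    (a dict mapping each vertex to its set of neighbors)?"""
    sub = {v: adj[v] & S for v in S}
    if any(len(nbrs) != 2 for nbrs in sub.values()):
        return False
    # a 2-regular induced subgraph is a single cycle iff it is connected
    start = next(iter(S))
    seen, stack = {start}, [start]
    while stack:
        for u in sub[stack.pop()]:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return seen == S

def has_odd_hole(adj):
    """Try every odd vertex subset of size >= 5 (exponential time)."""
    return any(induces_cycle(adj, set(S))
               for r in range(5, len(adj) + 1, 2)
               for S in combinations(adj, r))

C5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
C6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(has_odd_hole(C5), has_odd_hole(C6))  # True False
```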
Citations: 26
Near Optimal Online Algorithms and Fast Approximation Algorithms for Resource Allocation Problems
Pub Date: 2019-01-09 DOI: 10.1145/3284177
Nikhil R. Devanur, K. Jain, Balasubramanian Sivan, Christopher A. Wilkens
We present prior-robust algorithms for a large class of resource allocation problems where requests arrive one-by-one (online), drawn independently from an unknown distribution at every step. We design a single algorithm that, for every possible underlying distribution, obtains a (1 − ε) fraction of the profit obtained by an algorithm that knows the entire request sequence ahead of time. The factor ε approaches 0 when no single request consumes/contributes a significant fraction of the global consumption/contribution by all requests together. We show that the tradeoff we obtain here, which determines how fast ε approaches 0, is near optimal: we give a nearly matching lower bound showing that the tradeoff cannot be improved much beyond what we obtain. Going beyond the model of a static underlying distribution, we introduce the adversarial stochastic input model, where an adversary, possibly in an adaptive manner, controls the distributions from which the requests are drawn at each step. Placing no restriction on the adversary, we design an algorithm that obtains a (1 − ε) fraction of the optimal profit obtainable w.r.t. the worst distribution in the adversarial sequence. Further, if the algorithm is given one number per distribution, namely the optimal profit possible for each of the adversary’s distributions, then we design an algorithm that achieves a (1 − ε) fraction of the weighted average of the optimal profits of the distributions the adversary picks. In the offline setting, we give a fast algorithm to solve very large linear programs (LPs) with both packing and covering constraints. We give algorithms to approximately solve (within a factor of 1 + ε) the mixed packing-covering problem with O(γ m log(n/δ)/ε^2) oracle calls, where the constraint matrix of the LP has dimension n × m, the success probability of the algorithm is 1 − δ, and γ quantifies how significant a single request is when compared to the sum total of all requests. We discuss implications of our results for several special cases, including online combinatorial auctions, network routing, and the adwords problem.
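A much-simplified sketch of the learn-then-price idea common to online resource allocation (our own toy heuristic, not the paper's algorithm): observe the first ε fraction of requests to estimate a value-density threshold, then greedily accept later requests that beat it while capacity remains.

```python
def online_allocate(requests, capacity, eps):
    """requests: list of (value, size) pairs arriving in order.
    Returns the total value accepted under the learned threshold."""
    m = max(1, int(eps * len(requests)))
    sample, rest = requests[:m], requests[m:]
    densities = sorted(v / s for v, s in sample)
    price = densities[len(densities) // 2]  # median sampled value density
    used = total = 0.0
    for v, s in rest:
        if v / s >= price and used + s <= capacity:
            used += s
            total += v
    return total

reqs = [(1, 1), (2, 1), (3, 1), (4, 1)]
print(online_allocate(reqs, capacity=2, eps=0.5))  # accepts (3,1), (4,1) -> 7.0
```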
Cited by: 28
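The "learn a price, then allocate" idea behind such prior-robust online allocation can be illustrated with a toy single-resource sketch. This is not the paper's algorithm — the function names, the 10% learning fraction, and the single-budget setting are all illustrative assumptions:

```python
import random

def estimate_price(sample, sample_budget):
    """Estimate a per-unit price from an observed sample.

    Sort the sampled (value, size) requests by value density, greedily
    fill the proportionally scaled budget, and return the density of the
    last request that fits: later requests must beat this price.
    """
    items = sorted(sample, key=lambda vs: vs[0] / vs[1], reverse=True)
    used, price = 0.0, 0.0
    for v, s in items:
        if used + s > sample_budget:
            break
        used += s
        price = v / s
    return price

def online_allocate(requests, budget, learn_frac=0.1):
    """Toy prior-robust online allocation of a single budgeted resource.

    Observe the first learn_frac fraction of requests without accepting
    anything, learn a price from them, then accept each later request
    whose value density beats the price while budget remains.
    """
    n = len(requests)
    k = max(1, int(learn_frac * n))
    price = estimate_price(requests[:k], learn_frac * budget)
    profit, used = 0.0, 0.0
    for v, s in requests[k:]:
        if v / s >= price and used + s <= budget:
            used += s
            profit += v
    return profit
```

Because requests are drawn i.i.d., the sample's price threshold is representative of the rest of the sequence, which is the intuition behind the 1−ε guarantee against the offline optimum.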
The PCL Theorem
Pub Date : 2018-12-12 DOI: 10.1145/3266141
V. Bushkov, Dmytro Dziuma, P. Fatourou, R. Guerraoui
We establish a theorem called the PCL theorem, which states that it is impossible to design a transactional memory algorithm that ensures (1) parallelism, i.e., transactions need not synchronize unless they access the same application objects; (2) very little consistency, i.e., a consistency condition, introduced here as weak adaptive consistency, that is weaker than snapshot isolation, processor consistency, and any consistency condition stronger than these (such as opacity, serializability, causal serializability, etc.); and (3) very little liveness, i.e., transactions are guaranteed to eventually commit only if they run solo.
Cited by: 2
Scaling Exponential Backoff
Pub Date : 2018-12-12 DOI: 10.1145/3276769
M. A. Bender, Jeremy T. Fineman, Seth Gilbert, Maxwell Young
Randomized exponential backoff is a widely deployed technique for coordinating access to a shared resource. A good backoff protocol should, arguably, satisfy three natural properties: (1) it should provide constant throughput, wasting as little time as possible; (2) it should require few failed access attempts, minimizing the amount of wasted effort; and (3) it should be robust, continuing to work efficiently even if some of the access attempts fail for spurious reasons. Unfortunately, exponential backoff has some well-known limitations in two of these areas: it can suffer subconstant throughput under bursty traffic, and it is not robust to adversarial disruption. The goal of this article is to "fix" exponential backoff by making it scalable, particularly focusing on the case where processes arrive in an online, worst-case fashion. We present a relatively simple backoff protocol, Re-Backoff, that has, at its heart, a version of exponential backoff. It guarantees expected constant throughput with dynamic process arrivals and requires only an expected polylogarithmic number of access attempts per process. Re-Backoff is also robust to periods where the shared resource is unavailable for a period of time. If it is unavailable for D time slots, Re-Backoff provides the following guarantees. For n packets, the expected number of access attempts for successfully sending a packet is O(log²(n + D)). For the case of an infinite number of packets, we provide a similar result in terms of the maximum number of processes that are ever in the system concurrently.
Cited by: 10
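A minimal simulation of classical randomized exponential backoff (the baseline the article improves on, not the Re-Backoff protocol itself) shows the window-doubling mechanism; the slotted model and transmission probabilities here are standard textbook assumptions:

```python
import random

def exponential_backoff(n_procs, max_slots=100000, rng=None):
    """Simulate n processes contending for one shared slot.

    Each pending process transmits in a slot with probability 1/window;
    a slot succeeds only when exactly one process transmits, and every
    process involved in a collision doubles its window. Returns the
    number of slots until all processes have succeeded.
    """
    rng = rng or random.Random()
    window = {p: 1 for p in range(n_procs)}
    pending = set(window)
    for slot in range(max_slots):
        senders = [p for p in pending if rng.random() < 1.0 / window[p]]
        if len(senders) == 1:
            pending.discard(senders[0])  # lone sender gets the slot
            if not pending:
                return slot + 1
        else:
            for p in senders:
                window[p] *= 2  # collision: back off further
    raise RuntimeError("did not finish within max_slots")
```

Running this under bursty arrivals (all processes starting at once, as above) makes the throughput loss visible: since at most one process succeeds per slot, completion takes at least n slots, and collisions inflate that further — the inefficiency Re-Backoff is designed to avoid.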
Copyright © 2023 Book学术 All rights reserved.