
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science: Latest Publications

Approximation Algorithms for Correlated Knapsacks and Non-martingale Bandits
Pub Date : 2011-02-17 DOI: 10.1109/FOCS.2011.48
Anupam Gupta, Ravishankar Krishnaswamy, M. Molinaro, R. Ravi
In the stochastic knapsack problem, we are given a knapsack of size B, and a set of items whose sizes and rewards are drawn from a known probability distribution. To learn an item's actual size and reward we have to schedule it -- when it completes, we get to know these values. The goal is to schedule the items (possibly making adaptive decisions based on the sizes seen so far) to maximize the expected total reward of items which successfully pack into the knapsack. We know constant-factor approximations when (i) the rewards and sizes are independent, and (ii) we cannot prematurely cancel items after we schedule them. What if either or both assumptions are relaxed? Related stochastic packing problems are the multi-armed bandit (and budgeted learning) problems, where one is given several arms which evolve in a specified stochastic fashion with each pull, and the goal is to (adaptively) decide which arms to pull in order to maximize the expected reward obtained after B pulls in total. Much recent work on this problem focuses on the case when the evolution of each arm follows a martingale, i.e., when the expected reward from one pull of an arm is the same as the reward at the current state. What if the rewards do not form a martingale? In this paper, we give O(1)-approximation algorithms for the stochastic knapsack problem with correlations and/or cancellations. Extending the ideas developed here, we give O(1)-approximations for MAB problems without the martingale assumption. Indeed, we can show that previously proposed linear programming relaxations for these problems have large integrality gaps. So we propose new time-indexed LP relaxations; using a decomposition and "gap-filling" approach, we convert these fractional solutions to distributions over strategies, and then use the LP values and the time-ordering information from these strategies to devise randomized adaptive scheduling algorithms.
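The problem model in the abstract can be made concrete with a small simulator: a non-adaptive policy fixes an ordering up front, each scheduled item reveals a jointly drawn (size, reward) pair only on completion, and an item that overflows the knapsack pays nothing. This is a toy illustration of the setting only, not the paper's O(1)-approximation; `simulate_policy` and the distribution format are invented for the sketch.

```python
import random

def simulate_policy(order, B, dists, trials=2000, seed=0):
    """Monte-Carlo estimate of the expected reward of a fixed (non-adaptive)
    ordering for the stochastic knapsack.  `dists[i]` is a list of equally
    likely (size, reward) pairs for item i, so sizes and rewards may be
    correlated.  An item's draw is revealed only when it is scheduled, and
    it earns its reward only if it fits in the remaining capacity."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        cap, reward = B, 0.0
        for i in order:
            size, rew = rng.choice(dists[i])
            cap -= size
            if cap < 0:        # item overflows the knapsack: no reward, stop
                break
            reward += rew
        total += reward
    return total / trials
```

With degenerate one-point distributions the estimate is exact, which makes the model easy to sanity-check before trying random instances.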
Citations: 76
A Constant Factor Approximation Algorithm for Unsplittable Flow on Paths
Pub Date : 2011-02-17 DOI: 10.1137/120868360
P. Bonsma, J. Schulz, Andreas Wiese
In this paper, we present a constant-factor approximation algorithm for the unsplittable flow problem on a path. This improves on the previous best known approximation factor of O(log n). The approximation ratio of our algorithm is 7 + ε for any ε > 0. In the unsplittable flow problem on a path, we are given a capacitated path P and n tasks, each task having a demand, a profit, and start and end vertices. The goal is to compute a maximum-profit set of tasks such that, for each edge e of P, the total demand of selected tasks that use e does not exceed the capacity of e. This well-studied problem occurs naturally in various settings, and therefore it has been studied under alternative names, such as resource allocation, bandwidth allocation, resource-constrained scheduling, temporal knapsack, and interval packing. Polynomial-time constant-factor approximation algorithms for the problem were previously known only under the no-bottleneck assumption (in which the maximum task demand must be no greater than the minimum edge capacity). We introduce several novel algorithmic techniques, which might be of independent interest: a framework which reduces the problem to instances with a bounded range of capacities, and a new geometrically inspired dynamic program which solves a special case of the maximum-weight independent set of rectangles problem to optimality. In addition, we show that the problem is strongly NP-hard even if all edge capacities are equal and all demands are either 1, 2, or 3.
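The packing constraint that defines the problem is simple to state in code. The sketch below (names are ours, not from the paper) checks whether a selected set of tasks respects every edge capacity of the path and, if so, returns its profit; an exact solver or an approximation algorithm would search over such sets.

```python
def feasible_profit(capacities, tasks, selected):
    """Unsplittable flow on a path, feasibility check: for every edge e of
    the path, the total demand of selected tasks whose interval covers e
    must not exceed capacities[e].  Tasks are (start_edge, end_edge,
    demand, profit) tuples with half-open edge ranges [start_edge,
    end_edge).  Returns the total profit if feasible, else None."""
    load = [0] * len(capacities)
    for i in selected:
        s, t, d, _ = tasks[i]
        for e in range(s, t):
            load[e] += d
    if any(load[e] > capacities[e] for e in range(len(capacities))):
        return None
    return sum(tasks[i][3] for i in selected)
```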
Citations: 78
Optimal Bounds for Quantum Bit Commitment
Pub Date : 2011-02-08 DOI: 10.1109/FOCS.2011.42
A. Chailloux, Iordanis Kerenidis
Bit commitment is a fundamental cryptographic primitive with numerous applications. Quantum information allows for bit commitment schemes in the information-theoretic setting, where no dishonest party can perfectly cheat. The previously best-known quantum protocol, by Ambainis, achieved a cheating probability of at most 3/4. On the other hand, Kitaev showed that no quantum protocol can have cheating probability less than 1/√2 (his lower bound on coin flipping can be easily extended to bit commitment). Closing this gap has since been an important open question. In this paper, we provide the optimal bound for quantum bit commitment. First, we show a lower bound of approximately 0.739, improving Kitaev's lower bound. For this, we present some generic cheating strategies for Alice and Bob and conclude by proving a new relation between the trace distance and fidelity of two quantum states. Second, we present an optimal quantum bit commitment protocol which has cheating probability arbitrarily close to 0.739. More precisely, we show how to use any weak coin flipping protocol with cheating probability 1/2 + ε in order to achieve a quantum bit commitment protocol with cheating probability 0.739 + O(ε). We then use the optimal quantum weak coin flipping protocol described by Mochon. Last, in order to stress the fact that our protocol uses quantum effects beyond the weak coin flip, we show that any classical bit commitment protocol with access to perfect weak (or strong) coin flipping has cheating probability at least 3/4.
Citations: 63
On the Complexity of Commuting Local Hamiltonians, and Tight Conditions for Topological Order in Such Systems
Pub Date : 2011-02-03 DOI: 10.1109/FOCS.2011.58
D. Aharonov, Lior Eldar
The local Hamiltonian problem plays a role in quantum complexity theory equivalent to that of SAT. Understanding the complexity of the intermediate case, in which the constraints are quantum but all local terms in the Hamiltonian commute, is of importance for conceptual, physical, and computational-complexity reasons. Bravyi and Vyalyi showed in 2003, using a clever application of the representation theory of C*-algebras, that if the terms in the Hamiltonian are all two-local, the problem is in NP, and the entanglement in the ground states is local. The general case has remained open since then. In this paper we extend this result beyond the two-local case, to the case of three-qubit interactions. We then extend our results even further, and show that NP verification is possible for three-wise interactions between qutrits as well, as long as the interaction graph is planar and also "nearly Euclidean" in some well-defined sense. The proofs imply that in all such systems, the entanglement in the ground states is local. These extensions imply an intriguing sharp transition phenomenon in commuting Hamiltonian systems: the ground spaces of 3-local "physical" systems based on qubits and qutrits are diagonalizable by a basis whose entanglement is highly local, while even slightly more involved interactions (where the particle dimensionality or the locality of the interaction is larger) already exhibit an important long-range entanglement property called Topological Order. Our results thus imply that Kitaev's celebrated Toric code construction is, in a well-defined sense, optimal as a construction of Topological Order based on commuting Hamiltonians.
Citations: 62
The Minimum k-way Cut of Bounded Size is Fixed-Parameter Tractable
Pub Date : 2011-01-24 DOI: 10.1109/FOCS.2011.53
K. Kawarabayashi, M. Thorup
We consider the minimum k-way cut problem for unweighted undirected graphs with a size bound s on the number of cut edges allowed. Thus we seek to remove as few edges as possible so as to split a graph into k components, or report that this requires cutting more than s edges. We show that this problem is fixed-parameter tractable (FPT) with the standard parameterization in terms of the solution size s. More precisely, for s = O(1), we present a quadratic-time algorithm. Moreover, we present a much easier linear-time algorithm for planar graphs and bounded-genus graphs. Our tractability result stands in contrast to the known W[1]-hardness of related problems. Without the size bound, Downey et al. [2003] proved that the minimum k-way cut problem is W[1]-hard with parameter k, even for simple unweighted graphs. Downey et al. asked about the status for planar graphs. We get linear time with fixed parameter k for simple planar graphs, since the minimum k-way cut of a planar graph has size at most 6k. More generally, we get FPT with parameter k for any graph class with bounded average degree. A simple reduction shows that vertex cuts are at least as hard as edge cuts, so the minimum k-way vertex cut is also W[1]-hard with parameter k. Marx [2004] proved that finding a minimum k-way vertex cut of size s is also W[1]-hard with parameter s. Marx asked about the FPT status with edge cuts, which we prove tractable here. We are not aware of any other cut problem where the vertex version is W[1]-hard but the edge version is FPT; e.g., Marx [2004] proved that the k-terminal cut problem is FPT parameterized by the cut size, for both edge and vertex cuts.
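The parameterization by cut size s can be illustrated with a brute force whose running time is exponential only in s: try every set of at most s edges and test whether its removal leaves at least k components. This is a naive stand-in for the paper's FPT algorithm, written only to make the problem statement concrete; all names are ours.

```python
from itertools import combinations

def count_components(n, edges):
    """Number of connected components of an n-vertex graph (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    comps = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return comps

def min_kway_cut(n, edges, k, s):
    """Smallest number of edges (at most s) whose removal leaves at least
    k components, or None if no cut of size <= s suffices.  The search is
    exponential in s only, crudely mirroring the fixed-parameter setting."""
    for c in range(s + 1):
        for cut in combinations(range(len(edges)), c):
            cutset = set(cut)
            kept = [e for i, e in enumerate(edges) if i not in cutset]
            if count_components(n, kept) >= k:
                return c
    return None
```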
Citations: 61
Maximizing Expected Utility for Stochastic Combinatorial Optimization Problems
Pub Date : 2010-12-14 DOI: 10.1109/FOCS.2011.33
J. Li, A. Deshpande
We study the stochastic versions of a broad class of combinatorial problems where the weights of the elements in the input dataset are uncertain. The class of problems that we study includes shortest paths, minimum-weight spanning trees, and minimum-weight matchings over probabilistic graphs, as well as other combinatorial problems like knapsack. We observe that the expected value is inadequate for capturing different types of risk-averse or risk-prone behaviors, and instead we consider a more general objective, which is to maximize the expected utility of the solution for some given utility function, rather than the expected weight (expected weight becomes a special case). We show that we can obtain a polynomial-time approximation algorithm with additive error ε for any ε > 0, if there is a pseudopolynomial-time algorithm for the exact version of the problem (this is true for the problems mentioned above) and the maximum value of the utility function is bounded by a constant. Our result generalizes several prior results on stochastic shortest paths, stochastic spanning trees, and stochastic knapsack. Our algorithm for utility maximization makes use of the separability of exponential utility and a technique to decompose a general utility function into exponential utility functions, which may be useful in other stochastic optimization problems.
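The separability the authors exploit is the identity E[exp(-λ·Σᵢ wᵢ)] = Πᵢ E[exp(-λ·wᵢ)] for independent weights, which lets an exponential utility be evaluated one element at a time instead of over the exponentially many joint outcomes. A small sketch (function names are ours) checks the identity numerically against direct enumeration:

```python
import math
from itertools import product

def expected_exp_utility(weight_dists, lam):
    """E[exp(-lam * sum_i w_i)] for independent item weights, computed as a
    product of per-item expectations (separability of exponential utility).
    Each distribution is a list of (value, probability) pairs."""
    result = 1.0
    for dist in weight_dists:
        result *= sum(p * math.exp(-lam * w) for w, p in dist)
    return result

def expected_exp_utility_bruteforce(weight_dists, lam):
    """Same quantity by summing over all joint outcomes (exponential cost)."""
    total = 0.0
    for outcome in product(*weight_dists):
        p = math.prod(pr for _, pr in outcome)
        w = sum(v for v, _ in outcome)
        total += p * math.exp(-lam * w)
    return total
```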
Citations: 26
Enumerative Lattice Algorithms in any Norm Via M-ellipsoid Coverings
Pub Date : 2010-11-25 DOI: 10.1109/FOCS.2011.31
D. Dadush, Chris Peikert, S. Vempala
We give a novel algorithm for enumerating lattice points in any convex body, and give applications to several classic lattice problems, including the Shortest and Closest Vector Problems (SVP and CVP, respectively) and Integer Programming (IP). Our enumeration technique relies on a classical concept from asymptotic convex geometry known as the M-ellipsoid, and uses as a crucial subroutine the recent algorithm of Micciancio and Voulgaris (STOC 2010) for lattice problems in the l2 norm. As a main technical contribution, which may be of independent interest, we build on the techniques of Klartag (Geometric and Functional Analysis, 2006) to give an expected 2^O(n)-time algorithm for computing an M-ellipsoid for any n-dimensional convex body. As applications, we give deterministic 2^O(n)-time and -space algorithms for solving exact SVP, and exact CVP when the target point is sufficiently close to the lattice, on n-dimensional lattices in any (semi-)norm, given an M-ellipsoid of the unit ball. In many norms of interest, including all lp norms, an M-ellipsoid is computable in deterministic poly(n) time, in which case these algorithms are fully deterministic. Here our approach may be seen as a derandomization of the "AKS sieve" for exact SVP and CVP (Ajtai, Kumar, and Sivakumar, STOC 2001 and CCC 2002). As a further application of our SVP algorithm, we derive an expected O(f*(n))^n-time algorithm for Integer Programming, where f*(n) denotes the optimal bound in the so-called "flatness theorem," which satisfies f*(n) = O(n^(4/3) polylog(n)) and is conjectured to be f*(n) = O(n). Our runtime improves upon the previous best of O(n^2)^n by Hildebrand and Koppe (2010).
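For intuition about what enumerating lattice points for SVP means in the l2 special case, here is a naive brute force over a bounded coefficient box. It is a toy contrast to the paper's M-ellipsoid-based enumeration and is correct only when the shortest vector's coefficients fall inside the box; the function name and interface are ours.

```python
from itertools import product

def shortest_vector_bruteforce(basis, R=3):
    """Naive exact-SVP in the l2 norm: enumerate all nonzero integer
    coefficient vectors in [-R, R]^n and return the squared length of the
    shortest lattice vector found.  `basis` is a list of n basis rows."""
    n = len(basis)
    best = None
    for coeffs in product(range(-R, R + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue                       # exclude the zero vector
        vec = [sum(c * basis[i][j] for i, c in enumerate(coeffs))
               for j in range(len(basis[0]))]
        norm2 = sum(x * x for x in vec)
        if best is None or norm2 < best:
            best = norm2
    return best
```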
Citations: 101
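The M-ellipsoid construction and the Micciancio–Voulgaris subroutine are far beyond a short snippet, but the basic object being computed — a shortest nonzero lattice vector — is easy to illustrate. Below is a toy brute-force exact-SVP search in the l2 norm; the function name and the coefficient bound are our assumptions for illustration, not the paper's 2^O(n) enumeration technique.

```python
import itertools
import math

def shortest_vector_bruteforce(basis, coeff_bound):
    """Toy exact-SVP search: try every integer coefficient vector in
    [-coeff_bound, coeff_bound]^n and return the shortest nonzero
    lattice vector (l2 norm). Exponential in the bound, unlike the
    paper's algorithm, and correct only if the bound is large enough."""
    n, dim = len(basis), len(basis[0])
    best_vec, best_norm = None, math.inf
    for coeffs in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=n):
        if not any(coeffs):
            continue  # skip the all-zero combination (the zero vector)
        vec = [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(dim)]
        norm = math.sqrt(sum(v * v for v in vec))
        if norm < best_norm:
            best_vec, best_norm = vec, norm
    return best_vec, best_norm

# The basis (1,2), (2,3) is unimodular, so the lattice is all of Z^2
# and the shortest nonzero vector has norm 1 (e.g. (0,1) = -1*(1,2)... ).
vec, norm = shortest_vector_bruteforce([[1, 2], [2, 3]], 3)
```

Real enumeration algorithms replace the naive coefficient box by a geometric covering of the body (here, by M-ellipsoids), which is where the 2^O(n) bound comes from.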
Quantum Query Complexity of State Conversion
Pub Date : 2010-11-12 DOI: 10.1109/FOCS.2011.75
Troy Lee, R. Mittal, B. Reichardt, R. Spalek, M. Szegedy
State conversion generalizes query complexity to the problem of converting between two input-dependent quantum states by making queries to the input. We characterize the complexity of this problem by introducing a natural information-theoretic norm that extends the Schur product operator norm. The complexity of converting between two systems of states is given by the distance between them, as measured by this norm. In the special case of function evaluation, the norm is closely related to the general adversary bound, a semi-definite program that lower-bounds the number of input queries needed by a quantum algorithm to evaluate a function. We thus obtain that the general adversary bound characterizes the quantum query complexity of any function whatsoever. This generalizes and simplifies the proof of the same result in the case of boolean input and output. Also in the case of function evaluation, we show that our norm satisfies a remarkable composition property, implying that the quantum query complexity of the composition of two functions is at most the product of the query complexities of the functions, up to a constant. Finally, our result implies that discrete and continuous-time query models are equivalent in the bounded-error setting, even for the general state-conversion problem.
Citations: 150
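The adversary norm itself is a semidefinite program and out of scope for a snippet, but the flavor of the composition property can be checked in the classical deterministic model, where the analogous bound D(f∘g) ≤ D(f)·D(g) also holds. A small brute-force sketch (function names are ours, not the paper's):

```python
from itertools import product

def dt_depth(f, n, fixed=None):
    """Deterministic decision-tree (query) complexity of
    f: {0,1}^n -> {0,1}, by exhaustive recursion: depth 0 if f is
    constant under the current restriction, otherwise query the
    variable minimizing the worst-case remaining depth."""
    fixed = fixed or {}
    inputs = [x for x in product([0, 1], repeat=n)
              if all(x[i] == b for i, b in fixed.items())]
    if len({f(x) for x in inputs}) == 1:
        return 0
    return min(1 + max(dt_depth(f, n, {**fixed, i: 0}),
                       dt_depth(f, n, {**fixed, i: 1}))
               for i in range(n) if i not in fixed)

OR2 = lambda x: x[0] | x[1]
AND2 = lambda x: x[0] & x[1]
composed = lambda x: OR2((AND2(x[:2]), AND2(x[2:])))  # OR of two ANDs on 4 bits

# D(OR2) = D(AND2) = 2, and the composed function needs all 4 queries,
# so the product bound D(f∘g) <= D(f) * D(g) holds here with equality.
```

The paper's stronger statement is that the *quantum* query complexity of a composition is at most the product of the components' complexities, up to a constant, via the composition property of the adversary norm.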
Local Distributed Decision
Pub Date : 2010-11-09 DOI: 10.1109/FOCS.2011.17
P. Fraigniaud, Amos Korman, D. Peleg
A central theme in distributed network algorithms concerns understanding and coping with the issue of {\em locality}. Despite considerable progress, research efforts in this direction have not yet resulted in a solid basis in the form of a fundamental computational complexity theory for locality. Inspired by sequential complexity theory, we focus on a complexity theory for \emph{distributed decision problems}. In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. We consider the standard $\cal{LOCAL}$ model of computation and define $LD(t)$ (for {\em local decision}) as the class of decision problems that can be solved in $t$ communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class $BPLD(t,p,q)$, containing all languages for which there exists a randomized algorithm that runs in $t$ rounds, accepts correct instances with probability at least $p$, and rejects incorrect ones with probability at least $q$. We show that $p^2+q = 1$ is a threshold for the containment of $LD(t)$ in $BPLD(t,p,q)$. More precisely, we show that there exists a language that does not belong to $LD(t)$ for any $t=o(n)$ but does belong to $BPLD(0,p,q)$ for any $p,q\in(0,1]$ such that $p^2+q\leq 1$. On the other hand, we show that, restricted to hereditary languages, $BPLD(t,p,q) = LD(O(t))$, for any function $t$ and any $p,q\in(0,1]$ such that $p^2+q>1$. In addition, we investigate the impact of non-determinism on local decision, and establish some structural results inspired by classical computational complexity theory. Specifically, we show that non-determinism does help, but that this help is limited, as there exist languages that cannot be decided non-deterministically. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that enables one to decide \emph{all} languages \emph{in constant time}. Finally, we introduce the notion of local reduction, and establish some completeness results.
Citations: 51
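As a concrete illustration of the decision setting (a toy example of ours, not from the paper): properly colored graphs form a hereditary language decidable in a single round. Each node inspects its radius-1 neighborhood and votes, and the global instance is accepted iff every node accepts — the standard acceptance rule for $LD(t)$.

```python
def local_decide(adj, labels, verdict):
    """One-round LOCAL decision: every node applies `verdict` to its own
    label and the labels of its neighbors; the global input instance is
    accepted iff all nodes accept."""
    return all(verdict(labels[v], [labels[u] for u in adj[v]]) for v in adj)

# Language: properly colored graphs -- hereditary, and in LD(1).
proper = lambda mine, neighbors: all(mine != other for other in neighbors)

path = {0: [1], 1: [0, 2], 2: [1]}  # the path 0 - 1 - 2
ok = local_decide(path, {0: 'red', 1: 'blue', 2: 'red'}, proper)   # legal coloring
bad = local_decide(path, {0: 'red', 1: 'red', 2: 'blue'}, proper)  # edge 0-1 monochromatic
```

Hereditariness (every subgraph of a properly colored graph is properly colored) is exactly the property under which the paper shows $BPLD(t,p,q) = LD(O(t))$ above the $p^2+q$ threshold.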
Streaming Algorithms via Precision Sampling
Pub Date : 2010-11-04 DOI: 10.1109/FOCS.2011.82
Alexandr Andoni, Robert Krauthgamer, Krzysztof Onak
A technique introduced by Indyk and Woodruff (STOC 2005) has inspired several recent advances in data-stream algorithms. We show that a number of these results follow easily from the application of a single probabilistic method called Precision Sampling. Using this method, we obtain simple data-stream algorithms that maintain a randomized sketch of an input vector $x=(x_1,x_2,\ldots,x_n)$, which is useful for the following applications:
* Estimating the $F_k$-moment of $x$, for $k>2$.
* Estimating the $\ell_p$-norm of $x$, for $p\in[1,2]$, with small update time.
* Estimating cascaded norms $\ell_p(\ell_q)$ for all $p,q>0$.
* $\ell_1$ sampling, where the goal is to produce an element $i$ with probability (approximately) $|x_i|/\|x\|_1$. It extends to similarly defined $\ell_p$-sampling, for $p\in[1,2]$.
For all these applications the algorithm is essentially the same: scale the vector $x$ entry-wise by a well-chosen random vector, and run a heavy-hitter estimation algorithm on the resulting vector. Our sketch is a linear function of $x$, thereby allowing general updates to the vector $x$. Precision Sampling itself addresses the problem of estimating a sum $\sum_{i=1}^n a_i$ from weak estimates of each real $a_i\in[0,1]$. More precisely, the estimator first chooses a desired precision $u_i\in(0,1]$ for each $i\in[n]$, and then it receives an estimate of every $a_i$ within additive $u_i$. Its goal is to provide a good approximation to $\sum a_i$ while keeping a tab on the ``approximation cost'' $\sum_i (1/u_i)$. Here we refine previous work (Andoni, Krauthgamer, and Onak, FOCS 2010) which shows that as long as $\sum a_i=\Omega(1)$, a good multiplicative approximation can be achieved using total precision of only $O(n\log n)$.
Citations: 101
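A minimal sketch of the precision-sampling interface described above (the oracle and the naive uniform choice of precisions are our assumptions, not the paper's scheme): requesting every $a_i$ to additive precision $u_i=\epsilon/n$ gives an additive-$\epsilon$ estimate of $\sum a_i$, but at total cost $\sum_i 1/u_i = n^2/\epsilon$ — the quadratic baseline that the paper's randomized choice of the $u_i$ improves to $O(n\log n)$.

```python
import random

def estimate_sum(oracle, n, eps):
    """Naive precision sampling: ask for each a_i to within u = eps/n.
    The n additive errors total at most eps, and the 'approximation
    cost' sum_i(1/u_i) is n^2/eps -- quadratic, which is exactly what
    the paper's randomized choice of precisions avoids."""
    u = eps / n
    estimate = sum(oracle(i, u) for i in range(n))
    cost = n * (1.0 / u)
    return estimate, cost

# Hypothetical oracle: returns a_i corrupted by noise of magnitude <= u.
n, eps = 128, 0.5
a = [random.random() for _ in range(n)]
oracle = lambda i, u: a[i] + random.uniform(-u, u)
est, cost = estimate_sum(oracle, n, eps)  # |est - sum(a)| <= eps
```

In the streaming applications, such an oracle is realized by the sketch itself: scaling entries by random factors and reading off heavy hitters yields weak per-coordinate estimates of exactly this form.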