
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science: Latest Publications

The Complexity of Renaming
Pub Date : 2011-10-22 DOI: 10.1109/FOCS.2011.66
Dan Alistarh, J. Aspnes, Seth Gilbert, R. Guerraoui
We study the complexity of renaming, a fundamental problem in distributed computing in which a set of processes need to pick distinct names from a given namespace. We prove an individual lower bound of Ω(k) process steps for deterministic renaming into any namespace of size sub-exponential in k, where k is the number of participants. This bound is tight: it draws an exponential separation between deterministic and randomized solutions, and implies new tight bounds for deterministic fetch-and-increment registers, queues and stacks. The proof of the bound is interesting in its own right, for it relies on the first reduction from renaming to another fundamental problem in distributed computing: mutual exclusion. We complement our individual bound with a global lower bound of Ω(k log(k/c)) on the total step complexity of renaming into a namespace of size ck, for any c ≥ 1. This applies to randomized algorithms against a strong adversary, and helps derive new global lower bounds for randomized approximate counter and fetch-and-increment implementations, all tight within logarithmic factors.
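The gap the abstract draws between strong primitives and deterministic read/write implementations can be illustrated with a toy simulation. This is a sketch only: `tight_renaming` and its lock-based atomicity are assumptions of this illustration, not the paper's construction; the Ω(k) lower bound concerns the cost of *implementing* such an object from weaker primitives.

```python
import threading

def tight_renaming(k):
    """Illustrative sketch (not the paper's algorithm): a shared
    fetch-and-increment object trivially solves tight renaming, since
    each of k processes takes one atomic step to acquire a distinct
    name in {0, ..., k-1}."""
    counter = [0]
    lock = threading.Lock()  # stands in for the object's atomicity
    names = {}

    def participant(pid):
        with lock:  # atomic fetch-and-increment
            names[pid] = counter[0]
            counter[0] += 1

    threads = [threading.Thread(target=participant, args=(p,)) for p in range(k)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return names
```

Running `tight_renaming(8)` assigns each of the 8 simulated processes a distinct name from the tight namespace {0, ..., 7}, regardless of scheduling.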
Citations: 36
Algorithms for the Generalized Sorting Problem
Pub Date : 2011-10-22 DOI: 10.1109/FOCS.2011.54
Zhiyi Huang, Sampath Kannan, S. Khanna
We study the generalized sorting problem where we are given a set of n elements to be sorted but only a subset of all possible pairwise element comparisons is allowed. The goal is to determine the sorted order using the smallest possible number of allowed comparisons. The generalized sorting problem may be equivalently viewed as follows. Given an undirected graph G(V, E) where V is the set of elements to be sorted and E defines the set of allowed comparisons, adaptively find the smallest subset E′ ⊆ E of edges to probe such that the directed graph induced by E′ contains a Hamiltonian path. When G is a complete graph, we get the standard sorting problem, and it is well-known that Θ(n log n) comparisons are necessary and sufficient. An extensively studied special case of the generalized sorting problem is the nuts and bolts problem, where the allowed comparison graph is a complete bipartite graph between two equal-size sets. It is known that for this special case also, there is a deterministic algorithm that sorts using Θ(n log n) comparisons. However, when the allowed comparison graph is arbitrary, to our knowledge, no bound better than the trivial O(n^2) bound is known. Our main result is a randomized algorithm that sorts any allowed comparison graph using O(n^{3/2}) comparisons with high probability (provided the input is sortable). We also study the sorting problem in randomly generated allowed comparison graphs, and show that when the edge probability is p, O(min{n/p^2, n^{3/2}√p}) comparisons suffice on average to sort.
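The probe model described above can be sketched naively: probe an allowed comparison only when its outcome is not already implied by transitivity, then read the total order off the accumulated facts. This is an illustration of the model, not the paper's O(n^{3/2})-probe algorithm; the names `generalized_sort` and `probe` are invented here.

```python
from itertools import combinations

def generalized_sort(elems, allowed, probe):
    """Naive sketch of the generalized sorting model.  `allowed` is the
    edge set of G; `probe(a, b)` returns True iff a precedes b.  Probes
    are spent only on edges whose orientation is not yet implied."""
    less = {(a, b): False for a in elems for b in elems}

    def close():  # transitive closure of the known '<' facts
        for m in elems:
            for a in elems:
                for b in elems:
                    if less[(a, m)] and less[(m, b)]:
                        less[(a, b)] = True

    probes = 0
    for a, b in allowed:
        close()
        if less[(a, b)] or less[(b, a)]:
            continue  # orientation already implied; probe saved
        probes += 1
        if probe(a, b):
            less[(a, b)] = True
        else:
            less[(b, a)] = True
    close()
    # rank each element by how many others are known to precede it
    order = sorted(elems, key=lambda x: sum(less[(y, x)] for y in elems))
    return order, probes

# With a complete comparison graph this reduces to standard sorting:
elems = [3, 1, 4, 1.5, 9, 2]
order, probes = generalized_sort(elems, list(combinations(elems, 2)),
                                 lambda a, b: a < b)
```

Even this naive strategy skips every comparison implied by earlier answers; the paper's contribution is bounding the number of unavoidable probes on an arbitrary sortable graph.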
Citations: 22
Testing and Reconstruction of Lipschitz Functions with Applications to Data Privacy
Pub Date : 2011-10-22 DOI: 10.1137/110840741
Madhav Jha, Sofya Raskhodnikova
A function f : D → R has Lipschitz constant c if d_R(f(x), f(y)) ≤ c·d_D(x, y) for all x, y in D, where d_R and d_D denote the distance functions on the range and domain of f, respectively. We say a function is Lipschitz if it has Lipschitz constant 1. (Note that rescaling by a factor of 1/c converts a function with a Lipschitz constant c into a Lipschitz function.) In other words, Lipschitz functions are not very sensitive to small changes in the input. We initiate the study of testing and local reconstruction of the Lipschitz property of functions. A property tester has to distinguish functions with the property (in this case, Lipschitz) from functions that are ε-far from having the property, that is, differ from every function with the property on at least an ε fraction of the domain. A local filter reconstructs an arbitrary function f to ensure that the reconstructed function g has the desired property (in this case, is Lipschitz), changing f only when necessary. A local filter is given a function f and a query x and, after looking up the value of f on a small number of points, it has to output g(x) for some function g, which has the desired property and does not depend on x. If f has the property, g must be equal to f. We consider functions over the domains {0,1}^d, {1,...,n} and {1,...,n}^d, equipped with the l1 distance. We design efficient testers of the Lipschitz property for functions of the form f : {0,1}^d → δZ, where δ ∈ (0,1] and δZ is the set of integer multiples of δ, and of the form f : {1,...,n} → R, where R is (discretely) metrically convex. In the first case, the tester runs in time O(d·min{d, r}/(δε)), where r is the diameter of the image of f; in the second, in time O((log n)/ε). We give corresponding lower bounds of Ω(d) and Ω(log n) on the query complexity (in the second case, only for nonadaptive 1-sided error testers). Our lower bound for functions over {0,1}^d is tight for the case of the {0,1,2} range and constant ε.
The first tester implies an algorithm for functions of the form f : {0,1}^d → R that distinguishes Lipschitz functions from functions that are ε-far from (1+δ)-Lipschitz. We also present a local filter of the Lipschitz property for functions of the form f : {1,...,n}^d → R with lookup complexity O((log n + 1)^d). For functions of the form {0,1}^d, we show that every nonadaptive local filter has lookup complexity exponential in d. The testers that we developed have applications to program analysis. The reconstructors have applications to data privacy. For the first application, the Lipschitz property of the function computed by a program corresponds to a notion of robustness to noise in the data. The application to privacy is based on the fact that a function f of entries in a database of sensitive information can be released with noise of magnitude proportional to a Lipschitz constant of f, while preserving the privacy of individuals whose data is stored in the database (Dwork, McSherry, Nissim and Smith, TCC 2006). We give a differentially private mechanism, based on local filters, for releasing a function f when a Lipschitz constant of f is provided by a distrusted client. We show that when no reliable Lipschitz constant of f is given, previously known differentially private mechanisms have a substantially higher running time or a higher expected error for a large class of symmetric functions f.
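The privacy application rests on the Laplace mechanism of Dwork, McSherry, Nissim and Smith cited in the abstract: noise scaled to a Lipschitz constant (sensitivity) of f preserves differential privacy. A minimal sketch follows; the function name `laplace_release` is ours, and the sampling uses the standard identity that a Laplace variate is a difference of two exponentials.

```python
import math
import random

def laplace_release(f, database, lipschitz_const, epsilon):
    """Sketch of the Dwork-McSherry-Nissim-Smith mechanism: release
    f(database) plus Laplace noise of scale (Lipschitz constant)/epsilon.
    If a distrusted client under-reports the constant, the privacy
    guarantee breaks -- the paper's local filters enforce the claimed
    constant instead of trusting it."""
    scale = lipschitz_const / epsilon
    # Laplace(0, scale) as a difference of two Exponential(scale) samples
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return f(database) + noise
```

For example, releasing the sum of {0,1}-valued entries (Lipschitz constant 1 with respect to changing one entry) yields the true sum perturbed by noise of expected magnitude 1/epsilon.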
Citations: 83
An Algebraic Proof of a Robust Social Choice Impossibility Theorem
Pub Date : 2011-10-22 DOI: 10.1109/FOCS.2011.72
Dvir Falik, E. Friedgut
Important elements of social choice theory are impossibility theorems, such as Arrow's theorem and the Gibbard-Satterthwaite theorem, which state that under certain natural constraints, social choice mechanisms are impossible to construct. In recent years, beginning with Kalai '01, much work has been done on finding robust versions of these theorems, showing that impossibility remains even when the constraints are almost always satisfied. In this work we present an algebraic scheme for producing such results. We demonstrate it for a variant of Arrow's theorem, found in Dokow and Holzman [5].
Citations: 3
A Unified Continuous Greedy Algorithm for Submodular Maximization
Pub Date : 2011-10-22 DOI: 10.1109/FOCS.2011.46
Moran Feldman, J. Naor, Roy Schwartz
The study of combinatorial problems with a submodular objective function has attracted much attention in recent years, and is partly motivated by the importance of such problems to economics, algorithmic game theory and combinatorial optimization. Classical works on these problems are mostly combinatorial in nature. Recently, however, many results based on continuous algorithmic tools have emerged. The main bottleneck of such continuous techniques is how to approximately solve a non-convex relaxation for the submodular problem at hand. Thus, the efficient computation of better fractional solutions immediately implies improved approximations for numerous applications. A simple and elegant method, called "continuous greedy", successfully tackles this issue for monotone submodular objective functions; however, only much more complex tools are known to work for general non-monotone submodular objectives. In this work we present a new unified continuous greedy algorithm which finds approximate fractional solutions for both the non-monotone and monotone cases, and improves on the approximation ratio for many applications. For general non-monotone submodular objective functions, our algorithm achieves an improved approximation ratio of about 1/e. For monotone submodular objective functions, our algorithm achieves an approximation ratio that depends on the density of the polytope defined by the problem at hand, which is always at least as good as the previously known best approximation ratio of 1-1/e. Some notable immediate implications are an improved 1/e-approximation for maximizing a non-monotone submodular function subject to a matroid or O(1)-knapsack constraints, and information-theoretic tight approximations for Submodular Max-SAT and Submodular Welfare with k players, for any number of players k. A framework for submodular optimization problems, called the contention resolution framework, was introduced recently by Chekuri et al. [11]. The improved approximation ratio of the unified continuous greedy algorithm implies improved approximation ratios for many problems through this framework. Moreover, via a parameter called stopping time, our algorithm merges the relaxation solving and re-normalization steps of the framework and achieves, for some applications, further improvements. We also describe new monotone balanced contention resolution schemes for various matching, scheduling and packing problems, thus improving the approximations achieved for these problems via the framework.
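The continuous greedy idea the abstract builds on can be sketched for the monotone case. This is illustrative only: the gradient of the multilinear extension is estimated by sampling, a simple cardinality-k constraint stands in for a general matroid polytope, and all names (`continuous_greedy`, the coverage instance) are ours, not the paper's.

```python
import random

def continuous_greedy(ground, F, k, steps=20, samples=200, seed=7):
    """Minimal sketch of the monotone continuous greedy scheme: move a
    fractional point y inside the polytope (here: sum of coordinates at
    most k) toward the best feasible direction, where the marginal value
    of each element at y is estimated by sampling random sets R ~ y."""
    rng = random.Random(seed)
    y = {e: 0.0 for e in ground}
    dt = 1.0 / steps
    for _ in range(steps):
        gains = {}
        for e in ground:
            total = 0.0
            for _ in range(samples):
                R = {x for x in ground if rng.random() < y[x]}
                total += F(R | {e}) - F(R)  # sampled marginal of e at y
            gains[e] = total / samples
        # best feasible direction for a cardinality constraint: top-k marginals
        for e in sorted(ground, key=gains.get, reverse=True)[:k]:
            y[e] = min(1.0, y[e] + dt)
    return y

# Coverage instance: F(S) = number of ground-set elements covered by S.
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {1}}
cover = lambda S: len(set().union(*[sets[s] for s in S])) if S else 0
y = continuous_greedy(list(sets), cover, k=2)
```

On this instance the fractional solution concentrates on {"a", "b"}, the optimal pair; rounding the fractional point back to a feasible set is the job of schemes such as the contention resolution framework mentioned above.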
Citations: 271
On Range Searching in the Group Model and Combinatorial Discrepancy
Pub Date : 2011-10-22 DOI: 10.1137/120865240
Kasper Green Larsen
In this paper we establish an intimate connection between dynamic range searching in the group model and combinatorial discrepancy. Our result states that, for a broad class of range searching data structures (including all known upper bounds), it must hold that t_u·t_q = Ω(disc^2/lg n), where t_u is the worst case update time, t_q the worst case query time and disc is the combinatorial discrepancy of the range searching problem in question. This relation immediately implies a whole range of exceptionally high and near-tight lower bounds for all of the basic range searching problems. We list a few of them in the following:
- For halfspace range searching in d-dimensional space, we get a lower bound of t_u·t_q = Ω(n^{1-1/d}/lg n). This comes within a lg n lg lg n factor of the best known upper bound.
- For orthogonal range searching in d-dimensional space, we get a lower bound of t_u·t_q = Ω(lg^{d-2+μ(d)} n), where μ(d) > 0 is some small but strictly positive function of d.
- For ball range searching in d-dimensional space, we get a lower bound of t_u·t_q = Ω(n^{1-1/d}/lg n).
We note that the previous highest lower bound for any explicit problem, due to Pătraşcu [STOC'07], states that t_q = Ω((lg n/lg(lg n + t_u))^2), which does however hold for a less restrictive class of data structures. Our result also has implications for the field of combinatorial discrepancy. Using textbook range searching solutions, we improve on the best known discrepancy upper bound for axis-aligned rectangles in dimensions d ≥ 3.
Citations: 34
Efficient and Explicit Coding for Interactive Communication
Pub Date : 2011-10-22 DOI: 10.1109/FOCS.2011.51
R. Gelles, Ankur Moitra, A. Sahai
We revisit the problem of reliable interactive communication over a noisy channel, and obtain the first fully explicit (randomized) efficient constant-rate emulation procedure for reliable interactive communication. Our protocol works for any discrete memoryless noisy channel with constant capacity, and fails with probability exponentially small in the total length of the protocol. Following a work by Schulman [Schulman 1993], our simulation uses a tree code; yet, as opposed to the non-constructive absolute tree code used by Schulman, we introduce a relaxation of the notion of goodness for a tree code and define a potent tree code. This relaxation allows us to construct an explicit emulation procedure for any two-party protocol. Our results also extend to the case of interactive multiparty communication. We show that a randomly generated tree code (with suitable constant alphabet size) is an efficiently decodable potent tree code with overwhelming probability. Furthermore, we are able to partially derandomize this result by means of ε-biased distributions using only O(N) random bits, where N is the depth of the tree.
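A randomly generated tree code of the kind analyzed above can be sketched as follows. This is a toy version with lazily drawn edge labels; decoding and the potency property are exactly what the paper supplies and are omitted here, and the name `random_tree_code` is ours.

```python
import random

def random_tree_code(alphabet_size=32, seed=0):
    """Toy sketch of a randomly generated tree code: every edge of the
    binary tree receives an independent uniformly random label, and a
    message prefix is encoded as the sequence of labels along its
    root-to-node path.  Labels are drawn lazily and cached, so encoding
    is online and deterministic once the seed is fixed."""
    rng = random.Random(seed)
    labels = {}  # tree edge, identified by its path prefix -> label

    def encode(message):
        out = []
        for i in range(len(message)):
            edge = tuple(message[:i + 1])
            if edge not in labels:
                labels[edge] = rng.randrange(alphabet_size)
            out.append(labels[edge])
        return out

    return encode

enc = random_tree_code()
a, b = enc([0, 1, 1, 0]), enc([0, 1, 0, 0])
# Online/prefix property: encodings agree exactly on the shared prefix.
assert a[:2] == b[:2]
```

The prefix property holds by construction; what requires proof, and what the abstract claims, is that with a suitable constant alphabet the random labeling yields a potent, efficiently decodable code with overwhelming probability.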
Cited by: 75
Efficient Reconstruction of Random Multilinear Formulas
Pub Date : 2011-10-22 DOI: 10.1109/FOCS.2011.70
Ankit Gupta, N. Kayal, Satyanarayana V. Lokam
In the reconstruction problem for a multivariate polynomial $f$, we have black-box access to $f$ and the goal is to efficiently reconstruct a representation of $f$ in a suitable model of computation. We give a polynomial-time randomized algorithm for reconstructing random multilinear formulas. Our algorithm succeeds with high probability when given black-box access to the polynomial computed by a random multilinear formula according to a natural distribution. This is the strongest model of computation for which a reconstruction algorithm is presently known, albeit efficient in a distributional sense rather than in the worst case. Previous results on this problem considered much weaker models, such as depth-3 circuits with various restrictions or read-once formulas. Our proof uses ranks of partial derivative matrices as a key ingredient and combines it with an analysis of the algebraic structure of random multilinear formulas. Partial derivative matrices have earlier been used to prove lower bounds in a number of models of arithmetic complexity, including multilinear formulas and constant-depth circuits. As such, our results give supporting evidence to the general thesis that mathematical properties that capture efficient computation in a model should also enable learning algorithms for functions efficiently computable in that model.
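The key ingredient, the partial derivative matrix, can be made concrete on a tiny example (this is only the textbook object, not the paper's reconstruction algorithm): for a multilinear polynomial $f$ and a partition of its variables into sets $Y$ and $Z$, the matrix has rows indexed by subsets of $Y$, columns by subsets of $Z$, and entry $(A,B)$ equal to the coefficient of the monomial with support $A \cup B$; its rank is the complexity measure used in the lower-bound literature.

```python
from fractions import Fraction
from itertools import combinations

def partial_derivative_matrix(poly, Y, Z):
    """Partial derivative matrix of a multilinear polynomial.

    `poly` maps frozensets of variable names (monomial supports) to
    coefficients.  Rows are indexed by subsets of Y, columns by subsets
    of Z; entry (A, B) is the coefficient of the monomial A | B.
    """
    rows = [frozenset(s) for r in range(len(Y) + 1)
            for s in combinations(sorted(Y), r)]
    cols = [frozenset(s) for r in range(len(Z) + 1)
            for s in combinations(sorted(Z), r)]
    return [[Fraction(poly.get(a | b, 0)) for b in cols] for a in rows]

def rank(M):
    """Rank over the rationals by Gaussian elimination."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r
```

For instance, $f = x_1 x_2 + x_3 x_4$ with $Y=\{x_1,x_3\}$, $Z=\{x_2,x_4\}$ has rank 2, while $g = x_1 x_2 + x_1 x_4$, whose two monomials share the same $Y$-part, has rank 1.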
Cited by: 15
A Randomized Rounding Approach to the Traveling Salesman Problem
Pub Date : 2011-10-22 DOI: 10.1109/FOCS.2011.80
S. Gharan, A. Saberi, Mohit Singh
For some positive constant $\epsilon_0$, we give a $(3/2-\epsilon_0)$-approximation algorithm for the following problem: given a graph $G_0=(V,E_0)$, find the shortest tour that visits every vertex at least once. This is a special case of the metric traveling salesman problem when the underlying metric is defined by shortest-path distances in $G_0$. The result improves on the 3/2-approximation algorithm due to Christofides [C76] for this special case. Similar to Christofides, our algorithm finds a spanning tree whose cost is upper-bounded by the optimum; it then finds the minimum-cost Eulerian augmentation (or T-join) of that tree. The main difference is in the selection of the spanning tree. Except in certain cases where the LP solution is nearly integral, we select the spanning tree randomly by sampling from a maximum-entropy distribution defined by the linear programming relaxation. Despite the simplicity of the algorithm, the analysis builds on a variety of ideas, such as properties of strongly Rayleigh measures from probability theory, graph-theoretic results on the structure of near-minimum cuts, and the integrality of the T-join polytope from polyhedral theory. Also, as a byproduct of our result, we show new properties of the near-minimum cuts of any graph, which may be of independent interest.
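The Christofides baseline that this result improves on can be sketched directly (the paper replaces the deterministic minimum spanning tree below with a tree sampled from a maximum-entropy distribution; this sketch uses brute-force matching, so it is a toy for tiny instances only): build a minimum spanning tree, add a minimum-weight perfect matching on its odd-degree vertices, take an Eulerian circuit of the resulting multigraph, and shortcut repeated vertices.

```python
def christofides(dist):
    """Christofides 3/2-approximation on a complete metric graph.

    `dist` is a symmetric n x n matrix obeying the triangle inequality.
    The matching step is brute force, so this is a toy for small n.
    """
    n = len(dist)
    # 1. Minimum spanning tree (Prim).
    in_tree, edges = {0}, []
    best = {v: (dist[0][v], 0) for v in range(1, n)}
    while len(in_tree) < n:
        v = min(best, key=lambda u: best[u][0])
        _, u = best.pop(v)
        in_tree.add(v)
        edges.append((u, v))
        for x in best:
            if dist[v][x] < best[x][0]:
                best[x] = (dist[v][x], v)
    # 2. Vertices of odd degree in the tree (always evenly many).
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    odd = [v for v in range(n) if deg[v] % 2 == 1]
    # 3. Minimum-weight perfect matching on the odd vertices (brute force).
    def match(vs):
        if not vs:
            return 0, []
        u, rest = vs[0], vs[1:]
        best_cost, best_pairs = float('inf'), []
        for i, v in enumerate(rest):
            c, pairs = match(rest[:i] + rest[i + 1:])
            if dist[u][v] + c < best_cost:
                best_cost, best_pairs = dist[u][v] + c, [(u, v)] + pairs
        return best_cost, best_pairs
    _, matching = match(odd)
    # 4. Eulerian circuit of the multigraph MST + matching (Hierholzer).
    adj = {v: [] for v in range(n)}
    for u, v in edges + matching:
        adj[u].append(v)
        adj[v].append(u)
    stack, circuit = [0], []
    while stack:
        v = stack[-1]
        if adj[v]:
            u = adj[v].pop()
            adj[u].remove(v)  # remove one copy of the parallel edge
            stack.append(u)
        else:
            circuit.append(stack.pop())
    # 5. Shortcut repeated vertices (valid by the triangle inequality).
    seen, tour = set(), []
    for v in circuit:
        if v not in seen:
            seen.add(v)
            tour.append(v)
    return tour

def tour_cost(dist, tour):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))
```

On the 4-point metric with distances `[[0,1,2,2],[1,0,1,2],[2,1,0,1],[2,2,1,0]]`, the MST is the path 0-1-2-3, the odd vertices 0 and 3 are matched directly, and shortcutting yields the optimal tour of cost 5.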
Cited by: 173
Dispersers for Affine Sources with Sub-polynomial Entropy
Pub Date : 2011-10-22 DOI: 10.1109/FOCS.2011.37
Ronen Shaltiel
We construct an explicit disperser for affine sources over $F_2^n$ with entropy $k=2^{\log^{0.9} n}=n^{o(1)}$. This is a polynomial-time computable function $D: F_2^n \to \{0,1\}$ such that for every affine space $V$ of $F_2^n$ of dimension at least $k$, $D(V)=\{0,1\}$. This improves the best previous construction of Ben-Sasson and Kopparty (STOC 2009), which achieved $k = \Omega(n^{4/5})$. Our technique follows a high-level approach developed in Barak, Kindler, Shaltiel, Sudakov and Wigderson (J. ACM 2010) and Barak, Rao, Shaltiel and Wigderson (STOC 2006) in the context of dispersers for two independent general sources. The main steps are: (1) adjust the high-level approach to make it suitable for affine sources; (2) implement a "challenge-response game" for affine sources (in the spirit of the two aforementioned papers, which introduced such games for two independent general sources); (3) in order to implement the game, construct extractors for affine block-wise sources, using ideas and components by Rao (CCC 2009); (4) combining the three items above, obtain dispersers for affine sources with entropy larger than $\sqrt{n}$. We then use a recursive win-win analysis in the spirit of Reingold, Shaltiel and Wigderson (SICOMP 2006) and Barak, Rao, Shaltiel and Wigderson (STOC 2006) to get affine dispersers with entropy less than $\sqrt{n}$.
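The defining property being constructed can be checked mechanically on toy instances (the function below is just a two-coordinate parity, chosen only to illustrate the definition; it is emphatically not a real disperser, since any affine space inside one of its level sets defeats it): $D$ disperses an affine subspace $V$ iff its image $D(V)$ is all of $\{0,1\}$.

```python
from itertools import product

def affine_subspace(base, basis):
    """All points of base + span(basis) over F_2 (vectors as bit tuples)."""
    pts = set()
    for coeffs in product((0, 1), repeat=len(basis)):
        v = list(base)
        for c, b in zip(coeffs, basis):
            if c:
                v = [x ^ y for x, y in zip(v, b)]
        pts.add(tuple(v))
    return pts

def disperses(D, V):
    """True iff D attains both output values on the affine space V."""
    return {D(v) for v in V} == {0, 1}

def parity2(v):
    """Toy candidate: parity of the first two coordinates."""
    return v[0] ^ v[1]
```

Parity disperses the span of $e_1$ and $e_3$ in $F_2^4$, but is constant on the span of $e_3$ and $e_4$, showing why a genuine disperser must handle every affine space of the stated dimension.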
Cited by: 31
Journal
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science