
Latest publications: 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)

Determinant-Preserving Sparsification of SDDM Matrices with Applications to Counting and Sampling Spanning Trees
Pub Date : 2017-05-02 DOI: 10.1109/FOCS.2017.90
D. Durfee, John Peebles, Richard Peng, Anup B. Rao
We show that variants of spectral sparsification routines can preserve the total spanning tree counts of graphs, which, by Kirchhoff's matrix-tree theorem, is equivalent to the determinant of a graph Laplacian minor, or equivalently, of any SDDM matrix. Our analysis utilizes this combinatorial connection to bridge between statistical leverage scores / effective resistances and the analysis of random graphs by [Janson, Combinatorics, Probability and Computing '94]. This leads to a routine that, in quadratic time, sparsifies a graph down to about n^(1.5) edges in ways that preserve both the determinant and the distribution of spanning trees (provided the sparsified graph is viewed as a random object). Extending this algorithm to work with Schur complements and approximate Cholesky factorizations leads to algorithms for counting and sampling spanning trees which are nearly optimal for dense graphs. We give an algorithm that computes a (1 ± δ) approximation to the determinant of any SDDM matrix with constant probability in about n^2 / δ^2 time. This is the first routine for graphs that outperforms general-purpose routines for computing determinants of arbitrary matrices. We also give an algorithm that generates, in about n^2 / δ^2 time, a spanning tree of a weighted undirected graph from a distribution with total variation distance δ from the w-uniform distribution.
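The combinatorial identity underpinning the paper, Kirchhoff's matrix-tree theorem, can be checked directly on small graphs. A minimal sketch (not the paper's sparsification routine) that counts spanning trees as the determinant of a Laplacian minor, assuming an unweighted graph given as an edge list:

```python
import numpy as np

def spanning_tree_count(n, edges):
    """Count spanning trees of an undirected graph via Kirchhoff's
    matrix-tree theorem: the determinant of any (n-1)x(n-1) Laplacian minor."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    # Delete the last row and column to form a Laplacian minor.
    minor = L[:-1, :-1]
    return round(np.linalg.det(minor))

# Complete graph K4: Cayley's formula gives 4^(4-2) = 16 spanning trees.
k4_edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(spanning_tree_count(4, k4_edges))  # 16
```

Sparsification routines like the paper's must preserve exactly this determinant, which is why leverage scores / effective resistances (the diagonal of B L^+ B^T) enter the analysis.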
Citations: 29
A Time Hierarchy Theorem for the LOCAL Model
Pub Date : 2017-04-20 DOI: 10.1109/FOCS.2017.23
Yi-Jun Chang, S. Pettie
The celebrated Time Hierarchy Theorem for Turing machines states, informally, that more problems can be solved given more time. The extent to which a time hierarchy-type theorem holds in the classic distributed LOCAL model has been open for many years. In particular, it is consistent with previous results that all natural problems in the LOCAL model can be classified according to a small constant number of complexities, such as O(1), O(log* n), O(log n), 2^{O(sqrt{log n})}, etc. In this paper we establish the first time hierarchy theorem for the LOCAL model and prove that several gaps exist in the LOCAL time hierarchy. Our main results are as follows:
• We define an infinite set of simple coloring problems called Hierarchical 2½-Coloring. A correctly colored graph can be confirmed by simply checking the neighborhood of each vertex, so this problem fits into the class of locally checkable labeling (LCL) problems. However, the complexity of the k-level Hierarchical 2½-Coloring problem is Θ(n^{1/k}) for positive integer k. The upper and lower bounds hold for both general graphs and trees, and for both randomized and deterministic algorithms.
• Consider any LCL problem on bounded degree trees. We prove an automatic-speedup theorem stating that any randomized n^{o(1)}-time algorithm solving the LCL can be transformed into a deterministic O(log n)-time algorithm. Together with a previous result, this establishes that on trees, there are no natural deterministic complexities in the ranges ω(log* n)–o(log n) or ω(log n)–n^{o(1)}.
• We expose a gap in the randomized time hierarchy on general graphs. Roughly speaking, any randomized algorithm that solves an LCL problem in sublogarithmic time can be sped up to run in O(T_{LLL}) time, which is the complexity of the distributed Lovász local lemma problem, currently known to be Ω(log log n) and 2^{O(sqrt{log log n})} on bounded degree graphs.
Finally, we revisit Naor and Stockmeyer's characterization of O(1)-time LOCAL algorithms for LCL problems (as order-invariant w.r.t. vertex IDs) and calculate the complexity gaps that are directly implied by their proof. For n-rings we see an ω(1)–o(log* n) complexity gap, for (sqrt{n} × sqrt{n})-tori an ω(1)–o(sqrt{log* n}) gap, and for bounded degree trees and general graphs, an ω(1)–o(log(log* n)) complexity gap.
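Several of the complexity classes above involve the iterated logarithm log* n, which grows extremely slowly. A small helper (purely illustrative, not from the paper) computing log* to base 2:

```python
import math

def log_star(n) -> int:
    """Iterated logarithm: how many times log2 must be applied
    to n before the result drops to 1 or below."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

print(log_star(16))          # 3: 16 -> 4 -> 2 -> 1
print(log_star(2 ** 65536))  # 5: 2^65536 -> 65536 -> 16 -> 4 -> 2 -> 1
```

Even for astronomically large n, log* n stays a single-digit constant, which is why the gap between ω(log* n) and o(log n) complexities is so striking.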
Citations: 84
Fooling Intersections of Low-Weight Halfspaces
Pub Date : 2017-04-17 DOI: 10.1109/FOCS.2017.81
R. Servedio, Li-Yang Tan
A weight-t halfspace is a Boolean function f(x) = sign(w_1 x_1 + … + w_n x_n - θ) where each w_i is an integer in {-t, …, t}. We give an explicit pseudorandom generator that δ-fools any intersection of k weight-t halfspaces with seed length poly(log n, log k, t, 1/δ). In particular, our result gives an explicit PRG that fools any intersection of any quasipoly(n) number of halfspaces of any polylog(n) weight to any 1/polylog(n) accuracy using seed length polylog(n). Prior to this work no explicit PRG with non-trivial seed length was known even for fooling intersections of n weight-1 halfspaces to constant accuracy. The analysis of our PRG fuses techniques from two different lines of work on unconditional pseudorandomness for different kinds of Boolean functions. We extend the approach of Harsha, Klivans and Meka [HKM12] for fooling intersections of regular halfspaces, and combine this approach with results of Bazzi [Bazzi 07] and Razborov [Razborov 09] on bounded independence fooling CNF formulas. Our analysis introduces new coupling-based ingredients into the standard Lindeberg method for establishing quantitative central limit theorems and associated pseudorandomness results.
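To make the objects concrete, here is a minimal evaluator (illustrative only; the paper's contribution is the PRG, not this check) for a weight-t halfspace and an intersection of several of them on Boolean inputs:

```python
def halfspace(w, theta, x):
    """Weight-t halfspace: sign(w . x - theta) as a Boolean value.
    w: integer weights in {-t, ..., t}; x: 0/1 input vector."""
    return sum(wi * xi for wi, xi in zip(w, x)) - theta >= 0

def intersection(halfspaces, x):
    """AND of several halfspaces given as (w, theta) pairs."""
    return all(halfspace(w, th, x) for w, th in halfspaces)

# Two weight-1 halfspaces over 3 bits: x1 + x2 >= 1 AND x2 + x3 >= 2.
hs = [([1, 1, 0], 1), ([0, 1, 1], 2)]
print(intersection(hs, [0, 1, 1]))  # True
print(intersection(hs, [1, 0, 1]))  # False
```

A PRG δ-fools this class if, for every such intersection, the acceptance probability under the PRG's output differs from the uniform-input acceptance probability by at most δ.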
Citations: 8
Fast Similarity Sketching
Pub Date : 2017-04-14 DOI: 10.1109/FOCS.2017.67
Søren Dahlgaard, M. B. T. Knudsen, M. Thorup
We consider the Similarity Sketching problem: given a universe [u] = {0, …, u-1}, we want a random function S mapping subsets A of [u] into vectors S(A) of size t, such that similarity is preserved. More precisely: given subsets A, B of [u], define X_i = [S(A)[i] = S(B)[i]] and X = sum_{i in [t]} X_i. We want to have E[X] = t·J(A,B), where J(A,B) = |A ∩ B| / |A ∪ B|, and furthermore to have strong concentration guarantees (i.e., Chernoff-style bounds) for X. This is a fundamental problem which has found numerous applications in data mining, large-scale classification, computer vision, similarity search, etc. via the classic MinHash algorithm. The vectors S(A) are also called sketches. The seminal t × MinHash algorithm uses t random hash functions h_1, …, h_t, and stores (min_{a in A} h_1(a), …, min_{a in A} h_t(a)) as the sketch of A. The main drawback of MinHash is, however, its O(t·|A|) running time, and finding a sketch with similar properties and faster running time has been the subject of several papers. Addressing this, Li et al. [NIPS12] introduced one permutation hashing (OPH), which creates a sketch of size t in O(t + |A|) time, but with the drawback that possibly some of the t entries are empty when |A| = O(t). One could argue that sketching is not necessary in this case; however, the desire in most applications is to have one sketching procedure that works for sets of all sizes. Therefore, filling out these empty entries is the subject of several follow-up papers initiated by Shrivastava and Li [ICML14]. However, these densification schemes fail to provide good concentration bounds exactly in the case |A| = O(t), where they are needed. In this paper we present a new sketch which obtains essentially the best of both worlds: a fast O(t log t + |A|) expected running time while getting the same strong concentration bounds as MinHash. Our new sketch can be seen as a mix between sampling with replacement and sampling without replacement. We demonstrate the power of our new sketch by considering popular applications in large-scale classification with linear SVM as introduced by Li et al. [NIPS11], as well as approximate similarity search using the LSH framework of Indyk and Motwani [STOC98]. In particular, for the (j_1, j_2)-approximate similarity search problem on a collection C of n sets we obtain a data structure with space usage O(n^{1+ρ} + sum_{A in C} |A|) and O(n^ρ · log n + |Q|) expected time for querying a set Q, compared to the O(n^ρ · log n · |Q|) expected query time of the classic result of Indyk and Motwani.
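The O(t·|A|) baseline the paper speeds up is easy to sketch. A minimal t × MinHash implementation, using a keyed cryptographic hash as an illustrative stand-in for the random hash functions the abstract assumes:

```python
import hashlib

def h(i, a):
    """i-th hash function applied to element a (illustrative stand-in
    for truly random hash functions)."""
    digest = hashlib.blake2b(f"{i}:{a}".encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big")

def minhash_sketch(A, t):
    """Classic t x MinHash: the sketch of A is
    (min_{a in A} h_1(a), ..., min_{a in A} h_t(a)) -- O(t * |A|) time."""
    return [min(h(i, a) for a in A) for i in range(t)]

def jaccard_estimate(sa, sb):
    """Fraction of agreeing coordinates estimates J(A, B)."""
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)

A = set(range(0, 80))
B = set(range(40, 120))  # true Jaccard J = 40/120 = 1/3
sa, sb = minhash_sketch(A, 512), minhash_sketch(B, 512)
print(jaccard_estimate(sa, sb))  # concentrated around 1/3
```

Each coordinate agrees with probability J(A,B), so the estimator is unbiased with Chernoff-style concentration; the paper's contribution is matching these guarantees in O(t log t + |A|) expected time.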
Citations: 28
On the Quantitative Hardness of CVP
Pub Date : 2017-04-12 DOI: 10.1109/FOCS.2017.11
Huck Bennett, Alexander Golovnev, Noah Stephens-Davidowitz
For odd integers p ≥ 1 (and p = ∞), we show that the Closest Vector Problem in the ℓ_p norm (CVP_p) over rank n lattices cannot be solved in 2^{(1-ε)n} time for any constant ε > 0 unless the Strong Exponential Time Hypothesis (SETH) fails. We then extend this result to almost all values of p ≥ 1, not including the even integers. This comes tantalizingly close to settling the quantitative time complexity of the important special case of CVP_2 (i.e., CVP in the Euclidean norm), for which a 2^{n + o(n)}-time algorithm is known. In particular, our result applies for any p = p(n) ≠ 2 that approaches 2 as n → ∞. We also show a similar SETH-hardness result for SVP_∞; hardness of approximating CVP_p to within some constant factor under the so-called Gap-ETH assumption; and other hardness results for CVP_p and CVPP_p for any 1 ≤ p
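The problem itself is easy to state even though, per the result above, it resists subexponential-in-n algorithms. A brute-force solver over a bounded coefficient range (illustrative only; exponential in the rank) finds the lattice vector closest to a target in a chosen ℓ_p norm:

```python
from itertools import product

def cvp_bruteforce(basis, target, p, coeff_range=3):
    """Brute-force CVP: try every integer combination of the basis vectors
    with coefficients in [-coeff_range, coeff_range] and return the lattice
    vector minimizing the l_p distance to target. Exponential in the rank."""
    dim = len(target)
    best_vec, best_dist = None, float("inf")
    for coeffs in product(range(-coeff_range, coeff_range + 1), repeat=len(basis)):
        vec = [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(dim)]
        dist = sum(abs(v - t) ** p for v, t in zip(vec, target)) ** (1 / p)
        if dist < best_dist:
            best_vec, best_dist = vec, dist
    return best_vec, best_dist

# Lattice Z^2 (identity basis), target (0.4, 0.6), l_1 norm:
vec, dist = cvp_bruteforce([[1, 0], [0, 1]], [0.4, 0.6], p=1)
print(vec, dist)  # [0, 1] at l_1 distance |0.4| + |0.4| = 0.8
```

The SETH-hardness result says that, for the relevant p, no algorithm can beat this kind of 2^{Θ(n)} behavior by a constant factor in the exponent.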
Citations: 35
Simply Exponential Approximation of the Permanent of Positive Semidefinite Matrices
Pub Date : 2017-04-11 DOI: 10.1109/FOCS.2017.89
Nima Anari, L. Gurvits, S. Gharan, A. Saberi
We design a deterministic polynomial time c^n approximation algorithm for the permanent of positive semidefinite matrices, where c = e^{γ+1} ≈ 4.84 (γ being Euler's constant). We write a natural convex relaxation and show that its optimum solution gives a c^n approximation of the permanent. We further show that this factor is asymptotically tight by constructing a family of positive semidefinite matrices. We also show that our result implies an approximate version of the permanent-on-top conjecture, which was recently refuted in its original form; we show that the permanent is within a c^n factor of the top eigenvalue of the Schur power matrix.
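For reference, the quantity being approximated can be computed exactly, in exponential time, by Ryser's inclusion-exclusion formula. A short sketch (illustrative baseline, not the paper's polynomial-time algorithm):

```python
from itertools import combinations

def permanent(A):
    """Exact permanent via Ryser's formula:
    perm(A) = (-1)^n * sum_{S nonempty} (-1)^|S| prod_i sum_{j in S} a_ij.
    Runs in O(2^n * n^2) time."""
    n = len(A)
    total = 0
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            prod = 1
            for i in range(n):
                prod *= sum(A[i][j] for j in S)
            total += (-1) ** size * prod
    return (-1) ** n * total

print(permanent([[1, 1], [1, 1]]))             # 2
print(permanent([[1] * 3, [1] * 3, [1] * 3]))  # 3! = 6
```

Since the permanent of a PSD matrix is #P-hard to compute exactly, a deterministic polynomial-time c^n-factor approximation, with c asymptotically tight, is the natural target.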
Citations: 25
Sublinear Time Low-Rank Approximation of Positive Semidefinite Matrices
Pub Date : 2017-04-11 DOI: 10.1109/FOCS.2017.68
Cameron Musco, David P. Woodruff
We show how to compute a relative-error low-rank approximation to any positive semidefinite (PSD) matrix in sublinear time, i.e., for any n × n PSD matrix A, in Õ(n · poly(k/ε)) time we output a rank-k matrix B, in factored form, for which ‖A - B‖_F^2 ≤ (1 + ε)‖A - A_k‖_F^2, where A_k is the best rank-k approximation to A. When k and 1/ε are not too large compared to the sparsity of A, our algorithm does not need to read all entries of the matrix. Hence, we significantly improve upon previous nnz(A) time algorithms based on oblivious subspace embeddings, and bypass an nnz(A) time lower bound for general matrices (where nnz(A) denotes the number of non-zero entries in the matrix). We prove time lower bounds for low-rank approximation of PSD matrices, showing that our algorithm is close to optimal. Finally, we extend our techniques to give sublinear time algorithms for low-rank approximation of A in the (often stronger) spectral norm metric ‖A - B‖_2^2 and for ridge regression on PSD matrices.
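The benchmark A_k in the guarantee above is the truncated eigendecomposition. A dense (non-sublinear) reference computation, assuming numpy:

```python
import numpy as np

def best_rank_k(A, k):
    """Best rank-k approximation of a symmetric PSD matrix in Frobenius
    norm: keep the k largest eigenpairs (Eckart-Young theorem)."""
    vals, vecs = np.linalg.eigh(A)   # eigenvalues in ascending order
    top = np.argsort(vals)[-k:]      # indices of the k largest
    # Reassemble sum_i lambda_i * v_i v_i^T over the kept eigenpairs.
    return (vecs[:, top] * vals[top]) @ vecs[:, top].T

# A PSD matrix of exact rank 2: its best rank-2 approximation is A itself.
rng = np.random.default_rng(0)
G = rng.standard_normal((5, 2))
A = G @ G.T
err = np.linalg.norm(A - best_rank_k(A, 2))  # Frobenius error ~ 0
print(err < 1e-8)  # True
```

This reference takes Ω(n^2) time just to read A; the paper's point is that, for PSD matrices, a (1+ε)-relative-error B can be produced while reading only Õ(n · poly(k/ε)) entries.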
Citations: 51
Active Classification with Comparison Queries
Pub Date : 2017-04-11 DOI: 10.1109/FOCS.2017.40
D. Kane, Shachar Lovett, S. Moran, Jiapeng Zhang
We study an extension of active learning in which the learning algorithm may ask the annotator to compare the distances of two examples from the boundary of their label-class. For example, in a recommendation system application (say for restaurants), the annotator may be asked whether she liked or disliked a specific restaurant (a label query); or which one of two restaurants did she like more (a comparison query).We focus on the class of half spaces, and show that under natural assumptions, such as large margin or bounded bit-description of the input examples, it is possible to reveal all the labels of a sample of size n using approximately O(log n) queries. This implies an exponential improvement over classical active learning, where only label queries are allowed. We complement these results by showing that if any of these assumptions is removed then, in the worst case, Ω(n) queries are required.Our results follow from a new general framework of active learning with additional queries. We identify a combinatorial dimension, called the inference dimension, that captures the query complexity when each additional query is determined by O(1) examples (such as comparison queries, each of which is determined by the two compared examples). Our results for half spaces follow by bounding the inference dimension in the cases discussed above.
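As a toy illustration of revealing many labels with few queries (a 1D special case, not the paper's halfspace algorithm): for points on a line labeled by a hidden threshold, binary search over the sorted points recovers all n labels with O(log n) label queries:

```python
def infer_labels(points, label_query):
    """points: sorted 1D inputs; label_query(x) -> bool is a hidden
    threshold classifier. Recovers every label with O(log n) queries."""
    queries = 0
    lo, hi = 0, len(points)           # invariant: boundary index lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if label_query(points[mid]):  # positive => boundary at or left of mid
            hi = mid
        else:
            lo = mid + 1
    labels = [i >= lo for i in range(len(points))]
    return labels, queries

pts = list(range(100))
labels, q = infer_labels(pts, lambda x: x >= 37)  # hidden threshold at 37
print(labels.count(True), q)  # 63 positive labels recovered with 7 queries
```

In higher dimensions no such ordering is given for free, which is where comparison queries and the inference dimension of the paper come in.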
{"title":"Active Classification with Comparison Queries","authors":"D. Kane, Shachar Lovett, S. Moran, Jiapeng Zhang","doi":"10.1109/FOCS.2017.40","DOIUrl":"https://doi.org/10.1109/FOCS.2017.40","url":null,"abstract":"We study an extension of active learning in which the learning algorithm may ask the annotator to compare the distances of two examples from the boundary of their label-class. For example, in a recommendation system application (say for restaurants), the annotator may be asked whether she liked or disliked a specific restaurant (a label query); or which one of two restaurants did she like more (a comparison query).We focus on the class of half spaces, and show that under natural assumptions, such as large margin or bounded bit-description of the input examples, it is possible to reveal all the labels of a sample of size n using approximately O(log n) queries. This implies an exponential improvement over classical active learning, where only label queries are allowed. We complement these results by showing that if any of these assumptions is removed then, in the worst case, Ω(n) queries are required.Our results follow from a new general framework of active learning with additional queries. We identify a combinatorial dimension, called the inference dimension, that captures the query complexity when each additional query is determined by O(1) examples (such as comparison queries, each of which is determined by the two compared examples). 
Our results for half spaces follow by bounding the inference dimension in the cases discussed above.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"286 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133433452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 64
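The paper's O(log n) bound relies on comparison queries and the inference-dimension machinery for half spaces in general dimension. In the one-dimensional special case (a threshold classifier), label queries alone already reveal all n labels with O(log n) queries via binary search; a minimal sketch of that special case (helper names are illustrative):

```python
def infer_all_labels(xs, label_query):
    """Recover every label of a sorted 1-D sample under an unknown
    threshold classifier (-1 left of the threshold, +1 right of it)
    using O(log n) label queries: binary-search for the sign change."""
    lo, hi, queries = 0, len(xs), 0   # first +1 index lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if label_query(xs[mid]) == +1:
            hi = mid                  # sign change at or before mid
        else:
            lo = mid + 1              # sign change strictly after mid
    return [-1] * lo + [+1] * (len(xs) - lo), queries

threshold = 37.5                      # hidden from the learner
xs = list(range(100))
labels, queries = infer_all_labels(xs, lambda x: +1 if x > threshold else -1)
```

In higher dimension the labels are no longer monotone along any fixed order, which is exactly where the comparison queries (and the inference dimension bounding how many queried examples pin down the rest) come in.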
Weighted k-Server Bounds via Combinatorial Dichotomies
Pub Date : 2017-04-11 DOI: 10.1109/FOCS.2017.52
N. Bansal, Marek Eliáš, G. Koumoutsos
The weighted k-server problem is a natural generalization of the k-server problem in which each server has a different weight. We consider the problem on uniform metrics, which corresponds to a natural generalization of paging. Our main result is a doubly exponential lower bound on the competitive ratio of any deterministic online algorithm, which essentially matches the known upper bounds for the problem and closes a large and long-standing gap. The lower bound is based on relating the weighted k-server problem to a certain combinatorial problem and proving a Ramsey-theoretic lower bound for it. This combinatorial connection also reveals several structural properties of low-cost feasible solutions for serving a sequence of requests. We use this to show that the generalized Work Function Algorithm achieves an almost optimal competitive ratio, and to obtain new refined upper bounds on the competitive ratio for the case of d different weight classes.
{"title":"Weighted k-Server Bounds via Combinatorial Dichotomies","authors":"N. Bansal, Marek Eliáš, G. Koumoutsos","doi":"10.1109/FOCS.2017.52","DOIUrl":"https://doi.org/10.1109/FOCS.2017.52","url":null,"abstract":"The weighted k-server problem is a natural generalization of the k-server problem where each server has a different weight. We consider the problem on uniform metrics, which corresponds to a natural generalization of paging. Our main result is a doubly exponential lower bound on the competitive ratio of any deterministic online algorithm, that essentially matches the known upper bounds for the problem and closes a large and long-standing gap.The lower bound is based on relating the weighted k-server problem to a certain combinatorial problem and proving a Ramsey-theoretic lower bound for it. This combinatorial connection also reveals several structural properties of low cost feasible solutions to serve a sequence of requests. We use this to show that the generalized Work Function Algorithm achieves an almost optimum competitive ratio, and to obtain new refined upper bounds on the competitive ratio for the case of d different weight classes.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125201061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 18
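The abstract credits the generalized Work Function Algorithm with a near-optimal competitive ratio. As a toy illustration of the underlying dynamic program (this is the plain, non-generalized WFA, computed by brute force over all |points|^k configurations — exponential in k, for intuition only; the names are assumptions, not the paper's notation):

```python
import itertools

def wfa_uniform(points, weights, start, requests):
    """Plain Work Function Algorithm for weighted k-server on a uniform
    metric: moving server i between distinct points costs weights[i]."""
    k = len(weights)
    configs = list(itertools.product(points, repeat=k))
    dist = lambda a, b: sum(w for w, x, y in zip(weights, a, b) if x != y)
    W = {c: dist(start, c) for c in configs}   # work function W_0
    cur, cost = start, 0
    for r in requests:
        # W_t(C) = min over configs C' covering r of W_{t-1}(C') + d(C', C)
        W = {c: min(W[cp] + dist(cp, c) for cp in configs if r in cp)
             for c in configs}
        # WFA move: serve r from the config minimizing W_t(C) + d(cur, C)
        nxt = min((c for c in configs if r in c),
                  key=lambda c: W[c] + dist(cur, c))
        cost += dist(cur, nxt)
        cur = nxt
    return cost, cur

# Two servers of weight 1 and 5 on three uniform points; a request at "c"
# is served by moving the light server: cost 1, ending at ("c", "b").
cost, end = wfa_uniform(("a", "b", "c"), (1, 5), ("a", "b"), ["c"])
```

The doubly exponential lower bound in the paper shows that even this work-function style of hedging between weight classes cannot avoid a competitive ratio that explodes with k on uniform metrics.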
Tight Lower Bounds for Differentially Private Selection
Pub Date : 2017-04-10 DOI: 10.1109/FOCS.2017.57
T. Steinke, Jonathan Ullman
A pervasive task in the differential privacy literature is to select the k items of highest quality out of a set of d items, where the quality of each item depends on a sensitive dataset that must be protected. Variants of this task arise naturally in fundamental problems like feature selection and hypothesis testing, and also as subroutines of many sophisticated differentially private algorithms. The standard approaches to these tasks—repeated use of the exponential mechanism or the sparse vector technique—approximately solve this problem given a dataset of n = O(√k log d) samples. We provide a tight lower bound for some very simple variants of the private selection problem. Our lower bound shows that a sample of size n = Ω(√k log d) is required even to achieve a very minimal accuracy guarantee. Our results are based on an extension of the fingerprinting method to sparse selection problems. Previously, the fingerprinting method has been used to provide tight lower bounds for answering an entire set of d queries, but often only a much smaller set of k queries is relevant. Our extension allows us to prove lower bounds that depend on both the number of relevant queries and the total number of queries.
{"title":"Tight Lower Bounds for Differentially Private Selection","authors":"T. Steinke, Jonathan Ullman","doi":"10.1109/FOCS.2017.57","DOIUrl":"https://doi.org/10.1109/FOCS.2017.57","url":null,"abstract":"A pervasive task in the differential privacy literature is to select the k items of highest quality out of a set of d items, where the quality of each item depends on a sensitive dataset that must be protected. Variants of this task arise naturally in fundamental problems like feature selection and hypothesis testing, and also as subroutines for many sophisticated differentially private algorithms.The standard approaches to these tasks—repeated use of the exponential mechanism or the sparse vector technique—approximately solve this problem given a dataset of n = O(√{k}log d) samples. We provide a tight lower bound for some very simple variants of the private selection problem. Our lower bound shows that a sample of size n = Ω(√{k} log d) is required even to achieve a very minimal accuracy guarantee.Our results are based on an extension of the fingerprinting method to sparse selection problems. Previously, the fingerprinting method has been used to provide tight lower bounds for answering an entire set of d queries, but often only some much smaller set of k queries are relevant. 
Our extension allows us to prove lower bounds that depend on both the number of relevant queries and the total number of queries.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122234364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 70
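The exponential mechanism the abstract refers to can be sketched as follows for selecting a single item; this is the textbook upper-bound side, not the paper's lower-bound construction (the function name is illustrative):

```python
import numpy as np

def exponential_mechanism(scores, eps, sensitivity, rng):
    """eps-differentially private selection: sample item i with
    probability proportional to exp(eps * scores[i] / (2 * sensitivity)),
    where `sensitivity` bounds how much one record can change any score."""
    logits = eps * np.asarray(scores, dtype=float) / (2 * sensitivity)
    logits -= logits.max()            # shift for numerical stability
    p = np.exp(logits)
    p /= p.sum()
    return int(rng.choice(len(p), p=p))

rng = np.random.default_rng(0)
# With a large quality gap, the best item is selected almost surely.
choice = exponential_mechanism([0, 0, 0, 100], eps=1.0, sensitivity=1.0, rng=rng)
```

Selecting the top k items is done by running this step k times, removing each winner before the next draw ("peeling"); the paper's Ω(√k log d) bound shows that the sample complexity this composition incurs is essentially unavoidable.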
Journal: 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)