
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science: latest publications

Steiner Shallow-Light Trees are Exponentially Lighter than Spanning Ones
Pub Date : 2011-10-22 DOI: 10.1137/13094791X
Michael Elkin, Shay Solomon
For a pair of parameters $\alpha, \beta \ge 1$, a spanning tree $T$ of a weighted undirected $n$-vertex graph $G = (V,E,w)$ is called an \emph{$(\alpha,\beta)$-shallow-light tree} (shortly, $(\alpha,\beta)$-SLT) of $G$ with respect to a designated vertex $rt \in V$ if (1) it approximates all distances from $rt$ to the other vertices up to a factor of $\alpha$, and (2) its weight is at most $\beta$ times the weight of the minimum spanning tree $MST(G)$ of $G$. The parameter $\alpha$ (respectively, $\beta$) is called the \emph{root-distortion} (resp., \emph{lightness}) of the tree $T$. Shallow-light trees (SLTs) constitute a fundamental graph structure, with numerous theoretical and practical applications. In particular, they were used for constructing spanners, in network design, for VLSI-circuit design, for various data gathering and dissemination tasks in wireless and sensor networks, in overlay networks, and in the message-passing model of distributed computing. Tight tradeoffs between the parameters of SLTs were established by Awerbuch et al. \cite{ABP90, ABP91} and Khuller et al. \cite{KRY93}. They showed that for any $\epsilon > 0$ there always exist $(1+\epsilon, O(\frac{1}{\epsilon}))$-SLTs, and that the upper bound $\beta = O(\frac{1}{\epsilon})$ on the lightness of SLTs cannot be improved. In this paper we show that using Steiner points one can build SLTs with \emph{logarithmic lightness}, i.e., $\beta = O(\log \frac{1}{\epsilon})$. This establishes an \emph{exponential separation} between spanning SLTs and Steiner ones. One particularly remarkable point on our tradeoff curve is $\epsilon = 0$. In this regime our construction provides a \emph{shortest-path tree} with weight at most $O(\log n) \cdot w(MST(G))$. Moreover, we prove matching lower bounds that show that all our results are tight up to constant factors. Finally, on our way to these results we settle (up to constant factors) a number of open questions that were raised by Khuller et al. \cite{KRY93} in SODA'93.
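To make the two parameters concrete, the following minimal sketch (our illustration, not the paper's construction) evaluates the root-distortion $\alpha$ and lightness $\beta$ of a given spanning tree, using Dijkstra for distances and Kruskal for $w(MST(G))$:

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest-path distances.
    adj: {u: {v: w, ...}, ...}, with both directions present for undirected graphs."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def mst_weight(nodes, edges):
    """Weight of a minimum spanning tree via Kruskal; edges is a list of (w, u, v)."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    total = 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
    return total

def slt_parameters(adj, tree_edges, rt):
    """Return (alpha, beta) of the spanning tree given by tree_edges (list of (w, u, v))."""
    tree_adj = {u: {} for u in adj}
    for w, u, v in tree_edges:
        tree_adj[u][v] = w
        tree_adj[v][u] = w
    dG = dijkstra(adj, rt)       # true distances in G
    dT = dijkstra(tree_adj, rt)  # distances along the tree
    alpha = max(dT[v] / dG[v] for v in adj if v != rt)
    edges = [(w, u, v) for u in adj for v, w in adj[u].items() if u < v]
    beta = sum(w for w, _, _ in tree_edges) / mst_weight(adj.keys(), edges)
    return alpha, beta
```

On a unit-weight triangle, the star rooted at any vertex is a $(1,1)$-SLT, matching the definition's best case.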
Citations: 23
Efficient Fully Homomorphic Encryption from (Standard) LWE
Pub Date : 2011-10-22 DOI: 10.1109/FOCS.2011.12
Zvika Brakerski, V. Vaikuntanathan
We present a fully homomorphic encryption scheme that is based solely on the (standard) learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the worst-case hardness of ``short vector problems'' on arbitrary lattices. Our construction improves on previous works in two aspects: \begin{enumerate} \item We show that ``somewhat homomorphic'' encryption can be based on LWE, using a new \emph{re-linearization} technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings. \item We deviate from the ``squashing paradigm'' used in all previous works. We introduce a new \emph{dimension-modulus reduction} technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme, \emph{without introducing additional assumptions}. \end{enumerate} Our scheme has very short ciphertexts, and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is $k \cdot \mathrm{polylog}(k) + \log |DB|$ bits per single-bit query (here, $k$ is a security parameter).
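The scheme itself is beyond an abstract, but the LWE assumption it rests on is easy to illustrate. The toy Regev-style symmetric bit encryption below (our illustration; the parameters are far too small to be secure) also shows the additive homomorphism that schemes of this kind exploit: adding ciphertexts adds plaintexts, at the cost of growing error.

```python
import random

Q = 257   # toy modulus (far too small for real security)
N = 16    # toy dimension
E = 2     # error bound; decryption is correct while the total error stays below Q/4

def keygen():
    return [random.randrange(Q) for _ in range(N)]

def encrypt(s, m):
    """Encrypt bit m as (a, <a,s> + e + m*floor(Q/2) mod Q) with small noise e."""
    a = [random.randrange(Q) for _ in range(N)]
    e = random.randint(-E, E)
    b = (sum(ai * si for ai, si in zip(a, s)) + e + m * (Q // 2)) % Q
    return a, b

def decrypt(s, ct):
    a, b = ct
    d = (b - sum(ai * si for ai, si in zip(a, s))) % Q
    # d is near 0 (mod Q) for m = 0 and near Q/2 for m = 1
    return 1 if Q // 4 < d < 3 * Q // 4 else 0

def add(ct1, ct2):
    """Component-wise ciphertext sum decrypts to the XOR of the bits
    (errors add, so only a bounded number of additions is safe)."""
    a1, b1 = ct1
    a2, b2 = ct2
    return [(x + y) % Q for x, y in zip(a1, a2)], (b1 + b2) % Q
```

Extending this additive homomorphism to multiplication is where the paper's re-linearization and dimension-modulus reduction techniques come in.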
Citations: 1593
Graph Connectivities, Network Coding, and Expander Graphs
Pub Date : 2011-10-22 DOI: 10.1137/110844970
Ho Yee Cheung, L. Lau, K. M. Leung
We present a new algebraic formulation for computing edge connectivities in a directed graph, using ideas developed in network coding. This reduces the problem of computing edge connectivities to solving systems of linear equations, thus allowing us to use tools from linear algebra to design new algorithms. Using the algebraic formulation we obtain faster algorithms for computing single-source edge connectivities and all-pairs edge connectivities; in some settings the amortized time to compute the edge connectivity for one pair is sublinear. Through this connection, we have also found an interesting use of expanders and superconcentrators to design fast algorithms for some graph connectivity problems.
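For contrast with the algebraic approach, the classical baseline computes a single $s$-$t$ edge connectivity as a unit-capacity max-flow (Menger's theorem). A minimal sketch of that baseline, not the paper's algorithm:

```python
from collections import defaultdict, deque

def edge_connectivity(edges, s, t):
    """s-t edge connectivity of a directed graph (list of (u, v) edges)
    via unit-capacity Edmonds-Karp max-flow, using Menger's theorem."""
    cap = defaultdict(lambda: defaultdict(int))  # residual capacities
    for u, v in edges:
        cap[u][v] += 1
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in list(cap[u].items()):
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # augment by one unit along the path found
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1
```

Repeating this for every pair is exactly the quadratic overhead the algebraic formulation is designed to avoid.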
Citations: 36
Optimal Testing of Multivariate Polynomials over Small Prime Fields
Pub Date : 2011-10-22 DOI: 10.1137/120879257
Elad Haramaty, Amir Shpilka, M. Sudan
We consider the problem of testing whether a given function $f : F_q^n \rightarrow F_q$ is close to an $n$-variate degree-$d$ polynomial over the finite field $F_q$ of $q$ elements. The natural, low-query test for this property would be to pick the smallest dimension $t = t_{q,d} \approx d/q$ such that every function of degree greater than $d$ reveals this aspect on \emph{some} $t$-dimensional affine subspace of $F_q^n$, and to test that $f$, when restricted to a \emph{random} $t$-dimensional affine subspace, is a polynomial of degree at most $d$ on this subspace. Such a test makes only $q^t$ queries, independent of $n$. Previous works, by Alon et al.~\cite{AKKLR}, Kaufman and Ron~\cite{KaufmanRon06}, and Jutla et al.~\cite{JPRZ04}, showed that this natural test rejects functions that are $\Omega(1)$-far from degree-$d$ polynomials with probability at least $\Omega(q^{-t})$. (The initial work~\cite{AKKLR} considered only the case $q=2$, while~\cite{JPRZ04} considered only prime $q$. The results in \cite{KaufmanRon06} hold for all fields.) Thus, to get a constant probability of detecting functions at constant distance from the space of degree-$d$ polynomials, the tests made $q^{2t}$ queries. Kaufman and Ron also noted that when $q$ is prime, $q^t$ queries are necessary. Thus these tests were off by at least a quadratic factor from known lower bounds. Bhattacharyya et al.~\cite{BKSSZ10} gave an optimal analysis of this test for the case of the binary field and showed that the natural test actually rejects functions that are $\Omega(1)$-far from degree-$d$ polynomials with probability $\Omega(1)$. In this work we extend this result to all fields, showing that the natural test does indeed reject functions that are $\Omega(1)$-far from degree-$d$ polynomials with $\Omega(1)$ probability, where the constants depend only on $q$, the field size. Our analysis thus shows that this test is optimal (matches known lower bounds) when $q$ is prime.
The main technical ingredient in our work is a tight analysis of the number of ``hyperplanes'' (affine subspaces of co-dimension $1$) on which the restriction of a degree-$d$ polynomial has degree less than $d$. We show that the number of such hyperplanes is at most $O(q^{t_{q,d}})$ -- which is tight to within constant factors.
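In the simplest regime $d < q-1$ (so $t=1$), the natural test amounts to checking that the $(d+1)$-st directional difference of $f$ vanishes on a random line, i.e., $\sum_{i=0}^{d+1} (-1)^i \binom{d+1}{i} f(x+iy) = 0$. A sketch of a single test query under that assumption (our illustration for lines, not the paper's $t$-dimensional construction):

```python
import random
from math import comb

def low_degree_test_query(f, n, p, d):
    """One query of the line test over F_p^n: accept iff the (d+1)-st
    directional difference of f vanishes on a random line x + t*y.
    This characterizes degree <= d only in the regime d < p - 1."""
    x = [random.randrange(p) for _ in range(n)]
    y = [random.randrange(p) for _ in range(n)]
    acc = 0
    for i in range(d + 2):
        pt = [(xi + i * yi) % p for xi, yi in zip(x, y)]
        acc = (acc + (-1) ** i * comb(d + 1, i) * f(pt)) % p
    return acc == 0
```

A degree-$d$ polynomial passes every query; a far function is rejected with the probability the abstract quantifies.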
Citations: 29
The Randomness Complexity of Parallel Repetition
Pub Date : 2011-10-22 DOI: 10.1109/FOCS.2011.93
Kai-Min Chung, R. Pass
Consider an $m$-round interactive protocol with soundness error $1/2$. How much extra randomness is required to decrease the soundness error to $\delta$ through parallel repetition? Previous work, initiated by Bellare, Goldreich and Goldwasser, shows that for \emph{public-coin} interactive protocols with \emph{statistical soundness}, $m \cdot O(\log(1/\delta))$ bits of extra randomness suffice. In this work, we initiate a more general study of the above question. \begin{itemize} \item We establish the first derandomized parallel repetition theorem for public-coin interactive protocols with \emph{computational soundness} (a.k.a. arguments). The parameters of our result essentially match the earlier works in the information-theoretic setting. \item We show that obtaining even a sub-linear dependency on the number of rounds $m$ (i.e., $o(m) \cdot \log(1/\delta)$) is impossible in the information-theoretic setting, and requires the existence of one-way functions in the computational setting. \item We show that non-trivial derandomized parallel repetition for private-coin protocols is impossible in the information-theoretic setting and requires the existence of one-way functions in the computational setting. \end{itemize} These results are tight in the sense that parallel repetition theorems in the computational setting can trivially be derandomized using pseudorandom generators, which are implied by the existence of one-way functions.
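The arithmetic behind the question is simple: $r$ independent repetitions drive the soundness error from $1/2$ to $2^{-r}$, so error $\delta$ needs $r = \lceil \log_2(1/\delta) \rceil$ repetitions. A small bookkeeping sketch (the constant in the derandomized bound is illustrative, since the theorem hides one in the $O(\cdot)$):

```python
import math

def repetitions_needed(delta, base_error=0.5):
    """Repetitions to push soundness error from base_error down to delta;
    the error of r independent repetitions is base_error**r."""
    return math.ceil(math.log2(1 / delta) / math.log2(1 / base_error))

def naive_randomness(coins_per_run, delta):
    """Fresh coins for every repetition."""
    return repetitions_needed(delta) * coins_per_run

def derandomized_randomness(m_rounds, delta, c=1):
    """Extra randomness in the m * O(log(1/delta)) style bound;
    the constant c is illustrative, not from the paper."""
    return math.ceil(c * m_rounds * math.log2(1 / delta))
```

The point of the comparison: the naive count scales with the (possibly large) coins per execution, while the derandomized bound scales only with the round count $m$.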
Citations: 5
Min-max Graph Partitioning and Small Set Expansion
Pub Date : 2011-10-19 DOI: 10.1109/focs.2011.79
N. Bansal, U. Feige, Robert Krauthgamer, K. Makarychev, V. Nagarajan, J. Naor, Roy Schwartz
We study graph partitioning problems from a min-max perspective, in which an input graph on n vertices should be partitioned into k parts, and the objective is to minimize the maximum number of edges leaving a single part. The two main versions we consider are: (i) the k parts need to be of equal size, and (ii) the parts must separate a set of k given terminals. We consider a common generalization of these two problems, and design for it an O(√(log n log k))-approximation algorithm. This improves over an O(log² n) approximation for the second version due to Svitkina and Tardos, and a roughly O(k log n) approximation for the first version that follows from other previous work. We also give an improved O(1)-approximation algorithm for graphs that exclude any fixed minor. Our algorithm uses a new procedure for solving the Small Set Expansion problem. In this problem, we are given a graph G and the goal is to find a non-empty subset S of V of size at most pn with minimum edge-expansion. We give an O(√(log n log(1/p))) bicriteria approximation algorithm for the general case of Small Set Expansion, and an O(1) approximation algorithm for graphs that exclude any fixed minor.
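The min-max objective itself is easy to state in code; the sketch below (an evaluation helper of ours, not the approximation algorithm) computes the maximum number of edges leaving any single part:

```python
def max_edges_leaving_a_part(edges, partition):
    """The min-max partitioning objective for a fixed partition.
    edges: list of (u, v); partition: list of disjoint vertex sets covering the graph.
    Returns the max over parts of the number of edges with exactly one endpoint inside."""
    part_of = {v: i for i, part in enumerate(partition) for v in part}
    cut = [0] * len(partition)
    for u, v in edges:
        if part_of[u] != part_of[v]:
            cut[part_of[u]] += 1
            cut[part_of[v]] += 1
    return max(cut)
```

Note the contrast with min-sum partitioning: here a single congested part determines the cost, which is what makes the problem behave differently.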
Citations: 85
(1 + eps)-Approximate Sparse Recovery
Pub Date : 2011-10-19 DOI: 10.1109/FOCS.2011.92
Eric Price, David P. Woodruff
The problem central to sparse recovery and compressive sensing is that of \emph{stable sparse recovery}: we want a distribution $\mathcal{A}$ of matrices $A \in \mathbb{R}^{m \times n}$ such that, for any $x \in \mathbb{R}^n$ and with probability $1 - \delta > 2/3$ over $A \in \mathcal{A}$, there is an algorithm to recover $\hat{x}$ from $Ax$ with \begin{align} \|\hat{x} - x\|_p \leq C \min_{k\text{-sparse } x'} \|x - x'\|_p \end{align} for some constant $C > 1$ and norm $p$. The measurement complexity of this problem is well understood for constant $C > 1$. However, in a variety of applications it is important to obtain $C = 1+\epsilon$ for a small $\epsilon > 0$, and this complexity is not well understood. We resolve the dependence on $\epsilon$ in the number of measurements required of a $k$-sparse recovery algorithm, up to polylogarithmic factors for the central cases of $p=1$ and $p=2$. Namely, we give new algorithms and lower bounds that show the number of measurements required is $k/\epsilon^{p/2} \cdot \mathrm{polylog}(n)$. For $p=2$, our bound of $\frac{1}{\epsilon} k \log(n/k)$ is tight up to \emph{constant} factors. We also give matching bounds when the output is required to be $k$-sparse, in which case we achieve $k/\epsilon^p \cdot \mathrm{polylog}(n)$. This shows the distinction between the complexity of sparse and non-sparse outputs is fundamental.
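The benchmark on the right-hand side of the guarantee, $\min_{k\text{-sparse } x'} \|x - x'\|_p$, is simply the $\ell_p$ norm of the tail of $x$ outside its $k$ largest-magnitude entries; a one-function sketch:

```python
def best_ksparse_error(x, k, p=2):
    """min over k-sparse x' of ||x - x'||_p: the optimal x' keeps the k
    largest-magnitude entries of x, so the error is the l_p norm of the rest."""
    tail = sorted((abs(v) for v in x), reverse=True)[k:]
    return sum(t ** p for t in tail) ** (1 / p)
```

A recovery guarantee with $C = 1 + \epsilon$ means $\hat{x}$ is within a $(1+\epsilon)$ factor of this tail norm.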
Citations: 47
On the Power of Adaptivity in Sparse Recovery
Pub Date : 2011-10-17 DOI: 10.1109/FOCS.2011.83
P. Indyk, Eric Price, David P. Woodruff
The goal of (stable) sparse recovery is to recover a $k$-sparse approximation $x^*$ of a vector $x$ from linear measurements of $x$. Specifically, the goal is to recover $x^*$ such that $$\|x - x^*\|_p \le C \min_{k\text{-sparse } x'} \|x - x'\|_q$$ for some constant $C$ and norm parameters $p$ and $q$. It is known that, for $p=q=1$ or $p=q=2$, this task can be accomplished using $m = O(k \log(n/k))$ \emph{non-adaptive} measurements~\cite{CRT06:Stable-Signal} and that this bound is tight~\cite{DIPW, FPRU, PW11}. In this paper we show that if one is allowed to perform measurements that are \emph{adaptive}, then the number of measurements can be considerably reduced. Specifically, for $C = 1+\epsilon$ and $p=q=2$ we show: \begin{itemize} \item A scheme with $m = O(\frac{1}{\epsilon} k \log\log(n\epsilon/k))$ measurements that uses $O(\log^* k \cdot \log\log(n\epsilon/k))$ rounds. This is a significant improvement over the best possible non-adaptive bound. \item A scheme with $m = O(\frac{1}{\epsilon} k \log(k/\epsilon) + k \log(n/k))$ measurements that uses \emph{two} rounds. This improves over the best possible non-adaptive bound. \end{itemize} To the best of our knowledge, these are the first results of this type.
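A toy example of why adaptivity can help: for an exactly $1$-sparse, noiseless $x$, binary search with adaptive linear measurements $\langle x, 1_S \rangle$ locates the support with $O(\log n)$ measurements (our illustration of the phenomenon; the paper's results concern the much harder stable setting):

```python
def recover_1sparse(measure, n):
    """Recover an exactly 1-sparse length-n vector using adaptive linear
    measurements. measure(S) returns sum(x[i] for i in S), i.e. <x, 1_S>.
    Uses at most 1 + log2(n) measurements via binary search on the support."""
    lo, hi = 0, n                # invariant: the support lies in [lo, hi)
    val = measure(range(lo, hi))
    if val == 0:
        return {}                # x is the zero vector
    while hi - lo > 1:
        mid = (lo + hi) // 2
        left = measure(range(lo, mid))
        if left != 0:            # nonzero entry is in the left half
            hi, val = mid, left
        else:                    # otherwise it must be in the right half
            lo = mid
    return {lo: val}
```

A non-adaptive scheme must commit to all measurement sets in advance, which is where the $\log(n/k)$ factor in the non-adaptive lower bound comes from.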
{"title":"On the Power of Adaptivity in Sparse Recovery","authors":"P. Indyk, Eric Price, David P. Woodruff","doi":"10.1109/FOCS.2011.83","DOIUrl":"https://doi.org/10.1109/FOCS.2011.83","url":null,"abstract":"The goal of (stable) sparse recovery is to recover a $k$-sparse approximation $x^*$ of a vector $x$ from linear measurements of $x$. Specifically, the goal is to recover $x^*$ such that$$norm{p}{x-x^*} le C min_{ktext{-sparse } x'} norm{q}{x-x'}$$for some constant $C$ and norm parameters $p$ and $q$. It is known that, for $p=q=1$ or $p=q=2$, this task can be accomplished using $m=O(k log (n/k))$ {em non-adaptive}measurements~cite{CRT06:Stable-Signal} and that this bound is tight~cite{DIPW, FPRU, PW11}. In this paper we show that if one is allowed to perform measurements that are {em adaptive}, then the number of measurements can be considerably reduced. Specifically, for $C=1+epsilon$ and $p=q=2$ we showbegin{itemize}item A scheme with $m=O(frac{1}{eps}k log log (neps/k))$ measurements that uses $O(log^* k cdot log log (neps/k))$ rounds. This is a significant improvement over the best possible non-adaptive bound. item A scheme with $m=O(frac{1}{eps}k log (k/eps) + k log (n/k))$ measurements that uses {em two} rounds. This improves over the best possible non-adaptive bound. 
end{itemize} To the best of our knowledge, these are the first results of this type.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122205302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 77
A Polylogarithmic-Competitive Algorithm for the k-Server Problem
Pub Date : 2011-10-07 DOI: 10.1145/2783434
N. Bansal, Niv Buchbinder, A. Madry, J. Naor
We give the first polylogarithmic-competitive randomized algorithm for the k-server problem on an arbitrary finite metric space. In particular, our algorithm achieves a competitive ratio of $\tilde{O}(\log^3 n \log^2 k)$ for any metric space on $n$ points. This improves upon the $(2k-1)$-competitive algorithm of Koutsoupias and Papadimitriou (J. ACM 1995) whenever $n$ is sub-exponential in $k$.
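For readers unfamiliar with the problem: $k$ mobile servers live on a metric space, each request must be served by moving some server to the requested point, and the cost is the total distance moved. A minimal simulator of this cost model, assuming a greedy nearest-server policy (greedy is known not to be competitive; the point here is only to fix the cost the competitive ratio refers to):

```python
def greedy_kserver(positions, requests, dist):
    """Toy k-server simulator on a finite metric: serve each request
    by moving the nearest server to it (greedy policy). The paper's
    polylog-competitive algorithm is far more involved; this only
    illustrates the online cost model being analyzed."""
    servers = list(positions)
    total = 0
    for r in requests:
        # pick the server closest to the request, move it, pay the distance
        i = min(range(len(servers)), key=lambda j: dist(servers[j], r))
        total += dist(servers[i], r)
        servers[i] = r
    return total

line = lambda a, b: abs(a - b)               # the metric: points on a line
cost = greedy_kserver([0, 10], [4, 6, 4, 6], line)
```

An offline adversary alternating requests between two nearby points, as above, is exactly the kind of sequence on which greedy pays repeatedly while the optimum parks one server at each point.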
{"title":"A Polylogarithmic-Competitive Algorithm for the k-Server Problem","authors":"N. Bansal, Niv Buchbinder, A. Madry, J. Naor","doi":"10.1145/2783434","DOIUrl":"https://doi.org/10.1145/2783434","url":null,"abstract":"We give the first polylogarithmic-competitive randomized algorithm for the k-server problem on an arbitrary finite metric space. In particular, our algorithm achieves a competitive ratio of Õ(log3 n log2 k) for any metric space on n points. This improves upon the (2k-1)-competitive algorithm of Koutsoupias and Papadimitriou (J. ACM 1995) whenever n is sub-exponential in k.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123175847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 135
Lexicographic Products and the Power of Non-linear Network Coding
Pub Date : 2011-08-11 DOI: 10.1109/FOCS.2011.39
A. Błasiak, Robert D. Kleinberg, E. Lubetzky
We introduce a technique for establishing and amplifying gaps between parameters of network coding and index coding problems. The technique uses linear programs to establish separations between combinatorial and coding-theoretic parameters and applies hypergraph lexicographic products to amplify these separations. This entails combining the dual solutions of the lexicographic multiplicands and proving that this is a valid dual solution of the product. Our result is general enough to apply to a large family of linear programs. This blend of linear programs and lexicographic products gives a recipe for constructing hard instances in which the gap between combinatorial or coding-theoretic parameters is polynomially large. We find polynomial gaps in cases in which the largest previously known gaps were only small constant factors or entirely unknown. Most notably, we show a polynomial separation between linear and non-linear network coding rates. This involves exploiting a connection between matroids and index coding to establish a previously unknown separation between linear and non-linear index coding rates. We also construct index coding problems with a polynomial gap between the broadcast rate and the trivial lower bound for which no gap was previously known.
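To make the amplification tool concrete, here is a sketch of the ordinary (non-hyper) graph lexicographic product, assuming graphs are given as (vertex list, edge set) pairs with edges as frozensets; the paper works with the hypergraph generalization:

```python
from itertools import product

def lex_product(G, H):
    """Lexicographic product G[H]: (g, h) ~ (g', h') iff g ~ g' in G,
    or g == g' and h ~ h' in H. Gaps certified by LP dual solutions on
    the factors can be combined into dual solutions on this product,
    which is how the paper amplifies constant gaps to polynomial ones."""
    (VG, EG), (VH, EH) = G, H
    V = list(product(VG, VH))
    E = {frozenset([u, v])
         for u, v in product(V, V)
         if u != v
         and (frozenset([u[0], v[0]]) in EG                   # g ~ g' in G
              or (u[0] == v[0] and frozenset([u[1], v[1]]) in EH))}
    return V, E

K2 = ([0, 1], {frozenset([0, 1])})
V, E = lex_product(K2, K2)    # K2[K2] is the complete graph K4
```

On the example, every pair of the 4 product vertices is adjacent (6 edges), matching the fact that the lexicographic product of complete graphs is complete.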
{"title":"Lexicographic Products and the Power of Non-linear Network Coding","authors":"A. Błasiak, Robert D. Kleinberg, E. Lubetzky","doi":"10.1109/FOCS.2011.39","DOIUrl":"https://doi.org/10.1109/FOCS.2011.39","url":null,"abstract":"We introduce a technique for establishing and amplifying gaps between parameters of network coding and index coding problems. The technique uses linear programs to establish separations between combinatorial and coding-theoretic parameters and applies hyper graph lexicographic products to amplify these separations. This entails combining the dual solutions of the lexicographic multiplicands and proving that this is a valid dual solution of the product. Our result is general enough to apply to a large family of linear programs. This blend of linear programs and lexicographic products gives a recipe for constructing hard instances in which the gap between combinatorial or coding-theoretic parameters is polynomially large. We find polynomial gaps in cases in which the largest previously known gaps were only small constant factors or entirely unknown. Most notably, we show a polynomial separation between linear and non-linear network coding rates. This involves exploiting a connection between matroids and index coding to establish a previously unknown separation between linear and non-linear index coding rates. 
We also construct index coding problems with a polynomial gap between the broadcast rate and the trivial lower bound for which no gap was previously known.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129748506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 95
Journal: 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science