
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science: Latest Publications

Mechanism Design with Set-Theoretic Beliefs
Pub Date : 2011-10-22 DOI: 10.1109/FOCS.2011.11
Jiehua Chen, S. Micali
In settings of incomplete information, we put forward (1) a very conservative -- indeed, purely set-theoretic -- model of the beliefs (including totally wrong ones) that each player may have about the payoff types of his opponents, and (2) a new and robust solution concept, based on mutual belief of rationality, capable of leveraging such conservative beliefs. We exemplify the applicability of our new approach for single-good auctions, by showing that, under our solution concept, a normal-form, simple, and deterministic mechanism guarantees -- up to an arbitrarily small, additive constant -- a revenue benchmark that is always greater than or equal to the second-highest valuation, and sometimes much greater. By contrast, we also prove that the same benchmark cannot even be approximated within any positive factor, under classical solution concepts.
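The revenue benchmark above is anchored at the second-highest valuation, which is exactly the revenue of a classical sealed-bid second-price (Vickrey) auction under truthful bidding. A minimal illustrative sketch of that baseline (not the paper's mechanism):

```python
def second_price_auction(valuations):
    """Sealed-bid second-price auction under truthful bidding: the
    highest bidder wins and pays the second-highest valuation."""
    assert len(valuations) >= 2
    order = sorted(range(len(valuations)), key=lambda i: valuations[i],
                   reverse=True)
    winner = order[0]
    revenue = valuations[order[1]]  # the classical revenue benchmark
    return winner, revenue

winner, revenue = second_price_auction([3, 10, 7])
print(winner, revenue)  # bidder 1 wins, revenue 7
```

The paper's point is that, under its set-theoretic solution concept, a simple deterministic mechanism can guarantee at least this benchmark (up to an additive constant) and sometimes much more.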
Citations: 13
Near Linear Lower Bound for Dimension Reduction in L1
Pub Date : 2011-10-22 DOI: 10.1109/FOCS.2011.87
Alexandr Andoni, M. Charikar, Ofer Neiman, Huy L. Nguyen
Given a set of $n$ points in $\ell_{1}$, how many dimensions are needed to represent all pairwise distances within a specific distortion? This dimension-distortion tradeoff question is well understood for the $\ell_{2}$ norm, where $O((\log n)/\epsilon^{2})$ dimensions suffice to achieve $1+\epsilon$ distortion. In sharp contrast, there is a significant gap between upper and lower bounds for dimension reduction in $\ell_{1}$. A recent result shows that distortion $1+\epsilon$ can be achieved with $n/\epsilon^{2}$ dimensions. On the other hand, the only lower bounds known are that distortion $\delta$ requires $n^{\Omega(1/\delta^2)}$ dimensions and that distortion $1+\epsilon$ requires $n^{1/2-O(\epsilon \log(1/\epsilon))}$ dimensions. In this work, we show the first near-linear lower bounds for dimension reduction in $\ell_{1}$. In particular, we show that $1+\epsilon$ distortion requires at least $n^{1-O(1/\log(1/\epsilon))}$ dimensions. Our proofs are combinatorial, but inspired by linear programming. In fact, our techniques lead to a simple combinatorial argument that is equivalent to the LP-based proof of Brinkman-Charikar for lower bounds on dimension reduction in $\ell_{1}$.
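For contrast with the $\ell_1$ lower bound, the benign $\ell_2$ case is easy to see empirically: a random Gaussian projection to $O((\log n)/\epsilon^2)$ dimensions preserves pairwise $\ell_2$ distances up to small distortion. A sketch of that Johnson-Lindenstrauss-style experiment (the dimensions and point counts here are arbitrary illustrative choices):

```python
import itertools
import math
import random

def gaussian_projection(points, k, seed=0):
    """Project points (lists of floats) to k dimensions with a random
    Gaussian matrix scaled by 1/sqrt(k): the standard JL construction."""
    rng = random.Random(seed)
    d = len(points[0])
    rows = [[rng.gauss(0, 1) / math.sqrt(k) for _ in range(d)] for _ in range(k)]
    return [[sum(r[j] * p[j] for j in range(d)) for r in rows] for p in points]

def max_distortion(points, projected):
    """Worst pairwise ratio between projected and original l2 distances."""
    worst = 1.0
    for a, b in itertools.combinations(range(len(points)), 2):
        orig = math.dist(points[a], points[b])
        new = math.dist(projected[a], projected[b])
        worst = max(worst, new / orig, orig / new)
    return worst

rng = random.Random(1)
pts = [[rng.gauss(0, 1) for _ in range(400)] for _ in range(15)]  # 15 points in R^400
proj = gaussian_projection(pts, 200)  # project down to 200 dimensions
```

With these parameters the observed distortion stays close to 1 with high probability; the paper's result is that no analogously cheap projection exists for $\ell_1$.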
Citations: 35
New Extension of the Weil Bound for Character Sums with Applications to Coding
Pub Date : 2011-10-22 DOI: 10.1109/FOCS.2011.41
T. Kaufman, Shachar Lovett
The Weil bound for character sums is a deep result in algebraic geometry with many applications both in mathematics and in theoretical computer science. The Weil bound states that for any polynomial $f(x)$ over a finite field $\mathbb{F}$ and any additive character $\chi:\mathbb{F} \to \mathbb{C}$, either $\chi(f(x))$ is a constant function or it is distributed close to uniform. The Weil bound is quite effective as long as $\deg(f) \ll \sqrt{|\mathbb{F}|}$, but it breaks down when the degree of $f$ exceeds $\sqrt{|\mathbb{F}|}$. As the Weil bound plays a central role in many areas, finding extensions for polynomials of larger degree is an important problem with many possible applications. In this work we develop such an extension over finite fields $\mathbb{F}_{p^n}$ of small characteristic: we prove that if $f(x)=g(x)+h(x)$, where $\deg(g) \ll \sqrt{|\mathbb{F}|}$ and $h(x)$ is a sparse polynomial of arbitrary degree but bounded weight degree, then the conclusion of the classical Weil bound still holds: either $\chi(f(x))$ is constant or its distribution is close to uniform. In particular, this shows that the subcode of Reed-Muller codes of degree $\omega(1)$ generated by traces of sparse polynomials is a code with near-optimal distance, while Reed-Muller codes of such a degree have no distance (i.e., $o(1)$ distance); this is one of the few examples where one can prove that sparse polynomials behave differently from non-sparse polynomials of the same degree. As an application we prove new general results for affine-invariant codes. We prove that any affine-invariant subspace of quasi-polynomial size is (1) indeed a code (i.e., has good distance) and (2) locally testable. Previous results for general affine-invariant codes were known only for codes of polynomial size and of length $2^n$, where $n$ needed to be a prime. Thus, our techniques are the first to extend to general families of such codes of super-polynomial size, where we also remove the requirement that $n$ be a prime. The proof is based on two main ingredients: the extension of the Weil bound for character sums, and a new Fourier-analytic approach for estimating the weight distribution of general codes with large dual distance, which may be of independent interest.
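For intuition, the classical bound $|\sum_{x \in \mathbb{F}_p} \chi(f(x))| \le (\deg f - 1)\sqrt{p}$ can be checked numerically over a prime field. This sketch works over $\mathbb{F}_p$ rather than the extension fields $\mathbb{F}_{p^n}$ treated in the paper, and the polynomial is an arbitrary example:

```python
import cmath
import math

def char_sum(coeffs, p):
    """|sum over x in F_p of chi(f(x))| for chi(t) = exp(2*pi*i*t/p),
    where coeffs = [a_0, a_1, ...] gives f(x) = sum_k a_k x^k over F_p."""
    total = 0j
    for x in range(p):
        fx = sum(a * pow(x, k, p) for k, a in enumerate(coeffs)) % p
        total += cmath.exp(2j * math.pi * fx / p)
    return abs(total)

p = 101
s = char_sum([1, 2, 0, 3], p)        # f(x) = 3x^3 + 2x + 1, degree 3
weil = (3 - 1) * math.sqrt(p)        # (deg f - 1) * sqrt(p), about 20.1
```

Here the sum of $p = 101$ unit vectors has magnitude at most about 20, far below the trivial bound of 101; for a constant polynomial, by contrast, the magnitude is exactly $p$.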
Citations: 21
The Graph Minor Algorithm with Parity Conditions
Pub Date : 2011-10-22 DOI: 10.1109/FOCS.2011.52
K. Kawarabayashi, B. Reed, Paul Wollan
We generalize the seminal Graph Minor algorithm of Robertson and Seymour to the parity version. We give polynomial-time algorithms for the following problems: (1) the parity $H$-minor (odd $K_k$-minor) containment problem, and (2) the disjoint paths problem with $k$ terminals and a parity condition for each path, as well as several other related problems. We present an $O(m \alpha(m,n) n)$ time algorithm for these problems for any fixed $k$, where $n, m$ are the number of vertices and the number of edges, respectively, and the function $\alpha(m,n)$ is the inverse of the Ackermann function (see Tarjan \cite{tarjan}). Note that the first problem includes the problem of testing whether or not a given graph contains $k$ disjoint odd cycles (which was recently solved in \cite{tony, oddstoc}), if we fix $H$ to be equal to the graph of $k$ disjoint triangles. The algorithm for the second problem generalizes the Robertson-Seymour algorithm for the $k$-disjoint paths problem. As with the Robertson-Seymour algorithm for the $k$-disjoint paths problem for any fixed $k$, in each iteration, we would like to either use the presence of a huge clique minor, or alternatively exploit the structure of graphs in which we cannot find such a minor. Here, however, we must maintain the parity of the paths and can only use an ``odd clique minor''. This requires new techniques to describe the structure of the graph when we cannot find such a minor. We emphasize that our proof for the correctness of the above algorithms does not depend on the full power of the Graph Minor structure theorem \cite{RS16}. Although the original Graph Minor algorithm of Robertson and Seymour does depend on it and our proof does have similarities to their arguments, we can avoid the structure theorem by building on the shorter proof for the correctness of the Graph Minor algorithm in \cite{kw}. Consequently, we are able to avoid much of the heavy machinery of Graph Minor structure theory. Utilizing some results of \cite{kw} and \cite{lex1, lex2}, our proof is less than 50 pages.
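As a point of reference, the easiest parity question of this kind, whether a graph contains even a single odd cycle, is equivalent to non-bipartiteness and is decidable by BFS 2-coloring; the $k$-disjoint-odd-cycles problem solved in the cited works is far harder. A minimal sketch of the easy case:

```python
from collections import deque

def has_odd_cycle(n, edges):
    """Return True iff the undirected graph on vertices 0..n-1 is
    non-bipartite, i.e. contains an odd cycle (BFS 2-coloring)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [None] * n
    for s in range(n):
        if color[s] is not None:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return True  # two equal colors across an edge: odd cycle
    return False

print(has_odd_cycle(3, [(0, 1), (1, 2), (2, 0)]))  # triangle -> True
```

Finding $k$ vertex-disjoint odd cycles, by contrast, requires the structural machinery the abstract describes.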
Citations: 28
Storing Secrets on Continually Leaky Devices
Pub Date : 2011-10-22 DOI: 10.1109/FOCS.2011.35
Y. Dodis, Allison Bishop, Brent Waters, D. Wichs
We consider the question of how to store a value secretly on devices that continually leak information about their internal state to an external attacker. If the secret value is stored on a single device from which it is efficiently retrievable, and the attacker can leak even a single predicate of the internal state of that device, then she may learn some information about the secret value itself. Therefore, we consider a setting where the secret value is shared between multiple devices (or multiple components of a single device), each of which continually leaks arbitrary, adaptively chosen predicates of its individual state. Since leakage is continual, each device must also continually update its state so that an attacker cannot just leak it entirely one bit at a time. In our model, the devices update their state individually and asynchronously, without any communication between them. The update process is necessarily randomized, and its randomness can leak as well. As our main result, we construct a sharing scheme for two devices, where a constant fraction of the internal state of each device can leak in between and during updates. Our scheme has the structure of a public-key encryption, where one share is a secret key and the other is a ciphertext. As a contribution of independent interest, we also get public-key encryption in the continual leakage model, introduced by Brakerski et al. and Dodis et al. (FOCS '10). This scheme tolerates continual leakage on the secret key and the updates, and simplifies the recent construction of Lewko, Lewko and Waters (STOC '11). For our main result, we show how to update the ciphertexts of the encryption scheme so that the message remains hidden even if an attacker interleaves leakage on secret key and ciphertext shares. The security of our scheme is based on the linear assumption in prime-order bilinear groups. We also provide an extension to general access structures realizable by linear secret sharing schemes across many devices. The main advantage of this extension is that the state of some devices can be compromised entirely, while that of all remaining devices is susceptible to continual leakage. Lastly, we show the impossibility of information-theoretic sharing schemes in our model, where continually leaky devices update their state individually.
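As a heavily simplified illustration of sharing a value across two components, here is a toy 2-out-of-2 XOR scheme with share refresh. Unlike the paper's construction, this toy refresh uses a common random string (i.e., communication) and offers no leakage resilience at all; it only shows the share/refresh/reconstruct interface:

```python
import secrets

def share(secret: bytes):
    """Split secret into two XOR shares with s1 XOR s2 == secret."""
    s1 = secrets.token_bytes(len(secret))
    s2 = bytes(a ^ b for a, b in zip(s1, secret))
    return s1, s2

def refresh(s1, s2):
    """Re-randomize both shares; their XOR (the secret) is unchanged.
    NOTE: uses shared randomness r, a simplification the paper's
    asynchronous, communication-free updates specifically avoid."""
    r = secrets.token_bytes(len(s1))
    return (bytes(a ^ b for a, b in zip(s1, r)),
            bytes(a ^ b for a, b in zip(s2, r)))

def reconstruct(s1, s2):
    """Recombine the two shares into the original secret."""
    return bytes(a ^ b for a, b in zip(s1, s2))
```

The paper's contribution is precisely what this toy lacks: refreshes that each device performs on its own, with security against continual leakage of both shares and update randomness.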
Citations: 74
Balls and Bins: Smaller Hash Families and Faster Evaluation
Pub Date : 2011-10-22 DOI: 10.1137/120871626
L. Elisa Celis, Omer Reingold, G. Segev, Udi Wieder
A fundamental fact in the analysis of randomized algorithms is that when n balls are hashed into n bins independently and uniformly at random, with high probability each bin contains at most O(log n / log(log n)) balls. In various applications, however, the assumption that a truly random hash function is available is not always valid, and explicit functions are required. In this paper we study the size of families (or, equivalently, the description length of their functions) that guarantee a maximal load of O(log n / log(log n)) with high probability, as well as the evaluation time of their functions. Whereas such functions must be described using Omega(log n) bits, the best upper bound was formerly O(log^2 n / log(log n)) bits, which is attained by O(log n / log(log n))-wise independent functions. Traditional constructions of the latter offer an evaluation time of O(log n / log(log n)), which according to Siegel's lower bound [FOCS '89] can be reduced only at the cost of significantly increasing the description length. We construct two families that guarantee a maximal load of O(log n / log(log n)) with high probability. Our constructions are based on two different approaches, and exhibit different trade-offs between the description length and the evaluation time. The first construction shows that O(log n / log(log n))-wise independence can in fact be replaced by "gradually increasing independence", resulting in functions that are described using O(log n log(log n)) bits and evaluated in time O(log n log(log n)). The second construction is based on derandomization techniques for space-bounded computations combined with a tailored construction of a pseudorandom generator, resulting in functions that are described using O(log^(3/2) n) bits and evaluated in time O(sqrt(log n)).
The latter can be compared to Siegel's lower bound stating that O(log n / log(log n))-wise independent functions that are evaluated in time O(sqrt(log n)) must be described using Omega(2^(sqrt(log n))) bits.
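The baseline fact quoted at the start, that a truly random hash function yields maximum load O(log n / log(log n)) with high probability, is easy to observe by simulation (an illustrative sketch, not one of the paper's constructions):

```python
import math
import random

def max_load(n, seed=0):
    """Hash n balls into n bins uniformly at random and return the
    fullest bin's load (models a truly random hash function)."""
    rng = random.Random(seed)
    bins = [0] * n
    for _ in range(n):
        bins[rng.randrange(n)] += 1
    return max(bins)

n = 10_000
load = max_load(n)
benchmark = math.log(n) / math.log(math.log(n))  # about 4.1 for n = 10^4
```

For n = 10^4 the observed maximum load is a small constant multiple of log n / log(log n), matching the theorem up to lower-order terms; the paper asks how little description length and evaluation time suffice to keep this guarantee with an explicit hash family.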
Citations: 49
The Promise of Differential Privacy: A Tutorial on Algorithmic Techniques
Pub Date : 2011-10-22 DOI: 10.1109/FOCS.2011.88
C. Dwork
{\em Differential privacy} describes a promise, made by a data curator to a data subject: you will not be affected, adversely or otherwise, by allowing your data to be used in any study, no matter what other studies, data sets, or information from other sources is available. At their best, differentially private database mechanisms can make confidential data widely available for accurate data analysis, without resorting to data clean rooms, institutional review boards, data usage agreements, restricted views, or data protection plans. To enjoy the fruits of the research described in this tutorial, the data analyst must accept that raw data can never be accessed directly and that eventually data utility is consumed: overly accurate answers to too many questions will destroy privacy. The goal of algorithmic research on differential privacy is to postpone this inevitability as long as possible.
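The canonical technique realizing this promise is the Laplace mechanism: release a query answer plus Laplace noise with scale sensitivity/epsilon. A minimal sketch (the query and parameter choices are illustrative):

```python
import random

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=random):
    """Release true_answer + Laplace(sensitivity/epsilon) noise, the
    standard epsilon-differentially-private additive mechanism.

    The noise is sampled as a difference of two exponentials: if
    E1, E2 ~ Exp(rate), then E1 - E2 is Laplace with scale 1/rate.
    """
    rate = epsilon / sensitivity
    return true_answer + rng.expovariate(rate) - rng.expovariate(rate)

# A counting query has sensitivity 1: one person's data moves the count by 1.
noisy_count = laplace_mechanism(100, sensitivity=1.0, epsilon=0.5)
```

Each release spends privacy budget epsilon, which is the "data utility is consumed" phenomenon the abstract describes: answering many queries accurately forces the total budget, and hence the privacy loss, to grow.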
Citations: 50
Medium Access Using Queues
Pub Date: 2011-10-22 DOI: 10.1109/FOCS.2011.99
Devavrat Shah, Jinwoo Shin, P. Tetali
Consider a wireless network of n nodes represented by an (undirected) graph G, where an edge (i,j) models the fact that transmissions of i and j interfere with each other, i.e. simultaneous transmissions of i and j are both unsuccessful. Hence it is required that at each time instant a set of non-interfering nodes (corresponding to an independent set in G) accesses the wireless medium. To utilize wireless resources efficiently, medium access must be properly arbitrated among interfering nodes. Moreover, to be of practical use, such a mechanism must be totally distributed as well as simple. As the main result of this paper, we provide such a medium access algorithm. It is randomized, totally distributed and simple: each node attempts to access the medium at each time with a probability that is a function of its local information. We establish efficiency of the algorithm by showing that the corresponding network Markov chain is positive recurrent as long as the demand imposed on the network can be supported by the wireless network (using any algorithm). In that sense, the proposed algorithm is optimal in terms of utilizing wireless resources. The algorithm is oblivious to the network graph structure, in contrast with the so-called polynomial back-off algorithm of Hastad-Leighton-Rogoff (STOC '87, SICOMP '96), which is established to be optimal for the complete graph and bipartite graphs (by Goldberg-MacKenzie (SODA '96, JCSS '99)).
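The slot-level dynamics can be sketched as follows; the attempt-probability function passed in is a placeholder for the queue-dependent function of the paper, not its exact choice:

```python
import random

def one_slot(graph, queues, attempt_prob):
    # graph: adjacency dict {node: [neighbors]}; queues: per-node
    # queue lengths. Each backlogged node independently attempts
    # transmission with a probability depending only on its own
    # (local) queue length.
    attempts = {v for v in graph
                if queues[v] > 0 and random.random() < attempt_prob(queues[v])}
    # A transmission succeeds only if no interfering neighbor also
    # attempts; attempting neighbors collide and both fail.
    successes = {v for v in attempts
                 if not any(u in attempts for u in graph[v])}
    for v in successes:
        queues[v] -= 1
    return successes
```

A plausible placeholder is `attempt_prob = lambda q: q / (q + 1.0)`, so nodes with longer queues attempt more aggressively while the decision stays purely local.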
Citations: 48
An FPTAS for #Knapsack and Related Counting Problems
Pub Date: 2011-10-22 DOI: 10.1109/FOCS.2011.32
Parikshit Gopalan, Adam R. Klivans, R. Meka, Daniel Stefankovic, S. Vempala, Eric Vigoda
Given $n$ elements with non-negative integer weights $w_1, \ldots, w_n$ and an integer capacity $C$, we consider the counting version of the classic knapsack problem: find the number of distinct subsets whose weights add up to at most $C$. We give the first deterministic, fully polynomial-time approximation scheme (FPTAS) for estimating the number of solutions to any knapsack constraint (our estimate has relative error $1 \pm \epsilon$). Our algorithm is based on dynamic programming. Previously, randomized polynomial-time approximation schemes (FPRAS) were known, first by Morris and Sinclair via Markov chain Monte Carlo techniques, and subsequently by Dyer via dynamic programming and rejection sampling. In addition, we present a new method for deterministic approximate counting using *read-once branching programs*. Our approach yields an FPTAS for several other counting problems, including counting solutions for the multidimensional knapsack problem with a constant number of constraints, the general integer knapsack problem, and the contingency tables problem with a constant number of rows.
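For contrast with the FPTAS, the exact count is computable by a pseudo-polynomial O(nC) dynamic program, and it is precisely this dependence on C that the approximation scheme avoids:

```python
def count_knapsack(weights, capacity):
    # Exact pseudo-polynomial DP: dp[c] = number of subsets of the
    # items seen so far whose total weight is exactly c. This exact
    # O(n*C) version only illustrates the counting problem itself;
    # the paper's FPTAS avoids the dependence on C.
    dp = [0] * (capacity + 1)
    dp[0] = 1  # the empty subset
    for w in weights:
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] += dp[c - w]
    return sum(dp)  # number of subsets of weight at most capacity
```

For weights [1, 2, 3] and capacity 3, the qualifying subsets are {}, {1}, {2}, {3}, and {1,2}, so the count is 5.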
Citations: 62
Evolution with Recombination
Pub Date: 2011-10-22 DOI: 10.1109/FOCS.2011.24
Varun Kanade
Valiant (2007) introduced a computational model of evolution and suggested that Darwinian evolution be studied in the framework of computational learning theory. Valiant describes evolution as a restricted form of learning where exploration is limited to a set of possible mutations and feedback is received through the survival of the fittest mutation. In subsequent work, Feldman (2008) showed that evolvability in Valiant's model is equivalent to learning in the correlational statistical query (CSQ) model. We extend Valiant's model to include genetic recombination and show that in certain cases, recombination can significantly speed up the process of evolution in terms of the number of generations, though at the expense of population size. This follows via a reduction from parallel-CSQ algorithms to evolution with recombination. This gives an exponential speed-up (in terms of the number of generations) over previously known results for evolving conjunctions and half spaces with respect to restricted distributions.
Citations: 45