
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science: Latest Publications

Fully Homomorphic Encryption without Squashing Using Depth-3 Arithmetic Circuits
Pub Date: 2011-10-22 | DOI: 10.1109/FOCS.2011.94
Craig Gentry, S. Halevi
All previously known fully homomorphic encryption (FHE) schemes use Gentry's blueprint:
* SWHE: Construct a somewhat homomorphic encryption (SWHE) scheme -- roughly, an encryption scheme that can homomorphically evaluate polynomials up to some degree.
* Squash: "Squash" the decryption function of the SWHE scheme, so that the scheme can evaluate functions twice as complex (in terms of polynomial degree) as its own decryption function. Do this by adding a "hint" to the SWHE public key -- namely, a large set of vectors that has a secret sparse subset summing to the original secret key.
* Bootstrap: Given a SWHE scheme that can evaluate functions twice as complex as its decryption function, apply Gentry's transformation to get a "leveled" FHE scheme. To get "pure" (non-leveled) FHE, one assumes circular security.
Here, we describe a new blueprint for FHE. We show how to eliminate the squashing step, and thereby eliminate the need to assume that the sparse subset sum problem (SSSP) is hard, as all previous leveled FHE schemes have done. Using our new blueprint, we obtain the following results:
* A "simple" leveled FHE scheme where we replace SSSP with Decision Diffie-Hellman!
* The first leveled FHE scheme based entirely on worst-case hardness. Specifically, we give a leveled FHE scheme with security based on the shortest independent vector problem over ideal lattices (ideal-SIVP).
* Some efficiency improvements for FHE. While the new blueprint does not yet improve computational efficiency, it reduces ciphertext length.
As in the previous blueprint, we obtain pure FHE by assuming circular security. Our main technique is to express the decryption function of SWHE schemes as a depth-3 ($\sum\prod\sum$) arithmetic circuit. When we evaluate this decryption function homomorphically, we temporarily switch to a multiplicatively homomorphic encryption (MHE) scheme, such as ElGamal, to handle the $\prod$ part, after which we translate the result from the MHE scheme back to the SWHE scheme by evaluating the MHE scheme's decryption function within the SWHE scheme. The SWHE scheme only needs to be able to evaluate the MHE scheme's decryption function (plus minor operations), and does not need the self-referential property of being able to evaluate its own decryption function, a property that necessitated squashing in the original blueprint.
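Because the blueprint temporarily hands the $\prod$ part to a multiplicatively homomorphic scheme such as ElGamal, a minimal sketch of ElGamal's multiplicative homomorphism may help; the tiny modulus and generator below are toy illustrative parameters, not anything from the paper:

```python
import random

# Toy ElGamal over Z_p^* -- parameters far too small to be secure;
# they only illustrate the multiplicative homomorphism.
p, g = 2579, 2

def keygen():
    x = random.randrange(2, p - 1)                 # secret key
    return x, pow(g, x, p)                         # (sk, pk = g^x mod p)

def encrypt(pk, m):
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (m * pow(pk, r, p)) % p   # (g^r, m * pk^r)

def decrypt(sk, ct):
    c1, c2 = ct
    return (c2 * pow(c1, p - 1 - sk, p)) % p       # c2 / c1^sk, inverse via Fermat

def ct_mul(ct_a, ct_b):
    # Componentwise product of ciphertexts encrypts the product of plaintexts.
    return (ct_a[0] * ct_b[0] % p, ct_a[1] * ct_b[1] % p)

sk, pk = keygen()
product_ct = ct_mul(encrypt(pk, 12), encrypt(pk, 34))
assert decrypt(sk, product_ct) == (12 * 34) % p
```

It is exactly this componentwise-product property that lets the MHE scheme absorb the high-degree $\prod$ gate of the depth-3 circuit.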
{"title":"Fully Homomorphic Encryption without Squashing Using Depth-3 Arithmetic Circuits","authors":"Craig Gentry, S. Halevi","doi":"10.1109/FOCS.2011.94","DOIUrl":"https://doi.org/10.1109/FOCS.2011.94","url":null,"abstract":"All previously known fully homomorphic encryption (FHE) schemes use Gentry's blueprint:* SWHE: Construct a somewhat homomorphic encryption (SWHE) scheme -- roughly, an encryption scheme that can homomorphically evaluate polynomials up to some degree.* Squash: ``Squash\" the decryption function of the SWHE scheme, so that the scheme can evaluate functions twice as complex (in terms of polynomial degree) than its own decryption function. Do this by adding a ``hint \" to the SHWE public key -- namely, a large set of vectors that has a secret sparse subset that sums to the original secret key.* Bootstrap: Given a SWHE scheme that can evaluate functions twice as complex as its decryption function, apply Gentry's transformation to get a ``leveled\" FHE scheme. To get ``pure\" (non-leveled) FHE, one assumes circular security. Here, we describe a new blueprint for FHE. We show how to eliminate the squashing step, and thereby eliminate the need to assume that the sparse subset sum problem (SSSP) is hard, as all previous leveled FHE schemes have done. Using our new blueprint, we obtain the following results:* A ``simple\" leveled FHE scheme where we replace SSSP with Decision Diffie-Hellman!* The first leveled FHE scheme based entirely on worst-case hardness}. Specifically, we give a leveled FHE scheme with security based on the shortest independent vector problem over ideal lattices (ideal-SIVP).* Some efficiency improvements for FHE.} While the new blueprint does not yet improve computational efficiency, it reduces cipher text length. As in the previous blueprint, we obtain pure FHE by assuming circular security. Our main technique is to express the decryption function of SWHE schemes as a depth-3 ($sum prod sum$) arithmetic circuit. When we evaluate this decryption function homomorphically, we temporarily switch to a multiplicatively homomorphic encryption (MHE) scheme, such as Elgamal, to handle the $prod$ part, after which we translate the result from the MHE scheme back to the SWHE scheme by evaluating the MHE scheme's decryption function within the SWHE scheme. The SWHE scheme only needs to be able to evaluate the MHE scheme's decryption function (plus minor operations), and does not need to have the self-referential property of being able to evaluate its {em own} decryption function, a property that necessitated squashing in the original blueprint.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129250151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 219
Welfare and Profit Maximization with Production Costs
Pub Date: 2011-10-22 | DOI: 10.1109/FOCS.2011.68
Avrim Blum, Anupam Gupta, Y. Mansour, Ankit Sharma
Combinatorial Auctions are a central problem in Algorithmic Mechanism Design: pricing and allocating goods to buyers with complex preferences in order to maximize some desired objective (e.g., social welfare, revenue, or profit). The problem has been well-studied in the case of limited supply (one copy of each item), and in the case of digital goods (the seller can produce additional copies at no cost). Yet in the case of resources -- oil, labor, computing cycles, etc. -- neither of these abstractions is just right: additional supplies of these resources can be found, but at increasing difficulty (marginal cost) as resources are depleted. In this work, we initiate the study of the algorithmic mechanism design problem of combinatorial pricing under increasing marginal cost. The goal is to sell these goods to buyers with unknown and arbitrary combinatorial valuation functions so as to maximize either the social welfare or the seller's profit; specifically, we focus on the setting of posted item prices with buyers arriving online. We give algorithms that achieve constant-factor approximations for a class of natural cost functions -- linear, low-degree polynomial, logarithmic -- and that give logarithmic approximations for more general increasing marginal cost functions (along with a necessary additive loss). We show that these bounds are essentially the best possible for these settings.
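As a toy illustration of the setting (not the paper's mechanism), the sketch below simulates posted item prices against online buyers under an increasing marginal cost curve; the buyers are simplified to single-item valuations, and the doubling price rule is a hypothetical choice:

```python
from typing import Callable, List

def posted_price_profit(
    marginal_cost: Callable[[int], float],  # cost of producing the k-th copy
    price_rule: Callable[[int], float],     # posted price for the k-th copy (hypothetical rule)
    buyer_values: List[float],              # online arrivals; single-item valuations for simplicity
) -> float:
    """Each arriving buyer purchases iff her value covers the current posted price."""
    profit, k = 0.0, 1
    for v in buyer_values:
        price = price_rule(k)
        if v >= price:
            profit += price - marginal_cost(k)
            k += 1                          # the next copy is more expensive to produce
    return profit

# Linear marginal cost c(k) = 0.1k with a doubling markup as the posted price.
print(posted_price_profit(lambda k: 0.1 * k, lambda k: 0.2 * k, [0.5, 1.0, 0.3, 2.0]))
```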
Citations: 52
The Power of Linear Estimators
Pub Date: 2011-10-22 | DOI: 10.1109/FOCS.2011.81
G. Valiant, Paul Valiant
For a broad class of practically relevant distribution properties, which includes entropy and support size, nearly all of the proposed estimators have an especially simple form. Given a set of independent samples from a discrete distribution, these estimators tally the vector of summary statistics -- the number of domain elements seen once, twice, etc. in the sample -- and output the dot product between these summary statistics and a fixed vector of coefficients. We term such estimators linear. This historical proclivity towards linear estimators is slightly perplexing since, despite many efforts over nearly 60 years, all proposed such estimators have significantly suboptimal convergence compared to the bounds shown in [VV11]. Our main result, in some sense vindicating this insistence on linear estimators, is that for any property in this broad class, there exists a near-optimal linear estimator. Additionally, we give a practical and polynomial-time algorithm for constructing such estimators for any given parameters. While this result does not yield explicit bounds on the sample complexities of these estimation tasks, we leverage the insights provided by this result to give explicit constructions of near-optimal linear estimators for three properties: entropy, $L_1$ distance to uniformity, and, for pairs of distributions, $L_1$ distance. Our entropy estimator, when given $O(\frac{n}{\epsilon \log n})$ independent samples from a distribution of support at most $n$, will estimate the entropy of the distribution to within additive accuracy $\epsilon$, with probability of failure $o(1/\mathrm{poly}(n))$. Given the recent lower bounds of [VV11], this estimator is optimal, up to constant factors, both in its dependence on $n$ and in its dependence on $\epsilon$. In particular, the inverse-linear convergence rate of this estimator resolves the main open question of [VV11], which left open the possibility that the error decreased only with the square root of the number of samples. Our distance-to-uniformity estimator, when given $O(\frac{m}{\epsilon^2 \log m})$ independent samples from any distribution, returns an $\epsilon$-accurate estimate of the $L_1$ distance to the uniform distribution of support $m$. This is constant-factor optimal for constant $\epsilon$. Finally, our framework extends naturally to properties of pairs of distributions, including estimating the $L_1$ distance and KL-divergence between pairs of distributions. We give an explicit linear estimator that estimates $L_1$ distance to additive accuracy $\epsilon$ using $O(\frac{n}{\epsilon^2 \log n})$ samples from each distribution, which is constant-factor optimal for constant $\epsilon$. This is the first sublinear-sample estimator for this fundamental property.
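To make the form of a linear estimator concrete, the sketch below tallies the fingerprint (the summary-statistics vector) of a sample and takes its dot product with a coefficient vector. The plug-in entropy coefficients shown are the classical textbook choice, used here only to illustrate the linear form; the paper's near-optimal estimators use a different, carefully constructed coefficient vector:

```python
import math
from collections import Counter

def fingerprint(samples):
    """F[i] = number of distinct domain elements seen exactly i times."""
    counts = Counter(samples)           # element -> multiplicity
    return Counter(counts.values())     # multiplicity -> number of elements

def linear_estimate(samples, coeff):
    """A linear estimator: dot product of the fingerprint with fixed coefficients."""
    F = fingerprint(samples)
    return sum(F[i] * coeff(i) for i in F)

samples = ["a", "b", "a", "c", "a", "b", "d"]
n = len(samples)

# Plug-in (empirical) entropy, written as a linear estimator: the
# coefficient of F_i is -(i/n) * log(i/n).
plug_in_entropy = linear_estimate(samples, lambda i: -(i / n) * math.log(i / n))

# Support size is also linear: every multiplicity gets coefficient 1.
support_size = linear_estimate(samples, lambda i: 1)
print(plug_in_entropy, support_size)
```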
Citations: 148
Mutual Exclusion with O(log^2 log n) Amortized Work
Pub Date: 2011-10-22 | DOI: 10.1109/FOCS.2011.84
M. A. Bender, Seth Gilbert
This paper presents a new algorithm for mutual exclusion in which each passage through the critical section costs amortized O(log^2 log n) RMRs with high probability. The algorithm operates in a standard asynchronous, local spinning, shared memory model with an oblivious adversary. It guarantees that every process enters the critical section with high probability. The algorithm achieves its efficient performance by exploiting a connection between mutual exclusion and approximate counting.
Citations: 30
A Two Prover One Round Game with Strong Soundness
Pub Date: 2011-10-22 | DOI: 10.4086/toc.2013.v009a028
Subhash Khot, S. Safra
We show that for any fixed prime $q \geq 5$ and constant $\zeta > 0$, it is NP-hard to distinguish whether a two-prover one-round game with $q^6$ answers has value at least $1-\zeta$ or at most $\frac{4}{q}$. The result is obtained by combining two techniques: (i) an Inner PCP based on the point versus subspace test for linear functions, which is analyzed Fourier-analytically; (ii) an Outer/Inner PCP composition that relies on a certain sub-code covering property for Hadamard codes. This is a new and essentially black-box method for translating a codeword test for Hadamard codes into a consistency test, leading to a full PCP construction. As an application, we show that unless NP has quasi-polynomial time deterministic algorithms, the Quadratic Programming Problem is inapproximable within factor $(\log n)^{1/6 - o(1)}$.
Citations: 25
Green Computing Algorithmics
Pub Date: 2011-10-22 | DOI: 10.1109/FOCS.2011.44
K. Pruhs
The converging trends of society's desire/need for more sustainable technologies, exponentially increasing power densities within computing devices, and exponentially more computing devices, have inevitably pushed power and energy management into the forefront of computing design and management for purely economic reasons. Thus we are in the midst of a green computing revolution involving the redesign of information technology hardware and software at all levels of the information technology stack. This revolution has spawned a multitude of technological challenges, many of which are algorithmic in nature. We provide pointers into the literature on the green computing algorithmics.
Citations: 17
Tight Lower Bounds for 2-query LCCs over Finite Fields
Pub Date: 2011-10-22 | DOI: 10.1109/FOCS.2011.28
Arnab Bhattacharyya, Zeev Dvir, Amir Shpilka, Shubhangi Saraf
A Locally Correctable Code (LCC) is an error-correcting code that has a probabilistic self-correcting algorithm that, with high probability, can correct any coordinate of the codeword by looking at only a few other coordinates, even if a fraction $\delta$ of the coordinates are corrupted. LCCs are a stronger form of LDCs (Locally Decodable Codes), which have received a lot of attention recently due to their many applications and surprising constructions. In this work we show a separation between 2-query LDCs and LCCs over finite fields of prime order. Specifically, we prove a lower bound of the form $p^{\Omega(\delta d)}$ on the length of linear 2-query LCCs over $F_p$ that encode messages of length $d$. Our bound improves over the known bound of $2^{\Omega(\delta d)}$ [GKST06, KdW04, DS07], which is tight for LDCs. Our proof makes use of tools from additive combinatorics which have played an important role in several recent results in theoretical computer science. Corollaries of our main theorem are new incidence geometry results over finite fields. The first is an improvement to the Sylvester-Gallai theorem over finite fields [SS10] and the second is a new analog of Beck's theorem over finite fields.
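For readers unfamiliar with the definition, the Hadamard code is the canonical linear 2-query LCC, and the sketch below illustrates its two-query self-correction. This is purely definitional background, not part of the paper's lower-bound argument:

```python
import random

def hadamard_encode(msg_bits):
    """Codeword bit at position x is <m, x> mod 2, for every x in {0,1}^d."""
    d = len(msg_bits)
    return [sum(m & ((x >> j) & 1) for j, m in enumerate(msg_bits)) % 2
            for x in range(2 ** d)]

def locally_correct(word, x, trials=25):
    """Recover coordinate x with 2 queries per trial: for linear f we have
    f(x) = f(y) xor f(x ^ y); a majority vote over random y tolerates a
    small fraction of corrupted coordinates."""
    n = len(word)
    votes = sum(word[y] ^ word[x ^ y]
                for y in (random.randrange(n) for _ in range(trials)))
    return int(2 * votes > trials)

random.seed(0)                        # deterministic demo
code = hadamard_encode([1, 0, 1, 1])  # d = 4, codeword length 16
true_bit = code[5]
code[3] ^= 1                          # corrupt one coordinate
print(locally_correct(code, 5) == true_bit)  # True, w.h.p. over the queries
```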
Citations: 19
The 1D Area Law and the Complexity of Quantum States: A Combinatorial Approach
Pub Date: 2011-10-22 | DOI: 10.1109/FOCS.2011.91
D. Aharonov, I. Arad, Zeph Landau, U. Vazirani
The classical description of quantum states is in general exponential in the number of qubits. Can we get polynomial descriptions for more restricted sets of states, such as ground states of interesting subclasses of local Hamiltonians? This is the basic problem in the study of the complexity of ground states, and requires an understanding of multi-particle entanglement and quantum correlations in such states. Area laws provide a fundamental ingredient in the study of the complexity of ground states, since they offer a way to bound in a quantitative way the entanglement in such states. Although they have long been conjectured for many-body systems in arbitrary dimensions, a general rigorous proof was only recently given, in Hastings' seminal paper [Has07], for 1D systems. In this paper, we give a combinatorial proof of the 1D area law for the special case of frustration-free systems, improving by an exponential factor the scaling in terms of the inverse spectral gap and the dimensionality of the particles. The scaling in terms of the dimension of the particles is a potentially important issue in the context of resolving the 2D case and higher dimensions, which is one of the most important open questions in Hamiltonian complexity. Our proof is based on a reformulation of the detectability lemma, introduced by us in the context of quantum gap amplification [Aha09b]. We give an alternative proof of the detectability lemma which is not only simpler and more intuitive than the original proof, but also removes a key restriction in the original statement, making it more suitable for this new context. We also give a one-page proof of Hastings' result that the correlations in the ground states of gapped Hamiltonians decay exponentially with the distance, demonstrating the simplicity of the combinatorial approach for those problems.
Citations: 16
Which Networks are Least Susceptible to Cascading Failures?
Pub Date: 2011-10-22 | DOI: 10.1109/FOCS.2011.38
L. Blume, D. Easley, J. Kleinberg, Robert D. Kleinberg, É. Tardos
The spread of a cascading failure through a network is an issue that comes up in many domains: in the contagious failures that spread among financial institutions during a financial crisis, through nodes of a power grid or communication network during a widespread outage, or through a human population during the outbreak of an epidemic disease. Here we study a natural model of threshold contagion: each node is assigned a numerical threshold drawn independently from an underlying distribution, and it will fail as soon as its number of failed neighbors reaches this threshold. Despite the simplicity of the formulation, it has been very challenging to analyze the failure processes that arise from arbitrary threshold distributions; even qualitative questions concerning which graphs are the most resilient to cascading failures in these models have been difficult to resolve. Here we develop a set of new techniques for analyzing the failure probabilities of nodes in arbitrary graphs under this model, and we compare different graphs according to the maximum failure probability of any node in the graph when thresholds are drawn from a given distribution. We find that the space of threshold distributions has a surprisingly rich structure when we consider the risk that these thresholds induce on different graphs: small shifts in the distribution of the thresholds can favor graphs with a maximally clustered structure (i.e., cliques), those with a maximally branching structure (trees), or even intermediate hybrids.
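A short simulation may make the threshold-contagion model concrete; the graph, seed set, and threshold distribution below are arbitrary illustrative choices, not the distributions analyzed in the paper:

```python
import random

def cascade(adj, thresholds, seeds):
    """Run threshold contagion: a node fails as soon as the number of its
    failed neighbors reaches its threshold; return the set of failed nodes."""
    failed = set(seeds)
    frontier = list(seeds)
    failed_nbrs = {v: 0 for v in adj}
    while frontier:
        u = frontier.pop()
        for v in adj[u]:
            if v in failed:
                continue
            failed_nbrs[v] += 1
            if failed_nbrs[v] >= thresholds[v]:
                failed.add(v)
                frontier.append(v)
    return failed

random.seed(0)
clique = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}  # maximally clustered graph
thresholds = {v: random.choice([1, 2]) for v in clique}            # i.i.d. thresholds
print(cascade(clique, thresholds, seeds={0}))
```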
Citations: 101
Lasserre Hierarchy, Higher Eigenvalues, and Approximation Schemes for Graph Partitioning and Quadratic Integer Programming with PSD Objectives
Pub Date: 2011-10-22 | DOI: 10.1109/FOCS.2011.36
V. Guruswami, A. Sinop
We present an approximation scheme for optimizing certain Quadratic Integer Programming problems with positive semidefinite objective functions and global linear constraints. This framework includes well-known graph problems such as Minimum Graph Bisection, Edge Expansion, Uniform Sparsest Cut, and Small Set Expansion, as well as the Unique Games problem. These problems are notorious for the existence of huge gaps between the known algorithmic results and NP-hardness results. Our algorithm is based on rounding semidefinite programs from the Lasserre hierarchy, and the analysis uses bounds for low-rank approximations of a matrix in Frobenius norm using columns of the matrix. For all the above graph problems, we give an algorithm running in time $n^{O(r/\epsilon^2)}$ with approximation ratio $\frac{1+\epsilon}{\min\{1,\lambda_r\}}$, where $\lambda_r$ is the $r$-th smallest eigenvalue of the normalized graph Laplacian $L_{\mathrm{norm}}$. In the case of graph bisection and small set expansion, the number of vertices in the cut is within lower-order terms of the stipulated bound. Our results imply a $(1+O(\epsilon))$-factor approximation in time $n^{O(r^*/\epsilon^2)}$, where $r^*$ is the number of eigenvalues of $L_{\mathrm{norm}}$ smaller than $1-\epsilon$. This perhaps gives some indication as to why even showing mere APX-hardness for these problems has been elusive, since the reduction must produce graphs with a slowly growing spectrum (and classes like planar graphs, which are known to have such a spectral property, often admit good algorithms owing to their nice structure). For Unique Games, we give a factor-$(1+\frac{2+\epsilon}{\lambda_r})$ approximation for minimizing the number of unsatisfied constraints in $n^{O(r/\epsilon)}$ time. This improves an earlier bound for solving Unique Games on expanders, and also shows that Lasserre SDPs are powerful enough to solve well-known integrality gap instances for the basic SDP. We also give an algorithm for independent sets in graphs that performs well when the Laplacian does not have too many eigenvalues bigger than $1+o(1)$.
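Since the guarantee is governed by $\lambda_r$, the $r$-th smallest eigenvalue of the normalized Laplacian, a short numpy sketch (assuming a graph with no isolated vertices) shows how these quantities are computed for a concrete graph:

```python
import numpy as np

def normalized_laplacian_eigenvalues(A):
    """Eigenvalues of L_norm = I - D^{-1/2} A D^{-1/2}, sorted ascending."""
    A = np.asarray(A, dtype=float)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    L = np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt
    return np.sort(np.linalg.eigvalsh(L))

# A 6-cycle: its spectrum grows slowly, so a useful lambda_r needs larger r;
# an expander would already have lambda_2 bounded away from 0.
n = 6
cycle = [[1 if (i - j) % n in (1, n - 1) else 0 for j in range(n)] for i in range(n)]
print(normalized_laplacian_eigenvalues(cycle))  # lambda_1 = 0, then 1 - cos(2*pi*k/n)
```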
Citations: 107