
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS): Latest Publications

Exponentially-Hard Gap-CSP and Local PRG via Local Hardcore Functions
Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.82
B. Applebaum
The gap-ETH assumption (Dinur 2016; Manurangsi and Raghavendra 2016) asserts that it is exponentially-hard to distinguish between a satisfiable 3-CNF formula and a 3-CNF formula which is at most 0.99-satisfiable. We show that this assumption follows from the exponential hardness of finding a satisfying assignment for smooth 3-CNFs. Here smoothness means that the number of satisfying assignments is not much smaller than the number of almost-satisfying assignments. We further show that the latter (smooth-ETH) assumption follows from the exponential hardness of solving constraint satisfaction problems over well-studied distributions, and, more generally, from the existence of any exponentially-hard locally-computable one-way function. This confirms a conjecture of Dinur (ECCC 2016). We also prove an analogous result in the cryptographic setting. Namely, we show that the existence of an exponentially-hard locally-computable pseudorandom generator with linear stretch (el-PRG) follows from the existence of an exponentially-hard locally-computable almost-regular one-way function. Neither of the above assumptions (gap-ETH and el-PRG) was previously known to follow from the hardness of a search problem. Our results are based on a new construction of general (GL-type) hardcore functions that, for any exponentially-hard one-way function, output linearly many hardcore bits, can be computed locally, and consume only a linear number of random bits. We also show that such hardcore functions have several other useful applications in cryptography and complexity theory.
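To make the GL-type notion concrete, below is a minimal Python sketch of the classic single-bit Goldreich-Levin hardcore predicate, the inner product of the input x with a random string r over GF(2). This illustrates only the textbook one-bit predicate; the paper's contribution is a hardcore function that outputs linearly many bits, is locally computable, and uses only linearly many random bits, none of which this sketch attempts.

```python
import secrets

def gl_hardcore_bit(x: bytes, r: bytes) -> int:
    """Classic Goldreich-Levin hardcore bit: the inner product of the
    bit strings x and r over GF(2), i.e. the parity of x AND r."""
    assert len(x) == len(r)
    acc = 0
    for xb, rb in zip(x, r):
        acc ^= xb & rb          # XOR of byte-wise ANDs preserves overall parity
    return bin(acc).count("1") & 1

# For any one-way function f, the padded function g(x, r) = (f(x), r)
# has gl_hardcore_bit(x, r) as a hardcore bit.
x, r = secrets.token_bytes(16), secrets.token_bytes(16)
print(gl_hardcore_bit(x, r))
```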
Citations: 19
Faster (and Still Pretty Simple) Unbiased Estimators for Network (Un)reliability
Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.75
David R Karger
Consider the problem of estimating the (un)reliability of an n-vertex graph when edges fail with probability p. We show that the Recursive Contraction Algorithm for minimum cuts, essentially unchanged and running in n^{2+o(1)} time, yields an unbiased estimator of constant relative variance (and thus an FPRAS with the same time bound) whenever p^c …
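For context, here is a minimal Monte Carlo baseline for unreliability estimation (not the paper's algorithm): sample edge-failure patterns and check connectivity with a union-find. Its relative variance blows up when disconnection is a rare event, which is exactly the regime where contraction-based estimators are needed.

```python
import random

def connected(n: int, edges) -> bool:
    """Union-find connectivity check on vertices 0..n-1."""
    parent = list(range(n))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u
    comps = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return comps == 1

def unreliability_mc(n: int, edges, p: float, trials: int = 100_000) -> float:
    """Estimate the probability that the graph disconnects when each
    edge fails independently with probability p."""
    fails = sum(
        not connected(n, [e for e in edges if random.random() > p])
        for _ in range(trials)
    )
    return fails / trials

# 4-cycle with p = 0.1; exact unreliability is 1 - 0.9^4 - 4*0.1*0.9^3 ≈ 0.052
print(unreliability_mc(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 0.1))
```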
Citations: 6
Two-Round and Non-Interactive Concurrent Non-Malleable Commitments from Time-Lock Puzzles
Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.59
Huijia Lin, R. Pass, Pratik Soni
Non-malleable commitments are a fundamental cryptographic tool for protecting against (concurrent) man-in-the-middle attacks. Since their invention by Dolev, Dwork, and Naor in 1991, the round-complexity of non-malleable commitments has been extensively studied, leading up to constant-round concurrent non-malleable commitments based only on one-way functions, and even 3-round concurrent non-malleable commitments based on subexponential one-way functions. But constructions of two-round, or non-interactive, non-malleable commitments have so far remained elusive; the only known construction relied on a strong and non-falsifiable assumption with a non-malleability flavor. Additionally, a recent result by Pass shows the impossibility of basing two-round non-malleable commitments on falsifiable assumptions using a polynomial-time black-box security reduction. In this work, we show how to overcome this impossibility using super-polynomial-time hardness assumptions. Our main result demonstrates the existence of a two-round concurrent non-malleable commitment based on sub-exponential standard-type assumptions; notably, assuming the existence of the following primitives (all with subexponential security): (1) non-interactive commitments, (2) ZAPs (i.e., 2-round witness-indistinguishable proofs), (3) collision-resistant hash functions, and (4) a weak time-lock puzzle. Primitives (1), (2), and (3) can be based on, e.g., the discrete log assumption and the RSA assumption. Time-lock puzzles, i.e., puzzles that can be solved by brute force in time 2^t but cannot be solved significantly faster even using parallel computers, were proposed by Rivest, Shamir, and Wagner in 1996, and have been quite extensively studied since; the most popular instantiation relies on the assumption that 2^t repeated squarings mod N = pq require roughly 2^t parallel time. Our notion of a weak time-lock puzzle requires only that the puzzle cannot be solved in parallel time 2^{t^ε} (and thus we only need to rely on the relatively mild assumption that there are no huge improvements in the parallel complexity of repeated-squaring algorithms). We additionally show that if we replace assumption (2) with a non-interactive witness-indistinguishable proof (NIWI) and (3) with a uniform collision-resistant hash function, then a non-interactive (i.e., one-message) version of our protocol satisfies concurrent non-malleability w.r.t. uniform attackers.
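A toy sketch of the Rivest-Shamir-Wagner repeated-squaring puzzle that the paper's assumption (4) weakens, with deliberately tiny primes chosen here for illustration (real puzzles use large random primes):

```python
import random

def rsw_timelock_demo(t: int = 1000) -> None:
    """Rivest-Shamir-Wagner time-lock puzzle: computing x^(2^t) mod N is
    believed to require t sequential squarings without the factorization
    of N, while the puzzle creator shortcuts via phi(N)."""
    p, q = 1000003, 1000033                  # toy primes, far too small in practice
    N, phi = p * q, (p - 1) * (q - 1)
    x = random.randrange(2, N)               # assume gcd(x, N) = 1 (overwhelmingly likely)
    answer = pow(x, pow(2, t, phi), N)       # creator: reduce exponent 2^t mod phi(N)
    y = x
    for _ in range(t):                       # solver: t inherently sequential squarings
        y = y * y % N
    assert y == answer

rsw_timelock_demo()
```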
Citations: 49
Quantum Speed-Ups for Solving Semidefinite Programs
Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.45
F. Brandão, K. Svore
We give a quantum algorithm for solving semidefinite programs (SDPs). It has worst-case running time n^{1/2} m^{1/2} s^2 poly(log(n), log(m), R, r, 1/δ), with n and s the dimension and row-sparsity of the input matrices, respectively, m the number of constraints, δ the accuracy of the solution, and R and r upper bounds on the size of the optimal primal and dual solutions, respectively. This gives a square-root unconditional speed-up over any classical method for solving SDPs in both n and m. We prove the algorithm cannot be substantially improved (in terms of n and m), giving an Ω(n^{1/2} + m^{1/2}) quantum lower bound for solving semidefinite programs with constant s, R, r, and δ. The quantum algorithm is constructed by a combination of quantum Gibbs sampling and the multiplicative weight method. In particular, it is based on a classical algorithm of Arora and Kale for approximately solving SDPs. We present a modification of their algorithm to eliminate the need for solving an inner linear program, which may be of independent interest.
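The classical engine referred to here is the multiplicative-weights framework of Arora and Kale, whose core step maintains a Gibbs-state density matrix; the quantum speed-up comes from preparing that Gibbs state on a quantum computer rather than computing it classically. A minimal sketch of the classical update, assuming numpy and scipy are available:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

def mwu_density(losses, eta: float = 0.1) -> np.ndarray:
    """One matrix multiplicative-weights state: the Gibbs density matrix
    rho = exp(-eta * sum(losses)) / trace(...), which the Arora-Kale
    framework uses to propose candidate SDP solutions."""
    M = expm(-eta * sum(losses))
    return M / np.trace(M)

# Toy usage with two 2x2 symmetric loss matrices.
L1 = np.array([[1.0, 0.0], [0.0, 0.0]])
L2 = np.array([[0.0, 0.5], [0.5, 1.0]])
rho = mwu_density([L1, L2])
print(rho, np.trace(rho))  # positive semidefinite, trace 1
```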
Citations: 141
Optimal Interactive Coding for Insertions, Deletions, and Substitutions
Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.30
Alexander A. Sherstov, Pei Wu
Interactive coding, pioneered by Schulman (FOCS 92, STOC 93), is concerned with making communication protocols resilient to adversarial noise. The canonical model allows the adversary to alter a small constant fraction of symbols, chosen at the adversary's discretion, as they pass through the communication channel. Braverman, Gelles, Mao, and Ostrovsky (2015) proposed a far-reaching generalization of this model, whereby the adversary can additionally manipulate the channel by removing and inserting symbols. They showed how to faithfully simulate any protocol in this model with corruption rate up to 1/18, using a constant-size alphabet and a constant-factor overhead in communication. We give an optimal simulation of any protocol in this generalized model of substitutions, insertions, and deletions, tolerating a corruption rate up to 1/4 while keeping the alphabet to a constant size and the communication overhead to a constant factor. Our corruption tolerance matches an impossibility result for corruption rate 1/4, which holds even for substitutions alone (Braverman and Rao, STOC 11).
Citations: 20
Obfuscating Compute-and-Compare Programs under LWE
Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.61
D. Wichs, Giorgos Zirdelis
We show how to obfuscate a large and expressive class of programs, which we call compute-and-compare programs, under the learning-with-errors (LWE) assumption. Each such program CC[f,y] is parametrized by an arbitrary polynomial-time computable function f along with a target value y, and we define CC[f,y](x) to output 1 if f(x)=y and 0 otherwise. In other words, the program performs an arbitrary computation f and then compares its output against a target y. Our obfuscator satisfies distributional virtual-black-box security, which guarantees that the obfuscated program does not reveal any partial information about the function f or the target value y, as long as they are chosen from some distribution where y has sufficient pseudo-entropy given f. We also extend our result to multi-bit compute-and-compare programs MBCC[f,y,z](x), which output a message z if f(x)=y. Compute-and-compare programs are powerful enough to capture many interesting obfuscation tasks as special cases. This includes obfuscating conjunctions, and therefore we improve on the prior work of Brakerski et al. (ITCS 16), which constructed a conjunction obfuscator under a non-standard entropic ring-LWE assumption, while here we obfuscate a significantly broader class of programs under standard LWE. We show that our obfuscator has several interesting applications. For example, we can take any encryption scheme and publish an obfuscated plaintext equality tester that allows users to check whether a ciphertext decrypts to some target value y; as long as y has sufficient pseudo-entropy, this will not harm semantic security. We can also use our obfuscator to generically upgrade attribute-based encryption to predicate encryption with one-sided attribute-hiding security, and to upgrade witness encryption to indistinguishability obfuscation that is secure for all null circuits. Furthermore, we show that our obfuscator gives new circular-security counter-examples for public-key bit encryption and for unbounded-length key cycles. Our result uses the graph-induced multi-linear maps of Gentry, Gorbunov, and Halevi (TCC 15), but only in a carefully restricted manner that is provably secure under LWE. Our technique is inspired by ideas introduced in a recent work of Goyal, Koppula, and Waters (EUROCRYPT 17) in a seemingly unrelated context.
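To pin down the functionality (not the security notion), here is a toy sketch of CC[f, y] together with a salted-hash heuristic for hiding y. The hashing trick only conveys the input/output behavior; it does not provide the distributional virtual-black-box guarantee, which is what the paper's LWE-based construction achieves.

```python
import hashlib
import os

def obfuscate_cc(f, y: bytes):
    """Toy compute-and-compare program CC[f, y]: store a salted hash of
    the target y, so evaluation reveals whether f(x) == y without
    storing y in the clear. Illustrative only; no VBB security."""
    salt = os.urandom(16)
    tag = hashlib.sha256(salt + y).digest()
    def program(x) -> int:
        return 1 if hashlib.sha256(salt + f(x)).digest() == tag else 0
    return program

# Hypothetical f: any polynomial-time function with byte-string output.
f = lambda x: hashlib.md5(x).digest()
prog = obfuscate_cc(f, f(b"secret input"))
print(prog(b"secret input"), prog(b"other input"))  # 1 0
```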
Citations: 116
Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-and-Solve
Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.26
Amir Abboud, A. Backurs, K. Bringmann, Marvin Künnemann
Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size n of data that originally has size N, and we want to solve a problem with time complexity T(⋅). The naïve strategy of decompress-and-solve gives time T(N), whereas the gold standard is time T(n): to analyze the compression as efficiently as if the original data were small. We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (the Lempel-Ziv family, dictionary methods, and others) can be unified under the elegant notion of grammar-compressions. A vast literature, across many disciplines, established this as an influential notion for algorithm design. We introduce a direly needed framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are:
• The O(nN√log(N/n)) bound for LCS and the O(min(N log N, nM)) bound for Pattern Matching with Wildcards are optimal up to N^{o(1)} factors, under the Strong Exponential Time Hypothesis. (Here, M denotes the uncompressed length of the compressed pattern.)
• Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the k-Clique conjecture.
• We give an algorithm showing that decompress-and-solve is not optimal for Disjointness.
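A minimal sketch of grammar compression in the straight-line-program sense used above: each nonterminal has exactly one rule, so a grammar of size n can describe a string of length N exponentially larger, which is why avoiding decompress-and-solve can pay off so dramatically.

```python
def expand(grammar: dict, symbol: str) -> str:
    """Expand a straight-line program: every nonterminal derives exactly
    one rule, so the grammar describes a single string. Expansion can
    take time exponential in the grammar size, the cost this line of
    work tries to avoid."""
    if symbol not in grammar:
        return symbol  # terminal character
    return "".join(expand(grammar, s) for s in grammar[symbol])

# n = 4 rules describe a string of length N = 16; k rules of this
# doubling shape describe a string of length 2^k.
slp = {"S": ["A", "A"], "A": ["B", "B"], "B": ["C", "C"], "C": ["a", "b"]}
print(expand(slp, "S"))  # abababababababab
```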
Citations: 32
On Small-Depth Frege Proofs for Tseitin for Grids
J. Håstad
We prove a lower bound on the size of small-depth Frege refutations of the Tseitin contradiction on the grid. We conclude that polynomial-size refutations of this kind must use formulas of almost logarithmic depth.
Citations: 22
Variable-Version Lovász Local Lemma: Beyond Shearer's Bound
Pub Date : 2017-09-15 DOI: 10.1109/FOCS.2017.48
Kun He, Liangpan Li, Xingwu Liu, Yuyi Wang, Mingji Xia
A tight criterion under which the abstract version of the Lovász Local Lemma (abstract-LLL) holds was given by Shearer [41] decades ago. However, little is known about that of the variable version of the LLL (variable-LLL), where events are generated by independent random variables, even though variable-LLL naturally models, and is enough for, almost all applications of the LLL. We introduce a necessary and sufficient criterion for variable-LLL, in terms of the probabilities of the events and the event-variable graph specifying the dependency among the events. Based on this new criterion, we obtain boundaries for two families of event-variable graphs, namely cyclic and treelike bigraphs. These are the first two non-trivial cases where the variable-LLL boundary is fully determined. As a byproduct, we also provide a universal constructive method to find a set of events whose union has the maximum probability, given the probability vector and the event-variable graph. Though it is #P-hard in general to determine variable-LLL boundaries, we can to some extent decide whether a gap exists between a variable-LLL boundary and the corresponding abstract-LLL boundary. In particular, we show that the gap existence can be decided without solving Shearer's conditions or checking our variable-LLL criterion. Equipped with this powerful theorem, we show that there is no gap if the base graph of the event-variable graph is a tree, while a gap appears if the base graph has an induced cycle of length at least 4. The problem is almost completely solved except when the base graph has only 3-cliques, in which case we also get partial solutions. A set of reduction rules is established that makes it possible to infer the gap existence of an event-variable graph from known ones. As an application, various event-variable graphs, in particular combinatorial ones, are shown to be gapful/gapless.
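The variable setting described above is exactly the one in which the Moser-Tardos resampling algorithm operates; below is a minimal sketch, where each event depends on a few independent variables and a violated event is fixed by resampling only those variables.

```python
import random

def moser_tardos(n: int, events, resample, max_rounds: int = 10**6):
    """Moser-Tardos algorithm, the constructive counterpart of the
    variable-version LLL: draw all variables independently, then
    repeatedly resample the variables of some violated event. Under the
    LLL condition this terminates after expectedly few resamplings."""
    vals = [resample(i) for i in range(n)]
    for _ in range(max_rounds):
        bad = [ev for ev in events if ev["holds"](vals)]
        if not bad:
            return vals
        ev = random.choice(bad)
        for i in ev["vars"]:          # resample only this event's variables
            vals[i] = resample(i)
    raise RuntimeError("exceeded max_rounds")

# Toy usage: forbid any two consecutive uniform[0,1] variables both > 0.9.
n = 20
events = [{"vars": [i, i + 1],
           "holds": (lambda i: lambda v: v[i] > 0.9 and v[i + 1] > 0.9)(i)}
          for i in range(n - 1)]
vals = moser_tardos(n, events, lambda i: random.random())
print(all(not (vals[i] > 0.9 and vals[i + 1] > 0.9) for i in range(n - 1)))  # True
```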
Citations: 14
Learning Multi-Item Auctions with (or without) Samples
Pub Date : 2017-09-01 DOI: 10.1109/FOCS.2017.54
Yang Cai, C. Daskalakis
We provide algorithms that learn simple auctions whose revenue is approximately optimal in multi-item multi-bidder settings, for a wide range of bidder valuations including unit-demand, additive, constrained additive, XOS, and subadditive. We obtain our learning results in two settings. The first is the commonly studied setting where sample access to the bidders' distributions over valuations is given, for both regular distributions and arbitrary distributions with bounded support. Here, our algorithms require polynomially many samples in the number of items and bidders. The second is a more general max-min learning setting that we introduce, where we are given approximate distributions, and we seek to compute a mechanism whose revenue is approximately optimal simultaneously for all true distributions that are close to the ones we were given. These results are more general in that they imply the sample-based results, and are also applicable in settings where we have no sample access to the underlying distributions but have estimated them indirectly via market research or by observation of bidder behavior in previously run, potentially non-truthful auctions. All our results hold for valuation distributions satisfying the standard (and necessary) independence-across-items property. They also generalize and improve upon recent works of Goldner and Karlin and of Morgenstern and Roughgarden, which provided algorithms that learn approximately optimal multi-item mechanisms in more restricted settings with additive, subadditive, and unit-demand valuations using sample access to distributions. We generalize these results to the complete unit-demand, additive, and XOS settings, to i.i.d. subadditive bidders, and to the max-min setting. Our results are enabled by new uniform convergence bounds for hypothesis classes under product measures. Our bounds result in exponential savings in sample complexity compared to bounds derived by bounding the VC dimension, and are of independent interest.
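As a minimal single-item, single-bidder instance of the sample-access setting, the sketch below picks the posted price maximizing empirical revenue over valuation samples; the paper's multi-item, multi-bidder mechanisms are far richer, but the learning model is the same.

```python
import random

def empirical_price(samples):
    """Choose the posted price maximizing empirical revenue
    price * (fraction of sampled values >= price). A toy instance of
    learning a near-optimal mechanism from samples of the value
    distribution."""
    vals = sorted(samples)
    m = len(vals)
    best_price, best_rev = 0.0, 0.0
    for i, price in enumerate(vals):
        rev = price * (m - i) / m      # samples at indices >= i would buy
        if rev > best_rev:
            best_price, best_rev = price, rev
    return best_price, best_rev

# For uniform[0,1] values, the optimal monopoly price is 0.5 (revenue 0.25).
random.seed(0)
print(empirical_price([random.random() for _ in range(10000)]))
```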
Citations: 61