
Latest publications: 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)

On Learning Mixtures of Well-Separated Gaussians
Pub Date : 2017-10-31 DOI: 10.1109/FOCS.2017.17
O. Regev, Aravindan Vijayaraghavan
We consider the problem of efficiently learning mixtures of a large number of spherical Gaussians, when the components of the mixture are well separated. In the most basic form of this problem, we are given samples from a uniform mixture of k standard spherical Gaussians with means μ_1, ..., μ_k in R^d, and the goal is to estimate the means up to accuracy δ using poly(k, d, 1/δ) samples. In this work, we study the following question: what is the minimum separation needed between the means for solving this task? The best known algorithm due to Vempala and Wang [JCSS 2004] requires a separation of roughly min{k, d}^{1/4}. On the other hand, Moitra and Valiant [FOCS 2010] showed that with separation o(1), exponentially many samples are required. We address the significant gap between these two bounds by showing the following results.
• We show that with separation o(√(log k)), super-polynomially many samples are required. In fact, this holds even when the k means of the Gaussians are picked at random in d = O(log k) dimensions.
• We show that with separation Ω(√(log k)), poly(k, d, 1/δ) samples suffice. Notice that the bound on the separation is independent of δ. This result is based on a new and efficient accuracy boosting algorithm that takes as input coarse estimates of the true means and, in time (and samples) poly(k, d, 1/δ), outputs estimates of the means up to arbitrarily good accuracy δ, assuming the separation between the means is Ω(min(√(log k), √d)) (independently of δ). The idea of the algorithm is to iteratively solve a diagonally dominant system of non-linear equations.
We also (1) present a computationally efficient algorithm in d = O(1) dimensions with only Ω(√d) separation, and (2) extend our results to the case where components may have different weights and variances. These results together essentially characterize the optimal order of separation between components that is needed to learn a mixture of k spherical Gaussians with polynomial samples.
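The accuracy-boosting step above refines coarse mean estimates into arbitrarily accurate ones. A minimal numerical sketch of that refine-from-a-warm-start idea, using plain nearest-mean re-averaging (Lloyd-style iterations, not the paper's diagonally dominant nonlinear system; all constants here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
k, d, n = 4, 10, 4000

# Well-separated means: pairwise distances far exceed the unit
# component standard deviation, so clusters barely overlap.
means = rng.normal(size=(k, d)) * 10.0
labels = rng.integers(0, k, size=n)
samples = means[labels] + rng.normal(size=(n, d))

# Coarse estimates: true means plus noise (a stand-in for any warm start).
est = means + rng.normal(scale=0.5, size=(k, d))

# Boosting loop: assign each sample to its nearest current estimate,
# then re-average. With this much separation the assignment is
# essentially exact, so each round sharpens the estimates.
for _ in range(10):
    dist = ((samples[:, None, :] - est[None, :, :]) ** 2).sum(axis=2)
    assign = dist.argmin(axis=1)
    est = np.array([samples[assign == j].mean(axis=0) for j in range(k)])

err = max(np.linalg.norm(est[j] - means[j]) for j in range(k))
```

With roughly n/k samples per component, the final error is dominated by sampling noise, far below the 0.5 warm-start error.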
Citations: 67
How to Achieve Non-Malleability in One or Two Rounds
Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.58
Dakshita Khurana, A. Sahai
Non-malleable commitments, introduced by Dolev, Dwork and Naor (STOC 1991), are a fundamental cryptographic primitive, and their round complexity has been a subject of great interest. And yet, the goal of achieving non-malleable commitments with only one or two rounds has been elusive. Pass (TCC 2013) captured this difficulty by proving important impossibility results regarding two-round non-malleable commitments. This led to the widespread belief that achieving two-round non-malleable commitments was impossible from standard assumptions. We show that this belief was false. Indeed, we obtain the following positive results:
∘ We construct two-message non-malleable commitments satisfying non-malleability with respect to commitment, based on standard sub-exponential assumptions, namely: sub-exponential one-way permutations, sub-exponential ZAPs, and sub-exponential DDH. Furthermore, our protocol is public-coin.
∘ We obtain two-message private-coin non-malleable commitments with respect to commitment, assuming only sub-exponential DDH or QR or N-th residuosity.
∘ We bootstrap the above protocols (under the same assumptions) to obtain two-round constant bounded-concurrent non-malleable commitments. In the simultaneous message model, we obtain unbounded concurrent non-malleability in two rounds.
∘ In the simultaneous messages model, we obtain one-round non-malleable commitments, with unbounded concurrent security with respect to opening, under standard sub-exponential assumptions.
– This implies non-interactive non-malleable commitments with respect to opening, in a restricted model with a broadcast channel and a-priori bounded polynomially many parties, such that every party is aware of every other party in the system. To the best of our knowledge, this is the first protocol to achieve completely non-interactive non-malleability in any plain-model setting from standard assumptions.
– As an application of this result, in the simultaneous exchange model, we obtain two-round multi-party pseudorandom coin-flipping.
∘ We construct two-message zero-knowledge arguments with super-polynomial strong simulation (SPSS-ZK), which also serve as an important tool for our constructions of non-malleable commitments.
∘ In order to obtain our results, we develop several techniques that may be of independent interest.
– We give the first two-round black-box rewinding strategy based on standard sub-exponential assumptions, in the plain model.
– We also give a two-round tag amplification technique for non-malleable commitments that amplifies a 4-tag scheme to a scheme for all tags, while relying on sub-exponential DDH. This includes a more efficient alternative to the DDN encoding.
The full version of this paper is available online at: https://eprint.iacr.org/2017/291.pdf.
Citations: 41
Boolean Unateness Testing with Õ(n^{3/4}) Adaptive Queries
Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.85
Xi Chen, Erik Waingarten, Jinyu Xie
We give an adaptive algorithm that tests whether an unknown Boolean function f: {0,1}^n → {0,1} is unate (i.e., every variable of f is either non-decreasing or non-increasing) or ε-far from unate, with one-sided error and Õ(n^{3/4}/ε^2) many queries. This improves on the best adaptive O(n/ε)-query algorithm from Baleshzar, Chakrabarty, Pallavoor, Raskhodnikova and Seshadhri when 1/ε ≪ n^{1/4}.
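The property being tested can be made concrete with a simple edge sampler (our own illustration, not the paper's Õ(n^{3/4}) adaptive algorithm): a coordinate witnessed both increasing and decreasing certifies non-unateness, which is exactly what gives one-sided error.

```python
import random

def unateness_violation(f, n, queries=2000, rng=random.Random(0)):
    # Sample hypercube edges in random directions; record for each
    # coordinate whether f was ever seen to increase and to decrease
    # along it. Both together certify that f is not unate.
    up, down = [False] * n, [False] * n
    for _ in range(queries):
        i = rng.randrange(n)
        x = [rng.randrange(2) for _ in range(n)]
        y = x[:]
        x[i], y[i] = 0, 1          # the edge (x, y) in direction i
        fx, fy = f(x), f(y)
        if fy > fx:
            up[i] = True
        if fy < fx:
            down[i] = True
        if up[i] and down[i]:
            return True            # certificate found: reject
    return False                   # never rejects a unate f
```

For example, AND is monotone (hence unate) and is never rejected, while XOR has coordinates that both increase and decrease and is caught almost surely.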
Citations: 13
A Rounds vs. Communication Tradeoff for Multi-Party Set Disjointness
Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.22
M. Braverman, R. Oshman
In the set disjointness problem, we have k players, each with a private input X^i ⊆ [n], and the goal is for the players to determine whether or not their sets have a global intersection. The players communicate over a shared blackboard, and we charge them for each bit that they write on the board. We study the trade-off between the number of interaction rounds we allow the players and the total number of bits they must send to solve set disjointness. We show that if R rounds of interaction are allowed, the communication cost is Ω(nk^{1/R}/R^4), which is nearly tight. We also leverage our proof to show that welfare maximization with unit-demand bidders cannot be solved efficiently in a small number of rounds: here, we have k players bidding on n items, and the goal is to find a matching between items and players that bid on them which approximately maximizes the total number of items assigned. It was previously shown by Alon et al. that Ω(log log k) rounds of interaction are required to find an assignment which achieves a constant approximation to the maximum-welfare assignment, even if each player is allowed to write n^{ε(R)} bits on the board in each round, where ε(R) = exp(-R). We improve this lower bound to Ω(log k / log log k), which is known to be tight up to a log log k factor.
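As a baseline for the costs traded off above, the trivial one-round blackboard protocol has every player post the full characteristic vector of its set, for k·n bits total; the lower bound says that with few rounds one cannot do dramatically better. A small sketch (function name ours):

```python
def naive_disjointness(sets, n):
    # One-round blackboard protocol: player i writes the n-bit
    # characteristic vector of X^i, so the total cost is k*n bits.
    board = [[1 if e in s else 0 for e in range(n)] for s in sets]
    cost = len(sets) * n
    # The sets share a common element iff some column is all ones.
    nonempty_intersection = any(all(row[e] for row in board) for e in range(n))
    return nonempty_intersection, cost
```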
Citations: 13
An Input Sensitive Online Algorithm for the Metric Bipartite Matching Problem
Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.53
K. Nayyar, S. Raghvendra
We present a novel input-sensitive analysis of a deterministic online algorithm [Raghvendra, APPROX 2016] for the minimum metric bipartite matching problem. We show that, in the adversarial model, for any metric space M and any set S of n servers, the competitive ratio of this algorithm is O(μ_M(S) log^2 n); here μ_M(S) is the maximum, over subsets of S, of the ratio of the length of the traveling salesman tour of the subset to its diameter. It is straightforward to show that any algorithm, even with complete knowledge of M and S, will have a competitive ratio of Ω(μ_M(S)). So, the performance of this algorithm is sensitive to the input and near-optimal for any given S and M. As consequences, we also achieve the following results:
• If S is a set of points on a line, then μ_M(S) = Θ(1) and the competitive ratio is O(log^2 n), and,
• If S is a set of points spanning a subspace with doubling dimension d, then μ_M(S) = O(n^{1-1/d}) and the competitive ratio is O(n^{1-1/d} log^2 n).
Prior to this result, the previous best-known algorithm for the line metric has a competitive ratio of O(n^{0.59}) and requires both S and the request set R to be on a line. There is also an O(log n)-competitive algorithm in the weaker oblivious adversary model. To obtain our results, we partition the requests into well-separated clusters and replace each cluster with a small and a large weighted ball; the weight of a ball is the number of requests in the cluster. We show that the cost of the online matching can be expressed as the sum of the weight times radius of the smaller balls. We also show that the cost of the edges of the optimal matching inside each larger ball is proportional to the weight times the radius of the larger ball. We then use a simple variant of the well-known Vitali covering lemma to relate the radii of these balls and obtain the competitive ratio.
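To see why online matching is harder than offline, consider a naive greedy heuristic on the line metric (this is NOT the algorithm analyzed above, and greedy's competitive ratio is known to be much worse; it is only a baseline for intuition):

```python
def greedy_online_cost(servers, requests):
    # Baseline heuristic: each arriving request is matched to the
    # nearest still-free server.
    free = list(servers)
    cost = 0
    for r in requests:
        s = min(free, key=lambda x: abs(x - r))
        free.remove(s)
        cost += abs(s - r)
    return cost

def offline_opt_cost(servers, requests):
    # On the line, an optimal offline matching pairs the i-th smallest
    # request with the i-th smallest server.
    return sum(abs(s - r) for s, r in zip(sorted(servers), sorted(requests)))
```

On servers [1, 2, 4, 8] with requests arriving as 2, 3, 5, 9, greedy pays 12 while the offline optimum pays 4: early requests grab the convenient servers and later ones pay for it, which is exactly the adversarial gap a competitive ratio measures.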
Citations: 43
Fast & Space-Efficient Approximations of Language Edit Distance and RNA Folding: An Amnesic Dynamic Programming Approach
Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.35
B. Saha
Dynamic programming is a basic, and one of the most systematic, techniques for developing polynomial time algorithms, with overwhelming applications. However, it often suffers from high running time and space complexity due to (a) maintaining a table of solutions for a large number of sub-instances, and (b) combining/comparing these solutions to successively solve larger sub-instances. In this paper, we consider a canonical cubic-time and quadratic-space dynamic programming, and show how improvements in both its time and space use are possible. As a result, we obtain fast small-space approximation algorithms for the fundamental problems of context-free grammar recognition (the basic computer science problem of parsing), the language edit distance (a significant generalization of string edit distance and parsing), and RNA folding (a classical problem in bioinformatics). For these problems, ours are the first algorithms that break the cubic-time barrier of any combinatorial algorithm, and the quadratic-space barrier of any algorithm, significantly improving upon their long-standing time and space complexities. Our technique applies to many other problems as well, including string edit distance computation and finding the longest increasing subsequence. Our improvements come from directly grinding the dynamic programming and looking through the lens of language edit distance, which generalizes both context-free grammar recognition and RNA folding. From known conditional lower bound results, neither of these problems can have an exact combinatorial algorithm (one that does not use fast matrix multiplication) running in truly subcubic time. Moreover, for language edit distance such an algorithm cannot exist even when nontrivial multiplicative approximation is allowed.
We overcome this hurdle by designing an additive-approximation algorithm that, for any parameter k > 0, uses O(nk log n) space and O(n^2 k log n) time and provides an additive O((n/k) log n)-approximation. In particular, in Õ(n) space and Õ(n^2) time it can solve deterministically whether a string belongs to a context-free grammar, or is ε-far from it, for any constant ε > 0. We also improve the above results to obtain an algorithm that outputs an ε·n-additive approximation to the above problems with space complexity O(n^{2/3} log n). The space complexity remains sublinear in n as long as ε = o(n^{-1/4}). Moreover, we provide the first MapReduce and streaming algorithms for them, with multiple passes and sublinear space complexity.
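The canonical cubic-time, quadratic-space dynamic program the abstract starts from is exemplified by CYK recognition of a context-free grammar in Chomsky normal form: an O(n^2)-size table of nonterminal sets, filled in O(n^3) time. A compact sketch, with a CNF grammar for {a^n b^n : n ≥ 1} chosen purely for illustration:

```python
from itertools import product

def cyk(word, unary, binary, start="S"):
    # Standard CYK recognizer for a CNF grammar: `unary` maps a terminal
    # to nonterminals deriving it, `binary` maps (B, C) to nonterminals A
    # with rule A -> B C. table[i][j] holds the nonterminals deriving
    # word[i:j]; filling it costs O(n^3) time and O(n^2) space.
    n = len(word)
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][i + 1] = set(unary.get(ch, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for m in range(i + 1, j):
                for B, C in product(table[i][m], table[m][j]):
                    table[i][j] |= set(binary.get((B, C), ()))
    return start in table[0][n]

# CNF grammar for { a^n b^n : n >= 1 }:
#   S -> A T | A B,  T -> S B,  A -> a,  B -> b
unary = {"a": ["A"], "b": ["B"]}
binary = {("A", "T"): ["S"], ("A", "B"): ["S"], ("S", "B"): ["T"]}
```

The paper's "amnesic" approach trades the exact table for approximate summaries, breaking both the cubic-time and quadratic-space barriers of this scheme.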
Citations: 16
Minor-Free Graphs Have Light Spanners
Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.76
G. Borradaile, Hung Le, Christian Wulff-Nilsen
We show that every H-minor-free graph has a light (1+ε)-spanner, resolving an open problem of Grigni and Sissokho and proving a conjecture of Grigni and Hung [GH12]. Our lightness bound is O((σ_H/ε^3) log(1/ε)), where σ_H = |V(H)|√(log |V(H)|) is the sparsity coefficient of H-minor-free graphs. That is, it has a practical dependency on the size of the minor H. Our result also implies that the polynomial time approximation scheme (PTAS) for the Travelling Salesperson Problem (TSP) in H-minor-free graphs by Demaine, Hajiaghayi and Kawarabayashi is an efficient PTAS whose running time is 2^{O_H((1/ε^4) log(1/ε))} n^{O(1)}, where O_H ignores dependencies on the size of H. Our techniques significantly deviate from existing lines of research on spanners for H-minor-free graphs, but build upon the work of Chechik and Wulff-Nilsen for spanners of general graphs [6].
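For intuition about what a (1+ε)-spanner is, the classical greedy construction of Althöfer et al. for general graphs (not the H-minor-free machinery of this paper) keeps an edge only when the spanner built so far cannot already connect its endpoints within stretch 1+ε; a minimal sketch:

```python
import heapq

def greedy_spanner(n, edges, eps=0.5):
    # Scan edges by nondecreasing weight; keep (u, v, w) only if the
    # current spanner has no u-v path of length <= (1 + eps) * w.
    adj = [[] for _ in range(n)]
    kept = []

    def dist(src, dst, limit):
        # Dijkstra, abandoned once distances exceed `limit`.
        d = [float("inf")] * n
        d[src] = 0.0
        pq = [(0.0, src)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > d[u] or du > limit:
                continue
            if u == dst:
                return du
            for v, w in adj[u]:
                if du + w < d[v]:
                    d[v] = du + w
                    heapq.heappush(pq, (d[v], v))
        return d[dst]

    for u, v, w in sorted(edges, key=lambda e: e[2]):
        if dist(u, v, (1 + eps) * w) > (1 + eps) * w:
            adj[u].append((v, w))
            adj[v].append((u, w))
            kept.append((u, v, w))
    return kept
```

By construction the kept edges form a (1+ε)-spanner; the hard part, and the subject of this paper, is bounding its total weight (lightness) relative to the minimum spanning tree.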
{"title":"Minor-Free Graphs Have Light Spanners","authors":"G. Borradaile, Hung Le, Christian Wulff-Nilsen","doi":"10.1109/FOCS.2017.76","DOIUrl":"https://doi.org/10.1109/FOCS.2017.76","url":null,"abstract":"We show that every H-minor-free graph has a light (1+≥ilon)-spanner, resolving an open problem of Grigni and Sissokho and proving a conjecture of Grigni and Hung cite{GH12}. Our lightness bound is [Oleft(frac{sigma_H}{≥ilon^3}log frac{1}{≥ilon}right)] where sigma_H = |V(H)|√{log |V(H)|} is the sparsity coefficient of H-minor-free graphs. That is, it has a practical dependency on the size of the minor H. Our result also implies that the polynomial time approximation scheme (PTAS) for the Travelling Salesperson Problem (TSP) in H-minor-free graphs by Demaine, Hajiaghayi and Kawarabayashi is an efficient PTAS whose running time is 2^{O_Hleft(frac{1}{≥ilon^4}log frac{1}{≥ilon}right)}n^{O(1)} where O_H ignores dependencies on the size of H. Our techniques significantly deviate from existing lines of research on spanners for H-minor-free graphs, but build upon the work of Chechik and Wulff-Nilsen for spanners of general graphs[6].","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114510361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
Hashing-Based-Estimators for Kernel Density in High Dimensions
Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.99
M. Charikar, Paris Siminelakis
Given a set of points P ⊂ R^d and a kernel k, the Kernel Density Estimate at a point x ∈ R^d is defined as KDE_P(x) = (1/|P|) Σ_{y∈P} k(x,y). We study the problem of designing a data structure that, given a data set P and a kernel function, returns approximations to the kernel density of a query point in sublinear time. We introduce a class of unbiased estimators for kernel density implemented through locality-sensitive hashing, and give general theorems bounding the variance of such estimators. These estimators give rise to efficient data structures for estimating the kernel density in high dimensions for a variety of commonly used kernels. Our work is the first to provide data structures with theoretical guarantees that improve upon simple random sampling in high dimensions.
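As a point of comparison for the LSH-based estimators, uniform random sampling is already an unbiased estimator of KDE_P(x) — the paper's contribution is estimators with provably lower variance in high dimensions. A small sketch of the definition and this sampling baseline (the kernel choice and function names are illustrative):

```python
import math
import random

def gaussian_kernel(x, y, sigma=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def kde_exact(P, x, k):
    """KDE_P(x) = (1/|P|) * sum over y in P of k(x, y) -- Theta(|P|) per query."""
    return sum(k(x, y) for y in P) / len(P)

def kde_sampled(P, x, k, m, rng):
    """Unbiased estimate: average k(x, y) over m uniform draws y from P."""
    return sum(k(x, rng.choice(P)) for _ in range(m)) / m
```

Each draw has expectation KDE_P(x), so the average is unbiased; when the true density is tiny, however, uniform sampling needs very many draws to control relative error, which is what the hashing-based estimators avoid.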
{"title":"Hashing-Based-Estimators for Kernel Density in High Dimensions","authors":"M. Charikar, Paris Siminelakis","doi":"10.1109/FOCS.2017.99","DOIUrl":"https://doi.org/10.1109/FOCS.2017.99","url":null,"abstract":"Given a set of points P⊄ R^d and a kernel k, the Kernel Density Estimate at a point x∊R^d is defined as mathrm{KDE}_{P}(x)=frac{1}{|P|}sum_{yin P} k(x,y). We study the problem of designing a data structure that given a data set P and a kernel function, returns approximations to the kernel density} of a query point in sublinear time}. We introduce a class of unbiased estimators for kernel density implemented through locality-sensitive hashing, and give general theorems bounding the variance of such estimators. These estimators give rise to efficient data structures for estimating the kernel density in high dimensions for a variety of commonly used kernels. Our work is the first to provide data-structures with theoretical guarantees that improve upon simple random sampling in high dimensions.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127364659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 82
A Time-Space Lower Bound for a Large Class of Learning Problems
Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.73
R. Raz
We prove a general time-space lower bound that applies to a large class of learning problems and shows that for every problem in that class, any learning algorithm requires either a memory of quadratic size or an exponential number of samples. As a special case, this gives a new proof of the time-space lower bound for parity learning [R16]. Our result is stated in terms of the norm of the matrix that corresponds to the learning problem. Let X, A be two finite sets. Let M: A × X → {-1,1} be a matrix. The matrix M corresponds to the following learning problem: an unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples (a_1, b_1), (a_2, b_2), ..., where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i, x). Let σ be the largest singular value of M and note that always σ ≤ |A|^{1/2} · |X|^{1/2}. We show that if σ ≤ |A|^{1/2} · |X|^{1/2 − ε}, then any learning algorithm for the corresponding learning problem requires either a memory of size quadratic in εn or a number of samples exponential in εn, where n = log_2 |X|. As a special case, this gives a new proof of the memory–samples lower bound for parity learning [14].
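The canonical problem in this class is parity learning: A = X = {0,1}^n and M(a, x) = (-1)^{⟨a,x⟩}. The quadratic-memory branch of the trade-off is realized by Gaussian elimination over GF(2), which stores Θ(n^2) bits but needs only about n samples. A sketch, with rows encoded as Python ints (illustrative, not from the paper):

```python
import random

def parity_samples(x, n, rng):
    """Endless stream of (a, b) with b = <a, x> mod 2, i.e. M(a, x) = (-1)^b."""
    while True:
        a = rng.getrandbits(n)
        yield a, bin(a & x).count("1") % 2

def learn_parity(stream, n):
    """Online Gaussian elimination over GF(2).

    Keeps up to n rows of n+1 bits each (coefficients, plus the constant in
    bit n), i.e. Theta(n^2) bits of memory, stopping at full rank.
    """
    rows = {}  # pivot position -> row (int)
    for a, b in stream:
        row = a | (b << n)
        for p in sorted(rows, reverse=True):   # reduce by existing pivots
            if (row >> p) & 1:
                row ^= rows[p]
        coeffs = row & ((1 << n) - 1)
        if coeffs:
            rows[coeffs.bit_length() - 1] = row
        if len(rows) == n:
            break
    for p in sorted(rows, reverse=True):       # back-substitution
        for q in rows:
            if q != p and (rows[q] >> p) & 1:
                rows[q] ^= rows[p]
    return sum(((rows[p] >> n) & 1) << p for p in rows)
```

Shrinking the memory below quadratic forces the sample complexity up exponentially — that trade-off is the content of the lower bound.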
{"title":"A Time-Space Lower Bound for a Large Class of Learning Problems","authors":"R. Raz","doi":"10.1109/FOCS.2017.73","DOIUrl":"https://doi.org/10.1109/FOCS.2017.73","url":null,"abstract":"We prove a general time-space lower bound that applies for a large class of learning problems and shows that for every problem in that class, any learning algorithm requires either a memory of quadratic size or an exponential number of samples. As a special case, this gives a new proof for the time-space lower bound for parity learning [R16]. Our result is stated in terms of the norm of the matrix that corresponds to the learning problem. Let X, A be two finite sets. Let M: A × X rightarrow {-1,1} be a matrix. The matrix M corresponds to the following learning problem: An unknown element x ∊ X was chosen uniformly at random. A learner tries to learn x from a stream of samples, (a_1, b_1), (a_2, b_2)..., where for every i, a_i ∊ A is chosen uniformly at random and b_i = M(a_i,x). Let sigma be the largest singular value of M and note that always sigma ≤ |A|^{1/2} ⋅ |X|^{1/2}. 
We show that if sigma ≤ |A|^{1/2} ⋅ |X|^{1/2 - ≥ilon, then any learning algorithm for the corresponding learning problem requires either a memory of size quadratic in ≥ilon n or number of samples exponential in ≥ilon n, where n = log_2 |X|.As a special case, this gives a new proof for the memorysamples lower bound for parity learning [14].","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"211 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122456024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 55
Approximating Geometric Knapsack via L-Packings
Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.32
Waldo Gálvez, F. Grandoni, Sandy Heydrich, Salvatore Ingala, A. Khan, Andreas Wiese
We study the two-dimensional geometric knapsack problem (2DK), in which we are given a set of n axis-aligned rectangular items, each with an associated profit, and an axis-aligned square knapsack. The goal is to find a (non-overlapping) packing of a maximum-profit subset of the items inside the knapsack (without rotating items). The best known polynomial-time approximation factor for this problem (even just in the cardinality case) is 2+ε [Jansen and Zhang, SODA 2004]. In this paper we break the 2-approximation barrier, achieving a polynomial-time 17/9 + ε approximation.
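To make the objective concrete, tiny instances can be solved exactly by exhausting over subsets and using a shelf-packing check as a feasibility test. The shelf check is only a sufficient condition (a real packing may exist even when it fails) and is far weaker than the L-packings developed in the paper; names and the heuristic are illustrative:

```python
from itertools import combinations

def shelf_feasible(rects, N):
    """Pack rectangles by decreasing height into full-width shelves (NFDH-style).
    If the shelves fit in the N x N square, a real packing exists; the
    converse may fail, so this is only a sufficient condition."""
    used_h = shelf_w = shelf_h = 0
    for w, h in sorted(rects, key=lambda r: -r[1]):
        if w > N or h > N:
            return False
        if shelf_w + w > N:          # current shelf full: open a new one
            used_h += shelf_h
            shelf_w = shelf_h = 0
        if shelf_h == 0:             # first (tallest) item on this shelf
            shelf_h = h
            if used_h + shelf_h > N:
                return False
        shelf_w += w
    return used_h + shelf_h <= N

def brute_force_2dk(items, N):
    """items: (width, height, profit) triples; exhaustive, tiny n only."""
    best = 0
    for r in range(len(items) + 1):
        for sub in combinations(items, r):
            if shelf_feasible([(w, h) for w, h, _ in sub], N):
                best = max(best, sum(p for _, _, p in sub))
    return best
```

For three full-width 10×5 items in a 10×10 knapsack, only two fit, so the optimum profit is the sum of the two item profits.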
{"title":"Approximating Geometric Knapsack via L-Packings","authors":"Waldo Gálvez, F. Grandoni, Sandy Heydrich, Salvatore Ingala, A. Khan, Andreas Wiese","doi":"10.1109/FOCS.2017.32","DOIUrl":"https://doi.org/10.1109/FOCS.2017.32","url":null,"abstract":"We study the two-dimensional geometric knapsack problem (2DK) in which we are given a set of n axis-aligned rectangular items, each one with an associated profit, and an axis-aligned square knapsack. The goal is to find a (non-overlapping) packing of a maximum profit subset of items inside the knapsack (without rotating items). The best-known polynomial-time approximation factor for this problem (even just in the cardinality case) is 2+ε [Jansen and Zhang, SODA 2004]. In this paper we break the 2 approximation barrier, achieving a polynomialtime 17/9 + ε","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121179461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 35
Journal: 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)