
ACM Transactions on Algorithms (TALG): Latest Publications

Smaller Cuts, Higher Lower Bounds
Pub Date : 2019-01-07 DOI: 10.1145/3469834
Amir Abboud, K. Censor-Hillel, Seri Khoury, A. Paz
This article proves strong lower bounds for distributed computing in the CONGEST model by presenting the bit-gadget, a new technique for constructing graphs with small cuts. The contribution of bit-gadgets is twofold. First, careful sparse graph constructions with small cuts extend known techniques to give a near-linear lower bound for computing the diameter, a result previously known only for dense graphs. Moreover, the sparseness of the construction plays a crucial role in applying it to approximations of various distance computation problems, drastically improving over what can be obtained with dense graphs. Second, small cuts are essential for proving super-linear lower bounds, none of which were known prior to this work. In fact, they allow near-quadratic lower bounds for several problems, such as exact minimum vertex cover or maximum independent set, as well as for coloring a graph with its chromatic number. Such strong lower bounds are not limited to NP-hard problems: two simple graph problems in P are shown to require a quadratic and a near-quadratic number of rounds, respectively. All of the above bounds are optimal up to logarithmic factors. In this context, the complexity of the all-pairs shortest paths problem is also discussed. Finally, it is shown that graph constructions for CONGEST lower bounds translate to lower bounds for the semi-streaming model, despite the two models being very different in nature.
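The core idea behind a bit-gadget can be illustrated with a small sketch. The construction below is a simplified, hypothetical rendition (vertex names and details are illustrative, not the paper's exact gadget): each of n "row" vertices attaches to O(log n) shared "bit" vertices according to the binary representation of its index, so any cut separating the rows from the rest of the graph touches only logarithmically many vertices.

```python
import math

def bit_gadget(n):
    """Attach each of n 'row' vertices a0..a{n-1} to shared 'bit' vertices
    according to the binary representation of its index.  Illustrative
    sketch only; the real bit-gadget adds further structure."""
    k = max(1, math.ceil(math.log2(n)))
    edges = []
    for i in range(n):
        for j in range(k):
            b = (i >> j) & 1                     # j-th bit of index i
            edges.append((f"a{i}", f"bit{j}_{b}"))
    bit_vertices = {v for _, v in edges}
    return edges, bit_vertices

edges, bits = bit_gadget(8)
# Only 2 * ceil(log2 n) bit vertices sit on the cut, despite n rows;
# any two distinct rows attach to opposite bit vertices in some position.
print(len(bits))  # 6 for n = 8
```

Because distinct indices differ in at least one bit, any two rows are distinguished by the gadget, which is what makes small-cut communication lower bounds possible.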
Citations: 21
I/O-Efficient Algorithms for Topological Sort and Related Problems
Pub Date : 2019-01-01 DOI: 10.1145/3418356
Nairen Cao, Jeremy T. Fineman, Katina Russell, Eugene Yang
This article presents I/O-efficient algorithms for topologically sorting a directed acyclic graph and for the more general problem of identifying and topologically sorting the strongly connected components of a directed graph G = (V, E). Both algorithms are randomized and have I/O-cost O(sort(E) · poly(log V)) with high probability, where sort(E) = O((E/B) log_{M/B}(E/B)) is the I/O cost of sorting an |E|-element array on a machine with size-B blocks and a size-M cache/internal memory. These are the first algorithms for these problems that do not incur at least one I/O per vertex, and as such the first I/O-efficient algorithms for sparse graphs. By applying the technique of time-forward processing, these algorithms also imply I/O-efficient algorithms for most problems on directed acyclic graphs, such as shortest paths, as well as for the single-source reachability problem on arbitrary directed graphs.
Citations: 1
The Complexity of the Ideal Membership Problem for Constrained Problems Over the Boolean Domain
Pub Date : 2019-01-01 DOI: 10.1145/3449350
M. Mastrolilli
Given an ideal I and a polynomial f, the Ideal Membership Problem (IMP) is to test whether f ∈ I. This is a fundamental algorithmic problem with important applications, and it is notoriously intractable. We study the complexity of the IMP for combinatorial ideals that arise from constrained problems over the Boolean domain. As our main result, we identify the borderline of tractability. Using Gröbner basis techniques, we extend Schaefer’s dichotomy theorem [STOC, 1978], which classifies all Constraint Satisfaction Problems (CSPs) over the Boolean domain as either in P or NP-hard. Moreover, our result implies necessary and sufficient conditions for the efficient computation of Theta Body Semi-Definite Programming (SDP) relaxations, thereby identifying the borderline of tractability for constraint language problems. This article is motivated by the pursuit of understanding the recently raised issue of the bit complexity of Sum-of-Squares (SoS) proofs [O’Donnell, ITCS, 2017]. Raghavendra and Weitz [ICALP, 2017] show how IMP tractability for combinatorial ideals implies bounded coefficients in SoS proofs.
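For the combinatorial (vanishing) ideals considered here, membership has a concrete semantic reading: f belongs to the ideal of a Boolean solution set exactly when f evaluates to zero on every satisfying assignment. The brute-force check below is definitional only (exponential in n), meant to make the problem statement concrete rather than to suggest an efficient algorithm:

```python
from itertools import product

def in_vanishing_ideal(f, constraints, n):
    """f is in the vanishing ideal of the Boolean solution set iff it
    evaluates to 0 on every assignment satisfying all constraints.
    Brute force over {0,1}^n -- definitional, not efficient."""
    for x in product((0, 1), repeat=n):
        if all(c(x) for c in constraints) and f(x) != 0:
            return False
    return True

# Solutions of the constraint x0 XOR x1 = 0 are (0,0) and (1,1);
# the polynomial x0 - x1 vanishes on both, so it is in the ideal.
parity = [lambda x: (x[0] ^ x[1]) == 0]
print(in_vanishing_ideal(lambda x: x[0] - x[1], parity, 2))
```

The hardness studied in the article is precisely about deciding this (with proofs of membership) without enumerating the exponentially large solution set.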
Citations: 8
Sticky Brownian Rounding and its Applications to Constraint Satisfaction Problems
Pub Date : 2018-12-19 DOI: 10.1145/3459096
Sepehr Abbasi Zadeh, N. Bansal, Guru Guruganesh, Aleksandar Nikolov, Roy Schwartz, Mohit Singh
Semidefinite programming is a powerful tool in the design and analysis of approximation algorithms for combinatorial optimization problems. In particular, the random hyperplane rounding method of Goemans and Williamson [31] has been extensively studied for more than two decades, resulting in various extensions to the original technique and beautiful algorithms for a wide range of applications. Although this approach yields tight approximation guarantees for some problems, e.g., Max-Cut, for many others, e.g., Max-SAT and Max-DiCut, the tight approximation ratio is still unknown. One of the main reasons for this is that very few techniques for rounding semi-definite relaxations are known. In this work, we present a new general and simple method for rounding semi-definite programs, based on Brownian motion. Our approach is inspired by recent results in algorithmic discrepancy theory. We develop and present tools for analyzing our new rounding algorithms, utilizing mathematical machinery from the theory of Brownian motion, complex analysis, and partial differential equations. Focusing on constraint satisfaction problems, we apply our method to several classical problems, including Max-Cut, Max-2SAT, and Max-DiCut, and derive new algorithms that are competitive with the best known results. To illustrate the versatility and general applicability of our approach, we give new approximation algorithms for the Max-Cut problem with side constraints that crucially utilize measure-concentration results for the Sticky Brownian Motion, a feature missing from hyperplane rounding and its generalizations.
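A toy simulation conveys the rounding idea. Each variable's value tracks the projection of its SDP vector onto a shared Brownian path and "sticks" when it hits ±1. Everything below (the step size, termination rule, and hand-picked vectors) is an illustrative assumption; the paper's actual process and analysis are far more careful:

```python
import random

def sticky_brownian_round(vectors, step=0.01, seed=0):
    """Toy sticky-Brownian rounding: x_i follows <v_i, W_t> for a shared
    Brownian path W_t and freezes on first hitting +1 or -1."""
    rng = random.Random(seed)
    d = len(vectors[0])
    x = [0.0] * len(vectors)
    stuck = [False] * len(vectors)
    while not all(stuck):
        # one shared Gaussian increment of the driving Brownian motion
        dw = [rng.gauss(0.0, step ** 0.5) for _ in range(d)]
        for i, v in enumerate(vectors):
            if stuck[i]:
                continue
            x[i] += sum(vi * dwi for vi, dwi in zip(v, dw))
            if abs(x[i]) >= 1.0:
                x[i] = 1.0 if x[i] > 0 else -1.0   # stick at the wall
                stuck[i] = True
    return x

# Antipodal SDP vectors are driven to opposite sides of the cut,
# mirroring what hyperplane rounding does for Max-Cut.
print(sticky_brownian_round([(1.0, 0.0), (-1.0, 0.0)]))
```

Unlike a single random hyperplane, the walk's trajectory can be biased or slowed near the walls, which is what makes the method flexible enough for side constraints.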
Citations: 12
A (2+ε)-Approximation for Maximum Weight Matching in the Semi-streaming Model
Pub Date : 2018-12-07 DOI: 10.1145/3274668
A. Paz, Gregory Schwartzman
We present a simple deterministic single-pass (2+ε)-approximation algorithm for the maximum weight matching problem in the semi-streaming model. This improves on the previously best known approximation ratio of (4+ε). Our algorithm uses O(n log^2 n) bits of space for constant values of ε. It relies on a variation of the local-ratio theorem, which may be of use for other algorithms in the semi-streaming model as well.
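The local-ratio flavor of the algorithm can be sketched in a few lines: keep a stream edge only if its weight beats the current potentials of its endpoints, charge the surplus to both endpoints, and at the end unwind the retained edges greedily in reverse order. This is a simplified sketch; the actual algorithm also prunes the stack to meet the stated space bound:

```python
from collections import defaultdict

def stream_matching(edges):
    """One pass over (u, v, w) edge triples, local-ratio style.
    Simplified sketch of the technique; stack pruning omitted."""
    phi = defaultdict(float)          # vertex potentials
    stack = []
    for u, v, w in edges:
        residual = w - phi[u] - phi[v]
        if residual > 0:              # edge survives the local-ratio test
            phi[u] += residual
            phi[v] += residual
            stack.append((u, v))
    matched, matching = set(), []
    while stack:                      # unwind: later edges get priority
        u, v = stack.pop()
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# On the path a-b (weight 1), b-c (weight 3), the heavier edge wins.
print(stream_matching([("a", "b", 1), ("b", "c", 3)]))
```

Only the potentials and the (pruned) stack are stored, which is how the algorithm stays within semi-streaming space.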
Citations: 4
Optimal-Time Dictionary-Compressed Indexes
Pub Date : 2018-11-30 DOI: 10.1145/3426473
Anders Roy Christiansen, Mikko Berggren Ettienne, T. Kociumaka, G. Navarro, N. Prezza
We describe the first self-indexes able to count and locate pattern occurrences in optimal time within space bounded by the size of the most popular dictionary compressors. To achieve this result, we combine several recent findings, including string attractors (new combinatorial objects encompassing most known compressibility measures for highly repetitive texts) and grammars based on locally consistent parsing. In more detail, let γ be the size of the smallest attractor for a text T of length n. The measure γ is an (asymptotic) lower bound on the size of dictionary compressors based on Lempel–Ziv, context-free grammars, and many others. The smallest known text representations in terms of attractors use space O(γ log(n/γ)), and our lightest indexes work within the same asymptotic space. Let ε > 0 be a suitably small constant fixed at construction time, m be the pattern length, and occ be the number of its text occurrences. Our index counts pattern occurrences in O(m + log^{2+ε} n) time and locates them in O(m + (occ+1) log^ε n) time. These times already outperform those of most dictionary-compressed indexes, while obtaining the least asymptotic space of any index searching within O((m + occ) polylog n) time. Further, by increasing the space to O(γ log(n/γ) log^ε n), we reduce the locating time to the optimal O(m + occ), and within O(γ log(n/γ) log n) space we can also count in optimal O(m) time. No dictionary-compressed index had obtained these bounds before. All our indexes can be constructed in O(n) space and O(n log n) expected time. As a by-product of independent interest, we show how to build, in O(n) expected time and without knowing the size γ of the smallest attractor (which is NP-hard to find), a run-length context-free grammar of size O(γ log(n/γ)) generating (only) T. As a result, our indexes can be built without knowing γ.
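Since γ is NP-hard to compute, practical systems bound it via a dictionary compressor: the number z of phrases in the greedy (self-referential) Lempel–Ziv parse satisfies γ ≤ z. A naive factorization illustrates the measure on a repetitive text:

```python
def lz_factorize(t):
    """Greedy left-to-right LZ parse: each phrase is the longest prefix
    of the remainder that also occurs starting at an earlier position
    (else a single fresh character).  Naive quadratic scan, for
    illustration only; the phrase count z upper-bounds the attractor
    size gamma."""
    phrases, i = [], 0
    while i < len(t):
        length = 0
        while length < len(t) - i and t.find(t[i:i + length + 1]) < i:
            length += 1
        phrases.append(t[i] if length == 0 else t[i:i + length])
        i += max(1, length)
    return phrases

# A highly repetitive text collapses to very few phrases.
print(lz_factorize("abababab"))  # ['a', 'b', 'ababab']
```

On repetitive collections, z (and hence γ) is tiny relative to n, which is exactly the regime where O(γ log(n/γ))-space indexes pay off.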
Citations: 49
Nearly ETH-tight Algorithms for Planar Steiner Tree with Terminals on Few Faces
Pub Date : 2018-11-16 DOI: 10.1145/3371389
Sándor Kisfaludi-Bak, Jesper Nederlof, E. J. V. Leeuwen
The STEINER TREE problem is one of the most fundamental NP-complete problems, as it models many network design problems. Recall that an instance of this problem consists of a graph with edge weights and a subset of vertices (often called terminals); the goal is to find a subtree of the graph of minimum total weight that connects all terminals. A seminal paper by Erickson et al. [Math. Oper. Res., 1987] considers instances where the underlying graph is planar and all terminals can be covered by the boundary of k faces. Erickson et al. show that the problem can be solved by an algorithm using n^{O(k)} time and n^{O(k)} space, where n denotes the number of vertices of the input graph. In the past 30 years there has been no significant improvement of this algorithm, despite several efforts. In this work, we give an algorithm for PLANAR STEINER TREE with running time 2^{O(k)} n^{O(√k)} under the above parameterization, using only polynomial space.
Furthermore, we show that the running time of our algorithm is almost tight: we prove that there is no f(k) n^{o(√k)} algorithm for PLANAR STEINER TREE for any computable function f, unless the Exponential Time Hypothesis fails.
Citations: 7
Clique-width III
Pub Date : 2018-11-16 DOI: 10.1145/3280824
F. Fomin, P. Golovach, D. Lokshtanov, Saket Saurabh, M. Zehavi
MAX-CUT, EDGE DOMINATING SET, GRAPH COLORING, and HAMILTONIAN CYCLE on graphs of bounded clique-width have received significant attention, as they can be formulated in MSO2 (and, therefore, have linear-time algorithms on bounded-treewidth graphs by the celebrated Courcelle's theorem), but cannot be formulated in MSO1 (which would have yielded linear-time algorithms on bounded clique-width graphs by a well-known theorem of Courcelle, Makowsky, and Rotics). Each of these problems can be solved in time g(k) n^{f(k)} on graphs of clique-width k. Fomin et al. (2010) showed that the running times cannot be improved to g(k) n^{O(1)} assuming W[1] ≠ FPT. However, this does not rule out non-trivial improvements to the exponent f(k) in the running times. In a follow-up paper, Fomin et al. (2014) improved the running times for EDGE DOMINATING SET and MAX-CUT to n^{O(k)}, and proved that these problems cannot be solved in time g(k) n^{o(k)} unless the ETH fails. Thus, prior to this work, EDGE DOMINATING SET and MAX-CUT were known to have tight n^{Θ(k)} algorithmic upper and lower bounds. In this article, we provide lower bounds for HAMILTONIAN CYCLE and GRAPH COLORING. For HAMILTONIAN CYCLE, our lower bound g(k) n^{o(k)} asymptotically matches the recent upper bound n^{O(k)} due to Bergougnoux, Kanté, and Kwon (2017). As opposed to the asymptotically tight n^{Θ(k)} bounds for EDGE DOMINATING SET, MAX-CUT, and HAMILTONIAN CYCLE, the GRAPH COLORING problem has an upper bound of n^{O(2^k)} and a lower bound of merely n^{o(k^{1/4})} (implicit from the W[1]-hardness proof). In this article, we close the gap for GRAPH COLORING by proving a lower bound of n^{2^{o(k)}}. This shows that GRAPH COLORING behaves qualitatively differently from the other three problems.
To the best of our knowledge, GRAPH COLORING is the first natural problem known to require exponential dependence on the parameter in the exponent of n.
Citations: 8
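As a concrete reminder of what the first of the four problems above asks, here is a purely illustrative brute-force MAX-CUT sketch (exponential in the number of vertices, and unrelated to the clique-width-parameterized algorithms the abstract discusses; the function name and graph encoding are ours):

```python
from itertools import product

def max_cut(n, edges):
    """Exhaustive MAX-CUT on a small graph: try every 2-coloring of
    the n vertices and count the edges crossing the cut."""
    best = 0
    for side in product([0, 1], repeat=n):
        cut = sum(1 for u, v in edges if side[u] != side[v])
        best = max(best, cut)
    return best

# A 4-cycle is bipartite, so the maximum cut contains all 4 edges.
print(max_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 4
# A triangle is not: at most 2 of its 3 edges can cross any cut.
print(max_cut(3, [(0, 1), (1, 2), (2, 0)]))          # 2
```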
Holant Clones and the Approximability of Conservative Holant Problems
Pub Date : 2018-11-02 DOI: 10.1145/3381425
Miriam Backens, L. A. Goldberg
We construct a theory of holant clones to capture the notion of expressibility in the holant framework. Their role is analogous to the role played by functional clones in the study of weighted counting Constraint Satisfaction Problems. We explore the landscape of conservative holant clones and determine the situations in which a set F of functions is “universal in the conservative case,” which means that all functions are contained in the holant clone generated by F together with all unary functions. When F is not universal in the conservative case, we give concise generating sets for the clone. We demonstrate the usefulness of the holant clone theory by using it to give a complete complexity-theory classification for the problem of approximating the solution to conservative holant problems. We show that approximation is intractable exactly when F is universal in the conservative case.
Citations: 5
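A holant instance assigns a local function to each vertex of a graph; its value is the sum, over all {0,1} assignments to the edges, of the product of the local function values. A tiny brute-force sketch (names and encoding are ours, not from the paper), using the standard example that EXACT-ONE at every vertex counts perfect matchings:

```python
from itertools import product

def holant(vertex_edges, vertex_fns, num_edges):
    """Brute-force holant value: sum over all {0,1} edge assignments
    of the product, over vertices, of the local function applied to
    the tuple of that vertex's incident edge values."""
    total = 0
    for assign in product([0, 1], repeat=num_edges):
        term = 1
        for edges, f in zip(vertex_edges, vertex_fns):
            term *= f(tuple(assign[e] for e in edges))
        total += term
    return total

# EXACT-ONE at every vertex makes the holant count perfect matchings.
exact_one = lambda t: 1 if sum(t) == 1 else 0

# Triangle (3 vertices, edges 0,1,2): no perfect matching exists.
print(holant([(0, 2), (0, 1), (1, 2)], [exact_one] * 3, 3))          # 0
# 4-cycle: exactly two perfect matchings (opposite edge pairs).
print(holant([(0, 3), (0, 1), (1, 2), (2, 3)], [exact_one] * 4, 4))  # 2
```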
A Deamortization Approach for Dynamic Spanner and Dynamic Maximal Matching
Pub Date : 2018-10-25 DOI: 10.1145/3469833
A. Bernstein, S. Forster, M. Henzinger
Many dynamic graph algorithms have an amortized update time, rather than a stronger worst-case guarantee. But amortized data structures are not suitable for real-time systems, where each individual operation has to be executed quickly. For this reason, there exist many recent randomized results that aim to provide a guarantee stronger than amortized expected. The strongest possible guarantee for a randomized algorithm is that it is always correct (Las Vegas) and has high-probability worst-case update time, which gives a bound on the time for each individual operation that holds with high probability. In this article, we present the first polylogarithmic high-probability worst-case time bounds for the dynamic spanner and the dynamic maximal matching problem. (1) For dynamic spanner, the only known o(n) worst-case bounds were O(n^{3/4}) high-probability worst-case update time for maintaining a 3-spanner and O(n^{5/9}) for maintaining a 5-spanner. We give an O(1)^k·log^3(n) high-probability worst-case time bound for maintaining a (2k-1)-spanner, which yields the first worst-case polylog update time for all constant k. (All the results above maintain the optimal tradeoff of stretch 2k-1 and Õ(n^{1+1/k}) edges.) (2) For dynamic maximal matching, or dynamic 2-approximate maximum matching, no algorithm with an o(n) worst-case time bound was known, and we present an algorithm with O(log^5(n)) high-probability worst-case time; similar worst-case bounds existed only for maintaining a matching that was (2+ε)-approximate, and hence not maximal. Our results are achieved using a new approach for converting amortized guarantees into worst-case ones for randomized data structures, by going through a third type of guarantee that is a middle ground between the two above: an algorithm is said to have worst-case expected update time α if, for every update σ, the expected time to process σ is at most α.
Although stronger than amortized expected, the worst-case expected guarantee does not resolve the fundamental problem of amortization: a worst-case expected update time of O(1) still allows for the possibility that every 1/f(n) updates requires Θ(f(n)) time to process, for arbitrarily high f(n). In this article, we present a black-box reduction that converts any data structure with worst-case expected update time into one with a high-probability worst-case update time: the query time remains the same, while the update time increases by a factor of O(log^2(n)). Thus, we achieve our results in two steps: (1) First, we show how to convert existing dynamic graph algorithms with amortized expected polylogarithmic running times into algorithms with worst-case expected polylogarithmic running times. (2) Then, we use our black-box reduction to achieve the polylogarithmic high-probability worst-case time bound. All our algorithms are Las-Vegas-type algorithms.
Citations: 62
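The amortized-versus-worst-case distinction this abstract opens with is visible in the textbook doubling array: n appends cost O(n) in total (O(1) amortized), yet one unlucky append pays Θ(n) for the resize — exactly the kind of spike that makes amortized bounds unsuitable for real-time systems. A minimal sketch (illustrative only, not from the paper):

```python
class AmortizedArray:
    """Doubling dynamic array: n appends cost O(n) total, i.e. O(1)
    amortized, but an append that triggers a resize pays for copying
    every existing element."""

    def __init__(self):
        self.buf, self.size = [None], 0

    def append(self, x):
        moved = 0
        if self.size == len(self.buf):            # full: double capacity
            self.buf.extend([None] * len(self.buf))
            moved = self.size                     # elements copied on resize
        self.buf[self.size] = x
        self.size += 1
        return 1 + moved                          # cost of this one append

arr = AmortizedArray()
costs = [arr.append(i) for i in range(1024)]
print(sum(costs) / len(costs))  # ~2: constant amortized cost per append
print(max(costs))               # 513: a single Θ(n) spike at the resize
```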