
Latest Publications in ACM Transactions on Database Systems (TODS)

Conjunctive Regular Path Queries with Capture Groups
Pub Date : 2022-02-23 DOI: 10.1145/3514230
Markus L. Schmid
In practice, regular expressions are usually extended by so-called capture groups or capture variables, which allow a subexpression to be captured by a variable that can then be referenced in the regular expression to describe repetitions of subwords. We investigate how this concept could be used for pattern-based graph querying; i.e., we investigate conjunctive regular path queries (CRPQs) that are extended by capture variables. If capture variables are added to CRPQs in a completely unrestricted way, then Boolean evaluation becomes PSPACE-hard in data complexity, even for single-edge graph patterns. On the other hand, if capture variables do not occur under a Kleene star, then the data complexity drops to NL-completeness. Combined complexity is in EXPSPACE but drops to PSPACE-completeness if the depth (i.e., the nesting depth of capture variables) is bounded, and it drops to NP-completeness if the size of the images of capture variables is bounded by a constant (regardless of the depth or of whether capture variables occur under a Kleene star). In the application of regular expressions as string searching tools, references to capture variables only describe exact repetitions of subwords (i.e., they implement the equality relation on strings). Following recent developments in graph database research, we also study CRPQs with capture variables that describe arbitrary regular relations. We show that if the expressions have depth 0, or if the size of the images of capture variables is bounded by a constant, then we can allow arbitrary regular relations while staying in the same complexity bounds. We also investigate the problems of checking whether a given tuple is in the solution set and computing the whole solution set. On the conceptual side, we add capture variables to CRPQs in such a way that they can be defined in an expression on one arc of the graph pattern but also referenced in expressions on other arcs. Hence, they add to CRPQs the possibility of defining inter-dependencies between different paths, which is a relevant feature of pattern-based graph querying.
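To make the string-level starting point concrete, the following minimal Python sketch shows a capture group with a backreference, the mechanism that the paper lifts to graph patterns. It illustrates only the string case, not the CRPQ formalism or the regular relations studied in the article.

```python
import re

# The capture group (ab|ba) binds the matched subword; the backreference \1
# then demands an exact repetition of that same subword later in the string.
pattern = re.compile(r"^(ab|ba)c\1$")

print(bool(pattern.match("abcab")))  # True: captured "ab", repeated as "ab"
print(bool(pattern.match("abcba")))  # False: "ba" is not a repetition of "ab"
```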
Citations: 0
A Formal Framework for Complex Event Recognition
Pub Date : 2021-12-08 DOI: 10.1145/3485463
Alejandro Grez, Cristian Riveros, M. Ugarte, Stijn Vansummeren
Complex event recognition (CER) has emerged as the unifying field for technologies that require processing and correlating distributed data sources in real time. CER finds applications in diverse domains, which has resulted in a large number of proposals for expressing and processing complex events. Existing CER languages lack a clear semantics, however, which makes them hard to understand and generalize. Moreover, there are no general techniques for evaluating CER query languages with clear performance guarantees. In this article, we embark on the task of giving a rigorous and efficient framework to CER. We propose a formal language for specifying complex events, called complex event logic (CEL), that contains the main features used in the literature and has a denotational and compositional semantics. We also formalize the so-called selection strategies, which had only been presented as by-design extensions to existing frameworks. We give insight into the language design trade-offs regarding the strict sequencing operators of CEL and selection strategies. With a well-defined semantics at hand, we discuss how to efficiently process complex events by evaluating CEL formulas with unary filters. We start by introducing a formal computational model for CER, called complex event automata (CEA), and study how to compile CEL formulas with unary filters into CEA. Furthermore, we provide efficient algorithms for evaluating CEA over event streams using constant time per event followed by output-linear delay enumeration of the results.
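A rough feel for "sequencing plus unary filters" over an event stream can be given with the toy matcher below. It is an illustrative sketch only; it implements none of CEL's semantics, the CEA model, or selection strategies, and the event schema and filters are made up for the example.

```python
# Toy sequence matcher: enumerate index tuples (i1 < i2 < ...) such that the
# k-th selected event passes the k-th unary filter.  Only meant to illustrate
# sequencing with unary filters; CEL/CEA are far more general.
events = [
    {"type": "temp", "value": 42},
    {"type": "humidity", "value": 15},
    {"type": "temp", "value": 45},
]

filters = [
    lambda e: e["type"] == "temp" and e["value"] > 40,
    lambda e: e["type"] == "humidity" and e["value"] < 20,
]

def sequence_matches(stream, preds):
    def extend(start, k, prefix):
        if k == len(preds):
            yield tuple(prefix)
            return
        for i in range(start, len(stream)):
            if preds[k](stream[i]):
                yield from extend(i + 1, k + 1, prefix + [i])
    yield from extend(0, 0, [])

print(list(sequence_matches(events, filters)))  # [(0, 1)]
```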
Citations: 11
On Directed Densest Subgraph Discovery
Pub Date : 2021-11-15 DOI: 10.1145/3483940
Chenhao Ma, Yixiang Fang, Reynold Cheng, L. Lakshmanan, Wenjie Zhang, Xuemin Lin
Given a directed graph G, the directed densest subgraph (DDS) problem asks for a subgraph of G whose density is the highest among all the subgraphs of G. The DDS problem is fundamental to a wide range of applications, such as fraud detection, community mining, and graph compression. However, existing DDS solutions suffer from efficiency and scalability problems: on a 3,000-edge graph, it takes three days for one of the best exact algorithms to complete. In this article, we develop an efficient and scalable DDS solution. We introduce the notion of [x, y]-core, which is a dense subgraph for G, and show that the densest subgraph can be accurately located through the [x, y]-core with theoretical guarantees. Based on the [x, y]-core, we develop exact and approximation algorithms. We further study the problems of maintaining the DDS over dynamic directed graphs and finding the weighted DDS on weighted directed graphs, and we develop efficient non-trivial algorithms to solve these two problems by extending our DDS algorithms. We have performed an extensive evaluation of our approaches on 15 real large datasets. The results show that our proposed solutions are up to six orders of magnitude faster than the state-of-the-art.
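For intuition, the brute-force sketch below evaluates the commonly used directed density measure |E(S, T)| / sqrt(|S| * |T|) over all subset pairs of a toy graph. It is exponential and only feasible on tiny inputs, which is precisely why [x, y]-core based algorithms matter; the code is an illustration, not part of the article.

```python
from itertools import combinations
from math import sqrt

def directed_density(edges, S, T):
    """Standard directed density: edges from S to T over sqrt(|S| * |T|)."""
    cross = sum(1 for (u, v) in edges if u in S and v in T)
    return cross / sqrt(len(S) * len(T))

def densest_pair_bruteforce(vertices, edges):
    """Exponential enumeration over all non-empty (S, T) pairs; toy sizes only."""
    subsets = [set(c) for r in range(1, len(vertices) + 1)
               for c in combinations(vertices, r)]
    return max(((S, T) for S in subsets for T in subsets),
               key=lambda pair: directed_density(edges, *pair))

V = {1, 2, 3, 4}
E = [(1, 2), (1, 3), (1, 4), (4, 2), (4, 3)]
S, T = densest_pair_bruteforce(V, E)
print(S, T, directed_density(E, S, T))  # S = {1, 4} sends 5 edges into T = {2, 3, 4}
```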
Citations: 15
Timely Reporting of Heavy Hitters Using External Memory
Pub Date : 2021-11-15 DOI: 10.1145/3472392
Shikha Singh, P. Pandey, M. A. Bender, Jonathan W. Berry, Martín Farach-Colton, Rob Johnson, Thomas M. Kroeger, C. Phillips
Given an input stream S of size N, a ɸ-heavy hitter is an item that occurs at least ɸN times in S. The problem of finding heavy-hitters is extensively studied in the database literature. We study a real-time heavy-hitters variant in which an element must be reported shortly after we see its T = ɸ N-th occurrence (and hence it becomes a heavy hitter). We call this the Timely Event Detection (TED) Problem. The TED problem models the needs of many real-world monitoring systems, which demand accurate (i.e., no false negatives) and timely reporting of all events from large, high-speed streams with a low reporting threshold (high sensitivity). Like the classic heavy-hitters problem, solving the TED problem without false-positives requires large space (Ω (N) words). Thus in-RAM heavy-hitters algorithms typically sacrifice accuracy (i.e., allow false positives), sensitivity, or timeliness (i.e., use multiple passes). We show how to adapt heavy-hitters algorithms to external memory to solve the TED problem on large high-speed streams while guaranteeing accuracy, sensitivity, and timeliness. Our data structures are limited only by I/O-bandwidth (not latency) and support a tunable tradeoff between reporting delay and I/O overhead. With a small bounded reporting delay, our algorithms incur only a logarithmic I/O overhead. We implement and validate our data structures empirically using the Firehose streaming benchmark. Multi-threaded versions of our structures can scale to process 11M observations per second before becoming CPU bound. In comparison, a naive adaptation of the standard heavy-hitters algorithm to external memory would be limited by the storage device’s random I/O throughput, i.e., ≈100K observations per second.
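The reporting condition itself is easy to state with exact in-RAM counters, as in the sketch below (stream size N known up front, memory proportional to the number of distinct items, no false negatives or positives); the article's contribution is meeting the same guarantee at scale by moving the counters to external memory.

```python
from collections import defaultdict

def timely_heavy_hitters(stream, phi):
    """Yield (item, position) the moment an item reaches its T = phi*N-th
    occurrence.  Exact counting: accurate and timely, but memory grows with
    the number of distinct items, which the external-memory structures avoid."""
    threshold = phi * len(stream)
    counts, reported = defaultdict(int), set()
    for pos, item in enumerate(stream):
        counts[item] += 1
        if item not in reported and counts[item] >= threshold:
            reported.add(item)
            yield item, pos

stream = ["a", "b", "a", "c", "a", "b", "a", "a"]
print(list(timely_heavy_hitters(stream, phi=0.5)))  # [('a', 6)]
```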
Citations: 0
SkinnerDB: Regret-bounded Query Evaluation via Reinforcement Learning
Pub Date : 2021-09-28 DOI: 10.1145/3464389
Immanuel Trummer, Junxiong Wang, Ziyun Wei, Deepak Maram, Samuel Moseley, Saehan Jo, Joseph Antonakakis, Ankush Rayabhari
SkinnerDB uses reinforcement learning for reliable join ordering, exploiting an adaptive processing engine with specialized join algorithms and data structures. It maintains no data statistics and uses no cost or cardinality models. Also, it uses no training workloads nor does it try to link the current query to seemingly similar queries in the past. Instead, it uses reinforcement learning to learn optimal join orders from scratch during the execution of the current query. To that purpose, it divides the execution of a query into many small time slices. Different join orders are tried in different time slices. SkinnerDB merges result tuples generated according to different join orders until a complete query result is obtained. By measuring execution progress per time slice, it identifies promising join orders as execution proceeds. Along with SkinnerDB, we introduce a new quality criterion for query execution strategies. We upper-bound expected execution cost regret, i.e., the expected amount of execution cost wasted due to sub-optimal join order choices. SkinnerDB features multiple execution strategies that are optimized for that criterion. Some of them can be executed on top of existing database systems. For maximal performance, we introduce a customized execution engine, facilitating fast join order switching via specialized multi-way join algorithms and tuple representations. We experimentally compare SkinnerDB’s performance against various baselines, including MonetDB, Postgres, and adaptive processing methods. We consider various benchmarks, including the join order benchmark, TPC-H, and JCC-H, as well as benchmark variants with user-defined functions. Overall, the overheads of reliable join ordering are negligible compared to the performance impact of the occasional, catastrophic join order choice.
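The bandit flavor of the approach can be sketched with UCB1 over candidate join orders, where each time slice runs one order and observes a progress reward. Everything below (the candidate orders and the synthetic run_slice reward) is hypothetical scaffolding for illustration only, not SkinnerDB's engine, progress metric, or regret analysis.

```python
import math
import random

random.seed(0)
join_orders = [("R", "S", "T"), ("S", "R", "T"), ("T", "S", "R")]
hidden_quality = {0: 0.8, 1: 0.3, 2: 0.5}   # unknown to the learner

def run_slice(order_idx):
    """Stand-in for executing one time slice with a join order and measuring
    progress; a real system would report actual tuples processed."""
    return random.random() * hidden_quality[order_idx]

counts = [0] * len(join_orders)
rewards = [0.0] * len(join_orders)
for t in range(1, 201):                      # 200 time slices
    if 0 in counts:                          # try every order once first
        pick = counts.index(0)
    else:                                    # UCB1: mean reward + exploration bonus
        pick = max(range(len(join_orders)),
                   key=lambda i: rewards[i] / counts[i]
                                 + math.sqrt(2 * math.log(t) / counts[i]))
    r = run_slice(pick)
    counts[pick] += 1
    rewards[pick] += r

print(join_orders[max(range(len(counts)), key=counts.__getitem__)])
# With high probability the best order ('R', 'S', 'T') dominates the slices.
```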
Citations: 9
Error Bounded Line Simplification Algorithms for Trajectory Compression: An Experimental Evaluation
Pub Date : 2021-09-28 DOI: 10.1145/3474373
Xuelian Lin, Shuai Ma, Jiahao Jiang, Yanchen Hou, Tianyu Wo
Nowadays, various sensors are collecting, storing, and transmitting tremendous trajectory data, and it is well known that the storage, network bandwidth, and computing resources could be heavily wasted if raw trajectory data is directly adopted. Line simplification algorithms are effective approaches to attacking this issue by compressing a trajectory to a set of continuous line segments, and are commonly used in practice. In this article, we first classify the error bounded line simplification algorithms into different categories and review each category of algorithms. We then study the data aging problem of line simplification algorithms and distance metrics from the views of aging friendliness and aging errors. Finally, we present a systematic experimental evaluation of representative error bounded line simplification algorithms, including both compression optimal and sub-optimal methods, in terms of commonly adopted perpendicular Euclidean, synchronous Euclidean, and direction-aware distances. Using real-life trajectory datasets, we systematically evaluate and analyze the performance (compression ratio, average error, running time, aging friendliness, and query friendliness) of error bounded line simplification algorithms with respect to distance metrics, trajectory sizes, and error bounds. Our study provides a full picture of error bounded line simplification algorithms, which leads to guidelines on how to choose appropriate algorithms and distance metrics for practical applications.
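As a concrete reference point for what error bounded line simplification means, the snippet below gives the classic Ramer-Douglas-Peucker routine under a perpendicular Euclidean error bound. It is one representative (compression sub-optimal) member of the algorithm family the evaluation covers, written from the textbook description rather than taken from the article.

```python
import math

def perpendicular_dist(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def rdp(points, eps):
    """Ramer-Douglas-Peucker: keep a subset of the original points whose
    perpendicular Euclidean error never exceeds eps."""
    if len(points) < 3:
        return list(points)
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perpendicular_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= eps:
        return [points[0], points[-1]]
    left, right = rdp(points[:idx + 1], eps), rdp(points[idx:], eps)
    return left[:-1] + right

trajectory = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
print(rdp(trajectory, eps=1.0))  # keeps the turning points, drops near-collinear ones
```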
Citations: 5
Stream Data Cleaning under Speed and Acceleration Constraints
Pub Date : 2021-09-28 DOI: 10.1145/3465740
Shaoxu Song, Fei Gao, Aoqian Zhang, Jianmin Wang, Philip S. Yu
Stream data are often dirty, for example, owing to unreliable sensor reading or erroneous extraction of stock prices. Most stream data cleaning approaches employ a smoothing filter, which may seriously alter the data without preserving the original information. We argue that the cleaning should avoid changing those originally correct/clean data, a.k.a. the minimum modification rule in data cleaning. To capture the knowledge about what is clean, we consider the (widely existing) constraints on the speed and acceleration of data changes, such as fuel consumption per hour, daily limit of stock prices, or the top speed and acceleration of a car. Guided by these semantic constraints, in this article, we propose the constraint-based approach for cleaning stream data. It is notable that existing data repair techniques clean (a sequence of) data as a whole and fail to support stream computation. To this end, we have to relax the global optimum over the entire sequence to the local optimum in a window. Rather than the commonly observed NP-hardness of general data repairing problems, our major contributions include (1) polynomial time algorithm for global optimum, (2) linear time algorithm towards local optimum under an efficient median-based solution, and (3) experiments on real datasets demonstrate that our method can show significantly lower L1 error than the existing approaches such as smoother.
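A heavily simplified sketch of what a speed constraint does during cleaning, assuming a single maximum speed smax: each incoming value is clamped into the interval that the previous repaired value allows. The greedy clamping only illustrates the constraint; the article's algorithms instead compute a minimum-modification, median-based repair within a window.

```python
def repair_with_speed_constraint(points, smax):
    """points: list of (timestamp, value) sorted by timestamp.
    Greedily clamp each value into [prev - smax*dt, prev + smax*dt]."""
    repaired = [points[0]]
    for t, v in points[1:]:
        pt, pv = repaired[-1]
        lo, hi = pv - smax * (t - pt), pv + smax * (t - pt)
        repaired.append((t, min(max(v, lo), hi)))
    return repaired

series = [(0, 10.0), (1, 10.5), (2, 50.0), (3, 11.5), (4, 12.0)]
print(repair_with_speed_constraint(series, smax=1.0))
# The erroneous spike at t=2 is pulled back to 11.5; clean points keep their values.
```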
Citations: 4
Graph Indexing for Efficient Evaluation of Label-constrained Reachability Queries
Pub Date : 2021-05-28 DOI: 10.1145/3451159
Yangjun Chen, Gagandeep Singh
Given a directed edge-labeled graph G, checking whether vertex v is reachable from vertex u under a label set S amounts to deciding whether there is a path from u to v all of whose edge labels belong to S. Such a query is referred to as a label-constrained reachability (LCR) query. In this article, we present a new approach to store a compressed transitive closure of G in the form of intervals over spanning trees (forests). The basic idea is to associate each vertex v with two sequences of some other vertices: one is used to check reachability from v to any other vertex, by using intervals, while the other is used to check reachability to v from any other vertex. We will show that such sequences are in general much shorter than the number of vertices in G. Extensive experiments have been conducted, which demonstrate that our method is much better than all the previous methods for this problem in all the important aspects, including index construction times, index sizes, and query times.
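The interval idea can be sketched for plain, unlabeled reachability on a spanning tree: give each vertex a DFS (pre, post) pair, and u reaches v in the tree exactly when u's interval contains v's. Handling non-tree edges and the label constraint S is where the article's compressed transitive closure comes in; the code below is only the textbook building block.

```python
def interval_labels(children, root):
    """Assign DFS (pre, post) numbers over a tree given as child lists."""
    labels, counter = {}, [0]
    def dfs(u):
        pre = counter[0]; counter[0] += 1
        for c in children.get(u, []):
            dfs(c)
        post = counter[0]; counter[0] += 1
        labels[u] = (pre, post)
    dfs(root)
    return labels

def tree_reaches(labels, u, v):
    """u reaches v in the tree iff u's interval contains v's."""
    (pre_u, post_u), (pre_v, post_v) = labels[u], labels[v]
    return pre_u <= pre_v and post_v <= post_u

tree = {"a": ["b", "c"], "b": ["d"]}
labels = interval_labels(tree, "a")
print(tree_reaches(labels, "a", "d"), tree_reaches(labels, "b", "c"))  # True False
```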
Citations: 8
Optimizing One-time and Continuous Subgraph Queries using Worst-case Optimal Joins
Pub Date : 2021-05-28 DOI: 10.1145/3446980
Amine Mhedhbi, C. Kankanamge, S. Salihoglu
We study the problem of optimizing one-time and continuous subgraph queries using the new worst-case optimal join plans. Worst-case optimal plans evaluate queries by matching one query vertex at a time using multiway intersections. The core problem in optimizing worst-case optimal plans is to pick an ordering of the query vertices to match. We make two main contributions: 1. A cost-based dynamic programming optimizer for one-time queries that (i) picks efficient query vertex orderings for worst-case optimal plans and (ii) generates hybrid plans that mix traditional binary joins with worst-case optimal style multiway intersections. In addition to our optimizer, we describe an adaptive technique that changes the query vertex orderings of the worst-case optimal subplans during query execution for more efficient query evaluation. The plan space of our one-time optimizer contains plans that are not in the plan spaces based on tree decompositions from prior work. 2. A cost-based greedy optimizer for continuous queries that builds on the delta subgraph query framework. Given a set of continuous queries, our optimizer decomposes these queries into multiple delta subgraph queries, picks a plan for each delta query, and generates a single combined plan that evaluates all of the queries. Our combined plans share computations across operators of the plans for the delta queries if the operators perform the same intersections. To increase the amount of computation shared, we describe an additional optimization that shares partial intersections across operators. Our optimizers use a new cost metric for worst-case optimal plans called intersection-cost. When generating hybrid plans, our dynamic programming optimizer for one-time queries combines intersection-cost with the cost of binary joins. We demonstrate the effectiveness of our plans, adaptive technique, and partial intersection sharing optimization through extensive experiments. Our optimizers are integrated into GraphflowDB.
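The idea of matching one query vertex at a time using multiway intersections can be made concrete on the triangle query Q(a, b, c) :- R(a, b), S(b, c), T(c, a). The sketch below follows the generic worst-case optimal join recipe on toy relations; it is not the optimizer or execution engine described in the article.

```python
from collections import defaultdict

R = {(1, 2), (1, 3), (2, 3)}        # R(a, b)
S = {(2, 3), (3, 1), (3, 4)}        # S(b, c)
T = {(3, 1), (1, 2), (4, 1)}        # T(c, a)

# Simple hash indexes so each extension step is an intersection of candidate sets.
R_by_a, S_by_b, T_by_c = defaultdict(set), defaultdict(set), defaultdict(set)
for a, b in R: R_by_a[a].add(b)
for b, c in S: S_by_b[b].add(c)
for c, a in T: T_by_c[c].add(a)

triangles = []
for a in {a for a, _ in R} & {a for _, a in T}:                    # bind a
    for b in R_by_a[a] & {b for b, _ in S}:                        # bind b by intersection
        for c in S_by_b[b] & {c for c, _ in T if a in T_by_c[c]}:  # bind c
            triangles.append((a, b, c))
print(sorted(triangles))  # [(1, 2, 3), (1, 3, 4), (2, 3, 1)]
```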
Citations: 18
Scotty
Pub Date : 2021-03-27 DOI: 10.1145/3433675
Jonas Traub, P. M. Grulich, A. R. Cuellar, S. Breß, Asterios Katsifodimos, T. Rabl, V. Markl
Window aggregation is a core operation in data stream processing. Existing aggregation techniques focus on reducing latency, eliminating redundant computations, or minimizing memory usage. However, each technique operates under different assumptions with respect to workload characteristics, such as properties of aggregation functions (e.g., invertible, associative), window types (e.g., sliding, sessions), windowing measures (e.g., time- or count-based), and stream (dis)order. In this article, we present Scotty, an efficient and general open-source operator for sliding-window aggregation in stream processing systems, such as Apache Flink, Apache Beam, Apache Samza, Apache Kafka, Apache Spark, and Apache Storm. One can easily extend Scotty with user-defined aggregation functions and window types. Scotty implements the concept of general stream slicing and derives workload characteristics from aggregation queries to improve performance without sacrificing its general applicability. We provide an in-depth view on the algorithms of the general stream slicing approach. Our experiments show that Scotty outperforms alternative solutions.
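The core slicing idea can be illustrated for sliding sum windows: pre-aggregate events into slices of length slide, then assemble each window from size/slide consecutive slice aggregates, so every event is touched once rather than once per overlapping window. The sketch below assumes a single associative aggregate (sum), in-order events, and a window size that is a multiple of the slide; Scotty's operator covers far more window types, measures, and out-of-order streams.

```python
from collections import defaultdict

def sliding_sums(events, size, slide):
    """events: (timestamp, value) pairs; size must be a multiple of slide.
    Returns {(window_start, window_end): sum} built from per-slice aggregates."""
    slices = defaultdict(float)
    for ts, val in events:
        slices[ts // slide] += val                   # one partial aggregate per slice
    per_window = size // slide
    windows = {}
    for s in sorted(slices):
        first = s - per_window + 1
        if first < 0:
            continue                                 # skip windows starting before the stream
        end = (s + 1) * slide
        windows[(end - size, end)] = sum(slices.get(i, 0.0)
                                         for i in range(first, s + 1))
    return windows

events = [(0, 1.0), (3, 2.0), (5, 4.0), (9, 1.0), (12, 3.0)]
print(sliding_sums(events, size=10, slide=5))  # {(0, 10): 8.0, (5, 15): 8.0}
```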
Citations: 2