
Latest Publications from ACM Transactions on Database Systems

Reducing Layered Database Applications to their Essence through Vertical Integration
IF 1.8 Tier 2 (Computer Science) Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2015-10-23 DOI: 10.1145/2818180
K. Rietveld, H. Wijshoff
In the last decade, improvements in the single-core performance of CPUs have stagnated. Consequently, methods for the development and optimization of software for these platforms have to be reconsidered. Software must be optimized such that the available single-core performance is exploited more effectively. This can be achieved by reducing the number of instructions that need to be executed. In this article, we show that layered database applications execute many redundant, nonessential instructions that can be eliminated without affecting the course of execution and the output of the application. This elimination is performed using a vertical integration process that breaks down the different layers of layered database applications. By doing so, applications are reduced to their essence, and as a consequence, transformations that were not possible before can be carried out on both the application code and the data access code. We show that this vertical integration process can be fully automated and, as such, integrated into an operational workflow. Experimental evaluation of this approach shows that up to 95% of the instructions can be eliminated. The reduction of instructions leads to a more efficient use of the available hardware resources. This results in greatly improved performance of the application and a significant reduction in energy consumption.
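To make the idea concrete, here is a minimal, hypothetical sketch of the kind of redundancy vertical integration targets: a layered access path that materializes and re-filters tuples versus a single collapsed loop over the base data. The rows and function names are illustrative assumptions, not the paper's actual automated transformation.

```python
# Illustrative only: a toy "layered" access path vs. a vertically integrated
# one. The rows and helper names are hypothetical; the paper transforms real
# application and data-access code automatically.
rows = [{"dept": "sales", "amount": 120.0},
        {"dept": "hr", "amount": 80.0},
        {"dept": "sales", "amount": 45.5}]

def layered_total(dept):
    # Layered style: a generic access layer copies/materializes tuples,
    # then the application filters and aggregates them again.
    cursor = [dict(r) for r in rows]                     # access-layer copy
    selected = [r for r in cursor if r["dept"] == dept]  # application filter
    return sum(r["amount"] for r in selected)            # application aggregate

def integrated_total(dept):
    # Vertically integrated style: one pass over the base data with no
    # intermediate materialization, so far fewer instructions execute.
    return sum(r["amount"] for r in rows if r["dept"] == dept)

assert layered_total("sales") == integrated_total("sales") == 165.5
```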
Citations: 10
Optimal Location Queries in Road Networks
IF 1.8 Tier 2 (Computer Science) Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2015-10-23 DOI: 10.1145/2818179
Zitong Chen, Yubao Liu, R. C. Wong, Jiamin Xiong, Ganglin Mai, Cheng Long
In this article, we study an optimal location query based on a road network. Specifically, given a road network containing clients and servers, an optimal location query finds a location on the road network such that when a new server is set up at this location, a certain cost function computed based on the clients and servers (including the new server) is optimized. Two types of cost functions, namely, MinMax and MaxSum, have been used for this query. The optimal location query problem with MinMax as the cost function is called the MinMax query, which finds a location for setting up a new server such that the maximum cost of a client being served by his/her closest server is minimized. The optimal location query problem with MaxSum as the cost function is called the MaxSum query, which finds a location for setting up a new server such that the sum of the weights of clients attracted by the new server is maximized. The MinMax query and the MaxSum query correspond to two types of optimal location query with the objectives defined from the clients' perspective and from the new server's perspective, respectively. Unfortunately, the existing solutions for the optimal query problem are not efficient. In this article, we propose an efficient algorithm, namely, MinMax-Alg (MaxSum-Alg), for the MinMax (MaxSum) query, which is based on a novel idea of nearest location component. We also discuss two extensions of the optimal location query, namely, the optimal multiple-location query and the optimal location query on a 3D road network. Extensive experiments were conducted, showing that our algorithms are faster than the state of the art by at least an order of magnitude on large real benchmark datasets. For example, in our largest real datasets, the state of the art ran for more than 10 (12) hours while our algorithm ran within 3 (2) minutes only for the MinMax (MaxSum) query, that is, our algorithm ran at least 200 (600) times faster than the state of the art.
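As a rough illustration of the two cost functions, the sketch below evaluates MinMax and MaxSum by brute force on a toy road network, restricting candidate locations to vertices for simplicity. The graph, client weights, and server placement are invented, and this is not MinMax-Alg or MaxSum-Alg.

```python
# Brute-force sketch of the MinMax and MaxSum cost functions on a toy road
# network; candidates are restricted to vertices, data is made up.
import heapq

graph = {  # undirected road network: node -> {neighbor: edge length}
    "a": {"b": 2, "c": 5}, "b": {"a": 2, "c": 1, "d": 4},
    "c": {"a": 5, "b": 1, "d": 2}, "d": {"b": 4, "c": 2},
}
clients = {"a": 1.0, "d": 2.0}   # client node -> weight
servers = {"c"}                  # existing server nodes

def dijkstra(src):
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def minmax_cost(candidate):
    # maximum over clients of the distance to the closest server, candidate included
    cost = 0.0
    for c in clients:
        d = dijkstra(c)
        cost = max(cost, min(d[s] for s in servers | {candidate}))
    return cost

def maxsum_gain(candidate):
    # total weight of clients the candidate attracts (strictly closer than old servers)
    d_by_client = {c: dijkstra(c) for c in clients}
    return sum(w for c, w in clients.items()
               if d_by_client[c][candidate] < min(d_by_client[c][s] for s in servers))

best_minmax = min(graph, key=minmax_cost)  # location minimizing the MinMax cost
best_maxsum = max(graph, key=maxsum_gain)  # location maximizing attracted weight
print(best_minmax, best_maxsum)
```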
Citations: 16
Uncertain Graph Processing through Representative Instances
IF 1.8 Tier 2 (Computer Science) Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2015-10-23 DOI: 10.1145/2818182
Panos Parchas, Francesco Gullo, D. Papadias, F. Bonchi
Data in several applications can be represented as an uncertain graph whose edges are labeled with a probability of existence. Exact query processing on uncertain graphs is prohibitive for most applications, as it involves evaluation over an exponential number of instantiations. Thus, typical approaches employ Monte-Carlo sampling, which (i) draws a number of possible graphs (samples), (ii) evaluates the query on each of them, and (iii) aggregates the individual answers to generate the final result. However, this approach can also be extremely time consuming for large uncertain graphs commonly found in practice. To facilitate efficiency, we study the problem of extracting a single representative instance from an uncertain graph. Conventional processing techniques can then be applied on this representative to closely approximate the result on the original graph. In order to maintain data utility, the representative instance should preserve structural characteristics of the uncertain graph. We start with representatives that capture the expected vertex degrees, as this is a fundamental property of the graph topology. We then generalize the notion of vertex degree to the concept of n-clique cardinality, that is, the number of cliques of size n that contain a vertex. For the first problem, we propose two methods: Average Degree Rewiring (ADR), which is based on random edge rewiring, and Approximate B-Matching (ABM), which applies graph matching techniques. For the second problem, we develop a greedy approach and a game-theoretic framework. We experimentally demonstrate, with real uncertain graphs, that indeed the representative instances can be used to answer, efficiently and accurately, queries based on several metrics such as shortest path distance, clustering coefficient, and betweenness centrality.
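The Monte-Carlo baseline in steps (i)-(iii) can be sketched as follows for a reachability query on a tiny uncertain graph; the edge probabilities, query, and sample count are made up, and the representative-instance methods (ADR, ABM) are not modeled here.

```python
# Minimal sketch of the Monte-Carlo baseline: sample possible worlds, run the
# query on each, aggregate. Graph and query are invented.
import random

edges = {("a", "b"): 0.9, ("b", "c"): 0.5, ("a", "c"): 0.2}  # edge -> existence probability

def sample_world():
    return [e for e, p in edges.items() if random.random() < p]

def reachable(world, s, t):
    adj = {}
    for u, v in world:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    stack, seen = [s], {s}
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def mc_reachability(s, t, n_samples=10_000):
    hits = sum(reachable(sample_world(), s, t) for _ in range(n_samples))
    return hits / n_samples   # estimate of P(t reachable from s)

print(mc_reachability("a", "c"))   # true value is 0.2 + 0.45 - 0.09 = 0.56
```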
Citations: 31
Workload-Driven Antijoin Cardinality Estimation
IF 1.8 Tier 2 (Computer Science) Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2015-10-23 DOI: 10.1145/2818178
Florin Rusu, Zixuan Zhuang, Mingxi Wu, C. Jermaine
Antijoin cardinality estimation is among a handful of problems that have eluded accurate, efficient solutions amenable to implementation in relational query optimizers. Given the widespread use of antijoin and subset-based queries in analytical workloads and the extensive research targeted at join cardinality estimation (a seemingly related problem), the lack of adequate solutions for antijoin cardinality estimation is intriguing. In this article, we introduce a novel sampling-based estimator for antijoin cardinality that (unlike existing estimators) provides sufficient accuracy and efficiency to be implemented in a query optimizer. The proposed estimator incorporates three novel ideas. First, we use prior workload information when learning a mixture superpopulation model of the data offline. Second, we design a Bayesian statistics framework that updates the superpopulation model according to the live queries, thus allowing the estimator to adapt dynamically to the online workload. Third, we develop an efficient algorithm for sampling from a hypergeometric distribution in order to generate Monte Carlo trials, without explicitly instantiating either the population or the sample. When put together, these ideas form the basis of an efficient antijoin cardinality estimator satisfying the strict requirements of a query optimizer, as shown by the extensive experimental results over synthetically-generated as well as massive TPC-H data.
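A plain sampling-based estimator, shown below as a hedged sketch, conveys the basic idea of estimating antijoin cardinality from a sample of the outer table; the tables and sample size are invented, and the article's superpopulation model, Bayesian updating, and hypergeometric trial generation are not reproduced.

```python
# Naive sampling-based sketch of antijoin cardinality estimation: sample the
# outer table, probe the inner table, scale up. Tables and key are made up.
import random

R = [{"k": i % 50} for i in range(1000)]   # outer table
S = [{"k": i} for i in range(0, 50, 2)]    # inner table (even keys only)
s_keys = {t["k"] for t in S}

def estimate_antijoin(sample_size=100):
    sample = random.sample(R, sample_size)
    misses = sum(1 for t in sample if t["k"] not in s_keys)  # tuples with no match
    return misses / sample_size * len(R)                     # scale the sample ratio

exact = sum(1 for t in R if t["k"] not in s_keys)
print(estimate_antijoin(), "vs exact", exact)   # exact = 500 for this data
```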
Citations: 2
Boosting the Quality of Approximate String Matching by Synonyms
IF 1.8 Tier 2 (Computer Science) Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2015-10-23 DOI: 10.1145/2818177
Jiaheng Lu, Chunbin Lin, Wei Wang, Chen Li, Xiaokui Xiao
A string-similarity measure quantifies the similarity between two text strings for approximate string matching or comparison. For example, the strings “Sam” and “Samuel” can be considered to be similar. Most existing work that computes the similarity of two strings only considers syntactic similarities, for example, number of common words or q-grams. While this is indeed an indicator of similarity, there are many important cases where syntactically-different strings can represent the same real-world object. For example, “Bill” is a short form of “William,” and “Database Management Systems” can be abbreviated as “DBMS.” Given a collection of predefined synonyms, the purpose of this article is to explore such existing knowledge to effectively evaluate the similarity between two strings and efficiently perform similarity searches and joins, thereby boosting the quality of approximate string matching. In particular, we first present an expansion-based framework to measure string similarities efficiently while considering synonyms. We then study efficient algorithms for similarity searches and joins by proposing two novel indexes, called SI-trees and QP-trees, which combine signature-filtering and length-filtering strategies. In order to improve the efficiency of our algorithms, we develop an estimator to estimate the size of candidates to enable an online selection of signature filters. This estimator provides strong low-error, high-confidence guarantees while requiring only logarithmic space and time costs, thus making our method attractive both in theory and in practice. Finally, the experimental results from a comprehensive study of the algorithms with three real datasets verify the effectiveness and efficiency of our approaches.
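The expansion idea can be illustrated with a toy token-set similarity that consults a synonym table before computing Jaccard overlap; the synonym rules and example strings are assumptions, and the SI-tree/QP-tree indexes and the candidate-size estimator from the article are not modeled.

```python
# Toy sketch of expansion-based string similarity with a synonym table.
synonyms = {"bill": {"william"}, "sam": {"samuel"}}   # hypothetical synonym rules

def expand(s):
    tokens = set(s.lower().split())
    expanded = set(tokens)
    for t in tokens:
        expanded |= synonyms.get(t, set())   # add applicable synonyms
    return expanded

def jaccard_with_synonyms(a, b):
    ea, eb = expand(a), expand(b)
    return len(ea & eb) / len(ea | eb)

print(jaccard_with_synonyms("Bill Gates", "William Gates"))  # ~0.67 vs ~0.33 without expansion
```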
Citations: 9
Database system approach to management decision support
IF 1.8 Tier 2 (Computer Science) Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2015-09-09 DOI: 10.1145/320493.320500
J. Donovan
Traditional intuitive methods of decision-making are no longer adequate to deal with the complex problems faced by the modern policymaker. Thus systems must be developed to provide the information and analysis necessary for the decisions which must be made. These systems are called decision support systems. Although database systems provide a key ingredient to decision support systems, the problems now facing the policymaker are different from those problems to which database systems have been applied in the past. The problems are usually not known in advance, they are constantly changing, and answers are needed quickly. Hence additional technologies, methodologies, and approaches must expand the traditional areas of database and operating systems research (as well as other software and hardware research) in order for them to become truly effective in supporting policymakers. This paper describes recent work in this area and indicates where future work is needed. Specifically the paper discusses: (1) why there exists a vital need for decision support systems; (2) examples from work in the field of energy which make explicit the characteristics which distinguish these decision support systems from traditional operational and managerial systems; (3) how an awareness of decision support systems has evolved, including a brief review of work done by others and a statement of the computational needs of decision support systems which are consistent with contemporary technology; (4) an approach which has been made to meet many of these computational needs through the development and implementation of a computational facility, the Generalized Management Information System (GMIS); and (5) the application of this computational facility to a complex and important energy problem facing New England in a typical study within the New England Energy Management Information System (NEEMIS) Project.
Citations: 46
Closing the Gap: Sequence Mining at Scale
IF 1.8 Tier 2 (Computer Science) Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2015-06-30 DOI: 10.1145/2757217
Kaustubh Beedkar, K. Berberich, Rainer Gemulla, Iris Miliaraki
Frequent sequence mining is one of the fundamental building blocks in data mining. While the problem has been extensively studied, few of the available techniques are sufficiently scalable to handle datasets with billions of sequences; such large-scale datasets arise, for instance, in text mining and session analysis. In this article, we propose MG-FSM, a scalable algorithm for frequent sequence mining on MapReduce. MG-FSM can handle so-called “gap constraints”, which can be used to limit the output to a controlled set of frequent sequences. Both positional and temporal gap constraints, as well as appropriate maximality and closedness constraints, are supported. At its heart, MG-FSM partitions the input database in a way that allows us to mine each partition independently using any existing frequent sequence mining algorithm. We introduce the notion of ω-equivalency, which is a generalization of the notion of a “projected database” used by many frequent pattern mining algorithms. We also present a number of optimization techniques that minimize partition size, and therefore computational and communication costs, while still maintaining correctness. Our experimental study in the contexts of text mining and session analysis suggests that MG-FSM is significantly more efficient and scalable than alternative approaches.
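The containment test that underlies gap-constrained sequence mining can be sketched as below; the sequence database, pattern, and gap bound are invented, and MG-FSM's MapReduce partitioning and ω-equivalency are not shown.

```python
# Sketch of the positional gap-constrained containment test that defines the
# mining semantics; MG-FSM itself partitions and mines at MapReduce scale.

def contains_with_gap(sequence, pattern, max_gap):
    # True if `pattern` occurs in order with at most `max_gap` items skipped
    # between consecutive pattern items.
    def match(start, p_idx):
        if p_idx == len(pattern):
            return True
        limit = min(len(sequence), start + max_gap + 1)
        for i in range(start, limit):
            if sequence[i] == pattern[p_idx] and match(i + 1, p_idx + 1):
                return True
        return False
    return any(sequence[i] == pattern[0] and match(i + 1, 1)
               for i in range(len(sequence)))

db = [["a", "x", "b", "c"], ["a", "b"], ["a", "y", "y", "b"]]
support = sum(contains_with_gap(s, ["a", "b"], max_gap=1) for s in db)
print(support)   # 2: the third sequence needs a gap of 2 between a and b
```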
Citations: 11
Efficient Processing of Skyline-Join Queries over Multiple Data Sources
IF 1.8 Tier 2 (Computer Science) Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2015-06-30 DOI: 10.1145/2699483
M. Nagendra, K. Candan
Efficient processing of skyline queries has been an area of growing interest. Many of the earlier skyline techniques assumed that the skyline query is applied to a single data table. Naturally, these algorithms were not suitable for many applications in which the skyline query may involve attributes belonging to multiple data sources. In other words, if the data used in the skyline query are stored in multiple tables, then join operations would be required before the skyline can be searched. The task of computing skylines on multiple data sources has been coined as the skyline-join problem and various skyline-join algorithms have been proposed. However, the current proposals suffer several drawbacks: they often need to scan the input tables exhaustively in order to obtain the set of skyline-join results; moreover, the pruning techniques employed to eliminate the tuples are largely based on expensive pairwise tuple-to-tuple comparisons. In this article, we aim to address these shortcomings by proposing two novel skyline-join algorithms, namely skyline-sensitive join (S2J) and symmetric skyline-sensitive join (S3J), to process skyline queries over two data sources. Our approaches compute the results using a novel layer/region pruning technique (LR-pruning) that prunes the join space in blocks as opposed to individual data points, thereby avoiding excessive pairwise point-to-point dominance checks. Furthermore, the S3J algorithm utilizes an early stopping condition in order to successfully compute the skyline results by accessing only a subset of the input tables. In addition to S2J and S3J, we also propose the S2J-M and S3J-M algorithms. These algorithms extend S2J's and S3J's two-way skyline-join ability to efficiently process skyline-join queries over more than two data sources. S2J-M and S3J-M leverage the extended concept of LR-pruning, called M-way LR-pruning, to compute multi-way skyline-joins in which more than two data sources are integrated during skyline processing. We report extensive experimental results that confirm the advantages of the proposed algorithms over state-of-the-art skyline-join techniques.
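For orientation, the sketch below is the naive skyline-join baseline (join everything, then pairwise dominance checks) that LR-pruning is designed to avoid; the tables, join key, and smaller-is-better convention are assumptions.

```python
# Naive skyline-join baseline: full join followed by exhaustive dominance
# checks. S2J/S3J avoid exactly this work; data here is made up.

R = [("k1", 3, 7), ("k2", 5, 2)]          # (join key, attr r1, attr r2)
S = [("k1", 4), ("k2", 1), ("k1", 9)]     # (join key, attr s1)

joined = [(r[1], r[2], s[1]) for r in R for s in S if r[0] == s[0]]

def dominates(p, q):
    # p dominates q if it is no worse in every attribute and better in at least one
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

skyline = [p for p in joined if not any(dominates(q, p) for q in joined if q != p)]
print(skyline)   # joined tuples not dominated by any other joined tuple
```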
Citations: 9
Technical Correspondence: “Differential Dependencies: Reasoning and Discovery” Revisited
IF 1.8 Tier 2 (Computer Science) Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2015-06-30 DOI: 10.1145/2757214
M. Vincent, Jixue Liu, Hong-Cheu Liu, S. Link
To address the frequently occurring situation where data is inexact or imprecise, a number of extensions to the classical notion of a functional dependency (FD) integrity constraint have been proposed in recent years. One of these extensions is the notion of a differential dependency (DD), introduced in the recent article “Differential Dependencies: Reasoning and Discovery” by Song and Chen in the March 2011 edition of this journal. A DD generalises the notion of an FD by requiring only that the values of the attribute from the RHS of the DD satisfy a distance constraint whenever the values of attributes from the LHS of the DD satisfy a distance constraint. In contrast, an FD requires that the values from the attributes in the RHS of an FD be equal whenever the values of the attributes from the LHS of the FD are equal. The article “Differential Dependencies: Reasoning and Discovery” investigated a number of aspects of DDs, the most important of which, since they form the basis for the other topics investigated, were the consistency problem (determining whether there exists a relation instance that satisfies a set of DDs) and the implication problem (determining whether a set of DDs logically implies another DD). Concerning these problems, a number of results were claimed in “Differential Dependencies: Reasoning and Discovery”. In this article we conduct a detailed analysis of the correctness of these results. The outcomes of our analysis are that, for almost every claimed result, we show there are either fundamental errors in the proof or the result is false. For some of the claimed results we are able to provide corrected proofs, but for other results their correctness remains open.
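For readers unfamiliar with DDs, the sketch below checks whether a small relation instance satisfies one DD, simplified to upper-bound distance constraints; the attributes, bounds, and data are invented, and the correspondence's consistency and implication analysis is not reproduced.

```python
# Simplified satisfaction check for a differential dependency (DD): whenever a
# tuple pair meets the LHS distance constraints, it must also meet the RHS one.
# Real DDs allow general distance intervals; this sketch uses upper bounds only.
from itertools import combinations

relation = [
    {"age": 30, "salary": 3000},
    {"age": 31, "salary": 3100},
    {"age": 50, "salary": 9000},
]

# Hypothetical DD: age difference <= 2  ==>  salary difference <= 500
lhs = [("age", 2)]
rhs = ("salary", 500)

def satisfies_dd(rel, lhs, rhs):
    attr, bound = rhs
    for t1, t2 in combinations(rel, 2):
        lhs_ok = all(abs(t1[a] - t2[a]) <= b for a, b in lhs)
        if lhs_ok and abs(t1[attr] - t2[attr]) > bound:
            return False
    return True

print(satisfies_dd(relation, lhs, rhs))   # True for this instance
```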
Citations: 4
Efficient Processing of Spatial Group Keyword Queries
IF 1.8 Tier 2 (Computer Science) Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2015-06-30 DOI: 10.1145/2772600
Xin Cao, G. Cong, Tao Guo, Christian S. Jensen, B. Ooi
With the proliferation of geo-positioning and geo-tagging techniques, spatio-textual objects that possess both a geographical location and a textual description are gaining in prevalence, and spatial keyword queries that exploit both location and textual description are gaining in prominence. However, the queries studied so far generally focus on finding individual objects that each satisfy a query rather than finding groups of objects where the objects in a group together satisfy a query. We define the problem of retrieving a group of spatio-textual objects such that the group's keywords cover the query's keywords and such that the objects are nearest to the query location and have the smallest inter-object distances. Specifically, we study three instantiations of this problem, all of which are NP-hard. We devise exact solutions as well as approximate solutions with provable approximation bounds to the problems. In addition, we solve the problems of retrieving top-k groups of three instantiations, and study a weighted version of the problem that incorporates object weights. We present empirical studies that offer insight into the efficiency of the solutions, as well as the accuracy of the approximate solutions.
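A greedy baseline conveys the flavor of the group keyword query: repeatedly pick the object nearest the query location that covers an uncovered keyword. The objects and query below are invented, and the article's exact algorithms, approximation bounds, and inter-object distance term are not modeled.

```python
# Greedy baseline sketch for spatial group keyword queries: cover the query
# keywords with objects close to the query location. Data is made up.
import math

objects = [
    {"loc": (1.0, 1.0), "kw": {"coffee"}},
    {"loc": (1.2, 0.8), "kw": {"wifi"}},
    {"loc": (5.0, 5.0), "kw": {"coffee", "wifi", "parking"}},
]
query_loc, query_kw = (0.0, 0.0), {"coffee", "wifi"}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_group(objects, query_loc, query_kw):
    uncovered, group = set(query_kw), []
    while uncovered:
        candidates = [o for o in objects if o["kw"] & uncovered and o not in group]
        if not candidates:
            return None                      # keywords cannot be covered
        best = min(candidates, key=lambda o: dist(o["loc"], query_loc))
        group.append(best)
        uncovered -= best["kw"]
    return group

print(greedy_group(objects, query_loc, query_kw))  # picks the two nearby objects
```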
Citations: 66