
Latest publications in Proceedings of the VLDB Endowment

MINT: Detecting Fraudulent Behaviors from Time-Series Relational Data
CAS Zone 3 (Computer Science) · Q1 Computer Science · Pub Date: 2023-08-01 · DOI: 10.14778/3611540.3611551
Fei Xiao, Yuncheng Wu, Meihui Zhang, Gang Chen, Beng Chin Ooi
E-commerce platforms such as Shopee have accumulated a huge volume of time-series relational data, which contains useful information for differentiating fraudulent users from benign ones. Existing fraud behavior detection approaches typically model the time-series data with a vanilla Recurrent Neural Network (RNN) or collapse the whole sequence into a single intention, without considering temporal behavioral patterns, row-level interactions, and different view intentions. In this paper, we present MINT, a Multiview row-INteractive Time-aware framework to detect fraudulent behaviors from time-series structured data. The key idea of MINT is to build a time-aware behavior graph for each user's time-series relational data, with each row represented as an action node. We utilize the user's temporal information to construct three different graph convolutional matrices for hierarchically learning the user's intentions from different views, that is, short-term, medium-term, and long-term intentions. To capture more meaningful row-level interactions and alleviate the over-smoothing issue in a vanilla time-aware behavior graph, we propose a novel gated neighbor interaction mechanism to calibrate the information aggregated by each action node. Since the receptive fields of the three graph convolutional layers are designed to grow nearly exponentially, MINT requires far fewer layers than traditional deep graph neural networks (GNNs) to capture multi-hop neighboring information and avoids recurrent feedforward propagation, leading to higher training efficiency and scalability. Our extensive experiments on large-scale e-commerce datasets from Shopee with up to 4.6 billion records and a public dataset from Amazon show that MINT outperforms 10 state-of-the-art models and provides better interpretability and scalability.
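The multi-view construction described above can be pictured with a small, self-contained sketch: from one user's action timestamps, build a separate row-normalized adjacency matrix per time horizon, so that each view connects only actions falling within its window. This is only an illustration of the general idea under assumed window sizes; it is not the authors' implementation, which additionally uses learned graph convolutions and the gated neighbor interaction mechanism.

```python
import numpy as np

def view_adjacency(timestamps, window_seconds):
    """Connect action nodes whose time gap falls within the given window."""
    t = np.asarray(timestamps, dtype=float)
    gap = np.abs(t[:, None] - t[None, :])          # pairwise time gaps
    adj = (gap <= window_seconds).astype(float)    # edge if within the window
    np.fill_diagonal(adj, 0.0)                     # drop self-loops
    deg = adj.sum(axis=1, keepdims=True)           # row-normalize the matrix
    return np.divide(adj, deg, out=np.zeros_like(adj), where=deg > 0)

# One user's action timestamps (in seconds); three views at different horizons
# (the window sizes here are arbitrary choices for illustration).
actions = [0, 30, 3_600, 7_200, 86_400 * 3]
short_view  = view_adjacency(actions, 60)            # short-term: 1 minute
medium_view = view_adjacency(actions, 6 * 3_600)     # medium-term: 6 hours
long_view   = view_adjacency(actions, 7 * 86_400)    # long-term: 1 week
print(short_view.shape, medium_view.sum(), long_view.sum())
```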
Citations: 0
DuckPGQ: Bringing SQL/PGQ to DuckDB
CAS Zone 3 (Computer Science) · Q1 Computer Science · Pub Date: 2023-08-01 · DOI: 10.14778/3611540.3611614
Daniel ten Wolde, Gábor Szárnyas, Peter Boncz
We demonstrate the most important new feature of SQL:2023, namely SQL/PGQ, which eases querying graphs using SQL by introducing new syntax for pattern matching and (shortest) path-finding. We show how support for SQL/PGQ can be integrated into an RDBMS, specifically in the DuckDB system, using an extension module called DuckPGQ. As such, we also demonstrate the use of the DuckDB extensibility mechanism, which allows us to add new functions, data types, operators, optimizer rules, storage systems, and even parsers to DuckDB. We also describe the new data structures and algorithms that the DuckPGQ module is based on, and how they are injected into SQL plans. While the demonstrated DuckPGQ extension module is lean and efficient, we sketch a roadmap to (i) improve its performance through new algorithms (factorized and WCOJ) and better parallelism and (ii) extend its functionality to scenarios beyond SQL, e.g., building and analyzing Graph Neural Networks.
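As a flavor of the feature being demonstrated, the sketch below installs the community DuckPGQ extension from Python and runs a SQL/PGQ-style MATCH query over two toy tables. The table, graph, and column names are made up, and the exact DDL accepted by a given DuckPGQ release may differ slightly from the SQL:2023 shape shown here, so treat this as an approximation of the interface rather than its definitive syntax.

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL duckpgq FROM community")   # community extension; needs network access
con.execute("LOAD duckpgq")

con.execute("CREATE TABLE Person (id BIGINT, name VARCHAR)")
con.execute("CREATE TABLE Knows (src BIGINT, dst BIGINT)")
con.execute("INSERT INTO Person VALUES (1, 'alice'), (2, 'bob')")
con.execute("INSERT INTO Knows VALUES (1, 2)")

# Declare a property graph over the relational tables (SQL/PGQ DDL).
con.execute("""
    CREATE PROPERTY GRAPH social
    VERTEX TABLES (Person LABEL Person)
    EDGE TABLES (
        Knows SOURCE KEY (src) REFERENCES Person (id)
              DESTINATION KEY (dst) REFERENCES Person (id)
              LABEL Knows
    )
""")

# Pattern matching with the new GRAPH_TABLE / MATCH syntax.
rows = con.execute("""
    SELECT *
    FROM GRAPH_TABLE (social
        MATCH (a:Person)-[k:Knows]->(b:Person)
        COLUMNS (a.name AS follower, b.name AS followee)
    ) gt
""").fetchall()
print(rows)
```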
Citations: 0
The Story of AWS Glue
CAS Zone 3 (Computer Science) · Q1 Computer Science · Pub Date: 2023-08-01 · DOI: 10.14778/3611540.3611547
Mohit Saxena, Benjamin Sowell, Daiyan Alamgir, Nitin Bahadur, Bijay Bisht, Santosh Chandrachood, Chitti Keswani, G. Krishnamoorthy, Austin Lee, Bohou Li, Zach Mitchell, Vaibhav Porwal, Maheedhar Reddy Chappidi, Brian Ross, Noritaka Sekiyama, Omer Zaki, Linchi Zhang, Mehul A. Shah
AWS Glue is Amazon's serverless data integration cloud service that makes it simple and cost-effective to extract, clean, enrich, load, and organize data. Originally launched in August 2017, AWS Glue began as an extract-transform-load (ETL) service designed to relieve developers and data engineers of the undifferentiated heavy lifting needed to load databases and data warehouses and to build data lakes on Amazon S3. Since then, it has evolved to serve a larger audience, including ETL specialists and data scientists, and now includes a broader suite of data integration capabilities. Today, hundreds of thousands of customers use AWS Glue every month. In this paper, we describe the use cases and challenges cloud customers face in preparing data for analytics and the tenets we chose to drive Glue's design. We chose early on to focus on ease of use, scale, and extensibility. At its core, Glue offers serverless Apache Spark and Python engines backed by a purpose-built resource manager for fast startup and auto-scaling. In Spark, it offers a new data structure --- DynamicFrames --- for manipulating messy, schema-free, semi-structured data such as event logs, a variety of transformations and tooling to simplify data preparation, and a new shuffle plugin that offloads shuffle data to cloud storage. It also includes a Hive-Metastore-compatible Data Catalog with Glue crawlers to build and manage metadata, e.g., for data lakes on Amazon S3. Finally, Glue Studio is its visual interface for authoring Spark- and Python-based ETL jobs. We describe the innovations that differentiate AWS Glue and drive its popularity, and how it has evolved over the years.
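For readers unfamiliar with the service, a minimal Glue ETL job built around DynamicFrames looks roughly like the sketch below. The catalog database, table name, and S3 path are placeholders, and details vary by Glue version; this is only the common job skeleton, not an excerpt from the paper.

```python
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalog table (e.g. populated by a Glue crawler) as a DynamicFrame,
# which tolerates messy, schema-free records better than a plain DataFrame.
events = glue_context.create_dynamic_frame.from_catalog(
    database="analytics_db",      # placeholder catalog database
    table_name="raw_events",      # placeholder table
)

# Clean and reshape the records: (source, source type, target, target type).
mapped = ApplyMapping.apply(
    frame=events,
    mappings=[("user_id", "string", "user_id", "string"),
              ("ts", "string", "event_time", "timestamp")],
)

# Write the result to a data-lake location on S3 in Parquet format.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/curated/events/"},
    format="parquet",
)
job.commit()
```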
Citations: 0
Sniffer: A Novel Model Type Detection System against Machine-Learning-as-a-Service Platforms
CAS Zone 3 (Computer Science) · Q1 Computer Science · Pub Date: 2023-08-01 · DOI: 10.14778/3611540.3611591
Zhuo Ma, Yilong Yang, Bin Xiao, Yang Liu, Xinjing Liu, Zhuoran Ma, Tong Yang
Recent works explore several attacks against Machine-Learning-as-a-Service (MLaaS) platforms (e.g., the model stealing attack), allegedly posing real-world threats beyond viability in laboratories. However, hampered by model-type sensitivity, most of these attacks can hardly break mainstream real-world MLaaS platforms. That is, many MLaaS attacks are designed against only one particular type of model, such as tree models or neural networks. Because the black-box MLaaS interface hides model type information, the attacker cannot choose a proper attack method with confidence, which limits attack performance. In this paper, we demonstrate a system, named Sniffer, that is capable of making model-type-sensitive attacks "great again" in real-world applications. Specifically, Sniffer consists of four components: Generator, Querier, Probe, and Arsenal. The first two components prepare attack samples. Probe, the most characteristic component in Sniffer, implements a series of custom algorithms to determine the type of model hidden behind a black-box MLaaS interface. Once the model type is unraveled, an optimal method can be selected from Arsenal (which contains multiple attack methods) to carry out the attack. Our demonstration shows how the audience can interact with Sniffer through a web-based interface against five mainstream MLaaS platforms.
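The general idea of black-box model-type probing can be caricatured as follows: send slightly perturbed inputs to the prediction endpoint and inspect how the returned scores respond. The heuristic, thresholds, and `predict` callable below are hypothetical illustrations of that idea, not the custom algorithms in Sniffer's Probe component.

```python
import numpy as np

def probe_model_type(predict, x, eps=1e-3, trials=32):
    """Crude black-box heuristic: tree ensembles tend to give piecewise-constant
    scores under tiny perturbations, while neural networks respond smoothly.
    `predict` maps a feature vector to a probability vector (hypothetical API)."""
    rng = np.random.default_rng(0)
    base = np.asarray(predict(x))
    deltas = []
    for _ in range(trials):
        x_pert = x + eps * rng.standard_normal(x.shape)
        deltas.append(np.abs(np.asarray(predict(x_pert)) - base).max())
    changed = float(np.mean(np.asarray(deltas) > 0))
    # Mostly unchanged outputs under tiny noise -> likely a tree-based model.
    return "tree-like" if changed < 0.2 else "neural-network-like"

# Hypothetical usage against an MLaaS client wrapper:
# guess = probe_model_type(lambda v: mlaas_client.predict(v), sample_vector)
```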
Citations: 0
DHive: Query Execution Performance Analysis via Dataflow in Apache Hive
CAS Zone 3 (Computer Science) · Q1 Computer Science · Pub Date: 2023-08-01 · DOI: 10.14778/3611540.3611605
Chaozu Zhang, Qiaomu Shen, Bo Tang
Nowadays, Apache Hive is widely used for large-scale data analysis in many organizations. Various visual analytics tools have been developed to help Hive users quickly analyze the query execution process and identify the performance bottlenecks of executed queries. However, existing tools mostly focus on showing the time usage of query sub-components (jobs and operators) and fail to provide enough evidence to analyze the root causes of slow execution. To tackle this problem, we develop a visual analytics system, DHive, to visualize and analyze query execution progress via dataflow analysis. DHive shows the dataflow during query execution at multiple levels --- query level, job level, and task level --- which enables users to identify the key jobs/tasks and explain their time usage by linking them to auxiliary information such as system configuration and hardware status. We demonstrate the effectiveness of DHive through two cases in a production cluster. DHive is open-source at https://github.com/DBGroup-SUSTech/DHive.git.
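The query/job/task hierarchy the system visualizes can be illustrated with a toy roll-up of task runtimes into job-level and query-level totals; the record fields below are hypothetical and far simpler than what DHive actually extracts from Hive.

```python
from collections import defaultdict

# Hypothetical task-level records extracted from one Hive query execution.
tasks = [
    {"query": "q1", "job": "Map 1",     "task": "t0", "seconds": 12.4},
    {"query": "q1", "job": "Map 1",     "task": "t1", "seconds": 48.9},
    {"query": "q1", "job": "Reducer 2", "task": "t0", "seconds": 7.1},
]

job_time, query_time = defaultdict(float), defaultdict(float)
for t in tasks:
    job_time[(t["query"], t["job"])] += t["seconds"]   # job-level roll-up
    query_time[t["query"]] += t["seconds"]             # query-level roll-up

# The slowest job is a natural starting point for bottleneck analysis.
slowest_job = max(job_time, key=job_time.get)
print(dict(query_time), slowest_job)
```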
Citations: 0
Anser: Adaptive Information Sharing Framework of AnalyticDB
CAS Zone 3 (Computer Science) · Q1 Computer Science · Pub Date: 2023-08-01 · DOI: 10.14778/3611540.3611553
Liang Lin, Yuhan Li, Bin Wu, Huijun Mai, Renjie Lou, Jian Tan, Feifei Li
The surge in data analytics has fostered burgeoning demand for AnalyticDB on Alibaba Cloud, which has served thousands of customers from various business sectors. Its most notable feature is the diversity of the workloads it handles, including batch processing, real-time data analytics, and unstructured data analytics. To improve overall performance across such diverse workloads, one of the major challenges is to optimize long-running complex queries without sacrificing the processing efficiency of short-running interactive queries. While existing methods attempt to utilize runtime dynamic statistics for adaptive query processing, they often focus on specific scenarios instead of providing a holistic solution. To address this challenge, we propose a new framework called Anser, which enhances the design of traditional distributed data warehouses by embedding a new information sharing mechanism. This allows the production and consumption of various dynamic information to be managed efficiently across the system. Building on top of Anser, we introduce a novel scheduling policy that optimizes both data and information exchanges within the physical plan, accelerating complex analytical queries without sacrificing the performance of short-running interactive queries. We conduct comprehensive experiments over public and in-house workloads to demonstrate the effectiveness and efficiency of our proposed information sharing framework.
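One concrete instance of sharing dynamic information across a plan is letting a join's build side publish the keys it actually observed so that a downstream scan can prune early. The toy "information bus" below illustrates only that general produce/consume pattern; it is not Anser's actual mechanism or scheduling policy.

```python
# A toy "information bus": one operator publishes runtime information, another
# consumes it to adapt its work (an assumed, simplified illustration).
class InfoBus:
    def __init__(self):
        self._entries = {}

    def publish(self, key, value):
        self._entries[key] = value

    def consume(self, key, default=None):
        return self._entries.get(key, default)

bus = InfoBus()

# Build side of a join publishes the distinct keys it actually saw.
build_rows = [{"id": 3}, {"id": 7}, {"id": 7}]
bus.publish("join1.build_keys", {r["id"] for r in build_rows})

# The probe-side scan consumes that information to skip non-matching rows early.
probe_rows = [{"id": i, "payload": i * 10} for i in range(10)]
keys = bus.consume("join1.build_keys")
survivors = [r for r in probe_rows if keys is None or r["id"] in keys]
print(len(probe_rows), "->", len(survivors))
```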
Citations: 0
ScalarDB: Universal Transaction Manager for Polystores
CAS Zone 3 (Computer Science) · Q1 Computer Science · Pub Date: 2023-08-01 · DOI: 10.14778/3611540.3611563
Hiroyuki Yamada, Toshihiro Suzuki, Yuji Ito, Jun Nemoto
This paper presents ScalarDB, a universal transaction manager that achieves distributed transactions across multiple disparate databases. ScalarDB provides a database-agnostic transaction manager on top of its database abstraction; thus, it achieves transactions spanning various databases without depending on the transactional capability of underlying databases. ScalarDB is based on several research works and extended to provide a strong correctness guarantee (i.e., strict serializability), further performance optimizations, and several critical mechanisms for productization. In this paper, we describe the design and implementation of ScalarDB. We also present evaluation results showing that ScalarDB achieves database-spanning transactions with reasonable performance and near-linear scalability without sacrificing correctness. Finally, we share some case studies and lessons learned while building and running ScalarDB.
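The shape of a database-agnostic transaction layer can be sketched as below: adapters hide each store behind a common staging interface, and a coordinator applies one logical transaction across all of them. ScalarDB's real API is in Java and its commit protocol is far more involved (with recovery and strict serializability); the Python names here are purely hypothetical and only illustrate the abstraction.

```python
# Hypothetical illustration of a database-agnostic transaction abstraction.
class KeyValueAdapter:
    """Minimal adapter interface an underlying database would implement."""
    def __init__(self):
        self.committed, self.staged = {}, {}

    def stage(self, key, value):        # buffer the write (prepare phase)
        self.staged[key] = value

    def commit(self):                   # make staged writes visible
        self.committed.update(self.staged)
        self.staged.clear()

    def rollback(self):
        self.staged.clear()


class CrossStoreTransaction:
    """Coordinates one logical transaction across several adapters.
    A real system also needs a recoverable commit protocol; omitted here."""
    def __init__(self, *stores):
        self.stores = stores

    def put(self, store_idx, key, value):
        self.stores[store_idx].stage(key, value)

    def commit(self):
        try:
            for s in self.stores:
                s.commit()
        except Exception:
            for s in self.stores:
                s.rollback()
            raise


orders_db, payments_db = KeyValueAdapter(), KeyValueAdapter()
tx = CrossStoreTransaction(orders_db, payments_db)
tx.put(0, "order:1", "created")
tx.put(1, "payment:1", "charged")
tx.commit()
print(orders_db.committed, payments_db.committed)
```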
Citations: 1
Lindorm TSDB: A Cloud-Native Time-Series Database for Large-Scale Monitoring Systems
CAS Zone 3 (Computer Science) · Q1 Computer Science · Pub Date: 2023-08-01 · DOI: 10.14778/3611540.3611559
Chunhui Shen, Qianyu Ouyang, Feibo Li, Zhipeng Liu, Longcheng Zhu, Yujie Zou, Qing Su, Tianhuan Yu, Yi Yi, Jianhong Hu, Cen Zheng, Bo Wen, Hanbang Zheng, Lunfan Xu, Sicheng Pan, Bin Wu, Xiao He, Ye Li, Jian Tan, Sheng Wang, Dan Pei, Wei Zhang, Feifei Li
Internet services supported by large-scale distributed systems have become essential to our daily life. To ensure the stability and high quality of these services, diverse metric data are constantly collected and managed in a time-series database to monitor service status. However, when the number of metrics becomes massive, existing time-series databases are inefficient at handling high-rate data ingestion and queries that hit multiple metrics. Moreover, they lack support for machine learning functions, which are crucial for sophisticated analysis of large-scale time series. In this paper, we present Lindorm TSDB, a distributed time-series database designed for handling monitoring metrics at scale. It sustains high write throughput and low query latency with massive numbers of active metrics. It also allows users to analyze data with anomaly detection and time-series forecasting algorithms directly through SQL. Furthermore, Lindorm TSDB retains stable performance even during node scaling. We evaluate Lindorm TSDB under different data scales, and the results show that it outperforms two popular open-source time-series databases on both writes and queries while executing time-series machine learning tasks efficiently.
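To make the "machine learning directly through SQL" claim concrete at the algorithmic level, the standalone sketch below shows the kind of per-series check a built-in anomaly-detection function performs over one monitoring metric. It is a generic z-score illustration, not Lindorm TSDB's implementation or SQL syntax.

```python
import numpy as np

def zscore_anomalies(values, threshold=2.5):
    """Flag points that are far from the series mean, the kind of per-series
    test a built-in anomaly-detection function would run over each metric."""
    v = np.asarray(values, dtype=float)
    mu, sigma = v.mean(), v.std()
    if sigma == 0:
        return np.zeros(len(v), dtype=bool)
    return np.abs(v - mu) / sigma > threshold

cpu_usage = [21, 23, 22, 24, 22, 95, 23, 22]   # one monitoring metric series
print(zscore_anomalies(cpu_usage))             # only the spike is flagged
```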
Citations: 0
PolarDB-SCC: A Cloud-Native Database Ensuring Low Latency for Strongly Consistent Reads
CAS Zone 3 (Computer Science) · Q1 Computer Science · Pub Date: 2023-08-01 · DOI: 10.14778/3611540.3611562
Xinjun Yang, Yingqiang Zhang, Hao Chen, Chuan Sun, Feifei Li, Wenchao Zhou
A classic design of cloud-native databases adopts an architecture that consists of one read/write (RW) node and one or more read-only (RO) nodes. In such a design, write-ahead logs (WALs) are typically propagated from the RW node to the RO node(s) asynchronously. Consequently, system designers either have to accept a loose consistency guarantee, where a read from an RO node may return stale data, or tolerate significant degradation in read latency, since a read must then wait for the log to be propagated and applied. Most commercial cloud-native databases, such as Amazon Aurora, choose performance over strong consistency. As a result, RO nodes become useless for many applications that require read-after-write consistency (a form of strong consistency), and support for serverless databases (i.e., allowing RO nodes to be scaled out automatically) is impossible, as they require a single endpoint. This paper proposes PolarDB-SCC (PolarDB Strongly Consistent Cluster), a cloud-native database architecture that guarantees strongly consistent reads with very low latency. The core idea is to eliminate unnecessary waits and reduce the necessary wait time on RO nodes while still supporting strong consistency. To achieve this, it tracks the RW node's modification timestamp at three progressively finer-grained levels. We further design a Linear Lamport timestamp to reduce the RO node's timestamp-fetching operations and leverage the RDMA network for all data transfers (e.g., timestamp fetching and log shipment) to minimize network overhead and extra CPU usage. Our evaluation shows that PolarDB-SCC does not incur any noticeable overhead for ensuring strongly consistent reads compared with the eventually consistent (stale) read policy. To the best of our knowledge, PolarDB-SCC is the first "read-write splitting" cloud-native database that supports strongly consistent reads with negligible overhead. Compared with a straightforward read-wait design, PolarDB-SCC improves throughput by up to 4.51× and reduces median latency by up to 3.66× in SysBench's read-write workload. PolarDB-SCC is already commercially available on Alibaba Cloud.
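The read-side decision described above, waiting only as long as strictly needed, can be sketched as comparing the RO node's applied timestamp with the RW node's last-modification timestamp at progressively finer granularity (global, per table, per page). The code below is an illustration of that hierarchy under assumed data structures, not PolarDB-SCC's implementation.

```python
import time

class ModificationTracker:
    """RW-side modification timestamps kept at three granularities
    (an assumed structure for illustration: global, per table, per page)."""
    def __init__(self):
        self.global_ts = 0
        self.table_ts = {}     # table -> last modification timestamp
        self.page_ts = {}      # (table, page) -> last modification timestamp

    def on_write(self, table, page, ts):
        self.global_ts = max(self.global_ts, ts)
        self.table_ts[table] = max(self.table_ts.get(table, 0), ts)
        self.page_ts[(table, page)] = max(self.page_ts.get((table, page), 0), ts)


def strongly_consistent_read(tracker, applied_ts, table, page, wait_fn):
    """Serve the read locally if the RO replica is fresh enough at ANY level;
    otherwise wait for the log to be applied (or proxy to the RW node)."""
    if tracker.global_ts <= applied_ts:                      # coarsest check
        return "read locally"
    if tracker.table_ts.get(table, 0) <= applied_ts:         # finer check
        return "read locally"
    if tracker.page_ts.get((table, page), 0) <= applied_ts:  # finest check
        return "read locally"
    wait_fn()                                                # only the necessary wait
    return "read after wait"


tracker = ModificationTracker()
tracker.on_write("orders", page=17, ts=105)
wait = lambda: time.sleep(0.01)
print(strongly_consistent_read(tracker, 100, "orders", 17, wait))  # must wait
print(strongly_consistent_read(tracker, 100, "users", 3, wait))    # fresh at table level
```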
Citations: 1
AQUA: Automatic Collaborative Query Processing in Analytical Database
CAS Zone 3 (Computer Science) · Q1 Computer Science · Pub Date: 2023-08-01 · DOI: 10.14778/3611540.3611607
Yuchen Peng, Ke Chen, Lidan Shou, Dawei Jiang, Gang Chen
Data analysts nowadays are keen to have analytical capabilities involving deep learning (DL). Collaborative queries, which employ relational operations to process structured data and DL models to process unstructured data, provide a powerful facility for DL-based in-database analysis. The classical approach to supporting collaborative queries in relational databases is to integrate DL models with user-defined functions (UDFs) written in a general-purpose language (e.g., C++) to process unstructured data. This approach suffers from suboptimal performance, as the opaque UDFs preclude the generation of an optimal query plan. A recent work, DL2SQL, addresses collaborative query optimization by first converting DL computations into SQL subqueries and then using a classical relational query optimizer to optimize the entire collaborative query. However, the DL2SQL approach compromises usability by requiring data analysts to manually manage DL-related data and tune query performance. To this end, this paper introduces AQUA, an analytical database designed for efficient collaborative query processing. Built on DL2SQL, AQUA automates the translation from collaborative queries into SQL queries. To enhance usability, AQUA introduces two techniques: 1) a declarative scheme for DL-related data management, and 2) DL-specific optimizations for collaborative query processing, relieving data analysts of the burden of manual data management and performance tuning. We demonstrate the key contributions of AQUA via a web app that allows the audience to perform collaborative queries on the CIFAR-10 dataset.
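The "DL model as an opaque UDF" baseline that the paper improves upon can be made concrete with a small example: register a Python function standing in for a model as a SQL scalar UDF and call it inside a relational query, at which point the optimizer can no longer reason about it. The sqlite3-based sketch below illustrates only that baseline, not AQUA's query translation.

```python
import sqlite3

# A stand-in for a DL model: any Python callable scoring unstructured content.
# Wrapped as a UDF, it is a black box to the SQL query optimizer.
def toxicity_score(text):
    return 1.0 if "refund scam" in text else 0.0   # placeholder "model"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reviews (id INTEGER, body TEXT)")
conn.executemany("INSERT INTO reviews VALUES (?, ?)",
                 [(1, "great product"), (2, "this is a refund scam")])

# Register the model as an opaque scalar UDF and use it in a relational query.
conn.create_function("toxicity_score", 1, toxicity_score)
rows = conn.execute(
    "SELECT id FROM reviews WHERE toxicity_score(body) > 0.5").fetchall()
print(rows)   # -> [(2,)]
```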
Citations: 0