
Proc. VLDB Endow.: Latest Publications

The FastLanes Compression Layout: Decoding >100 Billion Integers per Second with Scalar Code
Pub Date: 2023-05-01 | DOI: 10.14778/3598581.3598587
Azim Afroozeh, P. Boncz
The open-source FastLanes project aims to improve big data formats, such as Parquet, ORC and columnar database formats, in multiple ways. In this paper, we significantly accelerate decoding of all common Light-Weight Compression (LWC) schemes: DICT, FOR, DELTA and RLE through better data-parallelism. We do so by re-designing the compression layout using two main ideas: (i) generalizing the value interleaving technique in the basic operation of bit-(un)packing by targeting a virtual 1024-bit SIMD register, (ii) reordering the tuples in all columns of a table in the same Unified Transposed Layout that puts tuple chunks in a common "04261537" order (explained in the paper); allowing for maximum independent work for all possible basic SIMD lane widths: 8, 16, 32, and 64 bits. We address the software development, maintenance and future-proofing challenges of increasing hardware diversity by defining a virtual 1024-bit instruction set that consists of simple operators supported by all SIMD dialects and, importantly, by scalar code. The interleaved and tuple-reordered layout actually makes scalar decoding faster, extracting more data-parallelism from today's wide-issue CPUs. Importantly, the scalar version can be fully auto-vectorized by modern compilers, eliminating technical debt in software caused by platform-specific SIMD intrinsics. Micro-benchmarks on Intel, AMD, Apple and AWS CPUs show that FastLanes accelerates decoding by large factors (decoding >40 values per CPU cycle). FastLanes can make queries faster, as compressing the data reduces bandwidth needs, while decoding is almost free.
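Two of the layout ideas lend themselves to a short sketch. The code below is my own illustration, not the paper's exact layout: `bit_reversal_order` shows that the "04261537" chunk order coincides with the 3-bit bit-reversal permutation, and `bit_pack`/`bit_unpack` show scalar bit-(un)packing as a data-independent loop of the kind modern compilers can auto-vectorize.

```python
# Illustration only (my assumptions, not the paper's exact layout).

def bit_reversal_order(n_bits: int) -> list:
    """Bit-reversal permutation of 0 .. 2**n_bits - 1."""
    return [int(format(i, f"0{n_bits}b")[::-1], 2) for i in range(1 << n_bits)]

def bit_pack(values: list, width: int) -> int:
    """Pack each value into `width` bits of one big integer (LSB first)."""
    packed = 0
    for i, v in enumerate(values):
        packed |= (v & ((1 << width) - 1)) << (i * width)
    return packed

def bit_unpack(packed: int, width: int, count: int) -> list:
    """Scalar unpack: a data-independent loop a compiler can auto-vectorize."""
    mask = (1 << width) - 1
    return [(packed >> (i * width)) & mask for i in range(count)]

print("".join(map(str, bit_reversal_order(3))))   # 04261537
vals = [3, 1, 4, 1, 5, 9, 2, 6]
assert bit_unpack(bit_pack(vals, 4), 4, len(vals)) == vals
```

The bit-reversal structure is what gives every power-of-two lane width the same amount of independent work.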
Proc. VLDB Endow., pages 2132-2144
Citations: 2
TiQuE: Improving the Transactional Performance of Analytical Systems for True Hybrid Workloads
Pub Date: 2023-05-01 | DOI: 10.14778/3598581.3598598
Nuno Faria, J. Pereira, A. Alonso, R. Vilaça, Yunus Koning, N. Nes
Transactions have been a key issue in database management for a long time and there are a plethora of architectures and algorithms to support and implement them. The current state-of-the-art is focused on storage management and is tightly coupled with its design, leading, for instance, to the need for completely new engines to support new features such as Hybrid Transactional Analytical Processing (HTAP). We address this challenge with a proposal to implement transactional logic in a query language such as SQL. This means that our approach can be layered on existing analytical systems but that the retrieval of a transactional snapshot and the validation of update transactions runs in the server and can take advantage of advanced query execution capabilities of an optimizing query engine. We demonstrate our proposal, TiQuE, on MonetDB and obtain an average 500x improvement in transactional throughput while retaining good performance on analytical queries, making it competitive with the state-of-the-art HTAP systems.
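As a deliberately tiny illustration of "transactional logic expressed as a query", the sketch below keeps every write as a version row and expresses a snapshot read as plain SQL that an analytical engine could optimize like any other query. The schema and query are my own example, not TiQuE's actual implementation.

```python
# Sketch (my example, not TiQuE's implementation): a snapshot read as SQL.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kv (key TEXT, value TEXT, commit_ts INTEGER)")
con.executemany("INSERT INTO kv VALUES (?, ?, ?)",
                [("a", "v1", 1), ("a", "v2", 3), ("b", "v1", 2)])

def snapshot_read(snapshot_ts: int) -> dict:
    """Visible value per key: the latest version committed at or before ts."""
    rows = con.execute("""
        SELECT key, value FROM kv AS outer_kv
        WHERE commit_ts = (SELECT MAX(commit_ts) FROM kv
                           WHERE key = outer_kv.key AND commit_ts <= ?)
    """, (snapshot_ts,)).fetchall()
    return dict(rows)

print(snapshot_read(2))  # at ts=2: a -> v1, b -> v1 (a's v2 is not yet visible)
```

Because visibility is just a subquery, the snapshot retrieval runs inside the server and benefits from the engine's optimizer, which is the layering the abstract describes.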
Proc. VLDB Endow., pages 2274-2288
Citations: 0
SDPipe: A Semi-Decentralized Framework for Heterogeneity-aware Pipeline-parallel Training
Pub Date: 2023-05-01 | DOI: 10.14778/3598581.3598604
Xupeng Miao, Yining Shi, Zhi Yang, Bin Cui, Zhihao Jia
The increasing size of both deep learning models and training data necessitates the ability to scale out model training through pipeline-parallel training, which combines pipelined model parallelism and data parallelism. However, most existing approaches assume an ideal homogeneous dedicated cluster. On real cloud clusters, these approaches suffer from intensive model synchronization overheads due to dynamic environment heterogeneity. Such a huge challenge leaves the design in a dilemma: either the performance bottleneck of a central parameter server (PS), or severe performance degradation caused by stragglers under decentralized synchronization (like All-Reduce). This paper presents SDPipe, a new semi-decentralized framework that gets the best of both worlds, achieving both high heterogeneity tolerance and convergence efficiency in pipeline-parallel training. To provide high performance, we decentralize the communication for model synchronization, which accounts for the largest proportion of synchronization overhead. In contrast, we centralize the process of group scheduling, which is lightweight but needs a global view for better performance and convergence speed under heterogeneity. We show via a prototype implementation the significant advantage of SDPipe on performance and scalability, facing different environments.
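The split can be simulated in a few lines. This is my toy simplification of the design, not the paper's system: synchronization is an All-Reduce-style average done *within* each group (decentralized), while a lightweight central scheduler only decides the grouping, e.g. pairing workers of similar measured speed so stragglers cannot stall fast workers.

```python
# Toy simulation (my simplification): decentralized sync, centralized grouping.

def group_allreduce(models: dict, groups: list) -> dict:
    """Average model parameters within each group (decentralized step)."""
    synced = {}
    for group in groups:
        avg = sum(models[w] for w in group) / len(group)
        for w in group:
            synced[w] = avg
    return synced

def schedule_groups(speeds: dict, group_size: int) -> list:
    """Central step: group workers of similar speed together."""
    ranked = sorted(speeds, key=speeds.get)
    return [ranked[i:i + group_size] for i in range(0, len(ranked), group_size)]

speeds = {"w0": 1.0, "w1": 0.3, "w2": 0.9, "w3": 0.4}   # iterations/sec
groups = schedule_groups(speeds, 2)                      # [['w1','w3'], ['w2','w0']]
models = {"w0": 4.0, "w1": 2.0, "w2": 3.0, "w3": 1.0}    # scalar stand-in "models"
print(groups, group_allreduce(models, groups))
```

Only the grouping decision needs a global view; the heavy data movement stays peer-to-peer, which is the division of labor the abstract argues for.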
Proc. VLDB Endow., pages 2354-2363
Citations: 2
LRU-C: Parallelizing Database I/Os for Flash SSDs
Pub Date: 2023-05-01 | DOI: 10.14778/3598581.3598605
Bo-Hyun Lee, Mijin An, Sang-Won Lee
The conventional database buffer managers have two inherent sources of I/O serialization: read stall and mutex conflict. The serialized I/O makes storage and CPU under-utilized, limiting transaction throughput and latency. Such harm stands out on flash SSDs with asymmetric read-write speed and abundant I/O parallelism. To make database I/Os parallel and thus leverage the parallelism in flash SSDs, we propose a novel approach to database buffering, the LRU-C method. It introduces the LRU-C pointer that points to the least-recently-used clean page in the LRU list. Upon a page miss, LRU-C selects the current LRU-clean page as a victim and adjusts the pointer to the next LRU-clean one in the LRU list. This way, LRU-C can avoid the I/O serialization of read stalls. The LRU-C pointer enables two further optimizations for higher I/O throughput: dynamic-batch-write and parallel LRU-list manipulation. The former allows the background flusher to write more dirty pages at a time, while the latter mitigates mutex-induced I/O serialization. Experiment results from running OLTP workloads using a MySQL-based LRU-C prototype on flash SSDs show that it improves transaction throughput over vanilla MySQL and the state-of-the-art WAR solution by 3x and 1.52x, respectively, and also cuts tail latency drastically. Though LRU-C might compromise the hit ratio slightly, its increased I/O throughput far offsets the reduced hit ratio.
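The core mechanism can be sketched in a small model (my simplified rendering, not the MySQL prototype): keep an extra pointer to the least-recently-used *clean* page so that victim selection never blocks on a dirty-page write-back, which is exactly the "read stall" the abstract describes.

```python
# Sketch of the LRU-C idea (simplified model, not the paper's prototype).
from collections import OrderedDict

class LRUCBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages = OrderedDict()   # page_id -> is_dirty, ordered LRU -> MRU

    def _lru_clean(self):
        """The LRU-C pointer: first clean page from the LRU end, or None."""
        return next((p for p, dirty in self.pages.items() if not dirty), None)

    def access(self, page_id: int, write: bool = False):
        """Touch a page; on a miss, return the evicted victim (or None)."""
        if page_id in self.pages:
            self.pages[page_id] |= write
            self.pages.move_to_end(page_id)      # now most recently used
            return None
        victim = None
        if len(self.pages) >= self.capacity:
            victim = self._lru_clean()           # clean victim: no write-back
            if victim is None:                   # all dirty: must flush one
                victim = next(iter(self.pages))
            del self.pages[victim]
        self.pages[page_id] = write
        return victim

buf = LRUCBuffer(capacity=2)
buf.access(1, write=True)    # page 1 cached, dirty
buf.access(2)                # page 2 cached, clean
print(buf.access(3))         # evicts 2, the LRU *clean* page, not dirty page 1
```

Note how plain LRU would have picked dirty page 1 and stalled the read behind a write-back; LRU-C skips ahead to the clean page.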
Proc. VLDB Endow., pages 2364-2376
Citations: 0
Towards Designing and Learning Piecewise Space-Filling Curves
Pub Date: 2023-05-01 | DOI: 10.14778/3598581.3598589
Jiangneng Li, Zheng Wang, Gao Cong, Cheng Long, H. M. Kiah, Bin Cui
To index multi-dimensional data, space-filling curves (SFCs) have been used to map the data to one dimension, and then a one-dimensional indexing method such as the B-tree is used to index the mapped data. The existing SFCs all adopt a single mapping scheme for the whole data space. However, a single mapping scheme often does not perform well on all the data space. In this paper, we propose a new type of SFC called piecewise SFCs, which adopts different mapping schemes for different data subspaces. Specifically, we propose a data structure called Bit Merging tree (BMTree), which can generate data subspaces and their SFCs simultaneously and achieve desirable properties of the SFC for the whole data space. Furthermore, we develop a reinforcement learning based solution to build the BMTree, aiming to achieve excellent query performance. Extensive experiments show that our proposed method outperforms existing SFCs in terms of query performance.
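For context on what a single mapping scheme looks like, here is the classic Z-order curve, a standard construction used only as a baseline illustration (not the BMTree itself): it maps a 2-D point to one dimension by interleaving coordinate bits.

```python
# Baseline single-scheme SFC: the classic Z-order (Morton) curve.

def z_order(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y: x's bit i -> position 2i, y's -> 2i+1."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

# Nearby points tend to get nearby keys, which a B-tree can then index.
print([z_order(x, y) for y in range(2) for x in range(2)])  # [0, 1, 2, 3]
```

The piecewise idea in the paper replaces this one global interleaving rule with different mapping schemes per data subspace, chosen (via the BMTree and reinforcement learning) to fit the query workload.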
Proc. VLDB Endow., pages 2158-2171
Citations: 2
What Modern NVMe Storage Can Do, And How To Exploit It: High-Performance I/O for High-Performance Storage Engines
Pub Date: 2023-05-01 | DOI: 10.14778/3598581.3598584
Gabriel Haas, Viktor Leis
NVMe SSDs based on flash are cheap and offer high throughput. Combining several of these devices into a single server enables 10 million I/O operations per second or more. Our experiments show that existing out-of-memory database systems and storage engines achieve only a fraction of this performance. In this work, we demonstrate that it is possible to close the performance gap between hardware and software through an I/O optimized storage engine design. In a heavy out-of-memory setting, where the dataset is 10 times larger than main memory, our system can achieve more than 1 million TPC-C transactions per second.
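The principle behind closing that gap is keeping many I/Os in flight at once rather than issuing them one by one. The sketch below illustrates only this principle, using `os.pread` on a temporary file and a thread pool; the paper's storage engine uses far lower-overhead interfaces, so this is not its implementation.

```python
# Principle sketch only: overlap many reads so the device stays busy.
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

PAGE = 4096

def read_pages(fd: int, page_ids: list, workers: int = 8) -> list:
    """Issue one pread per page concurrently, so I/Os overlap in the device."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda p: os.pread(fd, PAGE, p * PAGE), page_ids))

# Demo on a temp file with 4 recognizable pages (byte i fills page i).
with tempfile.TemporaryFile() as f:
    for i in range(4):
        f.write(bytes([i]) * PAGE)
    f.flush()
    pages = read_pages(f.fileno(), [3, 0, 2, 1])
print([p[0] for p in pages])  # [3, 0, 2, 1]
```

With a single device this buys little, but across an array of NVMe SSDs the aggregate queue depth is what determines whether software reaches the hardware's millions of IOPS.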
Proc. VLDB Endow., pages 2090-2102
Citations: 1
Decoupled Graph Neural Networks for Large Dynamic Graphs
Pub Date: 2023-05-01 | DOI: 10.48550/arXiv.2305.08273
Y. Zheng, Zhewei Wei, Jiajun Liu
Real-world graphs, such as social networks, financial transactions, and recommendation systems, often demonstrate dynamic behavior. This phenomenon, known as graph stream, involves the dynamic changes of nodes and the emergence and disappearance of edges. To effectively capture both the structural and temporal aspects of these dynamic graphs, dynamic graph neural networks have been developed. However, existing methods are usually tailored to process either continuous-time or discrete-time dynamic graphs, and cannot be generalized from one to the other. In this paper, we propose a decoupled graph neural network for large dynamic graphs, including a unified dynamic propagation that supports efficient computation for both continuous and discrete dynamic graphs. Since graph structure-related computations are only performed during the propagation process, the prediction process for the downstream task can be trained separately without expensive graph computations, and therefore any sequence model can be plugged-in and used. As a result, our algorithm achieves exceptional scalability and expressiveness. We evaluate our algorithm on seven real-world datasets of both continuous-time and discrete-time dynamic graphs. The experimental results demonstrate that our algorithm achieves state-of-the-art performance in both kinds of dynamic graphs. Most notably, the scalability of our algorithm is well illustrated by its successful application to large graphs with up to over a billion temporal edges and over a hundred million nodes.
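The decoupling can be sketched generically: a graph-dependent propagation pass computed once up front, after which any predictor trains on the propagated features without further graph computation. The propagation rule below is my simplified stand-in, not the paper's exact operator.

```python
# Generic propagate-then-predict sketch (my stand-in propagation rule).
import numpy as np

def propagate(adj: np.ndarray, x: np.ndarray, hops: int, alpha: float = 0.5):
    """Blend each node with its neighbor average for `hops` rounds (graph part)."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    for _ in range(hops):
        x = (1 - alpha) * x + alpha * (adj @ x) / deg
    return x

adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)   # path graph 0 - 1 - 2
feats = np.eye(3)                          # one-hot input features
z = propagate(adj, feats, hops=2)

# Downstream: z can feed any sequence model or MLP with no further
# graph computation, which is the decoupling the abstract describes.
print(z.round(3))
```

Because only `propagate` touches the adjacency structure, the expensive graph work is paid once, and the prediction stage trains on plain feature vectors.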
Proc. VLDB Endow., pages 2239-2247
Citations: 0
MiniGraph: Querying Big Graphs with a Single Machine
Pub Date: 2023-05-01 | DOI: 10.14778/3598581.3598590
Xiaoke Zhu, Yang Liu, Shuhao Liu, W. Fan
This paper presents MiniGraph, an out-of-core system for querying big graphs with a single machine. As opposed to previous single-machine graph systems, MiniGraph proposes a pipelined architecture to overlap I/O and CPU operations, and improves multi-core parallelism. It also introduces a hybrid model to support both vertex-centric and graph-centric parallel computations, to simplify parallel graph programming, speed up beyond-neighborhood computations, and parallelize computations within each subgraph. The model induces a two-level parallel execution model to explore both inter-subgraph and intra-subgraph parallelism. Moreover, MiniGraph develops new optimization techniques under its architecture. Using real-life graphs of different types, we show that MiniGraph is up to 76.1x faster than prior out-of-core systems, and performs better than some multi-machine systems that use up to 12 machines.
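The overlap of I/O and CPU can be sketched as a bounded producer-consumer pipeline; this is a toy model of the idea, not MiniGraph's implementation.

```python
# Toy pipeline sketch: a loader thread streams subgraphs while the compute
# stage processes the previous one, so "I/O" and "CPU" work overlap.
import queue
import threading

def run_pipeline(subgraphs: list, load, compute) -> list:
    staged = queue.Queue(maxsize=2)          # bounded: backpressure on I/O

    def loader():
        for sg in subgraphs:
            staged.put(load(sg))             # "I/O" stage
        staged.put(None)                     # end-of-stream sentinel

    threading.Thread(target=loader, daemon=True).start()
    results = []
    while (item := staged.get()) is not None:
        results.append(compute(item))        # "CPU" stage, overlapped
    return results

# Demo: "load" materializes edge lists, "compute" counts edges per subgraph.
subgraphs = [[(0, 1), (1, 2)], [(2, 3)], [(3, 4), (4, 5), (5, 3)]]
print(run_pipeline(subgraphs, load=list, compute=len))  # [2, 1, 3]
```

The bounded queue keeps loading just far enough ahead of computation; the paper's two-level model additionally parallelizes the compute stage across and within subgraphs.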
Proc. VLDB Endow., pages 2172-2185
Citations: 0
Temporal SIR-GN: Efficient and Effective Structural Representation Learning for Temporal Graphs
Pub Date: 2023-05-01 | DOI: 10.14778/3598581.3598583
Janet Layne, Justin Carpenter, Edoardo Serra, Francesco Gullo
Node representation learning (NRL) generates numerical vectors (embeddings) for the nodes of a graph. Structural NRL specifically assigns similar node embeddings to those nodes that exhibit similar structural roles. This is in contrast with its proximity-based counterpart, wherein similarity between embeddings reflects spatial proximity among nodes. Structural NRL is useful for tasks such as node classification where nodes of the same class share structural roles, even though there may be only a distant path, or no path at all, between them. Although structural NRL has been well-studied in static graphs, it has received limited attention in the temporal setting. Here, the embeddings are required to represent the evolution of nodes' structural roles over time. The existing methods are limited in terms of efficiency and effectiveness: they scale poorly to even a moderate number of timestamps, or capture structural roles only tangentially. In this work, we present a novel unsupervised approach to structural representation learning for temporal graphs that overcomes these limitations. For each node, our approach clusters, then aggregates, the embeddings of the node's neighbors for each timestamp, followed by a further temporal aggregation over all timestamps. This is repeated for (at most) d iterations, so as to acquire information from the d-hop neighborhood of a node. Our approach takes linear time in the number of overall temporal edges, and possesses important theoretical properties that formally demonstrate its effectiveness. Extensive experiments on synthetic and real datasets show superior performance in node classification and regression tasks, and superior scalability of our approach to large graphs.
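The cluster-then-aggregate step can be sketched for a single node (my simplified numpy rendering of the general idea, not the paper's exact procedure): assign each neighbor embedding to its nearest centroid per timestamp, form a per-timestamp cluster histogram, then aggregate over timestamps.

```python
# Simplified cluster-then-aggregate sketch for one node (illustration only).
import numpy as np

def describe_node(neighbors_by_ts: list, centroids: np.ndarray) -> np.ndarray:
    """One structural descriptor per timestamp, then a temporal mean."""
    per_ts = []
    for nbr_emb in neighbors_by_ts:                    # nbr_emb: (n_nbrs, dim)
        d = np.linalg.norm(nbr_emb[:, None] - centroids[None], axis=2)
        assign = d.argmin(axis=1)                      # nearest centroid
        hist = np.bincount(assign, minlength=len(centroids))
        per_ts.append(hist)
    return np.mean(per_ts, axis=0)                     # temporal aggregation

centroids = np.array([[0.0], [1.0]])                   # k=2 clusters, dim=1
ts0 = np.array([[0.1], [0.9], [1.1]])                  # 1 neighbor near c0, 2 near c1
ts1 = np.array([[0.0], [0.2]])                         # 2 neighbors near c0
print(describe_node([ts0, ts1], centroids))            # [1.5 1.0]
```

Repeating such a step d times propagates structural information from the d-hop neighborhood, and each pass touches every temporal edge once, consistent with the linear-time claim.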
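A minimal sketch of the cluster-then-aggregate iteration described in the abstract follows; the centroid choice, hard cluster assignment, and mean temporal aggregation are simplifying assumptions for illustration, not the authors' exact algorithm:

```python
import numpy as np

def temporal_sir_gn(snapshots, n_nodes, k=2, d=2, seed=0):
    """Illustrative sketch: per timestamp, cluster the current node
    embeddings into k groups, then describe each node by how many of
    its neighbors fall into each cluster; average these descriptions
    over timestamps, and repeat d times to reach the d-hop neighborhood.

    snapshots: one adjacency dict {node: [neighbors]} per timestamp.
    Returns an (n_nodes, k) structural embedding matrix, rows sum to 1.
    """
    rng = np.random.default_rng(seed)
    emb = np.ones((n_nodes, k)) / k  # uninformative initial description
    for _ in range(d):               # each pass widens the receptive field
        per_t = []
        for adj in snapshots:
            # cluster step: hard-assign each embedding to the nearest of
            # k centroids sampled from the data (simplified k-means)
            centroids = emb[rng.choice(n_nodes, size=k, replace=False)]
            assign = np.argmin(
                ((emb[:, None, :] - centroids[None, :, :]) ** 2).sum(-1),
                axis=1)
            # aggregate step: count each node's neighbors per cluster
            agg = np.zeros((n_nodes, k))
            for v, nbrs in adj.items():
                for u in nbrs:
                    agg[v, assign[u]] += 1.0
            per_t.append(agg)
        # temporal aggregation: mean over all timestamps, then normalize
        emb = np.mean(per_t, axis=0)
        norm = emb.sum(axis=1, keepdims=True)
        emb = np.divide(emb, norm,
                        out=np.full_like(emb, 1.0 / k), where=norm > 0)
    return emb
```

Because each pass touches every temporal edge once, the runtime stays linear in the number of temporal edges, matching the complexity claim in the abstract.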
{"title":"Temporal SIR-GN: Efficient and Effective Structural Representation Learning for Temporal Graphs","authors":"Janet Layne, Justin Carpenter, Edoardo Serra, Francesco Gullo","doi":"10.14778/3598581.3598583","DOIUrl":"https://doi.org/10.14778/3598581.3598583","url":null,"abstract":"Node representation learning (NRL) generates numerical vectors (embeddings) for the nodes of a graph. Structural NRL specifically assigns similar node embeddings for those nodes that exhibit similar structural roles. This is in contrast with its proximity-based counterpart, wherein similarity between embeddings reflects spatial proximity among nodes. Structural NRL is useful for tasks such as node classification where nodes of the same class share structural roles, though there may exist a distant, or no path between them.\u0000 Athough structural NRL has been well-studied in static graphs, it has received limited attention in the temporal setting. Here, the embeddings are required to represent the evolution of nodes' structural roles over time. The existing methods are limited in terms of efficiency and effectiveness: they scale poorly to even moderate number of timestamps, or capture structural role only tangentially.\u0000 \u0000 In this work, we present a novel unsupervised approach to structural representation learning for temporal graphs that overcomes these limitations. For each node, our approach clusters then aggregates the embedding of a node's neighbors for each timestamp, followed by a further temporal aggregation of all timestamps. This is repeated for (at most)\u0000 d\u0000 iterations, so as to acquire information from the\u0000 d\u0000 -hop neighborhood of a node. 
Our approach takes linear time in the number of overall temporal edges, and possesses important theoretical properties that formally demonstrate its effectiveness.\u0000 \u0000 Extensive experiments on synthetic and real datasets show superior performance in node classification and regression tasks, and superior scalability of our approach to large graphs.","PeriodicalId":20467,"journal":{"name":"Proc. VLDB Endow.","volume":"34 1","pages":"2075-2089"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74748120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citation count: 1
Extract-Transform-Load for Video Streams
Pub Date : 2023-05-01 DOI: 10.14778/3598581.3598600
Ferdinand Kossmann, Ziniu Wu, Eugenie Lai, Nesime Tatbul, Lei Cao, Tim Kraska, S. Madden
Social media, self-driving cars, and traffic cameras produce video streams at large scale and low cost. However, storing and querying video at such scales is prohibitively expensive. We propose to treat large-scale video analytics as a data warehousing problem: video is a format that is easy to produce but needs to be transformed into an application-specific format that is easy to query. Analogously, we define the problem of Video Extract-Transform-Load (V-ETL). V-ETL systems need to reduce the cost of running a user-defined V-ETL job while also giving throughput guarantees to keep up with the rate at which data is produced. We find that no current system sufficiently fulfills both needs and therefore propose Skyscraper, a system tailored to V-ETL. Skyscraper can execute arbitrary video ingestion pipelines and adaptively tunes them to reduce cost at minimal or no quality degradation, e.g., by adjusting sampling rates and resolutions to the ingested content. Skyscraper can be provisioned with cheap on-premises compute and uses a combination of buffering and cloud bursting to deal with peaks in workload caused by expensive processing configurations. In our experiments, we find that Skyscraper significantly reduces the cost of V-ETL ingestion compared to adaptations of current SOTA systems, while also giving robustness guarantees that those systems lack.
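The cost/throughput trade-off behind V-ETL can be illustrated with a toy planner; the `Config` fields, the greedy cost rule, and all numbers below are hypothetical, and Skyscraper's actual tuner is far more sophisticated (content-adaptive knobs, buffering, cloud bursting):

```python
from dataclasses import dataclass

@dataclass
class Config:
    name: str       # hypothetical ingestion configuration label
    fps: float      # sampling rate applied to the incoming stream
    cost: float     # dollars per processed frame (made-up numbers)
    quality: float  # accuracy proxy in [0, 1]

def plan_ingestion(configs, arrival_fps, onprem_fps, min_quality):
    """Toy planner in the spirit of the V-ETL problem statement:
    among configurations that meet a quality floor, pick the one with
    the lowest dollar cost per second, and report how many frames/s
    must burst to the cloud when on-prem capacity cannot keep up."""
    feasible = [c for c in configs if c.quality >= min_quality]
    # cheapest config by $/s: per-frame cost times frames actually processed
    best = min(feasible, key=lambda c: c.cost * min(c.fps, arrival_fps))
    processed_fps = min(best.fps, arrival_fps)       # frames/s after sampling
    burst_fps = max(0.0, processed_fps - onprem_fps)  # overflow to the cloud
    return best.name, burst_fps
```

The planner captures the two competing requirements from the abstract in miniature: the `min` over cost models the cost objective, while the burst term models the throughput guarantee that must hold even when on-prem compute is saturated.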
{"title":"Extract-Transform-Load for Video Streams","authors":"Ferdinand Kossmann, Ziniu Wu, Eugenie Lai, Nesime Tatbul, Lei Cao, Tim Kraska, S. Madden","doi":"10.14778/3598581.3598600","DOIUrl":"https://doi.org/10.14778/3598581.3598600","url":null,"abstract":"\u0000 Social media, self-driving cars, and traffic cameras produce video streams at large scales and cheap cost. However, storing and querying video at such scales is prohibitively expensive. We propose to treat large-scale video analytics as a data warehousing problem: Video is a format that is easy to produce but needs to be transformed into an application-specific format that is easy to query. Analogously, we define the problem of Video Extract-Transform-Load (\u0000 V-ETL\u0000 ).\u0000 V-ETL\u0000 systems need to reduce the cost of running a user-defined\u0000 V-ETL\u0000 job while also giving throughput guarantees to keep up with the rate at which data is produced. We find that no current system sufficiently fulfills both needs and therefore propose\u0000 Skyscraper\u0000 , a system tailored to\u0000 V-ETL. Skyscraper\u0000 can execute arbitrary video ingestion pipelines and adaptively tunes them to reduce cost at minimal or no quality degradation, e.g., by adjusting sampling rates and resolutions to the ingested content.\u0000 Skyscraper\u0000 can hereby be provisioned with cheap on-premises compute and uses a combination of buffering and cloud bursting to deal with peaks in workload caused by expensive processing configurations. In our experiments, we find that\u0000 Skyscraper\u0000 significantly reduces the cost of\u0000 V-ETL\u0000 ingestion compared to adaptions of current SOTA systems, while at the same time giving robustness guarantees that these systems are lacking.\u0000","PeriodicalId":20467,"journal":{"name":"Proc. 
VLDB Endow.","volume":"1 1","pages":"2302-2315"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77218679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citation count: 0