
Proceedings of the 16th ACM International Conference on Systems and Storage: Latest Publications

When SkyPilot meets Kubernetes
Pub Date : 2023-06-05 DOI: 10.1145/3579370.3594764
G. Vernik, Ronen I. Kat, O. Cohen, Zongheng Yang
The Sky vision [3] aims to open a new era in cloud computing. Sky abstracts away individual clouds and dynamically uses multiple clouds to optimize workload execution. This enables users to focus on their business logic rather than interacting with multiple clouds and manually optimizing performance and costs.
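The cloud-abstraction idea can be made concrete with a small sketch. This is not SkyPilot's actual API; it is an illustrative, stdlib-only model of the brokering decision Sky automates (the provider names, offerings, and prices below are invented):

```python
# Illustrative sketch (not SkyPilot's API): pick the cheapest cloud offering
# that satisfies a workload's resource requirements, in the spirit of Sky's
# intercloud brokering. All catalog data here is invented.

from dataclasses import dataclass

@dataclass
class Offering:
    cloud: str              # hypothetical provider name
    gpus: int               # GPUs available per instance
    price_per_hour: float   # hypothetical on-demand price

def pick_cheapest(offerings, gpus_needed):
    """Return the lowest-cost offering that meets the GPU requirement."""
    feasible = [o for o in offerings if o.gpus >= gpus_needed]
    if not feasible:
        raise ValueError("no cloud satisfies the requirement")
    return min(feasible, key=lambda o: o.price_per_hour)

offerings = [
    Offering("cloud-a", 8, 12.0),
    Offering("cloud-b", 8, 9.5),
    Offering("cloud-c", 4, 4.0),   # cheapest, but too few GPUs
]
best = pick_cheapest(offerings, gpus_needed=8)
print(best.cloud)  # cloud-b
```

The point of the abstraction is that this selection (and failover across providers) happens on the user's behalf rather than by hand.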
Cache Line Deltas Compression
Pub Date : 2023-06-05 DOI: 10.1145/3579370.3594753
Daniel Cohen, S. Cohen, D. Naor, D. Waddington, Moshik Hershcovitch
Synchronization of replicated data and program state is an essential aspect of application fault tolerance. Current solutions use virtual memory mapping to identify page writes and replicate them at the destination. This approach has limitations because the granularity is restricted to a minimum of 4KiB per page, which may result in more data being replicated than necessary. Motivated by emerging CXL hardware, we expand on the work of Waddington et al. [SoCC 22] by evaluating popular compression algorithms on VM snapshot data at cache-line granularity. We measure compression ratio versus compression time and present our conclusions.
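A minimal sketch of the kind of measurement the abstract describes: take the byte-wise delta of a dirty 64 B cache line between two snapshots and compress it. zlib stands in for the "popular compression algorithms" evaluated; the data is invented:

```python
# Sketch: delta-then-compress at cache-line (64 B) rather than page (4 KiB)
# granularity. zlib is a stand-in for the algorithms evaluated in the paper.

import zlib

LINE = 64  # cache line size in bytes

def delta(old: bytes, new: bytes) -> bytes:
    """Byte-wise XOR delta between two snapshots of the same cache line."""
    return bytes(a ^ b for a, b in zip(old, new))

old_line = bytes(range(LINE))
new_line = bytearray(old_line)
new_line[5] ^= 0xFF                    # a single modified byte
d = delta(old_line, bytes(new_line))   # mostly zeros, so it compresses well

compressed = zlib.compress(d)
print(f"delta of {LINE} B compresses to {len(compressed)} B")
```

Because only changed bytes are non-zero in the delta, far less than a 4 KiB page needs to be shipped to the replica.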
Next-Generation Security Entity Linkage: Harnessing the Power of Knowledge Graphs and Large Language
Pub Date : 2023-06-05 DOI: 10.1145/3579370.3594759
Daniel Alfasi, T. Shapira, A. Bremler-Barr
With the continuous increase in reported Common Vulnerabilities and Exposures (CVEs), security teams are overwhelmed by vast amounts of data, which are often analyzed manually, leading to a slow and inefficient process. To address cybersecurity threats effectively, it is essential to establish connections across multiple security entity databases, including CVEs, Common Weakness Enumeration (CWEs), and Common Attack Pattern Enumeration and Classification (CAPECs). In this study, we introduce a new approach that leverages the RotatE [4] knowledge graph embedding model, initialized with embeddings from the Ada language model developed by OpenAI [3]. Additionally, we extend this approach by also initializing the embeddings for the relations.
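The RotatE scoring idea the approach builds on can be sketched numerically: each relation is a per-dimension rotation in the complex plane, and a triple (h, r, t) scores well when rotating h by r lands near t. The embeddings below are random stand-ins for learned ones:

```python
# Sketch of RotatE [4] scoring: relations are unit-modulus complex rotations;
# a good triple has small ||h * e^{i*phase} - t||. Values here are invented
# stand-ins for trained embeddings.

import cmath
import math
import random

def rotate_distance(h, phases, t):
    """Distance between h rotated by the relation's phases and t."""
    return math.sqrt(sum(abs(hi * cmath.exp(1j * p) - ti) ** 2
                         for hi, p, ti in zip(h, phases, t)))

random.seed(0)
h = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(4)]
phases = [random.uniform(0, 2 * math.pi) for _ in range(4)]
t_true = [hi * cmath.exp(1j * p) for hi, p in zip(h, phases)]  # ideal tail
t_rand = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(4)]

# The true tail scores (much) better than a random entity.
assert rotate_distance(h, phases, t_true) < 1e-9
assert rotate_distance(h, phases, t_true) < rotate_distance(h, phases, t_rand)
```

Initializing h and t from language-model embeddings, as the paper proposes, gives the rotation model a semantically meaningful starting point instead of random vectors.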
CCO - Cloud Cost Optimizer
Pub Date : 2023-06-05 DOI: 10.1145/3579370.3594746
A. Yehoshua, I. Kolchinsky, A. Schuster
Cloud computing can be complex, but optimal management of it doesn't have to be. In this paper, we present the design and implementation of a scalable multi-Cloud Cost Optimizer (CCO) that calculates the optimal deployment scheme for a given workload on public or hybrid clouds. The goal of CCO is to reduce monetary costs while taking into account the specifications of the workload, including resource requirements and constraints. By using a combination of meta-heuristics, CCO addresses the combinatorial complexity of the problem and currently supports AWS and Azure. The CCO tool [1] can be accessed through a web UI or API and supports on-demand and spot instances. For a broader discussion, refer to [2].
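On a toy instance, the optimization CCO performs can be brute-forced; the real tool needs meta-heuristics because actual AWS/Azure catalogs make this combinatorial search intractable. The catalog and workload below are invented:

```python
# Toy version of CCO's problem: assign each workload component an instance
# type that satisfies its resource needs at minimum total hourly cost.
# Catalog and requirements are invented; brute force replaces the paper's
# meta-heuristics, which are needed at realistic catalog sizes.

from itertools import product

catalog = {                 # name -> (vCPUs, memory GiB, $/hour)
    "small":  (2,  4, 0.05),
    "medium": (4,  8, 0.10),
    "large":  (8, 32, 0.40),
}
workload = [                # per-component (vCPUs, memory GiB) requirements
    (2, 3),
    (4, 16),
]

def fits(inst, req):
    cpu, mem, _price = catalog[inst]
    return cpu >= req[0] and mem >= req[1]

best = min(
    (combo for combo in product(catalog, repeat=len(workload))
     if all(fits(i, r) for i, r in zip(combo, workload))),
    key=lambda combo: sum(catalog[i][2] for i in combo),
)
print(best)  # ('small', 'large')
```

The second component's 16 GiB requirement rules out "medium", so the optimizer pays for "large" there while keeping the first component on the cheapest feasible type.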
RAM buffering for performance improvement of sequential write workload
Pub Date : 2023-06-05 DOI: 10.1145/3579370.3594762
Svetlana Lazareva, G. Petrunin
This paper presents an online algorithm that determines the datapath for incoming requests: should they temporarily stay in RAM buffers for a future merge operation, or should they be written to disks immediately? Using real-time workload analysis, the delay time spent in the RAM buffer is a self-tuning parameter. This approach increases the latency of individual sequential write requests but substantially raises the overall performance of sequential write workloads without the use of an expensive non-volatile cache.
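The datapath decision can be sketched as follows. This is an illustrative model, not the paper's algorithm: writes that extend a sequential run are briefly held and merged into one large write, while a gap in the offset stream ends the run (the merge limit stands in for the self-tuned delay):

```python
# Sketch of the buffer-vs-disk decision: hold writes that extend a sequential
# run in RAM so they reach the disks as one merged write. The fixed merge
# limit is an invented stand-in for the paper's self-tuned delay.

class WriteBuffer:
    def __init__(self, merge_limit=4):
        self.merge_limit = merge_limit
        self.pending = []        # buffered sequential run: [(offset, size)]
        self.disk_writes = []    # what actually reaches the disks

    def _flush(self):
        if self.pending:
            start = self.pending[0][0]
            total = sum(size for _, size in self.pending)
            self.disk_writes.append((start, total))  # one merged write
            self.pending = []

    def write(self, offset, size):
        # A gap means the sequential run ended: flush what we merged so far.
        if self.pending and offset != sum(self.pending[-1]):
            self._flush()
        self.pending.append((offset, size))
        if len(self.pending) >= self.merge_limit:
            self._flush()

buf = WriteBuffer()
for off in (0, 4, 8, 12, 100):   # four sequential 4 B writes, then a jump
    buf.write(off, 4)
buf._flush()
print(buf.disk_writes)  # [(0, 16), (100, 4)]
```

Each buffered request waits a little longer, but the disks see one 16 B write instead of four 4 B writes, which is where the workload-level gain comes from.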
Development of hybrid storage system based on Open CAS technology, optimized for HPC workload
Pub Date : 2023-06-05 DOI: 10.1145/3579370.3594763
Svetlana Lazareva, Ivan Petrov
HPC runs in a distributed structure with a single shared pool of data. In our case, the distributed structure is the Lustre file system [4], and the single shared pool of data is our declustered HDD RAID (denoted DCR). To increase performance, we suggest using Open CAS technology [3] as a cache on RAM/NVDIMM with special parameters optimized for heavy, data-intensive sequential HPC workloads, together with an online algorithm that reduces the number of RMW operations by merging sequential requests into one full-stripe request.
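Why full-stripe merging reduces RMW operations can be shown with a toy count: any write smaller than a parity stripe forces a read-modify-write to recompute parity, while a full-stripe write does not. The stripe and request sizes below are invented:

```python
# Sketch: partial-stripe writes each trigger a read-modify-write (RMW) to
# update parity; merging sequential requests into full stripes avoids them.
# Stripe and request sizes are invented for illustration.

STRIPE = 1024  # bytes per full stripe

def rmw_count(requests, merge=False):
    """RMW operations needed for a run of sequential write requests."""
    if merge:
        total = sum(requests)
        _full, tail = divmod(total, STRIPE)
        return 1 if tail else 0      # only a partial tail still needs RMW
    return sum(1 for r in requests if r % STRIPE)  # every partial write pays

seq = [256] * 8                       # eight sequential 256 B requests
print(rmw_count(seq))                 # unmerged: 8 RMWs
print(rmw_count(seq, merge=True))     # merged: two full stripes, 0 RMWs
```

For the declustered RAID in the paper, eliminating those parity read-modify-writes is what lets the HDD pool keep up with a sequential HPC stream.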
TurboHash: A Hash Table for Key-value Store on Persistent Memory
Pub Date : 2023-06-05 DOI: 10.1145/3579370.3594766
Xingsheng Zhao, Chen Zhong, Song Jiang
Major efforts in the design of persistent hash tables on non-volatile byte-addressable memory focus on efficient support of crash consistency with fence/flush primitives, as well as on non-disruptive table rehashing operations. When a data entry in a hash bucket cannot be updated with one atomic write, an out-of-place update, instead of an in-place update, is required to avoid data corruption after a failure. This often causes extra fences/flushes. Meanwhile, when open addressing techniques, such as linear probing, are adopted for a high load factor, the scope of the search for a key can be large. Excessive use of fence/flush and extended key search paths are two major sources of performance degradation for hash tables in persistent memory. To address these issues, we design a persistent hash table, named TurboHash, for building high-performance key-value stores. TurboHash combines a number of much-desired features in one design. (1) It supports out-of-place updates at a cost equivalent to that of an in-place write, providing lock-free reads. (2) Long-distance linear probing is minimized (used only when necessary). (3) It conducts only shard resizing for expansion and avoids expensive directory-level rehashing. (4) It exploits hardware features for high I/O and computation efficiency, including Intel Optane DC's performance characteristics and Intel AVX instructions. We have implemented TurboHash on Optane persistent memory and conducted extensive evaluations. Experimental results show that TurboHash improves on the state of the art by 2-8 times in terms of throughput and latency.
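The out-of-place-update idea can be sketched in a few lines. This is an illustrative model, not TurboHash's design: the writer builds the new record fully off to the side and then publishes it with a single atomic pointer store, so readers never lock and never see a torn entry (Python reference assignment stands in for the 8-byte atomic store used on real persistent memory):

```python
# Sketch of out-of-place update with atomic publication: build the record
# elsewhere, then swap one pointer. Python object references stand in for
# 8-byte atomic stores to persistent memory; this is not TurboHash itself.

class Slot:
    __slots__ = ("record",)          # 'record' is replaced in one store
    def __init__(self, record):
        self.record = record

table = {}

def put(key, value):
    new_record = (key, value)        # fully constructed out of place...
    slot = table.get(key)
    if slot is None:
        table[key] = Slot(new_record)
    else:
        slot.record = new_record     # ...then published atomically

def get(key):
    slot = table.get(key)            # lock-free: readers only follow pointers
    return None if slot is None else slot.record[1]

put("k", 1)
put("k", 2)                          # out-of-place overwrite
print(get("k"))  # 2
```

A crash before the pointer swap leaves the old record intact; a crash after it leaves the new one, so no fence-heavy undo logging is needed on the read path.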
Optimizing Memory Allocation for Multi-Subgraph Mapping on Spatial Accelerators
Pub Date : 2023-06-05 DOI: 10.1145/3579370.3594767
Lei Lei, Decai Pan, Dajiang Liu, Peng Ouyang, Xueliang Du
Spatial accelerators enable the pervasive use of energy-efficient solutions for computation-intensive applications. When mapping onto spatial accelerators, a large kernel is usually partitioned into multiple subgraphs due to resource constraints, leading to more memory accesses and access conflicts. To minimize access conflicts, existing works either neglect the interference among multiple subgraphs or pay little attention to data's life cycle along the execution order. To this end, this paper proposes an optimized memory allocation approach for multi-subgraph mapping on spatial accelerators, formulated as an optimization problem using Integer Linear Programming (ILP). The experimental results demonstrate that our approach finds conflict-free solutions for most kernels and achieves a 1.15× speedup compared to the state-of-the-art approach.
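The objective of such an allocation can be shown on a toy instance. The paper solves it as an ILP; here, a stdlib-only brute force over a tiny invented access pattern finds an assignment of arrays to banks in which no two arrays accessed in the same cycle share a bank:

```python
# Toy stand-in for the paper's ILP: assign arrays to memory banks so that
# arrays accessed concurrently (e.g. by different subgraphs) never share a
# bank. The arrays, banks, and access pattern are invented; an ILP solver
# replaces this brute force at realistic sizes.

from itertools import product

arrays = ["A", "B", "C", "D"]
banks = [0, 1]
# pairs accessed in the same cycle -> a conflict if co-located in one bank
concurrent = [("A", "B"), ("B", "C"), ("A", "D")]

def conflicts(assign):
    """Number of concurrent pairs that ended up sharing a bank."""
    return sum(assign[x] == assign[y] for x, y in concurrent)

best = min(product(banks, repeat=len(arrays)),
           key=lambda combo: conflicts(dict(zip(arrays, combo))))
assignment = dict(zip(arrays, best))
print(conflicts(assignment))  # 0: a conflict-free allocation exists here
```

The ILP formulation expresses the same thing declaratively: binary placement variables, one-bank-per-array constraints, and a conflict count to minimize.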
Near-Memory Processing Offload to Remote (Persistent) Memory
Pub Date : 2023-06-05 DOI: 10.1145/3579370.3594745
Roei Kisous, Amit Golander, Yigal Korman, Tim Gubner, Rune Humborstad, Manyi Lu
Traditional von Neumann computing architectures are struggling to keep up with the rapidly growing demand for scale, performance, power efficiency, and memory capacity. One promising approach to this challenge is remote memory, in which memory is accessed over an RDMA fabric [1]. We enhance the remote-memory architecture with Near-Memory Processing (NMP), a capability that offloads particular compute tasks from the client to the server side, as illustrated in Figure 1. Similar motivation drove IBM to offload object processing to their remote KV storage [2]. NMP offload adds latency and server resource costs; therefore, it should only be used when the offload value is substantial, specifically to save: network bandwidth (e.g. Filter/Aggregate), round-trip time (e.g. tree Lookup), and/or distributed locks (e.g. Append to a shared journal).
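The bandwidth-saving case can be put into a back-of-the-envelope rule. This is an illustrative decision sketch, not the paper's policy; the row counts, row size, and threshold are invented:

```python
# Sketch of the "offload only when the value is substantial" rule for a
# Filter offload: pushing the predicate to the memory server shrinks wire
# traffic in proportion to the filter's selectivity. Numbers are invented.

def bytes_on_wire(n_rows, row_size, selectivity, offload):
    """Bytes crossing the RDMA fabric for a filtered scan."""
    if offload:
        return int(n_rows * selectivity) * row_size  # only matches travel
    return n_rows * row_size                         # the full scan travels

def should_offload(n_rows, row_size, selectivity, min_saving=0.5):
    """Offload only if it saves at least min_saving of the bandwidth."""
    plain = bytes_on_wire(n_rows, row_size, selectivity, offload=False)
    nmp = bytes_on_wire(n_rows, row_size, selectivity, offload=True)
    return (plain - nmp) / plain >= min_saving

print(should_offload(1_000_000, 128, selectivity=0.01))  # True
print(should_offload(1_000_000, 128, selectivity=0.9))   # False
```

A highly selective filter is worth the server-side CPU and extra request latency; a filter that passes most rows is not, since nearly all the data crosses the fabric anyway.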
Benefits of Encryption at the Storage Client
Pub Date : 2023-06-05 DOI: 10.1145/3579370.3594758
Or Ozeri, Danny Harnik, Effi Ofer
Client-side encryption is a setting in which storage I/O is encrypted at the client machine before being sent to a storage system. This is typically done by adding an encryption layer in front of the storage client or driver. We identify that in cases where some storage functions are performed at the client, it is beneficial to also integrate the encryption into the storage client itself. We implemented such an encryption layer in Ceph RBD, a popular open-source distributed storage system. We explain some of the main benefits of this approach: the ability to do layered encryption with a different encryption key per layer, the ability to support more complex storage encryption, and, finally, a nice performance boost that we observed from integrating the encryption with the storage client.
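The per-layer-key idea can be sketched with stdlib tools only. This is an illustrative model, not Ceph RBD's encryption: each copy-on-write layer encrypts its own blocks under its own key, so a clone gets a fresh key without re-encrypting its parent. The XOR keystream (SHAKE-256 as a PRF) is a toy stand-in for real AES and must not be used for actual data:

```python
# Sketch of layered encryption with per-layer keys on a copy-on-write image.
# The XOR keystream below is a TOY stand-in for real authenticated AES
# encryption - illustrative only, never use it to protect real data.

import hashlib

def xcrypt(key: bytes, data: bytes) -> bytes:
    """XOR with a key-derived keystream; applying it twice decrypts."""
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

parent_key, clone_key = b"parent-layer-key", b"clone-layer-key"
parent_block = xcrypt(parent_key, b"base image data")   # written long ago
clone_block = xcrypt(clone_key, b"overwritten data")    # rewritten by clone

# Reads served from the parent layer use the parent's key; blocks the clone
# rewrote use only the clone's key - no parent re-encryption required.
assert xcrypt(parent_key, parent_block) == b"base image data"
assert xcrypt(clone_key, clone_block) == b"overwritten data"
```

Doing this inside the storage client works because the client already knows which layer serves each block; an encryption shim stacked above the client does not.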