
Latest publications: 2020 28th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS)

Self-adaptive Threshold-based Policy for Microservices Elasticity
Fabiana Rossi, V. Cardellini, F. L. Presti
The microservice architecture structures an application as a collection of loosely coupled, distributed services. Since application workloads usually change over time, the number of replicas per microservice should be scaled accordingly at run-time. The most widely adopted scaling policy relies on statically defined thresholds expressed in terms of system-oriented metrics. This policy may not be well-suited to scaling multi-component, latency-sensitive applications, which express their requirements in terms of response time. In this paper, we present a two-layered hierarchical solution for controlling the elasticity of microservice-based applications. The higher-level controller estimates each microservice's contribution to the application performance and informs the lower-level components, which scale the individual microservices accordingly using a dynamic threshold-based policy. We propose MB Threshold and QL Threshold, two policies that employ model-based and model-free reinforcement learning, respectively, to learn threshold update strategies. These policies can compute different thresholds for different application components, according to the desired deployment objectives. A wide set of simulation results shows the benefits and flexibility of the proposed solution, emphasizing the advantages of dynamic thresholds over the most widely adopted policy based on static thresholds.
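The QL Threshold idea, learning when to raise or lower a scaling threshold rather than fixing it statically, can be sketched with a small Q-learning loop. This is an illustrative sketch, not the paper's implementation: the state discretization, action set, and reward shape are all assumptions.

```python
import random

# Hypothetical QL-Threshold-style policy: a Q-learning agent that adapts a
# CPU-utilization scaling threshold at run-time instead of keeping it static.

ACTIONS = (-0.05, 0.0, 0.05)           # lower, keep, or raise the threshold

class ThresholdAgent:
    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = {}                     # (state, action) -> estimated value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def state(self, threshold, utilization):
        # Discretize utilization into 5% buckets so the Q-table stays small.
        return (round(threshold, 2), round(utilization * 20) / 20)

    def choose(self, s):
        # Epsilon-greedy action selection.
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((s, a), 0.0))

    def update(self, s, a, reward, s_next):
        # Standard one-step Q-learning update.
        best_next = max(self.q.get((s_next, b), 0.0) for b in ACTIONS)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (reward + self.gamma * best_next - old)

def reward(resp_time, target, replicas):
    # Illustrative reward: penalize response-time violations heavily and
    # resource usage weakly, reflecting the deployment objectives.
    return -(10.0 if resp_time > target else 0.0) - 0.1 * replicas
```

At each control interval, the agent observes the current threshold and utilization, picks an action, applies the threshold change, and updates its Q-table from the observed response time.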
DOI: 10.1109/MASCOTS50786.2020.9285951 | Published: 2020-11-17
Citations: 12
Symbolic Execution for Network Functions with Time-Driven Logic
Harsha Sharma, Wenfei Wu, Bangwen Deng
Symbolic execution is a commonly used technique in network function (NF) verification; it helps network operators find implementation or configuration bugs before deployment. By studying most existing symbolic execution engines, we observe that they focus only on packet-arrival-based event logic. We propose that an NF modeling language should also include time-driven logic, to describe actual NF implementations more accurately and to perform complete verification. Thus, we define primitives to express time-driven logic in an NF modeling language and develop a symbolic execution engine, NF-SE, that can verify such logic for NFs over multiple packets. Our prototype of NF-SE and its evaluation on multiple example NFs demonstrate its usefulness and correctness.
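The notion of time-driven logic can be illustrated by forking execution on a symbolic timer outcome. The toy explorer below is an assumption-laden sketch, not NF-SE itself: it enumerates every combination of "idle timeout fired / did not fire" decisions for a flow-table NF, so verification can inspect paths that packet-arrival-only engines would miss.

```python
from itertools import product

# Toy path explorer (illustrative, not NF-SE): a flow-table NF whose entries
# may expire between packets. The symbolic time variable is modeled by
# branching on whether the idle timeout fired before each repeated packet.

def explore(packets, timer_branches):
    """Enumerate all execution paths over symbolic timeout outcomes.

    packets: sequence of flow identifiers arriving at the NF.
    timer_branches: how many timeout decisions may occur along a path.
    Returns a list of traces, one per explored path.
    """
    paths = []
    for choices in product([False, True], repeat=timer_branches):
        table = {}
        trace = []
        it = iter(choices)
        for pkt in packets:
            # Time-driven branch: an existing entry may have expired.
            if pkt in table and next(it):
                del table[pkt]
                trace.append(f"expire({pkt})")
            # Packet-driven logic: hit or (re)install the flow entry.
            if pkt in table:
                trace.append(f"hit({pkt})")
            else:
                table[pkt] = True
                trace.append(f"install({pkt})")
        paths.append(trace)
    return paths
```

For two packets of the same flow, the explorer yields both the ordinary hit path and the expire-then-reinstall path that only time-driven logic exposes.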
DOI: 10.1109/MASCOTS50786.2020.9285941 | Published: 2020-11-17
Citations: 1
Investigating Genome Analysis Pipeline Performance on GATK with Cloud Object Storage
Tatsuhiro Chiba, Takeshi Yoshimura
Achieving fast, scalable, and cost-effective genome analytics is essential to opening new frontiers in biomedical and life science. The Genome Analysis Toolkit (GATK), an industry-standard genome analysis tool, improves its scalability and performance by leveraging Spark and HDFS. Spark with HDFS has been a leading analytics platform for the past few years; however, such a system cannot take full advantage of cloud elasticity in modern clouds. In this paper, we investigate the performance characteristics of GATK using Spark with HDFS and identify scalability issues. Based on a quantitative analysis, we introduce a new approach that uses Cloud Object Storage (COS) in GATK instead of HDFS, which helps decouple compute and storage. We demonstrate how this approach improves whole-pipeline performance and saves cost. As a result, we show that GATK with IBM COS runs up to 28% faster than GATK with HDFS. We also show that this approach can achieve up to 67% cost savings in total, including the time for data loading and whole-pipeline analysis.
DOI: 10.1109/MASCOTS50786.2020.9285945 | Published: 2020-11-17
Citations: 0
Merkle Hash Grids Instead of Merkle Trees
Jehan-Francois Pâris, T. Schwarz
Merkle grids are a new data organization that replicates the functionality of Merkle trees while reducing their transmission and storage costs by up to 50 percent. All Merkle grids organize the objects whose conformity they monitor in a square array. They add row and column hashes to it such that (a) all row hashes contain the hash of the concatenation of the hashes of all the objects in their respective row and (b) all column hashes contain the hash of the concatenation of the hashes of all the objects in their respective column. In addition, a single signed master hash contains the hash of the concatenation of all row and column hashes. Extended Merkle grids add two auxiliary Merkle trees to speed up searches among both row hashes and column hashes. While both basic and extended Merkle grids perform authentication of all blocks better than Merkle trees, only extended Merkle grids can locate individual non-conforming objects or authenticate a single non-conforming object as fast as Merkle trees.
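The construction described above can be sketched directly. Below is a minimal basic Merkle grid (without the auxiliary trees of the extended variant); the hash function (SHA-256) and the plain-concatenation serialization are assumptions, not the paper's exact encoding.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_grid(objects):
    """Build a basic Merkle grid over an n x n array of byte strings.

    Returns (row_hashes, col_hashes, master_hash). Each row hash covers the
    concatenated hashes of the objects in its row, each column hash the
    concatenated hashes of its column, and the master hash covers the
    concatenation of all row and column hashes (it would be signed in
    practice).
    """
    n = len(objects)
    cell = [[h(objects[r][c]) for c in range(n)] for r in range(n)]
    rows = [h(b"".join(cell[r])) for r in range(n)]
    cols = [h(b"".join(cell[r][c] for r in range(n))) for c in range(n)]
    master = h(b"".join(rows) + b"".join(cols))
    return rows, cols, master
```

A single corrupted object changes exactly one row hash and one column hash, which is how the grid locates an individual non-conforming object: the bad cell sits at the intersection of the mismatching row and column.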
DOI: 10.1109/MASCOTS50786.2020.9285942 | Published: 2020-11-17
Citations: 3
Infrastructure-Aware TensorFlow for Heterogeneous Datacenters
Moiz Arif, M. M. Rafique, Seung-Hwan Lim, Zaki Malik
Heterogeneous datacenters, with a variety of compute, memory, and network resources, are becoming increasingly popular to address the resource requirements of time-sensitive applications. One such application framework is the TensorFlow platform, which has become a platform of choice for running machine learning workloads. The state-of-the-art TensorFlow platform is oblivious to the availability and performance profiles of the underlying datacenter resources and does not incorporate the resource requirements of the given workloads for distributed training. This leads to executing training tasks on busy and resource-constrained worker nodes, which significantly increases the overall training time. In this paper, we address this challenge and propose architectural improvements and new software modules for the default TensorFlow platform that make it aware of the availability and capabilities of the underlying datacenter resources. The proposed Infrastructure-Aware TensorFlow efficiently schedules training tasks on the best available resources and reduces the overall training time. Our evaluation using worker nodes with varying availability and performance profiles shows that the proposed enhancements reduce training time by up to 54% compared to the default TensorFlow platform.
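The scheduling idea, steering training tasks away from busy, resource-constrained workers, can be sketched as a feasibility-plus-headroom heuristic. The scoring rule and data shapes below are illustrative assumptions, not the proposed system's actual placement mechanism.

```python
# Hypothetical infrastructure-aware placement: given each worker's measured
# free resources, pick the node that can host the task and retains the most
# headroom afterwards (a simple greedy heuristic for illustration).

def place_task(task, workers):
    """task: dict of resource demands, e.g. {"cpu": 4, "gpu": 1}.
    workers: mapping of worker name -> dict of free resources.
    Returns the chosen worker name, or None if no node fits."""
    feasible = {
        name: free for name, free in workers.items()
        if all(free.get(r, 0) >= need for r, need in task.items())
    }
    if not feasible:
        return None  # every node is too busy or too small right now

    def headroom(item):
        # Spare capacity remaining after hypothetically placing the task.
        _, free = item
        return sum(free[r] - task[r] for r in task)

    return max(feasible.items(), key=headroom)[0]
```

A real scheduler would also weigh network proximity and continuously refresh the availability profiles, but the core decision, "filter infeasible nodes, then rank the rest", is captured here.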
DOI: 10.1109/MASCOTS50786.2020.9285969 | Published: 2020-11-17
Citations: 4
Performance Prediction for Data-driven Workflows on Apache Spark
Andrea Gulino, Arif Canakoglu, S. Ceri, D. Ardagna
Spark is an in-memory framework for implementing distributed applications of various types. Predicting the execution time of Spark applications is an important but challenging problem that several studies have tackled in the past few years; most achieve good prediction accuracy only on simple applications (e.g., known ML algorithms or SQL-based applications). In this work, we consider complex data-driven workflow applications, in which the execution and data flow can be modeled by Directed Acyclic Graphs (DAGs). Workflows can be made of an arbitrary combination of known tasks, each applying a set of Spark operations to its input data. By adopting a hybrid approach that combines analytical and machine learning (ML) models trained on small DAGs, we can predict, with good accuracy, the execution time of unseen workflows of higher complexity and size. We validate our approach through extensive experimentation on real-world complex applications, comparing different ML models and choices of feature sets.
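The hybrid idea of combining per-task predictions analytically over the DAG can be sketched with a critical-path model: a trained per-task model supplies each task's time, and the analytical layer propagates finish times along the graph. The additive, unlimited-parallelism model below is a simplification for illustration, not the paper's exact formulation.

```python
# Illustrative analytical layer of a hybrid predictor: per-task times
# (here given directly, in practice produced by a trained ML model) are
# combined over the workflow DAG via a critical-path computation.

def critical_path_time(dag, task_time):
    """dag: task -> list of predecessor tasks (all tasks appear as keys).
    task_time: task -> predicted duration in seconds.
    Returns the predicted makespan assuming unlimited parallelism."""
    finish = {}  # memoized predicted finish time per task

    def ft(t):
        if t not in finish:
            preds = dag.get(t, [])
            finish[t] = task_time[t] + max((ft(p) for p in preds), default=0.0)
        return finish[t]

    return max(ft(t) for t in dag)
```

For a diamond-shaped workflow (load, then filter and join in parallel, then aggregate), the prediction is the load time plus the slower branch plus the aggregation time.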
DOI: 10.1109/MASCOTS50786.2020.9285944 | Published: 2020-11-17
Citations: 5
Symbiotic HW Cache and SW DTLB Prefetching for DRAM/NVM Hybrid Memory
Onkar Patil, F. Mueller, Latchesar Ionkov, Jason Lee, M. Lang
The introduction of NVDIMM memory devices has encouraged the use of DRAM/NVM-based hybrid memory systems to increase the memory-per-core ratio in compute nodes and obtain possible energy and cost benefits. However, Non-Volatile Memory (NVM) is slower than DRAM in terms of read/write latency. This difference in performance adversely affects memory-bound applications. Traditionally, data prefetching at the hardware level has been used to increase the number of cache hits to mitigate performance degradation. However, software (SW) prefetching has not been used effectively to reduce the effects of high memory access latencies, and the current cache hierarchy and hardware (HW) prefetching are not optimized for a hybrid memory system. We hypothesize that HW and SW prefetching can complement each other in placing data in caches and the Data Translation Look-aside Buffer (DTLB) prior to their references and that, by doing so adaptively, the highly varying access latencies of a DRAM/NVM hybrid memory system can be taken into account. This work contributes an adaptive SW prefetch method based on the characterization of read/write/unroll prefetch distances for NVM and DRAM. Prefetch performance is characterized via custom benchmarks based on STREAM2 specifications in a multicore MPI runtime environment and compared to the performance of the standard SW prefetch pass in GCC. Furthermore, the effects of HW prefetching on kernels executing on a hybrid memory system are evaluated. Experimental results indicate that SW prefetching targeted at populating the DTLB yields up to a 26% performance improvement when used symbiotically with HW prefetching, as opposed to HW prefetching alone.
DOI: 10.1109/MASCOTS50786.2020.9285963 | Published: 2020-11-17
Citations: 1
Improving NAND flash performance with read heat separation
R. Pletka, N. Papandreou, R. Stoica, H. Pozidis, Nikolas Ioannou, T. Fisher, Aaron Fry, Kip Ingram, Andrew Walls
The continuous growth in 3D-NAND flash storage density has primarily been enabled by 3D stacking and by increasing the number of bits stored per memory cell. Unfortunately, these desirable flash device design choices adversely affect reliability and latency characteristics. In particular, increasing the number of bits stored per cell requires applying additional voltage thresholds during each read operation, thereby increasing read latency. While most NAND flash challenges can be mitigated through appropriate background processing, the flash read latency characteristics cannot be hidden and remain the biggest challenge, especially for the newest flash generations that store four bits per cell. In this paper, we introduce read heat separation (RHS), a new heat-aware data-placement technique that exploits the skew present in real-world workloads to place frequently read user data on low-latency flash pages. Although conceptually simple, such a technique is difficult to integrate into a flash controller, as it introduces a significant amount of complexity, requires more metadata, and is further constrained by other flash-specific peculiarities. To overcome these challenges, we propose a novel flash controller architecture supporting read heat-aware data placement. We first discuss the trade-offs that such a new design entails and analyze the key aspects that influence the efficiency of RHS. Through both extensive simulations and an implementation in a commercial enterprise-grade solid-state drive controller, we show that our architecture can indeed significantly reduce the average read latency.
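The core of read heat separation, tracking per-block read counts and steering the hottest blocks toward low-latency pages at relocation time, can be sketched as follows. The fixed hot-fraction split is an illustrative assumption, not the controller's actual policy.

```python
from collections import Counter

# Illustrative read-heat tracker: count reads per logical block address and,
# when data is relocated (e.g. during garbage collection), split blocks into
# a hot set destined for low-latency pages and a cold set for the rest.

class ReadHeatSeparator:
    def __init__(self, hot_fraction=0.2):
        self.reads = Counter()          # lba -> observed read count
        self.hot_fraction = hot_fraction

    def record_read(self, lba):
        self.reads[lba] += 1

    def classify(self, lbas):
        """Return (hot, cold): the hottest hot_fraction of the given blocks
        (at least one) and the remainder, ranked by read count."""
        ranked = sorted(lbas, key=lambda b: self.reads[b], reverse=True)
        k = max(1, int(len(ranked) * self.hot_fraction))
        return ranked[:k], ranked[k:]
```

A production controller would age the counters over time and fold this decision into its relocation path, but the skew-exploiting principle is the same: the few frequently read blocks end up on the fastest pages.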
DOI: 10.1109/MASCOTS50786.2020.9285970 | Published: 2020-11-17
Citations: 5
Baloo: Measuring and Modeling the Performance Configurations of Distributed DBMS
Johannes Grohmann, Daniel Seybold, Simon Eismann, Mark Leznik, Samuel Kounev, Jörg Domaschka
Correctly configuring a distributed database management system (DBMS) deployed in a cloud environment for maximizing performance poses many challenges to operators. Even if the entire configuration spectrum could be measured directly, which is often infeasible due to the multitude of parameters, single measurements are subject to random variations and need to be repeated multiple times. In this work, we propose Baloo, a framework for systematically measuring and modeling different performance-relevant configurations of distributed DBMS in cloud environments. Baloo dynamically estimates the required number of measurement configurations, as well as the number of required measurement repetitions per configuration based on a desired target accuracy. We evaluate Baloo based on a data set consisting of 900 DBMS configuration measurements conducted in our private cloud setup. Our evaluation shows that the highly configurable framework is able to achieve a prediction error of up to 12 %, while saving over 80 % of the measurement effort. We also publish all code and the acquired data set to foster future research.
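The idea of deriving the number of measurement repetitions from a desired target accuracy can be illustrated with a simple stopping rule: repeat a noisy benchmark run until the confidence interval around the sample mean is tight enough relative to the target. This is a generic sketch of that idea, not Baloo's actual estimator; the measurement function and all parameters are made up:

```python
import math
import random
import statistics

def measure_until_accurate(measure, target_rel_err=0.05, z=1.96,
                           min_reps=3, max_reps=50):
    """Repeat a noisy measurement until the normal-approximation
    confidence-interval half-width falls below target_rel_err * mean,
    or until max_reps is reached. Returns (mean, repetitions used)."""
    samples = []
    while len(samples) < max_reps:
        samples.append(measure())
        if len(samples) >= min_reps:
            mean = statistics.mean(samples)
            sem = statistics.stdev(samples) / math.sqrt(len(samples))
            if z * sem <= target_rel_err * mean:
                break
    return statistics.mean(samples), len(samples)

# Hypothetical noisy DBMS benchmark run (ops/s); values are invented.
random.seed(1)
throughput = lambda: random.gauss(1000.0, 50.0)
mean, reps = measure_until_accurate(throughput)
print(f"{mean:.1f} ops/s after {reps} repetitions")
```

Noisier workloads yield a wider sample standard deviation and therefore more repetitions, which is the behavior that lets such a scheme save measurement effort on stable configurations.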
DOI: 10.1109/MASCOTS50786.2020.9285960. Published 2020-11-17.
Citations: 5
Security-Performance Trade-offs of Kubernetes Container Runtimes
William Viktorsson, C. Klein, Johan Tordsson
The extreme adoption rate of container technologies, along with raised security concerns, has resulted in the development of multiple alternative container runtimes that target security through additional layers of indirection. In an apples-to-apples comparison, we deploy three runtimes in the same Kubernetes cluster: the security-focused Kata and gVisor, as well as the default Kubernetes runtime, runC. Our evaluation based on three real applications demonstrates that runC outperforms the more secure alternatives by up to 5x, that gVisor deploys containers up to 2x faster than Kata, but that Kata executes containers up to 1.6x faster than gVisor. Our work illustrates that alternative, more secure runtimes can be used in a plug-and-play manner in Kubernetes, but at a significant performance penalty. Our study is useful both to practitioners - to understand the current state of the technology in order to make the right decision in the selection, operation and/or design of platforms - and to scholars to illustrate how these technologies evolved over time.
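The plug-and-play selection among such runtimes goes through the Kubernetes RuntimeClass API: a RuntimeClass object names a node-level runtime handler, and a pod opts in via `spec.runtimeClassName` (omitting it falls back to the cluster default, typically runC). A minimal sketch that emits the corresponding manifests; note that handler names such as "runsc" for gVisor and "kata" for Kata depend on the node's containerd configuration and are assumptions here:

```python
import json

def runtime_class(name, handler):
    # RuntimeClass maps a cluster-visible name to a node-level handler.
    return {
        "apiVersion": "node.k8s.io/v1",
        "kind": "RuntimeClass",
        "metadata": {"name": name},
        "handler": handler,
    }

def pod_with_runtime(pod_name, runtime_class_name):
    # A pod selects a runtime simply by naming its RuntimeClass.
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "runtimeClassName": runtime_class_name,
            "containers": [{"name": "app", "image": "nginx:1.25"}],
        },
    }

manifests = [
    runtime_class("kata", "kata"),      # Kata Containers (lightweight VMs)
    runtime_class("gvisor", "runsc"),   # gVisor (user-space kernel)
    pod_with_runtime("secure-app", "gvisor"),
]
print(json.dumps(manifests, indent=2))
```

Because the pod spec changes by a single field, the runtimes really are interchangeable from the application's point of view, which is what makes the paper's apples-to-apples comparison possible.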
DOI: 10.1109/MASCOTS50786.2020.9285946. Published 2020-11-17.
Citations: 11
Journal
2020 28th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS)