
BenchCouncil Transactions on Benchmarks, Standards and Evaluations: Latest Articles

Workflow Critical Path: A data-oriented critical path metric for Holistic HPC Workflows
Pub Date : 2021-10-01 DOI: 10.1016/j.tbench.2021.100001
Daniel D. Nguyen, Karen L. Karavanic

Current trends in HPC, such as the push to exascale, convergence with Big Data, and growing complexity of HPC applications, have created gaps that traditional performance tools do not cover. One example is Holistic HPC Workflows — HPC workflows comprising multiple codes, paradigms, or platforms that are not developed using a workflow management system. To diagnose the performance of these applications, we define a new metric called Workflow Critical Path (WCP), a data-oriented metric for Holistic HPC Workflows. WCP constructs graphs that span across the workflow codes and platforms, using data states as vertices and data mutations as edges. Using cloud-based technologies, we implement a prototype called Crux, a distributed analysis tool for calculating and visualizing WCP. Our experiments with a workflow simulator on Amazon Web Services show Crux is scalable and capable of correctly calculating WCP for common Holistic HPC workflow patterns. We explore the use of WCP and discuss how Crux could be used in a production HPC environment.
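The data-state graph at the heart of WCP lends itself to a longest-path computation: with data states as vertices and timed data mutations as weighted edges, the workflow critical path is the most time-consuming chain of mutations. The sketch below is a minimal illustration of that idea, not the Crux implementation; the state names, durations, and the two-code workflow are hypothetical:

```python
from collections import defaultdict

def workflow_critical_path(mutations):
    """Longest (most time-consuming) chain of data mutations in a DAG.

    mutations: list of (src_state, dst_state, seconds) edges, where
    vertices are data states and edge weights are mutation durations.
    Returns (total_seconds, [states on the critical path]).
    """
    graph = defaultdict(list)
    states = set()
    for src, dst, dt in mutations:
        graph[src].append((dst, dt))
        states.update((src, dst))

    memo = {}

    def longest_from(state):
        # Longest weighted path starting at `state` (acyclic graph assumed).
        if state not in memo:
            best = (0.0, [state])
            for nxt, dt in graph[state]:
                cost, path = longest_from(nxt)
                if dt + cost > best[0]:
                    best = (dt + cost, [state] + path)
            memo[state] = best
        return memo[state]

    return max((longest_from(s) for s in states), key=lambda r: r[0])

# Hypothetical two-code workflow: a simulation writes a checkpoint that an
# analysis code later reads; the slower chain is the critical path.
edges = [
    ("input", "sim_mem", 5.0),      # simulation compute
    ("sim_mem", "ckpt_file", 2.0),  # checkpoint write
    ("ckpt_file", "analysis_mem", 1.0),
    ("analysis_mem", "report", 3.0),
    ("input", "viz_frame", 4.0),    # a faster side branch
]
cost, path = workflow_critical_path(edges)
print(cost, path)  # 11.0 ['input', 'sim_mem', 'ckpt_file', 'analysis_mem', 'report']
```

Because vertices are data states rather than processes, the path naturally crosses code and platform boundaries: the checkpoint file above links two otherwise independent programs.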

Citations: 0
Latency-aware automatic CNN channel pruning with GPU runtime analysis
Pub Date : 2021-10-01 DOI: 10.1016/j.tbench.2021.100009
Jiaqiang Liu, Jingwei Sun, Zhongtian Xu, Guangzhong Sun

The huge storage and computation cost of convolutional neural networks (CNN) make them challenging to meet the real-time inference requirement in many applications. Existing channel pruning methods mainly focus on removing unimportant channels in a CNN model based on rule-of-thumb designs, using reduced floating-point operations (FLOPs) and parameter numbers to measure the pruning quality. The inference latency of pruned models is often overlooked. In this paper, we propose a latency-aware automatic CNN channel pruning method (LACP), which aims to search low latency and accurate pruned network structure automatically. We evaluate the inaccuracy of measuring pruning quality by FLOPs and the number of parameters, and use the model inference latency as the direct optimization metric. To bridge model pruning and inference acceleration, we analyze the inference latency of convolutional layers on GPU. Results show that the inference latency of convolutional layers exhibits a staircase pattern along with channel number due to the GPU tail effect. Based on that observation, we greatly shrink the search space of network structures. Then we apply an evolutionary procedure to search a computationally efficient pruned network structure, which reduces the inference latency and maintains the model accuracy. Experiments and comparisons with state-of-the-art methods on three image classification datasets show that our method can achieve better inference acceleration with less accuracy loss.
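The staircase observation is what shrinks the search space: channel counts that land on the same latency step are indistinguishable in latency, so only the largest count per step is worth searching. A toy sketch of that idea; the step size of 32 and the latency constants are assumptions for illustration, not measured values:

```python
import math

def measured_latency(channels, step=32, base_ms=0.5, per_step_ms=0.4):
    # Toy stand-in for profiling a conv layer on GPU: latency jumps only
    # when the channel count crosses a multiple of `step` (tail effect).
    return base_ms + per_step_ms * math.ceil(channels / step)

def staircase_candidates(max_channels, step=32):
    # Keep only the largest channel count on each latency step: any smaller
    # count on the same step has equal latency but less model capacity.
    return [c for c in range(1, max_channels + 1)
            if c % step == 0 or c == max_channels]

cands = staircase_candidates(256)
print(len(cands))  # 8 candidates instead of 256
print(cands)       # [32, 64, 96, 128, 160, 192, 224, 256]
```

An evolutionary search over per-layer choices drawn from these step edges then explores a space that is orders of magnitude smaller than the full per-channel space.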

Citations: 7
Benchmarking for Observability: The Case of Diagnosing Storage Failures
Pub Date : 2021-10-01 DOI: 10.1016/j.tbench.2021.100006
Duo Zhang, Mai Zheng

Diagnosing storage system failures is challenging even for professionals. One recent example is the “When Solid State Drives Are Not That Solid” incident that occurred at the Algolia data center, where Samsung SSDs were mistakenly blamed for failures caused by a Linux kernel bug. As system complexity keeps increasing, diagnosing failures will likely become more difficult.

To better understand real-world failures and the potential limitations of state-of-the-art tools, we first conduct an empirical study on 277 user-reported storage failures in this paper. We characterize the issues along multiple dimensions (e.g., time to resolve, kernel components involved), which provides a quantitative measurement of the challenge in practice. Moreover, we analyze a set of the storage issues in depth and derive a benchmark suite called BugBench^k. The benchmark suite includes the necessary workloads and software environments to reproduce 9 storage failures, covers 4 different file systems and the block I/O layer of the storage stack, and enables realistic evaluation of diverse kernel-level tools for debugging.

To demonstrate the usage, we apply BugBench^k to study two representative tools for debugging. We focus on measuring the observations that the tools enable developers to make (i.e., observability), and derive concrete metrics to measure the observability qualitatively and quantitatively. Our measurement demonstrates the different design tradeoffs in terms of debugging information and overhead. More importantly, we observe that both tools may behave abnormally when applied to diagnose a few tricky cases. Also, we find that neither tool can provide low-level information on how the persistent storage states are changed, which is essential for understanding storage failures. To address the limitation, we develop lightweight extensions to enable such functionality in both tools. We hope that BugBench^k and the enabled measurements will inspire follow-up research in benchmarking and tool support and help address the challenge of failure diagnosis in general.

Citations: 5
Benchmarking feature selection methods with different prediction models on large-scale healthcare event data
Pub Date : 2021-10-01 DOI: 10.1016/j.tbench.2021.100004
Fan Zhang , Chunjie Luo , Chuanxin Lan , Jianfeng Zhan

With the development of the Electronic Health Record (EHR) technique, vast volumes of digital clinical data are generated. Based on the data, many methods are developed to improve the performance of clinical predictions. Among those methods, Deep Neural Networks (DNN) have been proven outstanding with respect to accuracy by employing many patient instances and events (features). However, each patient-specific event requires time and money. Collecting too many features before making a decision is impractical, especially for time-critical tasks such as mortality prediction. So it is essential to predict with high accuracy using as few clinical events as possible, which makes feature selection a critical question. This paper presents detailed benchmarking results of various feature selection methods, applying different classification and regression algorithms for clinical prediction tasks, including mortality prediction, length of stay prediction, and ICD-9 code group prediction. We use the publicly available dataset, Medical Information Mart for Intensive Care III (MIMIC-III), in our experiments. Our results show that Genetic Algorithm (GA) based methods perform well with only a few features and outperform others. Besides, for the mortality prediction task, the feature subset selected by GA for one classifier can also be used by others while achieving good performance.
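GA-based feature selection of the kind benchmarked here can be sketched as a simple genetic algorithm over feature-subset bitmasks. The fitness function below is a toy surrogate that rewards informative features and penalizes subset size; it stands in for a real model's validation accuracy on MIMIC-III, which is not reproduced here:

```python
import random

def ga_feature_selection(n_features, fitness, pop_size=20, generations=30,
                         mutation_rate=0.05, seed=0):
    """Tiny genetic algorithm over feature-subset bitmasks.

    fitness(mask) should reward predictive value and penalize subset size;
    the GA only needs a score, so any model's validation accuracy fits.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)      # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_features):             # bit-flip mutation
                if rng.random() < mutation_rate:
                    child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy surrogate: features 0-4 are informative, the rest only add cost.
def fitness(mask):
    return sum(mask[:5]) - 0.02 * sum(mask)

best = ga_feature_selection(20, fitness)
print(sum(best[:5]), "informative /", sum(best[5:]), "noisy features selected")
```

The same selected bitmask can then be handed to a different classifier, which is the cross-model reuse the abstract reports for the mortality task.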

Citations: 1
Workflow Critical Path: A data-oriented critical path metric for Holistic HPC Workflows
Pub Date : 2021-10-01 DOI: 10.15760/etd.7369
Daniel D. Nguyen, K. Karavanic
This entry is the author's thesis; the listing reproduces its table of contents in place of an abstract:

List of Tables; List of Figures
Chapter 1: Introduction (1.1 Motivation; 1.2 Definitions; 1.3 Thesis Statement; 1.4 Contributions)
Chapter 2: Background (2.1 Parallel Computing; 2.2 Critical Path Analysis; 2.3 High Performance Computing; 2.4 Holistic HPC Workflows; 2.5 Instrumentation, Profiling, and Tracing)
Chapter 3: Related Work (3.1 Workflow Management Systems; 3.2 Distributed Systems Tracing Tools; 3.3 HPC Performance Measurement Tools; 3.4 Performance Analysis of Scientific Workflows)
Chapter 4: Architecture (4.1 Data State; 4.2 Crux UI; 4.3 Crux API; 4.4 Crux Database; 4.5 Crux Critical Path Algorithm; 4.6 Deployment on an HPC Cluster; 4.7 Instrumentation of HPC Applications)
Citations: 1
Revisiting the effects of the Spectre and Meltdown patches using the top-down microarchitectural method and purchasing power parity theory
Pub Date : 2021-10-01 DOI: 10.1016/j.tbench.2021.100011
Yectli A. Huerta , David J. Lilja

Software patches are made available to fix security vulnerabilities, enhance performance, and improve usability. Previous works focused on measuring the performance effect of patches on benchmark runtimes. In this study, we used the Top-Down microarchitecture analysis method to understand how pipeline bottlenecks were affected by the application of the Spectre and Meltdown security patches. Bottleneck analysis makes it possible to better understand how different hardware resources are being utilized, highlighting portions of the pipeline where possible improvements could be achieved. We complement the Top-Down analysis technique with a normalization technique from the field of economics, purchasing power parity (PPP), to better understand the relative difference between patched and unpatched runs. In this study, we showed that security patches had an effect that was reflected in the corresponding Top-Down metrics. We showed that recent compilers are not as negatively affected as previously reported. Out of the 14 benchmarks that make up the SPEC OMP2012 suite, three had noticeable slowdowns when the patches were applied. We also found that Top-Down metrics had large relative differences when the security patches were applied, differences that standard techniques based on absolute, non-normalized metrics failed to highlight.
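The PPP-inspired normalization amounts to expressing each metric's change relative to its own baseline rather than in absolute terms, so a small category that doubles is not drowned out by a large one that barely moves. A minimal sketch with hypothetical Top-Down category fractions (the numbers are invented for illustration, not taken from the paper):

```python
def relative_difference(patched, unpatched):
    # PPP-style normalization: each metric's change as a fraction of its
    # own unpatched baseline, rather than an absolute delta.
    return {k: (patched[k] - unpatched[k]) / unpatched[k]
            for k in unpatched if unpatched[k] != 0}

# Hypothetical Top-Down category fractions before and after patching.
unpatched = {"frontend_bound": 0.10, "backend_bound": 0.55,
             "bad_speculation": 0.05, "retiring": 0.30}
patched   = {"frontend_bound": 0.12, "backend_bound": 0.57,
             "bad_speculation": 0.10, "retiring": 0.21}

rel = relative_difference(patched, unpatched)
# bad_speculation doubles (+100%) even though its absolute change (0.05)
# equals backend_bound's, whose relative change is under 4%.
print(max(rel, key=rel.get))  # bad_speculation
```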

"Revisiting the effects of the Spectre and Meltdown patches using the top-down microarchitectural method and purchasing power parity theory" — Yectli A. Huerta, David J. Lilja. BenchCouncil Transactions on Benchmarks, Standards and Evaluations, vol. 1, no. 1, Article 100011, 2021-10-01. DOI: 10.1016/j.tbench.2021.100011
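The PPP-style comparison described in the abstract can be sketched in a few lines: express each Top-Down category as a share of total pipeline slots, then compare the shares of a patched run against an unpatched baseline as ratios. This is an illustrative sketch, not the authors' exact formulation; the category names follow the standard top-level Top-Down breakdown, and the slot counts are hypothetical.

```python
# Sketch of a PPP-style normalized comparison of Top-Down metrics
# between an unpatched (baseline) run and a patched run.
# Assumption: illustrative only; slot counts are made up.

def normalize(metrics):
    """Express each Top-Down category as a share of total pipeline slots."""
    total = sum(metrics.values())
    return {k: v / total for k, v in metrics.items()}

def relative_difference(baseline, patched):
    """Ratio of normalized shares, patched vs. baseline, per category."""
    base_n = normalize(baseline)
    patch_n = normalize(patched)
    return {k: patch_n[k] / base_n[k] for k in base_n}

# Hypothetical top-level Top-Down slot counts for one benchmark
unpatched = {"Frontend_Bound": 20, "Bad_Speculation": 5,
             "Backend_Bound": 45, "Retiring": 30}
patched   = {"Frontend_Bound": 22, "Bad_Speculation": 5,
             "Backend_Bound": 52, "Retiring": 21}

for category, ratio in relative_difference(unpatched, patched).items():
    print(f"{category}: {ratio:.2f}x")
```

Because the shares are normalized before comparison, a category can show a large relative shift even when the absolute slot counts barely move, which is the effect the abstract says absolute, non-normalized metrics fail to highlight.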
Citations: 0
Journal: BenchCouncil Transactions on Benchmarks, Standards and Evaluations