{"title":"CPI for Runtime Performance Measurement: The Good, the Bad, and the Ugly","authors":"Li Yi, Cong Li, Jianmei Guo","doi":"10.1109/IISWC50251.2020.00019","DOIUrl":null,"url":null,"abstract":"Originally used for micro-architectural performance characterization, the metric of cycles per instruction (CPI) is now emerging as a proxy for workload performance measurement in runtime cloud environments. It has been used to evaluate the performance per workload before and after applying a system configuration change and to detect contentions on the micro-architectural resources in workload colocation. In this paper, we re-examine the use of CPI on two representative cloud computing workloads. An alternative metric, reference cycles per instruction (RCPI), is defined for comparison. We show that CPI is more sensitive than RCPI in identifying micro-architectural performance change in some cases. However, in the other cases with a different frequency scaling, we observe a better CPI value given a worse performance. We conjecture that both the observations are due to the bias of CPI towards scenarios with a low core frequency. We next demonstrate that a significant change in either CPI or RCPI does not necessarily indicate a boost or loss in performance, since both CPI and RCPI are dependent on workload intensities. It implies that the use of CPI without referring to the workload intensity is probably inappropriate. This provokes the discussion of the right way to use CPI, e.g., modeling CPI as a dependent variable given other relevant factors as the independent variables.","PeriodicalId":365983,"journal":{"name":"2020 IEEE International Symposium on Workload Characterization (IISWC)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Symposium on Workload Characterization (IISWC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IISWC50251.2020.00019","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Originally used for micro-architectural performance characterization, the metric of cycles per instruction (CPI) is now emerging as a proxy for workload performance measurement in runtime cloud environments. It has been used to evaluate per-workload performance before and after a system configuration change and to detect contention for micro-architectural resources when workloads are colocated. In this paper, we re-examine the use of CPI on two representative cloud computing workloads. An alternative metric, reference cycles per instruction (RCPI), is defined for comparison. We show that CPI is more sensitive than RCPI in identifying micro-architectural performance changes in some cases. In other cases, however, under a different frequency-scaling setting, we observe a better CPI value despite worse performance. We conjecture that both observations stem from the bias of CPI towards scenarios with a low core frequency. We next demonstrate that a significant change in either CPI or RCPI does not necessarily indicate a gain or loss in performance, since both CPI and RCPI depend on workload intensity. This implies that using CPI without reference to workload intensity is likely inappropriate, which provokes a discussion of the right way to use CPI, e.g., modeling CPI as a dependent variable with other relevant factors as the independent variables.
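To make the two metrics concrete: CPI is retired cycles divided by retired instructions, while RCPI uses reference cycles (counted at the nominal frequency, so it is insensitive to frequency scaling) in the numerator. The sketch below is not from the paper; it is a minimal illustration, assuming a Linux machine with `perf` installed and the standard cycles, ref-cycles, and instructions events available. The CSV field layout and event names printed by `perf stat` can vary by kernel and CPU, so the parsing here is an assumption that may need adjustment.

```python
# Minimal sketch (assumption, not the paper's tooling): estimate CPI and RCPI
# for a command by running it under `perf stat` and parsing the CSV output.
import subprocess

def measure_cpi_rcpi(cmd):
    """Run cmd under perf stat and return (CPI, RCPI)."""
    # -x "," emits CSV lines of the form: value,unit,event,...  on stderr.
    result = subprocess.run(
        ["perf", "stat", "-x", ",", "-e", "cycles,ref-cycles,instructions"] + cmd,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.PIPE,
        text=True,
    )
    counts = {}
    for line in result.stderr.splitlines():
        fields = line.split(",")
        # Skip headers and "<not supported>"/"<not counted>" rows.
        if len(fields) >= 3 and fields[0].strip().isdigit():
            counts[fields[2].strip()] = int(fields[0])
    cycles = counts.get("cycles", 0)
    ref_cycles = counts.get("ref-cycles", 0)
    instructions = counts.get("instructions", 1)  # avoid division by zero
    return cycles / instructions, ref_cycles / instructions

if __name__ == "__main__":
    # Example: measure a trivial workload; replace with the workload of interest.
    cpi, rcpi = measure_cpi_rcpi(["sleep", "1"])
    print(f"CPI  = {cpi:.3f}")
    print(f"RCPI = {rcpi:.3f}")
```

Because RCPI's numerator ticks at the reference (nominal) frequency, comparing CPI and RCPI for the same run gives a rough view of how much frequency scaling, rather than micro-architectural behavior, is moving the per-instruction cost.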