{"title":"Perspector: Benchmarking Benchmark Suites","authors":"Sandeep Kumar, Abhisek Panda, S. Sarangi","doi":"10.23919/DATE56975.2023.10136940","DOIUrl":null,"url":null,"abstract":"Estimating the quality of a benchmark suite is a non-trivial task. A poorly selected or improperly configured bench-mark suite can present a distorted picture of the performance of the evaluated framework. With computing venturing into new domains, the total number of benchmark suites available is increasing by the day. Researchers must evaluate these suites quickly and decisively for their effectiveness. We present Perspector, a novel tool to quantify the performance of a benchmark suite. Perspector comprises novel metrics to characterize the quality of a benchmark suite. It provides a math-ematical framework for capturing some qualitative suggestions and observations made in prior work. The metrics are generic and domain-agnostic. Furthermore, our tool can be used to compare the efficacy of one suite vis-a-vis other benchmark suites, systematically and rigorously create a suite of workloads, and appropriately tune them for a target system.","PeriodicalId":340349,"journal":{"name":"2023 Design, Automation & Test in Europe Conference & Exhibition (DATE)","volume":"147 ","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 Design, Automation & Test in Europe Conference & Exhibition (DATE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/DATE56975.2023.10136940","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Estimating the quality of a benchmark suite is a non-trivial task. A poorly selected or improperly configured benchmark suite can present a distorted picture of the performance of the framework under evaluation. As computing ventures into new domains, the number of available benchmark suites grows by the day, and researchers must be able to evaluate their effectiveness quickly and conclusively. We present Perspector, a novel tool to quantify the performance of a benchmark suite. Perspector comprises novel metrics that characterize the quality of a benchmark suite, and it provides a mathematical framework that captures qualitative suggestions and observations made in prior work. The metrics are generic and domain-agnostic. Furthermore, our tool can be used to compare the efficacy of one benchmark suite vis-à-vis others, to systematically and rigorously assemble a suite of workloads, and to tune it appropriately for a target system.
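The abstract does not define Perspector's metrics, but a minimal sketch can illustrate the kind of generic, domain-agnostic quality signal such a tool might compute. The sketch below scores a suite by how widely its benchmarks spread across a shared feature space (for example, normalized hardware-counter profiles), so a higher score suggests less redundancy among workloads. The function name `suite_diversity`, the feature representation, and the mean-pairwise-distance metric are all illustrative assumptions, not the paper's actual definitions.

```python
import numpy as np

def suite_diversity(features: np.ndarray) -> float:
    """Mean pairwise Euclidean distance between benchmark feature vectors.

    features: (n_benchmarks, n_features) array, one row per benchmark.
    NOTE: this is a hypothetical stand-in metric, not Perspector's.
    """
    n = len(features)
    if n < 2:
        return 0.0
    dists = [
        np.linalg.norm(features[i] - features[j])
        for i in range(n)
        for j in range(i + 1, n)
    ]
    return float(np.mean(dists))

# Toy comparison of two suites over the same (made-up) feature space.
rng = np.random.default_rng(0)
suite_a = rng.random((8, 4))  # 8 benchmarks with spread-out profiles
# 8 near-duplicate benchmarks: small perturbations of one profile.
suite_b = np.tile(rng.random(4), (8, 1)) + 0.01 * rng.random((8, 4))

print(f"suite A diversity: {suite_diversity(suite_a):.3f}")
print(f"suite B diversity: {suite_diversity(suite_b):.3f}")  # near zero
```

Under these assumptions, suite B's low score flags it as redundant: its workloads exercise nearly identical behavior, which is one way a suite can "present a distorted picture" of a system's performance.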