Entropy-Based Analysis of Benchmarks for Instruction Set Simulators
Nils Bosbach, Lukas Jünger, Rebecca Pelke, Niko Zurstraßen, R. Leupers
Proceedings of the DroneSE and RAPIDO: System Engineering for constrained embedded systems, published 2023-01-17
DOI: 10.1145/3579170.3579267 (https://doi.org/10.1145/3579170.3579267)
Citations: 0
Abstract
Instruction-Set Simulators (ISSs) are widely used to simulate the execution of programs for a target architecture on a host machine. They translate the instructions of the program to be executed into instructions of the host Instruction-Set Architecture (ISA). The performance of an ISS strongly depends on its implementation and on the instructions it executes. Therefore, benchmarks used to compare the performance of ISSs should contain a variety of instructions. Since many benchmarks are written in high-level programming languages, it is usually not clear to the user which instructions underlie a benchmark. In this work, we present a tool that can be used to analyze the variety of instructions used in a benchmark. In a multi-stage analysis, the properties of the benchmarks are collected. An entropy-based metric is used to measure the diversity of the instructions used by the benchmark. In a case study, we present results for the benchmarks Whetstone, Dhrystone, CoreMark, STREAM, and stdcbench. We show the diversity of these benchmarks for different compiler optimizations and indicate which benchmarks should be used to test the general performance of an ISS.
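The abstract does not spell out the exact metric, but an entropy-based diversity measure over an instruction trace is typically a normalized Shannon entropy of the executed-mnemonic histogram, H = -Σ pᵢ log₂ pᵢ divided by log₂ of the number of distinct mnemonics. The following is a minimal sketch under that assumption; the function name, the toy trace, and the normalization choice are illustrative, not taken from the paper:

```python
from collections import Counter
from math import log2

def instruction_entropy(instructions):
    """Normalized Shannon entropy of an instruction-mnemonic trace.

    `instructions` is a sequence of executed mnemonics, e.g. ["add", "ldr", ...].
    Returns a value in [0, 1]: 0 means a single opcode dominates the trace,
    1 means all observed opcodes occur equally often.
    """
    counts = Counter(instructions)
    if len(counts) <= 1:
        return 0.0  # a one-opcode trace has no diversity
    total = sum(counts.values())
    # Shannon entropy of the opcode frequency distribution
    entropy = -sum((c / total) * log2(c / total) for c in counts.values())
    # Normalize by the maximum entropy for this many distinct opcodes
    return entropy / log2(len(counts))

# Hypothetical usage with a toy trace:
trace = ["add", "add", "ldr", "str", "mul", "add", "b"]
print(f"diversity = {instruction_entropy(trace):.3f}")
```

Under such a metric, a benchmark whose hot loop exercises only a handful of arithmetic opcodes scores low, while one that spreads execution across loads, stores, branches, and arithmetic scores closer to 1, which matches the paper's goal of identifying benchmarks suited to testing the general performance of an ISS.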