Evaluating program analysis and testing tools with the RUGRAT random benchmark application generator

Ishtiaque Hussain, Christoph Csallner, M. Grechanik, Chen Fu, Qing Xie, Sangmin Park, Kunal Taneja, B. M. Hossain
{"title":"使用RUGRAT随机基准应用程序生成器评估程序分析和测试工具","authors":"Ishtiaque Hussain, Christoph Csallner, M. Grechanik, Chen Fu, Qing Xie, Sangmin Park, Kunal Taneja, B. M. Hossain","doi":"10.1145/2338966.2336798","DOIUrl":null,"url":null,"abstract":"Benchmarks are heavily used in different areas of computer science to evaluate algorithms and tools. In program analysis and testing, open-source and commercial programs are routinely used as bench- marks to evaluate different aspects of algorithms and tools. Unfor- tunately, many of these programs are written by programmers who introduce different biases, not to mention that it is very difficult to find programs that can serve as benchmarks with high reproducibil- ity of results. We propose a novel approach for generating random benchmarks for evaluating program analysis and testing tools. Our approach uses stochastic parse trees, where language grammar production rules are assigned probabilities that specify the frequencies with which instantiations of these rules will appear in the generated pro- grams. We implemented our tool for Java and applied it to generate benchmarks with which we evaluated different program analysis and testing tools. Our tool was also implemented by a major soft- ware company for C++ and used by a team of developers to gener- ate benchmarks that enabled them to reproduce a bug in less than four hours.","PeriodicalId":315305,"journal":{"name":"International Workshop on Dynamic Analysis","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"Evaluating program analysis and testing tools with the RUGRAT random benchmark application generator\",\"authors\":\"Ishtiaque Hussain, Christoph Csallner, M. Grechanik, Chen Fu, Qing Xie, Sangmin Park, Kunal Taneja, B. M. Hossain\",\"doi\":\"10.1145/2338966.2336798\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Benchmarks are heavily used in different areas of computer science to evaluate algorithms and tools. In program analysis and testing, open-source and commercial programs are routinely used as bench- marks to evaluate different aspects of algorithms and tools. Unfor- tunately, many of these programs are written by programmers who introduce different biases, not to mention that it is very difficult to find programs that can serve as benchmarks with high reproducibil- ity of results. We propose a novel approach for generating random benchmarks for evaluating program analysis and testing tools. Our approach uses stochastic parse trees, where language grammar production rules are assigned probabilities that specify the frequencies with which instantiations of these rules will appear in the generated pro- grams. We implemented our tool for Java and applied it to generate benchmarks with which we evaluated different program analysis and testing tools. 
Our tool was also implemented by a major soft- ware company for C++ and used by a team of developers to gener- ate benchmarks that enabled them to reproduce a bug in less than four hours.\",\"PeriodicalId\":315305,\"journal\":{\"name\":\"International Workshop on Dynamic Analysis\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-07-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Workshop on Dynamic Analysis\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2338966.2336798\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Workshop on Dynamic Analysis","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2338966.2336798","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 9

Abstract

Benchmarks are heavily used in different areas of computer science to evaluate algorithms and tools. In program analysis and testing, open-source and commercial programs are routinely used as benchmarks to evaluate different aspects of algorithms and tools. Unfortunately, many of these programs are written by programmers who introduce different biases, not to mention that it is very difficult to find programs that can serve as benchmarks with high reproducibility of results. We propose a novel approach for generating random benchmarks for evaluating program analysis and testing tools. Our approach uses stochastic parse trees, where language grammar production rules are assigned probabilities that specify the frequencies with which instantiations of these rules will appear in the generated programs. We implemented our tool for Java and applied it to generate benchmarks with which we evaluated different program analysis and testing tools. Our tool was also implemented by a major software company for C++ and used by a team of developers to generate benchmarks that enabled them to reproduce a bug in less than four hours.
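To illustrate the idea of probability-weighted production rules described in the abstract, the following is a minimal, hypothetical Java sketch of stochastic grammar expansion. It is not the RUGRAT implementation: the toy grammar, rule names, weights, and depth limit are all illustrative assumptions; the only point it demonstrates is that assigning probabilities to production rules controls how often each rule's instantiations appear in the generated program text.

```java
import java.util.*;

// Hypothetical sketch of stochastic grammar-based program generation,
// in the spirit of the approach described above. Grammar and weights
// are made up for illustration; this is not RUGRAT's actual code.
public class StochasticGrammarSketch {

    // A production rule: a selection probability and the symbols it rewrites to.
    record Rule(double probability, List<String> rhs) {}

    // Toy grammar: upper-case symbols are nonterminals, anything else is emitted verbatim.
    static final Map<String, List<Rule>> GRAMMAR = Map.of(
        "STMT", List.of(
            new Rule(0.5, List.of("ASSIGN")),
            new Rule(0.3, List.of("IF")),
            new Rule(0.2, List.of("ASSIGN", "STMT"))),   // occasionally chain statements
        "ASSIGN", List.of(
            new Rule(1.0, List.of("int x = ", "EXPR", ";\n"))),
        "IF", List.of(
            new Rule(1.0, List.of("if (", "EXPR", " > 0) {\n", "STMT", "}\n"))),
        "EXPR", List.of(
            new Rule(0.6, List.of("1")),
            new Rule(0.4, List.of("EXPR", " + ", "EXPR"))));

    static final Random RNG = new Random(42);  // fixed seed so generated benchmarks are reproducible

    // Pick one rule for a nonterminal according to the configured probabilities.
    static Rule sample(List<Rule> rules) {
        double r = RNG.nextDouble(), cumulative = 0.0;
        for (Rule rule : rules) {
            cumulative += rule.probability();
            if (r <= cumulative) return rule;
        }
        return rules.get(rules.size() - 1);  // guard against floating-point rounding
    }

    // Recursively expand a symbol of the stochastic parse tree.
    static void expand(String symbol, int depth, StringBuilder out) {
        List<Rule> rules = GRAMMAR.get(symbol);
        if (rules == null) {                 // terminal: emit as-is
            out.append(symbol);
            return;
        }
        if (depth <= 0) {                    // depth exhausted: fall back to the first (shortest) rule
            rules = List.of(rules.get(0));
        }
        for (String s : sample(rules).rhs()) {
            expand(s, depth - 1, out);
        }
    }

    public static void main(String[] args) {
        StringBuilder program = new StringBuilder();
        expand("STMT", 8, program);
        System.out.println(program);         // one randomly generated code fragment
    }
}
```

Raising the weight of a rule (for example, the recursive "STMT" or "EXPR" alternatives in this sketch) makes its instantiations appear more frequently, which is how a generator of this kind can steer the size and shape of the benchmark programs it produces.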