{"title":"Empirical evaluation of research prototypes at variable stages of maturity","authors":"O. Badreddin","doi":"10.1109/USER.2013.6603076","DOIUrl":null,"url":null,"abstract":"Empirical evaluation of research tools is growing especially in the field of software engineering. A number of research techniques have been proposed and used in evaluating research prototypes. We take the view that evaluation of software engineering tools is best achieved in industrial settings, with real life artifacts and tasks, and with professional software engineers. However, the feasibility of such evaluation is limited for many reasons. Some challenges are related to the prototypes under study, others are related to the industrial environments where the need to meet business requirements take precedence on experimenting with new tools and techniques. In this paper, we summarize our experiences in evaluating a research prototype tool using a grounded theory study, a questionnaire, and a controlled experiment. We discuss the challenges that hindered our industrial evaluation and share ideas on how to overcome these challenges. We propose an action research study where the research tool is used by a small number of experienced professionals in an industrial project.","PeriodicalId":319590,"journal":{"name":"2013 2nd International Workshop on User Evaluations for Software Engineering Researchers (USER)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 2nd International Workshop on User Evaluations for Software Engineering Researchers (USER)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/USER.2013.6603076","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
Empirical evaluation of research tools is growing, especially in the field of software engineering. A number of research techniques have been proposed and used to evaluate research prototypes. We take the view that evaluation of software engineering tools is best achieved in industrial settings, with real-life artifacts and tasks, and with professional software engineers. However, the feasibility of such evaluation is limited for many reasons. Some challenges relate to the prototypes under study; others relate to industrial environments, where the need to meet business requirements takes precedence over experimenting with new tools and techniques. In this paper, we summarize our experiences in evaluating a research prototype tool using a grounded theory study, a questionnaire, and a controlled experiment. We discuss the challenges that hindered our industrial evaluation and share ideas on how to overcome them. We propose an action research study in which the research tool is used by a small number of experienced professionals in an industrial project.