Engineering sciences study different topics than the natural sciences, and utility is an essential factor in choosing engineering research problems. Despite these differences, research methods for the engineering sciences are no different from research methods for any other kind of science; at most there is a difference in emphasis. In requirements engineering research, and more generally in software engineering research, there is confusion about the relative roles of research and design and about the methods appropriate for each of these activities. This paper analyzes these roles and provides a classification of research methods that can be used in any science, engineering or otherwise.
R. Wieringa and H. Heerkens, "Designing Requirements Engineering Research," in 2007 Fifth International Workshop on Comparative Evaluation in Requirements Engineering, Oct. 2007. doi:10.1109/CERE.2007.4
Agile teams commonly use User Stories, conversations with On-Site Customers, and Test Cases to gather requirements. Some Agile teams add other artifacts, such as Use Cases, to provide more detail to the Agile Requirements. This paper presents the results of a controlled experiment aimed at learning whether Use Cases could help Agile Requirements and, indirectly, whether Agile Requirements techniques are sufficient on their own. In the study, subjects were given requirements for three maintenance tasks as Use Cases, as Agile Requirements, or as both. We found that subjects using Use Cases spent less time understanding requirements than subjects not using Use Cases. In addition, the presence of the Use Cases helped subjects ask better questions of the On-Site Customer. However, we could not determine whether subjects using Use Cases understood the requirements better. We conclude that the inclusion of Use Cases in Agile Requirements could benefit Agile teams.
R. Gallardo-Valencia, V. Olivera, and S. Sim, "Are Use Cases Beneficial for Developers Using Agile Requirements?," in 2007 Fifth International Workshop on Comparative Evaluation in Requirements Engineering, Oct. 2007. doi:10.1109/CERE.2007.2
T. Alspaugh, S. Sim, K. Winbladh, M. H. Diallo, L. Naslavsky, H. Ziv, D. Richardson
We studied the clarity of three requirements forms, operationalized as ease of problem detection, freedom from obstructions to understanding, and understandability by a variety of stakeholders. A set of use cases for an industrial system was translated into ScenarioML scenarios and into sequence diagrams; problems identified during each translation were noted; and all three forms were presented to a range of system stakeholders, who were interviewed before and after performing tasks using the forms. The data were analyzed, and convergent results were triangulated across data sources and methods. The data indicated that ScenarioML scenarios best support requirements clarity, followed by sequence diagrams (but only for stakeholders experienced with them), with use cases the least clear form.
T. Alspaugh, S. Sim, K. Winbladh, M. H. Diallo, L. Naslavsky, H. Ziv, and D. Richardson, "Clarity for Stakeholders: Empirical Evaluation of ScenarioML, Use Cases, and Sequence Diagrams," in 2007 Fifth International Workshop on Comparative Evaluation in Requirements Engineering, Oct. 2007. doi:10.1109/CERE.2007.3
A. Perini, A. Susi, Filippo Ricca, Cinzia Bazzanella
Requirements prioritization aims at identifying the most important requirements for a system (or a release). A large number of approaches have been proposed to help decision makers perform this activity, some of them with supporting tools. Questions arise about when one prioritization technique should be preferred to another, and about how to characterize and measure their properties. Several empirical studies have been conducted to analyze characteristics of the available approaches, but their results are often difficult to compare. In this paper we discuss an empirical study evaluating two state-of-the-art, tool-supported requirements prioritization techniques, AHP and CBRanking. The experiment was conducted with 18 experienced subjects on a set of 20 requirements from a real project. We focus on a crucial variable, ranking accuracy: we discuss different ways to measure it and analyze the data collected in the experimental study with respect to this variable. Results indicate that AHP gives more accurate rankings than CBRanking, but the ranks produced by the two methods are similar for all the involved subjects.
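As background for the AHP technique evaluated above, a minimal sketch of how AHP derives a priority ranking from a pairwise comparison matrix, using the common row geometric-mean approximation of the principal eigenvector. The 4-requirement matrix is hypothetical and illustrative only; it is not taken from the paper's 20-requirement data set.

```python
import math

# Hypothetical AHP pairwise comparison matrix for 4 requirements.
# M[i][j] states how much more important requirement i is than j
# on Saaty's 1-9 scale; M[j][i] is the reciprocal.
M = [
    [1,     3,     5,     7],
    [1/3,   1,     3,     5],
    [1/5,   1/3,   1,     3],
    [1/7,   1/5,   1/3,   1],
]

def ahp_weights(matrix):
    """Normalized priority weights via row geometric means
    (a standard approximation of AHP's eigenvector method)."""
    gms = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

weights = ahp_weights(M)
# Requirement indices ordered from most to least important.
ranking = sorted(range(len(weights)), key=lambda i: -weights[i])
print(ranking)  # -> [0, 1, 2, 3]
```

With n requirements, full AHP needs n(n-1)/2 pairwise judgments (190 for the paper's 20 requirements), which is one practical motivation for comparing it against lighter-weight techniques.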
A. Perini, A. Susi, F. Ricca, and C. Bazzanella, "An Empirical Study to Compare the Accuracy of AHP and CBRanking Techniques for Requirements Prioritization," in 2007 Fifth International Workshop on Comparative Evaluation in Requirements Engineering, Oct. 2007. doi:10.1109/CERE.2007.1