A. Perini, A. Susi, Filippo Ricca, Cinzia Bazzanella
{"title":"层次分析法与银行技术在需求优先排序中的准确性比较实证研究","authors":"A. Perini, A. Susi, Filippo Ricca, Cinzia Bazzanella","doi":"10.1109/CERE.2007.1","DOIUrl":null,"url":null,"abstract":"Requirements prioritization aims at identifying the most important requirements for a system (or a release). A large number of approaches have been proposed so far, to help decision makers in performing this activity. Some of them provide supporting tools. Questions on when a prioritization technique should be preferred to another one as well as on how to characterize and measure their properties arise. Several empirical studies have been conducted to analyze characteristics of the available approaches, but their results are often difficult to compare. In this paper we discuss an empirical study aiming at evaluating two state-of-the art, tool-supported requirements prioritization techniques, AHP and CBRanking. The experiment has been conducted with 18 experienced subjects on a set of 20 requirements from a real project. We focus on a crucial variable, namely the ranking accuracy. We discuss different ways to measure it and analyze the data collected in the experimental study with reference to this variable. Results indicate that AHP gives more accurate rankings than CBRanking, but the ranks produced by the two methods are similar for all the involved subjects.","PeriodicalId":137204,"journal":{"name":"2007 Fifth International Workshop on Comparative Evaluation in Requirements Engineering","volume":"116 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"50","resultStr":"{\"title\":\"An Empirical Study to Compare the Accuracy of AHP and CBRanking Techniques for Requirements Prioritization\",\"authors\":\"A. Perini, A. 
Susi, Filippo Ricca, Cinzia Bazzanella\",\"doi\":\"10.1109/CERE.2007.1\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Requirements prioritization aims at identifying the most important requirements for a system (or a release). A large number of approaches have been proposed so far, to help decision makers in performing this activity. Some of them provide supporting tools. Questions on when a prioritization technique should be preferred to another one as well as on how to characterize and measure their properties arise. Several empirical studies have been conducted to analyze characteristics of the available approaches, but their results are often difficult to compare. In this paper we discuss an empirical study aiming at evaluating two state-of-the art, tool-supported requirements prioritization techniques, AHP and CBRanking. The experiment has been conducted with 18 experienced subjects on a set of 20 requirements from a real project. We focus on a crucial variable, namely the ranking accuracy. We discuss different ways to measure it and analyze the data collected in the experimental study with reference to this variable. 
Results indicate that AHP gives more accurate rankings than CBRanking, but the ranks produced by the two methods are similar for all the involved subjects.\",\"PeriodicalId\":137204,\"journal\":{\"name\":\"2007 Fifth International Workshop on Comparative Evaluation in Requirements Engineering\",\"volume\":\"116 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2007-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"50\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2007 Fifth International Workshop on Comparative Evaluation in Requirements Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CERE.2007.1\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2007 Fifth International Workshop on Comparative Evaluation in Requirements Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CERE.2007.1","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An Empirical Study to Compare the Accuracy of AHP and CBRanking Techniques for Requirements Prioritization
Requirements prioritization aims at identifying the most important requirements for a system (or a release). Many approaches have been proposed to help decision makers perform this activity, and some of them provide supporting tools. Questions arise on when one prioritization technique should be preferred over another, and on how to characterize and measure their properties. Several empirical studies have been conducted to analyze the characteristics of the available approaches, but their results are often difficult to compare. In this paper we discuss an empirical study aimed at evaluating two state-of-the-art, tool-supported requirements prioritization techniques, AHP and CBRanking. The experiment was conducted with 18 experienced subjects on a set of 20 requirements from a real project. We focus on a crucial variable, namely ranking accuracy. We discuss different ways to measure it and analyze the data collected in the experimental study with reference to this variable. Results indicate that AHP gives more accurate rankings than CBRanking, but the ranks produced by the two methods are similar for all the involved subjects.
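To make the two ingredients of the study concrete — deriving a priority ranking from pairwise comparisons (the core of AHP) and quantifying disagreement between two rankings — here is a minimal sketch in Python. It is illustrative only, not the paper's tooling: it uses the geometric-mean row approximation to the AHP principal-eigenvector weights, and a Kendall-style pairwise-disagreement count as one possible accuracy/disagreement measure; the matrix values and helper names are invented for the example.

```python
import math
from itertools import combinations

def ahp_priorities(matrix):
    """Approximate AHP priority weights from a pairwise-comparison matrix
    using the geometric-mean (row) method, a common stand-in for the
    principal-eigenvector computation."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

def rank_of(weights):
    """Turn weights into ranks: rank_of(w)[i] is the rank of item i (0 = top)."""
    order = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    ranks = [0] * len(weights)
    for pos, i in enumerate(order):
        ranks[i] = pos
    return ranks

def pairwise_disagreements(r1, r2):
    """Count item pairs that the two rankings order differently — one simple
    way to quantify how far two prioritizations are from each other."""
    return sum(
        1
        for i, j in combinations(range(len(r1)), 2)
        if (r1[i] - r1[j]) * (r2[i] - r2[j]) < 0
    )

# Hypothetical 3-requirement example: R0 is moderately preferred to R1
# and strongly preferred to R2 (entries use the usual reciprocal scale).
m = [
    [1,     2,     5],
    [1 / 2, 1,     3],
    [1 / 5, 1 / 3, 1],
]
weights = ahp_priorities(m)   # roughly [0.58, 0.31, 0.11]
ranking = rank_of(weights)    # [0, 1, 2]: R0 first, R2 last
```

Comparing `pairwise_disagreements` between a subject's ranking and a reference "gold" ranking is one way to operationalize the ranking-accuracy variable the study measures; the paper itself discusses several alternatives.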