Resource estimation for Web applications
Pub Date: 2004-09-11, DOI: 10.1109/METRIC.2004.1357922
Paul Umbers, G. Miles
In the field of software engineering, several empirical methods have been developed to model software projects before they are undertaken, providing estimates of required effort, development time and cost. In the case of Web applications, this process is complicated by their multi-tiered nature, their extensive use of non-code artifacts such as multimedia, and their often short time-scales. In this paper we describe a simple, highly adaptable model that uses COSMIC full function points for application size measurement and design patterns as a measurement reference. Rather than a true derived model, the aim is to provide a procedural framework for expert judgement that guides the practitioner through the estimation process, seeking to limit or mitigate variance in that judgement through algorithmic and statistical techniques. This hybrid has so far proven as accurate as expert judgement while remaining usable by a relatively inexperienced estimator.
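COSMIC sizing reduces to counting data movements: each Entry, Exit, Read and Write in a functional process contributes one COSMIC Function Point (CFP). A minimal sketch of that counting step follows; the FunctionalProcess fields and example processes are illustrative, and the paper's pattern-based measurement reference is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class FunctionalProcess:
    """One COSMIC functional process, sized by its four kinds of data movements."""
    name: str
    entry: int = 0   # data movements from the triggering actor inward
    exit: int = 0    # data movements back out to the actor
    read: int = 0    # data movements from persistent storage
    write: int = 0   # data movements to persistent storage

    def cfp(self) -> int:
        # COSMIC size of a process = total number of data movements, 1 CFP each.
        return self.entry + self.exit + self.read + self.write

def application_size(processes: list[FunctionalProcess]) -> int:
    """Application size in CFP is the sum over its functional processes."""
    return sum(p.cfp() for p in processes)

# Hypothetical example: 'login' has one Entry (credentials), one Read (stored
# account) and one Exit (result page) = 3 CFP; 'search' adds 4 CFP more.
login = FunctionalProcess("login", entry=1, read=1, exit=1)
search = FunctionalProcess("search", entry=1, read=2, exit=1)
print(application_size([login, search]))  # -> 7
```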
Module-order modeling using an evolutionary multi-objective optimization approach
Pub Date: 2004-09-11, DOI: 10.1109/METRIC.2004.1357900
T. Khoshgoftaar, Yi Liu, Naeem Seliya
Quality assurance is an important problem for software systems. The extent to which software reliability improvements can be achieved is often dictated by the amount of resources available for the task. A predicted risk-based ranking of software modules can assist in the cost-effective allocation of those limited resources. A module-order model (MOM) is used to gauge the performance of the predicted rankings. Depending on the software system under consideration, multiple software quality objectives may be desired for a MOM; e.g., the desired ranking may be such that if 20% of modules were targeted for reliability enhancement, 80% of the faults would be detected, and, in addition, that if 50% of modules were targeted, 100% of the faults would be detected. Existing work on MOMs has used an underlying prediction model to obtain the rankings, implying that only the average, relative, or mean square errors are minimized. Such an approach provides no insight into the behavior of a MOM, whose performance depends on how many faults are accounted for by a given percentage of modules enhanced. We propose a methodology for building MOMs using multiobjective optimization with genetic programming, which facilitates the simultaneous optimization of multiple performance objectives for a MOM. Other prediction techniques, e.g., multiple linear regression and neural networks, cannot achieve multiobjective optimization for MOMs. A case study of a high-assurance telecommunications software system is presented. The observed results show new promise in the modeling of goal-oriented software quality estimation models.
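The performance notion the abstract describes can be made concrete: given a predicted ranking, what fraction of the faults is accounted for by the top G% of modules? A minimal sketch with hypothetical data follows; a multiobjective GP fit would then optimize this quantity at several cutoffs (e.g. 20% and 50%) simultaneously.

```python
def faults_accounted_for(predicted_rank, faults, cutoff):
    """Fraction of total faults caught in the top `cutoff` fraction of modules,
    when modules are enhanced in predicted-rank order.

    predicted_rank: module ids, most fault-prone first (the MOM ranking)
    faults:         dict mapping module id -> actual number of faults
    cutoff:         fraction of modules targeted, e.g. 0.20
    """
    n_target = max(1, round(cutoff * len(predicted_rank)))
    caught = sum(faults[m] for m in predicted_rank[:n_target])
    total = sum(faults.values())
    return caught / total if total else 0.0

# Hypothetical data: a perfect ranking over five modules.
faults = {"a": 50, "b": 30, "c": 15, "d": 4, "e": 1}
ranking = ["a", "b", "c", "d", "e"]
print(faults_accounted_for(ranking, faults, 0.20))  # 0.5: top 20% catches half the faults
```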
Assessment of software measurement: an information quality study
Pub Date: 2004-09-11, DOI: 10.1109/METRIC.2004.1357917
Michael Berry, Ross Jeffery
This paper reports on the first phase of an empirical research project concerning methods to assess the quality of the information in software measurement products. Two measurement assessment instruments were developed and deployed in order to generate two sets of analyses and conclusions; these sets will be subjected to an evaluation of their information quality in phase two of the project. One assessment instrument was based on AIMQ, a generic model of information quality. The other was developed by targeting specific practices relating to software project management and identifying their requirements for information support. Both instruments delivered data that could be used to identify opportunities to improve measurement. The generic instrument is cheap to acquire and deploy, while the targeted instrument requires more effort to build. Conclusions about the relative merits of the two methods, in terms of their suitability for improvement purposes, await the results of the second phase of the project.
Estimating effort by use case points: method, tool and case study
Pub Date: 2004-09-11, DOI: 10.1109/METRIC.2004.1357913
S. Kusumoto, F. Matsukawa, Katsuro Inoue, Shigeo Hanabusa, Yuusuke Maegawa
The use case point (UCP) method has been proposed for estimating software development effort in the early phases of a software project and is used in many software organizations. Intuitively, UCP is measured by counting the numbers of actors and transactions included in the use case models. Several tools that support calculating UCP have been developed; however, they only extract the actors and use cases, and the complexity classification of these is conducted manually. We have been introducing the UCP method to software projects at Hitachi Systems & Services, Ltd. To make this introduction effective, we have developed an automatic use case measurement tool called U-EST. This paper describes our approach to automatically classifying the complexity of actors and use cases from a use case model. We have also applied U-EST to actual use case models and examined the differences between the values produced by the tool and those produced by a specialist. The results show that UCP values measured by U-EST are similar to those assigned by the specialist.
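For reference, the standard UCP scheme weights actors and use cases by complexity; the commonly published thresholds classify a use case by transaction count as simple (3 or fewer, weight 5), average (4 to 7, weight 10) or complex (8 or more, weight 15). The sketch below assumes those textbook values; the abstract does not state the exact thresholds U-EST automates.

```python
def use_case_weight(n_transactions: int) -> int:
    """Classify a use case by its transaction count and return its UCP weight."""
    if n_transactions <= 3:
        return 5          # simple
    elif n_transactions <= 7:
        return 10         # average
    return 15             # complex

def actor_weight(kind: str) -> int:
    """Actors weighted by interface type: API (simple), protocol/text (average), GUI (complex)."""
    return {"simple": 1, "average": 2, "complex": 3}[kind]

def unadjusted_ucp(use_case_transactions, actor_kinds):
    uucw = sum(use_case_weight(t) for t in use_case_transactions)
    uaw = sum(actor_weight(k) for k in actor_kinds)
    return uucw + uaw  # UUCP; full UCP also multiplies technical/environment factors

# Hypothetical model: three use cases with 2, 5 and 9 transactions, two actors.
print(unadjusted_ucp([2, 5, 9], ["complex", "average"]))  # (5+10+15) + (3+2) = 35
```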
A controlled experiment for evaluating a metric-based reading technique for requirements inspection
Pub Date: 2004-09-11, DOI: 10.1109/METRIC.2004.1357908
B. Bernárdez, M. Genero, A. Durán, M. Toro
Natural language requirements documents are often verified by means of some reading technique. Some recommendations for defining a good reading technique point out that a concrete technique must be suitable not only for specific classes of defects but also for the concrete notation in which the requirements are written. Following this suggestion, we have proposed a metric-based reading (MBR) technique for requirements inspections, whose main goal is to identify specific types of defects in use cases. The systematic approach of MBR is based on a set of rules of the form "if the metric value is too low (or too high), the presence of defects of types defType_1, ..., defType_n must be checked". We hypothesised that if the reviewers know these rules, the inspection process is more effective and efficient, meaning that the defect detection rate is higher and the number of defects identified per unit of time increases. This hypothesis lacks validity, however, until it has been empirically tested. For that reason, the main goal of this paper is to describe a controlled experiment we carried out to ascertain whether the use of MBR really helps in the detection of defects in comparison with a simple checklist technique. The experiment revealed that MBR reviewers were more effective at detecting defects than checklist reviewers, but not more efficient, because MBR reviewers took longer than checklist reviewers on average.
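The rule form quoted above lends itself to a small table-driven checker. The sketch below uses hypothetical metric names, thresholds and defect types; the paper defines its own metrics and defect taxonomy for use cases.

```python
# MBR-style rule table: (metric name, predicate, defect types to check when it fires).
RULES = [
    ("n_steps",      lambda v: v > 12, ["overly long scenario", "mixed abstraction levels"]),
    ("n_actors",     lambda v: v == 0, ["missing actor"]),
    ("n_extensions", lambda v: v == 0, ["unhandled error paths"]),
]

def defects_to_check(metrics: dict) -> list:
    """Given one use case's metric values, list the defect types a reviewer should check."""
    hits = []
    for name, fires, defect_types in RULES:
        if name in metrics and fires(metrics[name]):
            hits.extend(defect_types)
    return hits

print(defects_to_check({"n_steps": 15, "n_actors": 1, "n_extensions": 0}))
```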
Metrics are fitness functions too
Pub Date: 2004-09-11, DOI: 10.1109/METRICS.2004.30
M. Harman, J. A. Clark
Metrics, whether collected statically or dynamically, and whether constructed from source code, systems or processes, are largely regarded as a means of evaluating some property of interest. This viewpoint has been very successful in developing a body of knowledge, theory and experience in the application of metrics to estimation, prediction, assessment, diagnosis, analysis and improvement. This paper shows that there is an alternative, complementary view of a metric: as a fitness function, used to guide a search for optimal or near-optimal individuals in a search space of possible solutions. This 'Metrics as Fitness Functions' (MAFF) approach offers a number of additional benefits to metrics research and practice, because it allows metrics to be used to improve software as well as to assess it, and because it provides an additional mechanism for metric analysis and validation. This paper presents a brief survey of search-based approaches and shows how metrics have been combined with search-based techniques to improve software systems. It describes the properties that make a metric a good fitness function and explains the benefits for metric analysis and validation that accrue from the MAFF approach.
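The MAFF idea in its simplest form: plug a metric in as the objective of a local search. A minimal hill-climbing sketch over a toy search space follows; the bit-vector "design" and its metric are placeholders for a real program representation and a real software metric.

```python
import random

def hill_climb(initial, neighbors, metric, steps=1000):
    """Treat the metric as a fitness function: move to a neighboring
    candidate whenever it improves the metric value."""
    current, best = initial, metric(initial)
    for _ in range(steps):
        candidate = random.choice(neighbors(current))
        score = metric(candidate)
        if score > best:          # the metric guides the search
            current, best = candidate, score
    return current, best

# Toy stand-ins: the metric rewards adjacent agreement in the bit vector,
# a placeholder for a cohesion-style measure over a real design.
def toy_metric(x):
    return sum(a == b for a, b in zip(x, x[1:]))

def toy_neighbors(x):
    return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

start = tuple(random.randrange(2) for _ in range(16))
print(hill_climb(start, toy_neighbors, toy_metric))
```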
A robust search-based approach to project management in the presence of abandonment, rework, error and uncertainty
Pub Date: 2004-09-11, DOI: 10.1109/METRICS.2004.4
G. Antoniol, M. D. Penta, M. Harman
Managing a large software project involves initial estimates that may turn out to be erroneous or that might be expressed with some degree of uncertainty. Furthermore, as the project progresses, it often becomes necessary to rework some of the work packages that make up the overall project, and other work packages might have to be abandoned for a variety of reasons. In the presence of these difficulties, optimal allocation of staff to project teams and of teams to work packages is far from trivial. This paper shows how genetic algorithms can be combined with a queuing simulation model to address these problems in a robust manner. A tandem genetic algorithm is used to search for the best sequence in which to process work packages and the best allocation of staff to project teams; the simulation model, which computes the project's estimated completion date, guides the search. The possible impact of rework, abandonment and erroneous or uncertain initial estimates is characterised by separate error distributions. The paper presents results from the application of these techniques to data obtained from a large-scale commercial software maintenance project.
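A minimal sketch of the sequencing half of this setup: a toy genetic algorithm whose fitness is the completion time returned by a small queuing-style simulation. The work-package durations are hypothetical, and the paper's tandem GA (which also evolves staff-to-team allocations) and its rework/abandonment error distributions are not reproduced.

```python
import random, heapq

def completion_time(order, durations, n_teams):
    """Queuing-style simulation: teams take work packages in the given order
    as they become free; returns the estimated project completion time."""
    free = [0.0] * n_teams            # times at which each team is next free
    heapq.heapify(free)
    end = 0.0
    for wp in order:
        start = heapq.heappop(free)
        finish = start + durations[wp]
        end = max(end, finish)
        heapq.heappush(free, finish)
    return end

def evolve(durations, n_teams, pop=30, gens=100):
    """Tiny GA over work-package sequences; fitness is simulated completion time."""
    wps = list(durations)
    population = [random.sample(wps, len(wps)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda o: completion_time(o, durations, n_teams))
        survivors = population[: pop // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = random.sample(range(len(child)), 2)  # swap mutation keeps a permutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    return min(population, key=lambda o: completion_time(o, durations, n_teams))

durations = {"wp1": 5, "wp2": 3, "wp3": 8, "wp4": 2, "wp5": 6}  # hypothetical estimates
best = evolve(durations, n_teams=2)
print(best, completion_time(best, durations, 2))
```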
Assessing usability through perceptions of information scent
Pub Date: 2004-09-11, DOI: 10.1109/METRIC.2004.1357919
G. Saward, Tracy Hall, T. Barker
Information scent is an established concept for assessing how users interact with information retrieval systems. This paper proposes two ways of measuring user perceptions of information scent in order to assess the product quality of Web or Internet information retrieval systems. An empirical study is presented which validates these measures through an evaluation based on a live e-commerce application. This study shows a strong correlation between the measures of perceived scent and system usability. Finally, the wider applicability of these methods is discussed.
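Validating such measures amounts to correlating per-task perceived-scent ratings with usability scores; rank correlation is a natural choice for ordinal ratings. A self-contained sketch with hypothetical data follows (the paper does not specify which correlation statistic it used).

```python
def rank(xs):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical per-task data: mean perceived-scent rating vs usability score.
scent     = [4.2, 3.1, 2.5, 4.8, 3.9]
usability = [78, 61, 55, 85, 70]
print(round(spearman(scent, usability), 3))  # 1.0 here: the rankings agree exactly
```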
Probabilistic evaluation of object-oriented systems
Pub Date: 2004-09-11, DOI: 10.1109/METRIC.2004.1357888
Nikolaos Tsantalis, A. Chatzigeorgiou, G. Stephanides, Ignatios S. Deligiannis
The goal of this study is the development of a probabilistic model for evaluating the flexibility of an object-oriented design. In particular, the model estimates the probability that a given class of the system is affected when new functionality is added or when existing functionality is modified. Clearly, when a system exhibits a large sensitivity to changes, the corresponding design quality is questionable. Useful conclusions can be drawn from this model regarding the comparative evaluation of two or more object-oriented systems, or even the assessment of several generations of the same system, in order to determine whether good design principles have been applied. The proposed model has been implemented in a Java program that can automatically analyze the class diagram of a given system.
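The flavor of such a model can be sketched as follows: a change lands on some class, ripples backwards along dependency edges with some probability, and the model asks how likely a given class is to be affected. The uniform change distribution and the per-edge propagation probability below are assumptions for illustration, not the paper's calibration, and the sketch is Python rather than the authors' Java tool.

```python
def affected_probability(depends_on: dict, target: str, p_propagate: float = 0.5) -> float:
    """Probability that `target` is affected by a change arising somewhere in the system,
    assuming each class is equally likely to be the site of the change."""
    classes = list(depends_on)
    p_change = 1 / len(classes)
    p_not_affected = 1.0
    for cls in classes:
        p_reach = reach_probability(depends_on, target, cls, p_propagate)
        p_not_affected *= 1 - p_change * p_reach
    return 1 - p_not_affected

def reach_probability(depends_on, target, changed, p_propagate, seen=None):
    """Probability that a change in `changed` ripples back to `target` along dependencies."""
    if target == changed:
        return 1.0
    seen = (seen or set()) | {target}
    p_miss = 1.0
    for dep in depends_on[target]:
        if dep in seen:
            continue
        p_miss *= 1 - p_propagate * reach_probability(depends_on, dep, changed, p_propagate, seen)
    return 1 - p_miss

# Hypothetical class diagram: Controller depends on Service, Service on Repository.
diagram = {"Controller": ["Service"], "Service": ["Repository"], "Repository": []}
print(round(affected_probability(diagram, "Controller"), 3))
```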
Adapting function point analysis to estimate data mart size
Pub Date: 2004-09-11, DOI: 10.1109/METRIC.2004.1357914
A. Calazans, K. Oliveira, R. R. D. Santos
To better control the time, cost and resources assigned to software projects, organizations need a proper estimate of project size even before the projects actually start. Accordingly, several approaches have been proposed to estimate the size of a software project, such as the well-known function point analysis (FPA), which is widely used in traditional software development projects. However, we observed in our company that it is not a good fit for data mart software measurement. Data mart (DM) systems have particularities in their development that differ from traditional software systems (e.g., a DM uses other software systems as data sources and does not create new information). It is important, therefore, to have a measurement approach that considers those particularities while measuring DM size. We present an adaptation of the FPA approach for DM size measurement and discuss results from 10 data mart projects developed in industry.
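For orientation, a plain unadjusted function point count using the standard IFPUG average weights looks like the sketch below; the paper's adaptation lies in how the component types are re-interpreted for data marts, and the mapping suggested in the comments is our illustrative guess, not the authors' exact rules.

```python
# Standard IFPUG average complexity weights for the five component types.
AVERAGE_WEIGHTS = {
    "EI": 4,    # external inputs   (for a DM, perhaps the ETL/load interfaces)
    "EO": 5,    # external outputs  (perhaps reports and OLAP views)
    "EQ": 4,    # external inquiries
    "ILF": 10,  # internal logical files (perhaps fact and dimension tables)
    "EIF": 7,   # external interface files (perhaps the source systems feeding the DM)
}

def unadjusted_fp(counts: dict) -> int:
    """Unadjusted FP = sum of component counts times their complexity weights
    (all components assumed 'average' complexity in this sketch)."""
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical data mart: 1 fact table + 4 dimensions, 2 source systems, 6 reports.
print(unadjusted_fp({"ILF": 5, "EIF": 2, "EO": 6}))  # 5*10 + 2*7 + 6*5 = 94
```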