A Distribution-Free Measure of the Significance of CER Regression Fit Parameters Established Using General Error Regression Methods
Pub Date: 2009-07-01 | DOI: 10.1080/1941658X.2009.10462221
Timothy P. Anderson
Abstract General error regression methods (GERM) have given rise to a wide variety of functional forms for cost-estimating relationships (CERs) but have so far lacked a means of evaluating the “significance” of individual regression fit parameters in a way that is analogous to the roles played by the t-statistic and associated p-value in ordinary least squares (OLS) regression. This article attempts to remedy that situation by developing and describing an analogous “significance” metric for GERM regression fit parameters that is independent of the nature of the underlying error distribution. The significance metrics developed herein are comparable across CERs, regardless of the functional form of the regression equation or the underlying error specification. Moreover, they are developed heuristically, require no distributional assumptions, and provide a collection of simple metrics by which to judge the “significance” of individual regression fit parameters. These metrics will be of benefit to anyone who uses GERM to develop CERs. The author is willing to share any data involved in this study.
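For readers who want to experiment with the general idea, the sketch below uses a nonparametric bootstrap to gauge how strongly a data set supports a nonzero exponent in a simple multiplicative-error CER of the form cost = a * weight^b. This is only an illustration of one distribution-free significance check, not the metric developed in the article; the data points, the percentage-error fitting routine, and the bootstrap settings are all assumptions made for the example.

```python
# Illustrative only: a nonparametric bootstrap "significance" check for a
# multiplicative-error CER of the form cost = a * weight^b.  This is NOT the
# metric developed in the article; it shows one distribution-free way to ask
# how strongly the data support a nonzero exponent b.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical historical data points (weight in kg, cost in $M).
weight = np.array([120., 250., 400., 610., 800., 1100., 1500.])
cost   = np.array([ 14.,  22.,  35.,  48.,  61.,   80.,  105.])

def fit_cer(w, c):
    """Fit cost = a * w^b by minimizing squared percentage errors."""
    def pct_resid(p):
        a, b = p
        pred = a * w ** b
        return (c - pred) / pred
    return least_squares(pct_resid, x0=[1.0, 1.0],
                         bounds=([1e-6, -5.0], [np.inf, 5.0])).x

a_hat, b_hat = fit_cer(weight, cost)

# Resample the data rows, refit, and record the exponent each time.
B = 2000
b_boot = np.empty(B)
for i in range(B):
    idx = rng.integers(0, len(weight), len(weight))
    _, b_boot[i] = fit_cer(weight[idx], cost[idx])

# The fraction of replicates where b falls at or below zero plays a role
# loosely analogous to a one-sided p-value for "weight does not drive cost."
pseudo_p = np.mean(b_boot <= 0.0)
print(f"b_hat = {b_hat:.3f}, bootstrap pseudo p-value = {pseudo_p:.4f}")
```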
{"title":"A Distribution-Free Measure of the Significance of CER Regression Fit Parameters Established Using General Error Regression Methods","authors":"Timothy P. Anderson","doi":"10.1080/1941658X.2009.10462221","DOIUrl":"https://doi.org/10.1080/1941658X.2009.10462221","url":null,"abstract":"Abstract General error regression methods (GERM) have given rise to a wide variety of functional forms for cost-estimating relationships but have so far lacked a means for evaluating the “significance” of the individual regression fit parameters in a way that is analogous to the roles played by the t-statistic and associated p-value in ordinary least squares (OLS) regression. This article attempts to remedy that situation by developing and describing an analogous “significance” metric for GERM regression fit parameters that is independent of the nature of the underlying error distribution. Significance metrics developed herein are comparable across CERs, regardless of the functional form of the regression equation or the underlying error specification. Moreover, they are developed heuristically, require no distributional assumptions, and provide a collection of simple metrics by which to judge the “significance” of the individual regression fit parameters. These metrics will be of benefit to anyone who uses GERM to develop CERs. The author is willing to share any data involved in thisd study.","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132416285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title 10, Section 2366a Certification Requirements and Early Cost Estimating in the Army
Pub Date: 2009-07-01 | DOI: 10.1080/1941658X.2009.10462220
Stephen Bagby
For the past few years, Department of Defense (DoD) budget reductions have been looming on the horizon, and now those budget limitations are upon us. The cost parameter will now play a greater role in the decision-making process than it did in the past. Leaders will rely on quality cost estimates to aid their decisions as competition for limited resources increases. The Army will have to provide cost estimates earlier in the lifecycle process and show proof that programs are funded in accordance with those estimates.
{"title":"Title 10, Section 2366a Certification Requirements and Early Cost Estimating in the Army","authors":"Stephen Bagby","doi":"10.1080/1941658X.2009.10462220","DOIUrl":"https://doi.org/10.1080/1941658X.2009.10462220","url":null,"abstract":"For the past few years Department of Defense (DoD) budget reductions have been looming just at the edge of the horizon, and now those budget limitations are upon us. The cost parameter will now play a greater role in the decision-making process than it did in the past. Leaders will rely on quality cost estimates to aid their decisions as competition for limited resources increases. The Army will have to provide cost estimates earlier in the lifecycle process and show proof that programs are funded in accordance with the estimates.","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114359292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved Performance and Cost in Cyclic Production Systems
Pub Date: 2009-07-01 | DOI: 10.1080/1941658X.2009.10462223
Thomas Whalen, Jan M. Smolarski, S. Samaddar
Abstract Increased competition has forced companies to focus more attention on producing at a globally competitive cost. To cope, firms focus on flexible manufacturing, integration, and automation to help ensure that firm-specific manufacturing environments remain competitive. Firms also focus on cost efficiencies, which enable them to be competitive over specific production runs and product life cycles. A common way of reducing cost at this level is to reduce setup times. Previous research has shown that reducing average machine setup time virtually guarantees lower production costs. The same is true of reducing the variance of machine setup time. However, recent research has found that reducing setup time, without any change in variance, can increase waiting time and work-in-process (WIP) inventory levels, potentially reducing the benefits of continuous improvement techniques. On the other hand, adding fixed idle time while holding the variance constant may reduce waiting time. The optimal fixed idle time depends only on the means and variances of setup, service, and arrival times. We show that an even greater reduction is achievable when the distribution of setup time is known, by adding variable idle time that is a non-increasing function of setup time, thereby reducing the combined setup-time variance. We present procedures for finding the optimal variable idle time as a function of setup time. We also show how to implement our results.
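As a quick numerical illustration of the variance-reduction idea in the last few sentences, the sketch below pads a random setup time S with a variable idle time d(S) = max(0, c − S), which is non-increasing in S, and compares the mean and variance of S with those of the combined time S + d(S) = max(S, c). The setup-time distribution and the threshold c are invented for the example; this is not the authors' optimization procedure.

```python
# Illustrative sketch (not the authors' model): padding a random setup time S
# with a variable idle time d(S) = max(0, c - S) -- a non-increasing function
# of S -- lowers the variance of the combined time S + d(S) = max(S, c) while
# raising its mean.  Distribution and threshold c are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(42)
setup = rng.gamma(shape=2.0, scale=0.5, size=200_000)  # hypothetical setup times

c = 1.2                                  # assumed idle-time threshold
combined = np.maximum(setup, c)          # setup plus variable idle time

print(f"setup:    mean={setup.mean():.3f}  var={setup.var():.3f}")
print(f"combined: mean={combined.mean():.3f}  var={combined.var():.3f}")
# Typical output: the mean rises modestly while the variance drops sharply,
# which is the variance-reduction lever the article exploits.
```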
{"title":"Improved Performance and Cost in Cyclic Production Systems","authors":"Thomas Whalen, Jan M. Smolarski, S. Samaddar","doi":"10.1080/1941658X.2009.10462223","DOIUrl":"https://doi.org/10.1080/1941658X.2009.10462223","url":null,"abstract":"Abstract Increased competition has forced companies to focus more attention on producing at a globally competitive cost. To cope, firms focus on flexible manufacturing, integration, and automation to help ensure that firm-specific manufacturing environments remain competitive. Firms also focus on cost efficiencies, which enable them to be competitive over specific production runs and product life cycles. A common way of reducing cost at this level is to reduce set-up times. Previous research has shown that reduction of average machine setup time virtually guarantees lower production costs. The same is also true of the variance of machine setup time. However, recent research has found that reducing setup time, without any change in variance, can increase waiting time and work-in-process (WIP) inventory levels potentially reducing benefits from continuous improvement techniques. On the other hand, adding fixed idle time while holding the variance constant may reduce waiting time. The optimal fixed idle time depends only on the means and variances of setup, service, and arrival times. We show that an even greater reduction is achievable when the distribution of setup time is known by adding variable idle time, which is a non-increasing function of setup time, thereby reducing the combined setup time variance. We present procedures for finding the optimal variable idle time as a function of setup time. We also show how to implement our results.","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130076819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Next Generation Software Estimating Framework: 25 Years and Thousands of Projects Later
Pub Date: 2008-11-01 | DOI: 10.1080/1941658X.2008.10462214
Mike Ross
Abstract It's about time we in the software development community revisited the assumptions, relationships, and flexibility contained in our currently available software estimating models. Most current models still implement fundamental relationships based on data and assumptions that are at least 25 years old. In the meantime, data from many thousands of projects have been collected and offer an opportunity to revisit old assumptions and relationships. This paper documents the basis, assumptions, and derivations behind a set of general software effort, duration, and defect estimating relationships, based on the notion that software development is the cumulative effect of people laboring to do work (effort) over some duration (period of elapsed calendar time) that produces a desired software product (size or content) and unwanted byproducts (defects). This set of relationships is derived from several evidently strong correlations, the primary three being that 1) effort generally trends upward with increasing size, 2) duration generally trends upward with increasing effort, and 3) effort generally trends upward with increasing defects. The derivation ultimately yields three limited tradeoff relationships: one between effort and duration, one between cost and duration, and one between defects and duration.
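To make the shape of such relationships concrete, here is a generic sketch in the spirit of power-law estimating relationships. The coefficients, exponents, and schedule-compression rule are placeholders chosen for illustration only; they are not the relationships derived in the paper.

```python
# A minimal, generic sketch of power-law estimating relationships of the kind
# the paper discusses.  All functional forms and coefficients below are
# placeholders for illustration -- not the paper's derived relationships.

def nominal_effort(size_ksloc, a=3.0, b=1.1):
    """Effort (person-months) as an assumed power law in size (KSLOC)."""
    return a * size_ksloc ** b

def nominal_duration(effort_pm, c=2.5, d=0.35):
    """Duration (calendar months) as an assumed power law in effort."""
    return c * effort_pm ** d

size = 100.0                       # hypothetical 100 KSLOC project
e0 = nominal_effort(size)
d0 = nominal_duration(e0)
print(f"nominal: effort ~ {e0:.0f} PM, duration ~ {d0:.1f} months")

# A simple schedule-compression tradeoff: holding size fixed, assume effort
# grows as duration shrinks below nominal (the exponent is again a placeholder).
for compression in (1.0, 0.9, 0.8):
    dur = d0 * compression
    eff = e0 * (d0 / dur) ** 2.0   # assumed tradeoff exponent
    print(f"duration {dur:5.1f} mo -> effort ~ {eff:6.0f} PM")
```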
{"title":"Next Generation Software Estimating Framework: 25 Years and Thousands of Projects Later","authors":"Mike Ross","doi":"10.1080/1941658X.2008.10462214","DOIUrl":"https://doi.org/10.1080/1941658X.2008.10462214","url":null,"abstract":"Abstract It's about time we in the software development community revisit the assumptions, relationships, and flexibility contained in our currently available software estimating models. Most of the current models still implement fundamental relationships that are based on at least 25-years-old data and assumptions. In the meantime, data from many thousands of projects have since been collected and offer an opportunity to revisit old assumptions and relationships. This paper documents the basis, assumptions, and derivations behind a set of general software effort, duration, and defects estimating relationships that are based on the notion that software development is the cumulative effect of people laboring to do work (effort) over some duration (period of elapsed calendar time) that produces a desired software product (size or content) and unwanted byproducts (defects). This set of relationships is derived from several evidently good correlations, the primary three being 1) effort generally trends upward with increasing size, 2) duration generally trends upward with increasing effort, and 3) effort generally trends upward with increasing defects. This derivation ultimately yields three limited tradeoff relationships: one between effort and duration, one between cost and duration, and one between defects and duration.","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122202259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quality Cost Estimates in the Quest for Contractor Equilibrium
Pub Date: 2008-11-01 | DOI: 10.1080/1941658X.2008.10462213
R. Janda
Abstract In the Summer 2006 issue of The Journal of Parametrics, Rich Hartley, Deputy Assistant Secretary of the Air Force for Cost and Economics, began this series of essays on what constitutes a quality cost estimate with his insightful article, “What Are Quality Cost Estimates?” Rich staked out the high ground, providing guidance on what constitutes a high-quality cost estimate, his perspective on cost analysis in source selections, and where cost estimating is headed in the U.S. Air Force.
{"title":"Quality Cost Estimates in the Quest for Contractor Equilibrium","authors":"R. Janda","doi":"10.1080/1941658X.2008.10462213","DOIUrl":"https://doi.org/10.1080/1941658X.2008.10462213","url":null,"abstract":"Abstract In the Summer 2006 issue of The Journal of Parametrics, Rich Hartley, Deputy Assistant Secretary of the Air Force for Cost and Economics, began this series of essays on what constitutes a quality cost estimate with his insightful article, “What Are Quality Cost Estimates?” Rich staked out the high ground, providing guidance on what constitutes a high quality cost estimate, his perspective on cost analysis in source selections and where cost estimating is headed in the U.S. Air Force.","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128225449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empirical Evidence Relating Aircraft Age and Operating and Support Cost Growth
Pub Date: 2008-11-01 | DOI: 10.1080/1941658X.2008.10462215
Eric M. Hawkes, E. White
Abstract Since the mid-1990s, senior Air Force leaders have pointed to an aging fleet as justification for procuring new aircraft. They believe rising Operating and Support (O&S) costs are inhibiting new investments and leading the Air Force into a ‘death spiral’ (Tirpak 2006). Using 11 years of data from the Air Force Total Ownership Cost (AFTOC) database, this study investigates the relationship between aircraft age and ownership Cost Per Flying Hour (CPFH) cost growth for seventy-four different airframes in the Air Force's inventory. The analysis reveals that cost growth follows a ‘bathtub’ curve, in which the burn-in, steady-state, and end-of-life phases display cost growth rates of 6.6%, 3.1%, and 6.9%, respectively. Furthermore, the variability associated with this growth follows the same functional form as the cost growth itself. This research suggests that very young and very old aircraft exhibit significantly higher levels of cost growth and variability, but that the magnitude of cost growth and variability for old aircraft is roughly equal to that of young aircraft.
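The bathtub pattern can be turned into a simple projection. The sketch below applies the reported phase growth rates (6.6% burn-in, 3.1% steady-state, 6.9% end-of-life) to a hypothetical cost-per-flying-hour baseline; the phase boundaries and the starting CPFH are assumptions made for the example, not values from the study.

```python
# Sketch of the 'bathtub' cost-growth pattern using the article's reported
# phase growth rates.  The phase boundaries (ages 6 and 30 here) and the
# starting CPFH are NOT from the article; they are assumed so a curve can be
# drawn.
def annual_cpfh_growth(age_years):
    if age_years < 6:      # assumed burn-in phase
        return 0.066
    elif age_years < 30:   # assumed steady-state phase
        return 0.031
    else:                  # assumed end-of-life phase
        return 0.069

cpfh = 10_000.0            # hypothetical starting cost per flying hour ($)
for age in range(0, 41):
    if age:                # grow from the previous year's value
        cpfh *= 1.0 + annual_cpfh_growth(age - 1)
    if age % 10 == 0:
        print(f"age {age:2d}: CPFH ~ ${cpfh:,.0f}")
```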
{"title":"Empirical Evidence Relating Aircraft Age and Operating and Support Cost Growth","authors":"Eric M. Hawkes, E. White","doi":"10.1080/1941658X.2008.10462215","DOIUrl":"https://doi.org/10.1080/1941658X.2008.10462215","url":null,"abstract":"Abstract Since the mid-1990's, senior Air Force leaders have pointed to an aging fleet for justification to procure new aircraft. They believe rising Operating and Support (O&S) costs are inhibiting new investments and leading the Air Force into a ‘death spiral’ (Tirpak 2006). Using 11 years of data from the Air Force Total Ownership Cost (AFTOC) database, this study investigates the relationship between aircraft age and ownership Cost Per Flying Hour (CPFH) cost growth from seventy four different airframes in the Air Force's inventory. The analysis reveals that cost growth follows a ‘bathtub’ curve where the burn-in, steady-state, and end-life display cost growth rates of 6.6%, 3.1%, and 6.9% respectively. Furthermore, the variability associated with this increase follows the same functional form as the cost growth. This research suggests that very young and very old aircraft exhibit significantly higher levels of cost growth and variability, but the magnitude of the cost growth and variability for old aircraft is relatively equal to that of young aircraft.","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121475517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using our Right Brains to Improve Space Project Cost Estimating
Pub Date: 2008-11-01 | DOI: 10.1080/1941658X.2008.10462217
J. Hamaker, P. Componation
Abstract If one asks knowledgeable observers of the space industry why similar space projects have different costs, the response is often: “Oh, it's the way the projects were managed.” Yet the budgets for most of these projects are originally set using parametric cost models. These models use regression equations (called cost estimating relationships, or CERs) to relate cost to space project independent variables. But the models are usually built in a very left-brained construct—they work primarily from technical variables such as mass, power, data rate, and the like. If management differences really drive cost, shouldn't these cost models contain management variables? Our paper describes the introduction of some unique engineering management variables—some right-brained factors—into the regressions to improve their predictive capabilities.
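A minimal sketch of the idea follows: a log-linear CER fit by ordinary least squares that mixes one technical driver (mass) with one binary “management” variable. The data set and the particular management variable are invented for illustration and are not the regressions reported in the paper.

```python
# Illustrative sketch only: fitting a log-linear CER that mixes a technical
# driver (dry mass) with a simple "management" indicator (e.g., whether the
# project used a co-located integrated product team).  The data and the
# management variable are invented; the paper's actual variables and data are
# not reproduced here.
import numpy as np

# Hypothetical project data: dry mass (kg), management flag (0/1), cost ($M).
mass = np.array([150., 300., 450., 700., 900., 1200., 1600., 2100.])
mgmt = np.array([  1,    0,    1,    0,    1,     0,     1,     0 ])
cost = np.array([ 40.,  95., 110., 230., 240.,  430.,  470.,  760.])

# ln(cost) = b0 + b1*ln(mass) + b2*mgmt  -> ordinary least squares in log space.
X = np.column_stack([np.ones_like(mass), np.log(mass), mgmt])
coef, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
b0, b1, b2 = coef
print(f"cost ~ {np.exp(b0):.2f} * mass^{b1:.2f} * exp({b2:.2f} * mgmt_flag)")
# A negative b2 would suggest the management practice is associated with lower
# cost for a given mass -- the kind of effect the authors look for.
```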
{"title":"Using our Right Brains to Improve Space Project Cost Estimating","authors":"J. Hamaker, P. Componation","doi":"10.1080/1941658X.2008.10462217","DOIUrl":"https://doi.org/10.1080/1941658X.2008.10462217","url":null,"abstract":"Abstract If one asks knowledgeable observers of the space industry why similar space projects have different costs, the response is often: “Oh, it's the way the projects were managed.” The budgets are originally set for most of these projects using parametric cost models. These models use regression equations (called cost estimating relationships or CERs) to relate cost to space project independent variables. But these models seem to be usually built in a very left-brained construct—they work off of primarily technical variables such as mass, power, data rate, and the like. If management differences really drive cost, shouldn't these cost models contain management variables? Our paper describes the introduction of some unique engineering management variables—some right-brained factors—into the regressions to improve their predictive capabilities.","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131202790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Engineering and Implementing RMS Engineering's DTC Metric
Pub Date: 2008-11-01 | DOI: 10.1080/1941658X.2008.10462216
Q. Redman, G. Stratton, E. Casey, Diana Patane
Abstract Architecting a product requires a defined set of requirements for the finished product: size, weight, volume, range, power, color, cost, payload, and so on. One essential requirement is the anticipated production cost. Failing to set the production cost requirement at design kick-off invites unexpected and unacceptable production costs. All too often in the past, programs allowed establishment of the production cost goal to slip, or waited for their customer to establish it for them. Raytheon Missile Systems’ (RMS) Engineering Directorate has since specified that all development programs will establish a production cost goal using the Design-to-Cost (DTC) metric described in this article and will monitor their design progress toward meeting that goal. Each program's DTC metric is now collected monthly and reviewed by senior management. This article focuses on the creation of the DTC metric, its purpose, and its use at RMS. The metric is designed to allow business-unit management to quickly review the status of their programs, specifically how well each program's design is progressing toward being producible at the specified cost.
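The abstract does not spell out the metric's formula; as a generic illustration (not the actual RMS DTC metric), one simple form such a tracking measure could take is the ratio of a program's current unit production cost estimate to its DTC goal, reviewed monthly, as sketched below with invented figures.

```python
# Generic illustration only, not the RMS DTC metric described in the article:
# each month the program's current estimate of average unit production cost is
# compared with the DTC goal set at design kick-off.
dtc_goal = 250_000.0                      # assumed unit production cost goal ($)
monthly_estimates = {                     # hypothetical monthly estimates ($)
    "2008-01": 310_000.0,
    "2008-02": 295_000.0,
    "2008-03": 270_000.0,
}

for month, estimate in monthly_estimates.items():
    ratio = estimate / dtc_goal           # > 1.0 means the design is over goal
    print(f"{month}: estimate/goal = {ratio:.2f}")
```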
{"title":"Engineering and Implementing RMS Engineering's DTC Metric","authors":"Q. Redman, G. Stratton, E. Casey, Diana Patane","doi":"10.1080/1941658X.2008.10462216","DOIUrl":"https://doi.org/10.1080/1941658X.2008.10462216","url":null,"abstract":"Abstract Architecting a product requires a defined set of requirements for the finished product, e.g., size, weight, volume, range, power, color, cost, payload, etc. One very necessary requirement is the anticipated product production cost. Failure to set the production cost requirements at design kick-off allows for unexpected and unacceptable production costs. Previously, all too often, programs were allowing establishment of the production cost goal to slip, or they would wait for their customer to establish it for them. Raytheon Missile Systems’ (RMS) Engineering Directorate has since specified that all development programs will now establish a production cost goal using the Design-to-Cost (DTC) metric described within this article and will monitor their design progress toward meeting this goal. Each program's DTC metric is now collected monthly and reviewed by senior management. This article will focus on the creation of the Design-to-Cost Metric (DTC), its purpose and its use at RMS. The DTC metric is designed to allow business-unit management to quickly review the status of their programs, as to how well the various program designs are progressing with respect to their ability to be produced at the specified value.","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126953484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Easy Way to Allocate Support Department Costs using the Reciprocal Method
Pub Date: 2008-11-01 | DOI: 10.1080/1941658X.2008.10462218
Gerald K. Debusk, T. Forsyth
Abstract Support department cost allocation is important to external and internal users of a company's financial information. It is widely recognized that the reciprocal method provides the most meaningful allocation because it fully accounts for the services that support departments provide to one another. Despite its superiority over other methods, the reciprocal method is rarely applied in practice because of perceived complexities in its application. This article illustrates a simple way of applying the reciprocal method to allocate the costs of multiple support departments using an Excel spreadsheet.
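The article works the allocation in an Excel spreadsheet; for readers more comfortable with matrix notation, the sketch below expresses the same simultaneous-equations idea in a few lines of linear algebra. The departments, usage percentages, and dollar amounts are invented for illustration.

```python
# Reciprocal-method sketch: solve the simultaneous equations for total support
# department costs, then allocate them to the operating departments.  All
# departments, percentages, and dollar figures are invented for illustration.
import numpy as np

support = ["Maintenance", "IT"]
operating = ["Assembly", "Finishing"]

direct = np.array([100_000.0, 60_000.0])   # direct costs of the support depts

# use_by_support[i, j] = fraction of support dept j's output consumed by
# support dept i.
use_by_support = np.array([
    [0.00, 0.20],   # Maintenance consumes 20% of IT
    [0.10, 0.00],   # IT consumes 10% of Maintenance
])

# Reciprocal (total) support costs solve  S = direct + use_by_support @ S.
S = np.linalg.solve(np.eye(2) - use_by_support, direct)

# The remaining fractions of each support dept's output go to operations.
use_by_operating = np.array([
    [0.60, 0.50],   # Assembly's share of Maintenance, IT
    [0.30, 0.30],   # Finishing's share of Maintenance, IT
])
allocated = use_by_operating @ S

for dept, amount in zip(support, S):
    print(f"Reciprocal cost of {dept}: ${amount:,.0f}")
for dept, amount in zip(operating, allocated):
    print(f"Allocated to {dept}: ${amount:,.0f}")
# Check: the allocated totals sum to the $160,000 of direct support cost.
```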
{"title":"An Easy Way to Allocate Support Department Costs using the Reciprocal Method","authors":"Gerald K. Debusk, T. Forsyth","doi":"10.1080/1941658X.2008.10462218","DOIUrl":"https://doi.org/10.1080/1941658X.2008.10462218","url":null,"abstract":"Abstract Support department cost allocation is important to external and internal users of a company's financial information. It is widely recognized that the reciprocal method provides the most meaningful allocation because it fully recognizes services provided by support departments for other support departments. Despite its superiority over other methods, the reciprocal method is rarely applied in practice because of perceived complexities in its application. This article illustrates a simple method of applying the reciprocal method to allocate multiple support department costs using an Excel spreadsheet.","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115462390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object-Oriented Software Cost Estimation Methodologies Compared
Pub Date: 2008-03-01 | DOI: 10.1080/1941658X.2008.10462210
D. Foley, Brenda K. Wetzel
Executive Summary Most commonly available software cost estimation methodologies and tools use either source lines of code (SLOC) or function point (FP) measures as the basis for the estimate. While these methodologies and tools may suffice for software projects that use structured software development techniques, when object-oriented (OO) techniques such as use-case modeling are employed, it is more effective to use a methodology that takes advantage of the information provided by the use cases, scenarios, and more in-depth object-oriented metrics. Furthermore, as an organization becomes proficient in developing object-oriented system software, the traditional metrics of SLOC and FPs become less useful, because each individual line of code becomes less likely to be like the others.

Using use cases as the basis for effort estimation is a valuable addition to the tools available to a project manager, particularly for projects where use cases are produced as part of the life-cycle process. Use cases admit a wide range of interpretation, which makes it difficult to apply them confidently as a sizing metric outside a relatively uniform group of applications and practitioners; with standardization similar to that used for function point analysis, however, the method has the potential to become a mature and widely accepted estimation tool. More in-depth OO metrics, such as the number of top-level classes, weighted methods per class, average depth of the inheritance tree, and number of children per base class, can provide a more accurate estimate of the effort required to build the system, but these metrics are typically not available until a little further along in the system life cycle, at high-level design.

Virtually every estimation method is susceptible to error and requires accurate historical productivity data to be useful within the context of an organization. A critical consideration for any estimation model and tool is the ability to calibrate the results to reflect an individual organization's historical productivity data. Other considerations for an effective estimation model and tool include a relatively minimal amount of formal training needed to use the tool, well-defined sizing inputs (including a wizard and/or an automated data import/export mechanism), visibility into the estimating algorithm, and adequate documentation. An effective tool provides early estimates that can be refined as more data and detail are developed, provides accurate estimates, and allows for variation in resource expertise, application language, platform, and reuse.

Research on the effectiveness of using OO metrics to estimate effort and schedule is ongoing, and validation is still in its infancy. OO size estimation models continue to evolve, along with automated tools to support the model calculations. Key to the effective use of these models and tools is the ability to capture project data from the tools that are used to document and analyze …
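As background on use-case-based sizing, the sketch below applies Karner's widely cited use case points (UCP) scheme to a made-up project, using its standard actor and use-case weights and the often-quoted default of 20 person-hours per UCP. It is offered as a generic illustration, not as one of the specific methodologies compared in the article, and the counts and factor ratings are assumptions.

```python
# Generic use case points (UCP) sketch with Karner's standard weights.  The
# project counts, TCF/EF factor totals, and productivity value are invented;
# this is background, not the article's evaluated methodology.
ACTOR_WEIGHTS    = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

# Hypothetical counts for a small system.
actors    = {"simple": 2, "average": 2, "complex": 1}
use_cases = {"simple": 4, "average": 6, "complex": 3}

uaw  = sum(ACTOR_WEIGHTS[k] * n for k, n in actors.items())
uucw = sum(USE_CASE_WEIGHTS[k] * n for k, n in use_cases.items())
uucp = uaw + uucw

tcf = 0.6 + 0.01 * 40          # technical complexity factor (assumed TFactor = 40)
ef  = 1.4 - 0.03 * 15          # environmental factor (assumed EFactor = 15)
ucp = uucp * tcf * ef

hours_per_ucp = 20             # Karner's often-quoted productivity default
print(f"UAW={uaw}, UUCW={uucw}, UCP={ucp:.1f}, "
      f"effort ~ {ucp * hours_per_ucp:,.0f} person-hours")
```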
{"title":"Object-Oriented Software Cost Estimation Methodologies Compared","authors":"D. Foley, Brenda K. Wetzel","doi":"10.1080/1941658X.2008.10462210","DOIUrl":"https://doi.org/10.1080/1941658X.2008.10462210","url":null,"abstract":"Executive Summary Most commonly available software cost estimation methodologies and tools use either source lines of code (SLOC) or function point (FP) measures as the basis for the estimate. While these methodologies and tools may suffice for software projects that use structured software development techniques, when object-oriented (OO) techniques such as use-case modeling are employed, it is more effective to use a methodology that takes advantage of the information provided by the use cases, scenarios, and more in-depth object-oriented metrics. Furthermore, as an organization becomes proficient in developing object-oriented system software, the traditional metrics of SLOC and FPs become less useful because each individual line of code becomes less likely to be like the others. Using use cases as the basis for effort estimation is a valuable addition to the tools available to a project manager, particularly for those projects where use cases are produced as part of the life-cycle process. Though use cases have a wide range of interpretation, which makes it difficult for them to be used confidently as a sizing metric outside a relatively uniform group of applications and practitioners, with standardization similar to that used for function point analysis, this method has the potential to become a mature and widely accepted estimation tool. More in-depth OO metrics, such as the number of top-level classes, weighted methods per class, average depth of inheritance tree, and number of children per base class, can provide a more accurate estimate of the resulting effort required to build the system, but these metrics are typically not available until a little further along in the system life cycle — the high-level design. Virtually every estimation method is susceptible to error and requires accurate historical data for productivity to be useful within the context of an organization. A critical consideration for any estimation model and tool is the ability to calibrate the results to reflect an individual organization's historical productivity data. Other considerations for an effective estimation model and tool include a relatively minimal amount of formal training to use the tool, well-defined sizing inputs (to include a wizard and/or an automated data import/export mechanism), visibility into the estimating algorithm, and adequate documentation. An effective tool provides early estimates that can be refined as more data and detail are developed, provides accurate estimates, and allows for variation in resource expertise, application languages, platform, and reuse. Research on the effectiveness of using OO metrics to estimate effort and schedule is on-going, and validation is still in its infancy. OO size estimation models are continuing to evolve, along with automated tools to support the model calculations. 
Key to the effective use of these models and tools is the ability to capture project data from the tools that are used to document and anal","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125965274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}