R2 versus r2
Shu-Ping Hu
Pub Date: 2010-11-01, DOI: 10.1080/1941658X.2010.10462231
Abstract Cost estimating relationships (CERs) with multiplicative-error assumptions are commonly used in cost analysis. Consequently, we need to apply appropriate statistical measures to evaluate a CER's quality when developing multiplicative-error CERs such as minimum-unbiased-percentage error (MUPE) and minimum-percentage error under zero-percentage bias (ZMPE) CERs. Generalized R-squared (GRSQ, also denoted by the symbol r2) is commonly used for measuring the quality of a nonlinear CER. GRSQ is defined as the square of Pearson's correlation coefficient between the actual observations and the CER-predicted values (see Young, 1992). Many statistical analysts believe GRSQ is an appropriate analog for measuring the proportion of variation explained by a nonlinear CER (see Nguyen and Lozzi, 1994), including MUPE and ZMPE CERs; some even use it to judge the appropriateness of a CER's shape. Adjusted R2 in unit space is a frequently used alternative measure of CER quality. This statistic translates the sum of squares due to error (SSE) from the absolute scale to the relative scale. It measures how well the CER-predicted costs match the actual data set, adjusting for the number of estimated coefficients in the model. There have been academic concerns over the years about the relevance of using Adjusted R2 and Pearson's r2. For example, some insist that Adjusted R2, calculated by the traditional formula, has no value as a metric except for ordinary least squares (OLS); others argue that Pearson's r2 does not measure how well the estimate matches database actuals for nonlinear CERs. This article discusses these concerns and examines the properties of these statistics, along with the pros and cons of using each for CER development. In addition, this article proposes 1) a modified Adjusted R2 for evaluating MUPE and ZMPE CERs and 2) a modified GRSQ to account for degrees of freedom (DF).
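For readers who want to see the two statistics side by side, the sketch below computes GRSQ (the square of Pearson's correlation between actuals and CER predictions) and the traditional unit-space Adjusted R2 for a power-form CER. The data, the CER form, and the fitted coefficients are hypothetical placeholders; the MUPE/ZMPE fitting procedures and the modified statistics the article proposes are not reproduced here.

```python
# Illustrative computation of GRSQ (Pearson r^2 between actuals and predictions)
# and unit-space Adjusted R^2 for a nonlinear CER. The power-form CER and its
# coefficients below are hypothetical; the article's MUPE/ZMPE fitting procedures
# are not reproduced.
import numpy as np

def grsq(y, y_hat):
    """Square of Pearson's correlation between actuals and CER predictions."""
    r = np.corrcoef(y, y_hat)[0, 1]
    return r ** 2

def adjusted_r2_unit_space(y, y_hat, n_coeffs):
    """Traditional Adjusted R^2 in unit space: 1 - (SSE/(n-p)) / (SST/(n-1))."""
    n = len(y)
    sse = np.sum((y - y_hat) ** 2)
    sst = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - (sse / (n - n_coeffs)) / (sst / (n - 1))

# Hypothetical data: cost driven by weight through a power-form CER y = a * x^b.
rng = np.random.default_rng(0)
weight = rng.uniform(100, 1000, size=20)
cost = 2.5 * weight ** 0.8 * rng.lognormal(sigma=0.15, size=20)  # multiplicative error

# Assume some fitting routine (MUPE, ZMPE, or log-OLS) produced these coefficients.
a_hat, b_hat = 2.4, 0.81
cost_hat = a_hat * weight ** b_hat

print("GRSQ        :", round(grsq(cost, cost_hat), 4))
print("Adjusted R^2:", round(adjusted_r2_unit_space(cost, cost_hat, n_coeffs=2), 4))
```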
{"title":"R2 versus r2","authors":"Shu-Ping Hu","doi":"10.1080/1941658X.2010.10462231","DOIUrl":"https://doi.org/10.1080/1941658X.2010.10462231","url":null,"abstract":"Abstract Cost estimating relationships (CERs) with multiplicative-error assumptions are commonly used in cost analysis. Consequently, we need to apply appropriate statistical measures to evaluate a CER's quality when developing multiplicative error CERs such as minimum-unbiased-percentage error (MUPE) and minimum-percentage error under zero-percentage bias (ZMPE) CERs. Generalized R-squared (GRSQ, also denoted by the symbol r2) is commonly used for measuring the quality of a nonlinear CER. GRSQ is defined as the square of Pearson's correlation coefficient between the actual observations and CER-predicted values (see Young 1992). Many statistical analysts believe GRSQ is an appropriate analog to measure the proportion of variation explained by a nonlinear CER (see Nguyen and Lozzi, 1994), including MUPE and ZMPE CERs; some even use it to measure the appropriateness of shape of a CER. Adjusted R2 in unit space is a frequently used alternative measure for CER quality. This statistic translates the sum of squares due to error (SSE) from the absolute scale to the relative scale. This metric is used to measure how well the CER-predicted costs match the actual data set, adjusting for the number of estimated coefficients used in the model. There have been academic concerns over the years about the relevance of using Adjusted R2 and Pearson's r2. For example, some insist that Adjusted R2, calculated by the traditional formula, has no value as a metric except for ordinary least squares (OLS); others argue that Pearson's r2 does not measure how well the estimate matches database actuals for nonlinear CERs. This article discusses these concerns and examines the properties of these statistics, along with pros and cons of using each for CER development. In addition, this article proposes 1) a modified Adjusted R2 for evaluating MUPE and ZMPE CERs and 2) a modified GRSQ to account for degrees of freedom (DF).","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132743156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Growth Models to Improve Accuracy of Estimates at Completion for over Target Baseline Contracts
Elizabeth T. Poulos, E. White
Pub Date: 2010-11-01, DOI: 10.1080/1941658X.2010.10462233
Abstract This study demonstrates the utility of using nonlinear growth modeling as an alternative to using the cost performance index (CPI), the schedule cost index (SCI), or the composite index methods for calculating estimates at completion (EAC) for over target baseline (OTB) contracts. Using contract performance report (CPR) data, we adopted the Gompertz growth curve model to produce three EAC models: a production model, a development model, and a combined model. We used the mean absolute percentage error (MAPE) to evaluate the developed models. For 63% to 78% of OTB contracts, depending on the model, the Gompertz growth models outperformed all three index-based methods for predicting the EAC.
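As an illustration of the mechanics, the sketch below fits a Gompertz curve to a hypothetical cumulative-cost history and scores the implied EAC with MAPE. The parameterization y = a*exp(-b*exp(-c*t)), the use of the upper asymptote as the EAC, and all data values are assumptions for illustration, not the authors' exact formulation.

```python
# A minimal sketch: fit a Gompertz growth curve to cumulative contract cost data,
# read off an estimate at completion (EAC), and score it with MAPE. Data and the
# mapping of the asymptote `a` to the EAC are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    """Gompertz curve: `a` is the upper asymptote (interpreted here as the EAC)."""
    return a * np.exp(-b * np.exp(-c * t))

# Hypothetical CPR history: fraction of schedule elapsed vs. cumulative cost ($M).
t_obs = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7])
cost_obs = np.array([5.0, 12.0, 24.0, 41.0, 58.0, 72.0, 83.0])

params, _ = curve_fit(gompertz, t_obs, cost_obs, p0=[100.0, 3.0, 3.0], maxfev=10000)
a, b, c = params
print("Gompertz EAC estimate ($M):", round(a, 1))

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# In the study, MAPE is computed across many contracts with known final costs;
# a single hypothetical contract illustrates the formula here.
final_cost_actual = 112.0
print("MAPE of this EAC (%):", round(mape([final_cost_actual], [a]), 1))
```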
{"title":"Using Growth Models to Improve Accuracy of Estimates at Completion for over Target Baseline Contracts","authors":"Elizabeth T. Poulos, E. White","doi":"10.1080/1941658X.2010.10462233","DOIUrl":"https://doi.org/10.1080/1941658X.2010.10462233","url":null,"abstract":"Abstract This study demonstrates the utility of using nonlinear growth modeling as an alternative to using the cost performance index (CPI), the schedule cost index (SCI), or the composite index methods for calculating estimates at completion (EAC) for over target baseline (OTB) contracts. Using contract performance report (CPR), data, we adopted the Gompertz Growth Curve Model to produce three EAC models: a production model, a development model, and a combined model. We used the mean absolute percentage error (MAPE) to evaluate the developed models. For 63% to 78% of OTB contracts, depending on model, the Gompertz Growth Models out-performed all three index-based methods for predicting the EAC.","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123250814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving CER Building: Basing a CER on the Median
P. Foussier
Pub Date: 2010-11-01, DOI: 10.1080/1941658X.2010.10462230
The word “median” is used in this article as a generic term. It is generally used for a distribution (a set of values, such as the incomes of a population), which usually means a one-dimensional set. In this article, the concept is expanded to also apply to multidimensional sets. Working with multidimensional sets of variables is what we do when we want to build a cost-estimating tool, such as a cost-estimating relationship (CER).
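One way to make the median's defining property concrete for CER building is to note that the median of a one-dimensional sample minimizes the sum of absolute deviations; carrying that criterion into a multidimensional setting leads to least-absolute-deviations (LAD) fitting of the CER coefficients. The sketch below illustrates that idea with a hypothetical linear CER; it is offered as an interpretation of the concept, not necessarily the construction developed in the article.

```python
# The median of a one-dimensional sample minimizes the sum of absolute deviations.
# A sketch of one way to extend that property to CER building: fit the CER
# coefficients by least absolute deviations (LAD) instead of least squares.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.uniform(10, 100, size=15)                          # hypothetical cost driver
y = 4.0 + 1.8 * x + rng.standard_t(df=3, size=15) * 5.0    # heavy-tailed noise

def lad_loss(coeffs, x, y):
    """Sum of absolute deviations of a linear CER y = a + b*x (the 'median' analog of SSE)."""
    a, b = coeffs
    return np.sum(np.abs(y - (a + b * x)))

res = minimize(lad_loss, x0=[0.0, 1.0], args=(x, y), method="Nelder-Mead")
a_lad, b_lad = res.x
print("LAD (median-based) CER: cost ~ %.2f + %.2f * driver" % (a_lad, b_lad))

# For comparison, ordinary least squares targets the conditional mean instead.
b_ols, a_ols = np.polyfit(x, y, 1)
print("OLS (mean-based)   CER: cost ~ %.2f + %.2f * driver" % (a_ols, b_ols))
```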
{"title":"Improving CER Building: Basing a CER on the Median","authors":"P. Foussier","doi":"10.1080/1941658X.2010.10462230","DOIUrl":"https://doi.org/10.1080/1941658X.2010.10462230","url":null,"abstract":"The word “median” is used in this article as a generic term. It is generally used for a distribution (a set of values, such as the income of the population) that usually means a “onedimensional set.” In this article, the concept is expanded to also apply to multidimensional sets. Working with multidimensional sets of variables is what we do when we want to build a cost-estimating tool, such as a cost-estimating relationship (CER).","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128572906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Commercial-Like Acquisitions: Practices and Costs
W. Alvarado, D. Barkmeyer, E. Burgess
Pub Date: 2010-01-01, DOI: 10.1080/1941658X.2010.10462227
Abstract Government attempts to procure space systems in a commercial-like manner usually involve a fixed-price contract for slightly modified commercial satellites or, in some cases, a completely new payload on a commercial product-line bus. The cost to deliver these systems ends up somewhere between that of a purely commercial contract and that of a traditional government cost-reimbursable program. While technical risks and engineering complexity play a big role, a system's final cost within this spectrum is also influenced by its “acquisition complexity,” which includes factors such as the amount and type of third-party oversight, the number of contract data deliverables, subcontractor management processes, parts/materials management requirements, and contract scope. The size of the contractor's business base is also a significant factor. Government cost estimating methods for commercial-like programs in the past have relied on decrements to government cost models by analogy to selected commercial programs. These methods have been difficult to substantiate and defend, so an approach to measure the cost impact of commercial acquisition practices is needed. With full participation from industry, the National Reconnaissance Office Cost Analysis Improvement Group (NRO CAIG) conducted a detailed data collection, survey, and analysis of over 60 commercial and commercial-like satellite acquisitions. Included were interviews with program managers, system engineers, and cost/pricing analysts from multiple satellite vendors. Results of this work built on prior commercial-vs.-government studies by quantifying the “acquisition complexity” of the systems studied and showing that it was strongly correlated with actual system costs. Our article includes an overview of the underlying commercial and government data, a description of the metrics collected, the acquisition-complexity scoring method, and the resulting model for estimating commercial-like acquisitions. This model assigns a score to any government or commercial procurement based on the details of its acquisition approach and then translates that score into a cost estimate. It is a valuable addition to the NRO CAIG's estimating toolkit, but it also serves as a feedback mechanism to NRO management about when a commercial-like acquisition approach may or may not be appropriate.
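The kind of scoring-to-cost mapping described above can be pictured with a toy example: score a handful of acquisition factors, roll them into a single complexity score, and apply a calibrated multiplier to a commercial baseline. Every factor name, weight, and coefficient below is invented for illustration; the NRO CAIG model's actual factors and calibration are not reproduced here.

```python
# A purely hypothetical sketch of an acquisition-complexity scoring model:
# factor scores -> weighted complexity score -> cost multiplier on a commercial baseline.
factors = {                       # score each factor 0 (commercial-like) .. 1 (traditional gov't)
    "oversight_level": 0.7,
    "data_deliverables": 0.5,
    "subcontract_mgmt": 0.4,
    "parts_requirements": 0.8,
    "contract_scope": 0.6,
}
weights = {k: 1.0 / len(factors) for k in factors}   # equal weights, purely illustrative

complexity = sum(weights[k] * factors[k] for k in factors)

baseline_commercial_cost = 150.0                     # $M, hypothetical
multiplier = 1.0 + 1.2 * complexity                  # invented linear calibration
estimate = baseline_commercial_cost * multiplier

print(f"Acquisition complexity score: {complexity:.2f}")
print(f"Estimated cost: ${estimate:.1f}M (multiplier {multiplier:.2f}x on commercial baseline)")
```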
{"title":"Commercial-Like Acquisitions: Practices and Costs","authors":"W. Alvarado, D. Barkmeyer, E. Burgess","doi":"10.1080/1941658X.2010.10462227","DOIUrl":"https://doi.org/10.1080/1941658X.2010.10462227","url":null,"abstract":"Abstract Government attempts to procure space systems in a commercial-like manner usually involve a fixed-price contract for slightly modified commercial satellites or, in some cases, a completely new payload on a commercial product-line bus. Cost to deliver these systems ends up somewhere between a purely commercial contract and a traditional government cost-reimbursable program. While technical risks and engineering complexity play a big role, a system's final cost within this spectrum is also influenced by its “acquisition complexity,” which includes factors such as the amount and type of third-party oversight, the number of contract data deliverables, subcontractor management processes, parts/materials management requirements, and contract scope. Size of the contractor's business base is also a significant factor. Government cost estimating methods for commercial-like programs in the past have relied on decrements to government cost models by analogy to selected commercial programs. These methods have been difficult to substantiate and defend, so an approach to measure the cost impact of commercial acquisition practices is needed. With full participation from industry, the National Reconnaissance Office Cost Analysis Improvement Group (NRO CAIG) conducted a detailed data collection, survey, and analysis of over 60 commercial and commercial-like satellite acquisitions. Included were interviews with program managers, system engineers, and cost/pricing analysts from multiple satellite vendors. Results of this work built on prior commercial-vs.-government studies by quantifying the “acquisition complexity” of the systems studied and showing that it was strongly correlated to actual system costs. Our article includes an overview of the underlying commercial and government data, a description of the metrics collected, the acquisition-complexity scoring method, and the resulting model for estimating commercial-like acquisitions. This model assigns a score to any government or commercial procurement based on the details of its acquisition approach and then translates that score into a cost estimate. It is a valuable addition to the NRO CAIG's estimating toolkit, but it also serves as a feedback mechanism to NRO management about when a commercial-like acquisition approach may or may not be appropriate.","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132327265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NASA Productivity
Tom Coonce, R. Bitten, Joseph Hamaker, H. Hertzfeld
Pub Date: 2010-01-01, DOI: 10.1080/1941658X.2010.10462228
Abstract Studying the productivity of any government organization is difficult. Agencies have multiple mission objectives and budget and accounting systems that are very different from those of profit-making firms, and, particularly for a research and development (R&D) agency such as the National Aeronautics and Space Administration (NASA), obtaining the greatest quantity using the least resources is not always the best way of producing cutting-edge technology. In short, a government agency is not a private-sector company with the principal objective of making a profit for investors. NASA is a complex R&D organization, producing or managing the production of many different space components. For this study, only the subset of NASA programs that involve the manufacturing of satellites is used for measuring productivity because 1) there is a rich historical database of cost estimates for NASA satellite manufacturing, 2) similar estimates exist for other government agencies that build satellites (the Air Force and other defense agencies), and 3) the commercial sector produces many private (particularly communications) satellites. Our study has three components. The first is a direct evaluation of NASA's efficiency over time in manufacturing both communications and scientific satellites. Since NASA management of this production includes both in-house and contractor efforts, a more direct comparison is possible between commercial efforts for NASA and the commercial production of private-use satellites. The second component of the study compares NASA production programs with those of other agencies and even with similar manufacturing programs at the European Space Agency (ESA). The third component convened a workshop with representatives from various government agencies, major commercial manufacturers, and academics and consultants who have first-hand knowledge of the satellite manufacturing process and of managing relevant government and commercial procurement projects. The results of the workshop provided an excellent check on the results of our analysis of government operations and comparative inputs from the commercial satellite manufacturing sector. The results of these three separate approaches were remarkably similar. NASA, on average, seems no better or worse in efficiency than other government agencies, including foreign manufacturing programs such as those of ESA. Government R&D agencies, however, often cannot match the efficiency and productivity of commercial satellite manufacturers. Their products are sufficiently different from those of NASA and need to be compared only very selectively in terms of productivity. Interestingly, it was also found that NASA could improve its efficiency in a variety of ways. Contrary to popular literature, not all of the reasons for NASA's (and other government agencies') inefficiencies result from Congressional mandates such as the lack of multi-year funding commitments for long-term projects.
{"title":"NASA Productivity","authors":"Tom Coonce, R. Bitten, Joseph Hamaker, H. Hertzfeld","doi":"10.1080/1941658X.2010.10462228","DOIUrl":"https://doi.org/10.1080/1941658X.2010.10462228","url":null,"abstract":"Abstract Studying the productivity of any government organization is difficult. Agencies have multiple mission objectives and budget and accounting systems that are very different from those of proft-making firms, and, particularly for a research and development (R&D) agency such as the National Aeronautical Space Administration (NASA), obtaining the greatest quantity using the least resources is not always the best way of producing cutting-edge technology. In short, a government agency is not a private-sector company with the principal objective of making a profit for investors. NASA is a complex R&D organization, producing or managing the production of many different space components. For this study, only the subset of NASA programs that involve the manufacturing of satellites is used for measuring productivity because 1) there is a rich historical data base of cost estimates for NASA satellite manufacturing, 2) similar estimates exist for other government agencies that build satellites (the Air Force and other defense agencies), and 3) the commercial sector produces many private (particularly communications) satellites. Our study has three components. The first is a direct evaluation of NASA's efficiency over time in manufacturing both communications and scientific satellites. Since NASA management of this production includes both in-house and contractor efforts, a more direct comparison is possible between commercial efforts for NASA and the commercial production of private use satellites. The second component of the study compares NASA production programs with those of other agencies and even with similar manufacturing programs at the European Space Agency (ESA). The third approach to this analysis convened a workshop that had representatives from various government agencies, major commercial manufacturers, and academics and consultants who have first-hand knowledge of the satellite manufacturing process and of managing relevant government and commercial procurement projects. The results of the workshop provided an excellent check on the results of our analysis of government operations and comparative inputs from the commercial satellite manufacturing sector. The results of these three separate approaches were remarkably similar. NASA, on average, seems no better or worse in efficiency than other government agencies, including foreign manufacturing programs such as those of ESA. Government R&D agencies, however, often cannot match the efficiency and productivity of commercial satellite manufacturers. Their products are sufficiently different from those of NASA and need to be compared only very selectively in terms of productivity. Interestingly, it was also found that NASA could improve its efficiency in a variety of ways. 
And, contrary to popular literature, not all of the reasons for NASA (and other government agencies) inefficiencies result from Congressional mandates such as the lack of multi-year funding commitments for long-term projec","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121166573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quality Cost Estimates to Serve the Economic Recovery
H. Joumier
Pub Date: 2010-01-01, DOI: 10.1080/1941658X.2010.10462225
Since 2005, a number of renowned personalities of the profession have already expressed their views in the Journal of Parametrics and the Journal of Cost Analysis and Parametrics (JCAP) about what makes a good cost estimate (Hartley 2006; Hamaker 2007; Janda 2008; Bagby 2009). All that has been written so far makes a lot of sense, and I fully concur with it, so I am left with the difficult task of adding something equally smart. I will quickly list ideas of what makes a quality cost estimate based on what has already been identified by the aforementioned authors, and I invite everyone to read these excellent articles in detail. The term “quality cost estimate” evokes notions such as accuracy and effort commensurate with the acquisition stage, completeness including a cost–risk assessment, solidity of the basis of estimate, link to the schedule, realism, non-advocacy, independence, fairness, cross-checks with results derived from independent methods, and communicability. I will then add a few words on some peculiarities that may be of help for further understanding of what quality estimates are:
{"title":"Quality Cost Estimates to Serve the Economic Recovery","authors":"H. Joumier","doi":"10.1080/1941658X.2010.10462225","DOIUrl":"https://doi.org/10.1080/1941658X.2010.10462225","url":null,"abstract":"Since 2005, a number of renowned personalities of the profession have already expressed their views in the Journal of Parametrics and the Journal of Cost Analysis and Parametrics (JCAP) about what makes a good cost estimate (Hartley 2006; Hamaker 2007; Janda 2008; Bagby 2009). All that has been written so far makes a lot of sense, and I fully concur with it, so I am left with the difficult task to add something equally smart. I will quickly list ideas of what makes a quality cost estimate based on what has already been identified by the aforementioned authors, and I invite everyone to read these excellent articles in detail. The term “quality cost estimate” induces notions such as accuracy and effort commensurate with the acquisition stage, completeness including a cost–risk assessment, solidity of the basis of estimate, link to the schedule, realism, non-advocacy, independence, fairness, cross-checks with results derived from independent methods, and communicability. I will then add a few words on some peculiarities that may be of help for further understanding of what quality estimates are:","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116653105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Basic Economic Principles: A Methodical Approach to Deriving Production Cost Estimating Relationships (CERs)
Roy E. Smoker
Pub Date: 2010-01-01, DOI: 10.1080/1941658X.2010.10462229
This article addresses some of the classical methods of analysis, data collection, presentation, and interpretation of cost-estimating problems in the context of applied economic theory. The purpose is to show that basic microeconomic principles imply mathematical properties for which the uncertain values of the parameters may be measured using statistical methods of analysis. While statistics may be defined as the collection, presentation, analysis, and interpretation of numerical data, economic theory is concerned with relationships among variables. Since economic phenomena are not obtained from statistically controlled experiments, special methods of analysis of non-experimental data have to be devised to explain related patterns of behavior. In this article, I examine a simple equation where cost is a function of one or more cost drivers and show that specifications of the process by which the independent variables (cost drivers) are generated, the process by which unobserved disturbances are generated, and the relationship connecting these to the observed dependent variable (e.g., cost) are necessary in order to rely on the rules and criteria of statistical inference to develop a rational method of measuring the economic-theory relationship from a given sample of observations. Finally, I explore specific tests of hypotheses to check whether or not the classical statistical assumptions of normality, homoscedasticity, and independence of successive errors are met. Before turning attention to the properties of statistical experiments and data collection, it is necessary to describe some of the basic technical aspects of the system that a cost estimator must consider when developing or applying cost-estimating relationships (CERs). First, each system is made up of several subsystems and their corresponding components. These components, identified as the lowest-level elements for which a cost estimate is required, are defined as the basic work breakdown structure (WBS) end items that, when integrated into the total system, define the scope of work to be estimated by the application of CERs to each element. For each end item in the WBS, the cost estimator must select the appropriate cost model, CER, or analogous cost-progress curve to estimate the costs. What is appropriate depends on the definition of each WBS end item. The following questions must be addressed:
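The assumption checks named in the abstract (normality, homoscedasticity, and independence of successive errors) can be illustrated with a short diagnostic script. The log-linear CER form and data below are hypothetical, and the specific tests shown (Shapiro-Wilk, Breusch-Pagan, Durbin-Watson) are standard choices rather than necessarily the ones the article applies.

```python
# A minimal sketch of classical residual diagnostics for a CER regression:
# normality, homoscedasticity, and independence of successive errors.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson
from scipy.stats import shapiro

rng = np.random.default_rng(2)
driver = rng.uniform(50, 500, size=30)                         # hypothetical cost driver
cost = 3.0 * driver ** 0.75 * rng.lognormal(sigma=0.2, size=30)

# Log-linear CER: ln(cost) = ln(a) + b*ln(driver) + error
X = sm.add_constant(np.log(driver))
model = sm.OLS(np.log(cost), X).fit()
resid = model.resid

print("Shapiro-Wilk (normality)      p =", round(shapiro(resid).pvalue, 3))
print("Breusch-Pagan (homoscedast.)  p =", round(het_breuschpagan(resid, X)[1], 3))
print("Durbin-Watson (independence)    =", round(durbin_watson(resid), 2))
```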
{"title":"Basic Economic Principles: A Methodical Approach to Deriving Production Cost Estimating Relationships (CERs)","authors":"Roy E. Smoker","doi":"10.1080/1941658X.2010.10462229","DOIUrl":"https://doi.org/10.1080/1941658X.2010.10462229","url":null,"abstract":"This article addresses some of the classical methods of analysis, data collection, presentation, and interpretation of cost-estimating problems in the context of applied economic theory. The purpose is to show that basic microeconomic principles imply mathematical properties for which the uncertain values of the parameters may be measured using statistical methods of analysis. While statistics may be defined as the collection, presentation, analysis, and interpretation of numerical data, economic theory is concerned with relationships among variables. Since economic phenomena are not obtained from statistically controlled experiments1, special methods of analysis of non-experimental data have to be devised to explain related patterns of behavior. In this article, I examine a simple equation where cost is a function of one or more cost drivers and shows that specifications of the process by which the independent variables (cost drivers) are generated, the process by which unobserved disturbances are generated, and the relationship connecting these to the observed dependent variables (e.g., cost) are necessary to rely on the rules and criteria of statistical inference to develop a rational method of measuring the economic theory relationship from a given sample of observations. Finally, I explore specific tests of hypotheses to check whether or not the classical statistical assumptions of normality, homoscedasticity, and independence of successive errors are met. Before turning attention to the properties of statistical experiments and data collection, it is necessary to describe some of the basic technical aspects of the system that a cost estimator must consider when developing or applying cost-estimating relationships. First, each system is made up of several subsystems and their corresponding components. These components, identified as the lowest-level elements for which a cost estimate is required, are defined as the basic work breakdown structure (WBS) end items that, when integrated into the total system, define the scope of work to be estimated by the application of CERs to each element. For each end item in the WBS, the cost estimator must select the appropriate cost model, CER, or analogous cost-progress curve to estimate the costs. Consideration as to what is appropriate depends on the definition for each WBS end item. The following questions must be addressed:","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"192 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124272260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Economic Survey of the Monetary Value Placed on Human Life by Government Agencies in the United States of America
J. Silny, R. Little, D. S. Remer
Pub Date: 2010-01-01, DOI: 10.1080/1941658X.2010.10462226
Abstract This article presents the methods used by U.S. government agencies to assign a monetary value to human life as mandated by three Executive Orders. Since 1978, all regulations with an impact greater than $100 million require a supporting analysis. By accounting for inflation over the past 28 years, this threshold has effectively been reduced by 2.5 to 3.0 times in real dollars. Monetary values assigned to a life directly impact the approval of government-proposed projects through cost-benefit and cost-effectiveness analysis. Two major methods are used to determine the value of a life. First, the value of a statistical life emphasizes that monetary value is not directly placed on life but rather on methodologies to prevent the statistical loss of life. Second, willingness to pay is the monetary value that an individual would pay to prevent the loss of their life. Previous works cite values of life ranging from $0.1 to $86.8 million (in 2006 U.S. dollars). The values of a human life used by U.S. government agencies have changed significantly over time. The values used by government agencies in 2006 are as follows: the Department of Transportation and the Federal Aviation Administration — $3.0 million, the Environmental Protection Agency — $6.1 million, the Food and Drug Administration — $6.5 million, and the Consumer Product Safety Commission — $5.0 million. We believe that a single value should be used across all U.S. government agencies to provide consistency and fairness. We also believe that the $100 million minimum regulation threshold should be increased to account for inflation.
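The inflation argument can be checked with a few lines of arithmetic: a threshold fixed at $100 million in 1978 shrinks by roughly a factor of three in real terms by 2006. The CPI-U annual averages used below are approximate values assumed for illustration; the exact factor depends on the price index and deflator chosen.

```python
# Worked check of the inflation claim for the fixed $100M regulatory threshold.
cpi_1978 = 65.2      # approximate CPI-U annual average for 1978 (assumed)
cpi_2006 = 201.6     # approximate CPI-U annual average for 2006 (assumed)

threshold_nominal = 100.0                         # $M, unchanged since 1978
inflation_factor = cpi_2006 / cpi_1978
threshold_in_1978_dollars = threshold_nominal / inflation_factor
inflation_adjusted_threshold = threshold_nominal * inflation_factor

print(f"Price level grew by a factor of {inflation_factor:.2f} from 1978 to 2006")
print(f"$100M in 2006 is worth about ${threshold_in_1978_dollars:.0f}M in 1978 dollars")
print(f"Keeping pace with inflation would require a threshold near ${inflation_adjusted_threshold:.0f}M")
```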
{"title":"Economic Survey of the Monetary Value Placed on Human Life by Government Agencies in the United States of America","authors":"J. Silny, R. Little, D. S. Remer","doi":"10.1080/1941658X.2010.10462226","DOIUrl":"https://doi.org/10.1080/1941658X.2010.10462226","url":null,"abstract":"Abstract This article presents the methods used by U.S. government agencies to assign a monetary value to human life as mandated by three Executive Orders. Since 1978, all regulations with an impact greater than $100 million require a supporting analysis. By accounting for inflation over the past 28 years, this threshold has effectively been reduced by 2.5 to 3.0 times in real dollars. Monetary values assigned to a life directly impact the approval of government proposed projects through cost-benefit and cost-effectiveness analysis. Two major methods are used to determine the value of a life. First, the value of a statistical life emphasizes that monetary value is not directly placed on life but rather on methodologies to prevent the statistical loss of life. Second, willingness to pay is the monetary value that an individual would pay to prevent the loss of their life. Previous works cite values of life ranging from $0.1 to $86.8 million (in year 2006 dollars U.S.). The values of a human life used by U.S. government agencies have significantly changed over time. The values used by government agencies in 2006 are as follows: the Department of Transportation and the Federal Aviation Administration — $3.0 million, the Environmental Protection Agency — $6.1 million, the Food and Drug Administration — $6.5 million, and the Consumer Products Safety Commission—$5.0 million. We believe that a single value should be used across all U.S. government agencies to provide consistency and fairness. We also believe that the $100 million minimum regulation threshold should be increased to account for inflation.","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"2 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131406837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Organizational Structure Impacts Flight Software Cost Risk
J. Hihn, K. Lum, E. Monson
Pub Date: 2009-07-01, DOI: 10.1080/1941658X.2009.10462222
The Jet Propulsion Laboratory (JPL) has a long record of successful deep-space missions, from Explorer and Voyager to Mars Pathfinder, Galileo, and Mars Odyssey, to name but a few. Our experience and success, as with the rest of the aerospace industry, are built on our hardware and system-level expertise. Throughout the 1990s, software came to play an increasingly significant role in spacecraft integration, risk, and the overall workforce at JPL, as well as at other aerospace organizations. During the late 1990s, the importance of software was magnified when a number of JPL-managed missions experienced significant flight software cost growth. In addition, several missions exhibited software-related schedule slips impacting or threatening to impact the planned launch dates. These slips occurred on both in-house and contracted software development projects. In response, JPL funded a 1999 study to identify the systemic causes of reported flight software cost growth and to develop a set of recommendations to reduce flight software development cost risk. The results of the 1999 study were reported in two articles. The first article identified the root causes of the observed flight software cost growth (Hihn and Habib-agahi, 2000a), and the second described a set of proposed strategies and policies to reduce software cost growth on future missions (Hihn and Habib-agahi, 2000b). A major recommendation of these studies was to change the organizational structure of a software project so that the software manager had more responsibilities and accessible reporting relationships. In 2003 and 2004, a follow-up study was conducted on seven flight projects that launched from the summer of 2001 to 2005, to see if anything had changed since 1999 and if any of the initial report's recommendations had been implemented. The preliminary results of the follow-up study (Hihn et al., 2003) indicated that having a software management team with budget and design authority in place well before the system preliminary design review (PDR) did significantly reduce the likelihood of post-PDR software cost growth. The finding that the organizational and management structure of software projects has a significant impact on software cost growth is consistent with Capers Jones' conclusion that “deficiencies of the project management function is a fundamental root cause of software disaster” (Jones 1996, 152). The importance of communication was also documented in Hihn et al. (1990), which showed that the impact of software volatility on software development productivity was greatly mitigated when there was extensive communication within a software development team and between the team and customers. This article summarizes the final results of the follow-up study, updating the estimated software effort growth for those projects that were still under development, and includes an evaluation of the software management roles versus observed cost risk for the missions included in the original study, expanding the data set to 15 missions. In the final version of the 2004 study, the estimated cost growth was updated for the missions that had not yet launched, and the descriptions of each mission's software management roles from the 1999 study were reviewed so that the expanded data set better documents any changes in those roles between the two studies. The expanded data set also allows the analysis to be extended to include additional potential drivers of cost growth.
Combining Probabilistic Estimates to Reduce Uncertainty
Stephen A. Book
Pub Date: 2009-07-01, DOI: 10.1080/1941658X.2009.10462224
Abstract Suppose we have contracted for or otherwise obtained n probabilistic estimates, expressed as random variables, independent of each other, of the same system or project. This assumption means that we have, for each estimate, a random variable having a probability distribution (likely something close to the lognormal), an S-curve, a mean, and a standard deviation. We want to combine these n estimates to obtain one estimate that contains less uncertainty than each of the n estimates individually. There are two questions that we have to answer: 1) How should we “combine” the estimates? and 2) Will the combined estimate actually be less uncertain than each of the n independent estimates individually? For this issue to be meaningful, we must assume that each of the estimates is “correct,” i.e., 1) they are neither too optimistic, nor too pessimistic, but are based on risk assessments validly drawn from the same risk information available to each estimating team; 2) each estimating team has applied appropriate mathematical techniques to the cost-risk analysis, including, for example, inter-element correlations when appropriate; and 3) each estimating team was working from the same ground rules but may have applied different estimating methods and made different assumptions when encountering the absence of some information required by their estimating method.
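The abstract poses the combination question rather than prescribing a formula, but one classical answer for independent, unbiased estimates is inverse-variance weighting, under which the combined variance is smaller than every individual variance. The sketch below illustrates that baseline approach with hypothetical means and standard deviations; it should not be read as the article's recommended method.

```python
# Illustration (not necessarily the article's method): inverse-variance weighting
# of n independent, unbiased estimates. The combined variance 1 / sum(1/var_i)
# is smaller than every individual variance.
import numpy as np

# Hypothetical means and standard deviations ($M) of three independent estimates.
means = np.array([105.0, 112.0, 98.0])
sds   = np.array([15.0, 20.0, 12.0])

weights = (1.0 / sds**2) / np.sum(1.0 / sds**2)   # inverse-variance weights
combined_mean = np.sum(weights * means)
combined_sd = np.sqrt(1.0 / np.sum(1.0 / sds**2))

print("Weights            :", np.round(weights, 3))
print("Combined mean ($M) :", round(combined_mean, 1))
print("Combined s.d. ($M) :", round(combined_sd, 1), "vs. individual s.d.s", sds.tolist())
```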
{"title":"Combining Probabilistic Estimates to Reduce Uncertainty","authors":"Stephen A. Book","doi":"10.1080/1941658X.2009.10462224","DOIUrl":"https://doi.org/10.1080/1941658X.2009.10462224","url":null,"abstract":"Abstract Suppose we have contracted for or otherwise obtained n probabilistic estimates, expressed as random variables, independent of each other, of the same system or project. This assumption means that we have, for each estimate, a random variable having a probability distribution (likely something close to the lognormal), an S-curve, a mean, and a standard deviation. We want to combine these n estimates to obtain one estimate that contains less uncertainty than each of the n estimates individually. There are two questions that we have to answer: 1) How should we “combine” the estimates? and 2) Will the combined estimate actually be less uncertain than each of the n independent estimates individually? For this issue to be meaningful, we must assume that each of the estimates is “correct,” i.e., 1) they are neither too optimistic, nor too pessimistic, but are based on risk assessments validly drawn from the same risk information available to each estimating team; 2) each estimating team has applied appropriate mathematical techniques to the cost-risk analysis, including, for example, inter-element correlations when appropriate; and 3) each estimating team was working from the same ground rules but may have applied different estimating methods and made different assumptions when encountering the absence of some information required by their estimating method.","PeriodicalId":390877,"journal":{"name":"Journal of Cost Analysis and Parametrics","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114959004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}