The Controversy over the National Assessment Governing Board Standards
M. Reckase
Brookings Papers on Education Policy, pp. 231-265, 2001
DOI: 10.1353/PEP.2001.0014
Citations: 2
Abstract
This paper provides an analysis of the controversy surrounding the standard setting process conducted by ACT Inc. for the National Assessment Governing Board (NAGB).1 This process is the most thoroughly planned, carefully executed, exhaustively evaluated, completely documented, and most visible of any standard setting process of which I am aware. Extensive research was conducted to determine how best to develop each step in the process.2 A distinguished team of experts guided the process through its development and implementation.3 And the process has been open to scrutiny, with evaluators observing the design and implementation of every step. Any process can be improved with experience and with continuing research and development. Better methods for setting standards likely will be created in the future. Until such developments occur, however, this process, called the achievement levels setting (ALS) process by NAGB, is the model for how standard setting should be done. The question I attempt to answer here is: If the standard setting process is of such high quality, why are the standards set by the process so controversial? Although I think extremely well of the NAGB standard setting process, interpreting the results of the ALS process is a very complex undertaking. A difference has become evident between the technical accuracy of the standards and the clarity of meaning for the standards that were set. The technical quality of the standards is very high. Statistical analyses have shown that the standards are well within the accepted bounds for amount of error in the estimated cutscores, and follow-up validity studies have provided supportive evidence.