{"title":"Assessing Academic Rigor in Mathematics Instruction: The Development of the Instructional Quality Assessment Toolkit. CSE Technical Report 672.","authors":"M. Boston, M. Wolf","doi":"10.1037/e644922011-001","DOIUrl":"https://doi.org/10.1037/e644922011-001","url":null,"abstract":"","PeriodicalId":19116,"journal":{"name":"National Center for Research on Evaluation, Standards, and Student Testing","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2006-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82848893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Metric-Free Measures of Test Score Trends and Gaps with Policy-Relevant Examples. CSE Report 665.","authors":"Andrew D. Ho, Edward H. Haertel","doi":"10.1037/e645082011-001","DOIUrl":"https://doi.org/10.1037/e645082011-001","url":null,"abstract":"","PeriodicalId":19116,"journal":{"name":"National Center for Research on Evaluation, Standards, and Student Testing","volume":"29 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2006-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81623541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Issues of Structure and Issues of Scale in Assessment from a Situative/Sociocultural Perspective. CSE Technical Report 668.","authors":"R. Mislevy","doi":"10.1037/e645022011-001","DOIUrl":"https://doi.org/10.1037/e645022011-001","url":null,"abstract":"","PeriodicalId":19116,"journal":{"name":"National Center for Research on Evaluation, Standards, and Student Testing","volume":"40 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2006-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74137593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Standards and Empirical Evidence to Develop Academic English Proficiency Test Items in Reading. CSE Technical Report 664.","authors":"A. Bailey, Robin S. Stevens, Frances A. Butler, Becky H. Huang, Judy N. Miyoshi","doi":"10.1037/e645092011-001","DOIUrl":"https://doi.org/10.1037/e645092011-001","url":null,"abstract":"The work we report focuses on utilizing linguistic profiles of mathematics, science and social studies textbook selections for the creation of reading test specifications. Once we determined that a text and associated tasks fit within the parameters established in Butler et al. (2004), they underwent both internal and external review by language experts and content-area teachers. The external review provided data based on background questionnaires, text and item reviews used to judge representative aspects of topics and linguistic characteristics, and group interviews. Based on this information, the texts were either retained or rejected and items were retained, rejected or reserved for future modification. In the future, retained texts and items can be further analyzed for fit with empirically established text profiles. Part I: Introduction As specified in the abstract, the purpose of this report is to apply the information acquired from comprehensive linguistic analyses of fifth-grade texts previously conducted (Butler, Bailey, Stevens, Huang, & Lord, 2004) to the development of standards-informed academic language items. The work described 1 Acknowledgments: We would like to thank the following for their role in the preparation of this work: the teachers who took part in the review and discussion of the texts and reading items developed here, administrative assistance from Soo Dennison and Morgan Joeck at the early stages of the work, Joan Herman for valuable feedback on an earlier draft of this report, and Fred Moss and Wade Contreras for the final formatting.","PeriodicalId":19116,"journal":{"name":"National Center for Research on Evaluation, Standards, and Student Testing","volume":"51 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2005-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73816774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Test-Based Educational Accountability in the Era of No Child Left Behind. CSE Report 651.","authors":"R. Linn","doi":"10.1037/e645322011-001","DOIUrl":"https://doi.org/10.1037/e645322011-001","url":null,"abstract":"The ever-increasing reliance on student performance on tests as a way of holding schools and educators accountable is discussed. Comparisons are made between state accountability requirements and the accountability requirements of the No Child Left Behind (NCLB) Act of 2001. The resulting mixed messages being given by the two systems are discussed. Features of NCLB accountability and state accountability systems that contribute to the identification of a school as meeting goals according to NCLB but failing to do so according to the state accountability system, or vise versa, are discussed. These include the multiple hurdles of NCLB, the comparison of performance against a fixed target rather than changes in achievement, and the definition of performance goals. Some suggestions are provided for improving the NCLB accountability system. The assessment of student achievement has long been an integral part of education. Test results for individual students have been used for myriad purposes, such as monitoring progress, assigning grades, placement, college admissions, and in grade-to-grade promotion, and high school graduation decisions. The use of student test results to judge programs and schools, with a few exceptions (see, for example, Resnick, 1982), has a shorter, but still substantial, history. Both states and the federal government have moved away from resource and process measures as a means of judging the quality of schools to an ever-increasing reliance on student test results to hold schools accountable. The characteristics of the school accountability systems evolved over the last 40 years and the systems vary a good deal from one state to another, as do the state and federal accountability systems.","PeriodicalId":19116,"journal":{"name":"National Center for Research on Evaluation, Standards, and Student Testing","volume":"38 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2005-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75242498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}