Examining the Generalizability of Direct Writing Assessment Tasks. CSE Technical Report 718.

Eva Chen, D. Niemi, Jia Wang, Haiwen Wang, J. Mirocha
{"title":"考察直接写作考核任务的普遍性。CSE技术报告","authors":"Eva Chen, D. Niemi, Jia Wang, Haiwen Wang, J. Mirocha","doi":"10.1037/e643812011-001","DOIUrl":null,"url":null,"abstract":"This study investigated the level of generalizability across a few high quality assessment tasks and the validity of measuring student writing ability using a limited number of essay tasks. More specifically, the research team explored how well writing prompts could measure student general writing ability and if student performance from one writing task could be generalized to other similar writing tasks. A total of four writing prompts were used in the study, with three tasks being literature-based and one task based on a short story. A total of 397 students participated in the study and each student was randomly assigned to complete two of the four tasks. The research team found that three to five essays were required to evaluate and make a reliable judgment of student writing performance. Examining the Generalizability of Direct Writing Assessment Tasks Performance assessment can serve to measure important and complex learning outcomes (Resnick & Resnick, 1989), provide a more direct measurement of student ability (Frederiksen, 1984; Glaser, 1991; Guthrie, 1984), and help guide improvement in instructional practices (Baron, 1991; Bennett, 1993). Of the various types of performance assessment, direct tests of writing ability have experienced the most acceptance in state and national assessment programs (Afflebach, 1985; Applebee, Langer, Jenkins, Mullins & Foertsch, 1990; Applebee, Langer, & Mullis, 1995). Advocates of direct writing assessment point out that students need more exposure to writing in the form of instruction and more frequent examinations (Breland, 1983). However, there are problems associated with using essays to measure students’ writing abilities, like objectivity of ratings, generalizability of scores across raters and tasks (Crehan, 1997). Previous generalizability studies of direct writing assessment","PeriodicalId":19116,"journal":{"name":"National Center for Research on Evaluation, Standards, and Student Testing","volume":"24 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2007-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Examining the Generalizability of Direct Writing Assessment Tasks. CSE Technical Report 718.\",\"authors\":\"Eva Chen, D. Niemi, Jia Wang, Haiwen Wang, J. Mirocha\",\"doi\":\"10.1037/e643812011-001\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This study investigated the level of generalizability across a few high quality assessment tasks and the validity of measuring student writing ability using a limited number of essay tasks. More specifically, the research team explored how well writing prompts could measure student general writing ability and if student performance from one writing task could be generalized to other similar writing tasks. A total of four writing prompts were used in the study, with three tasks being literature-based and one task based on a short story. A total of 397 students participated in the study and each student was randomly assigned to complete two of the four tasks. The research team found that three to five essays were required to evaluate and make a reliable judgment of student writing performance. 
Examining the Generalizability of Direct Writing Assessment Tasks Performance assessment can serve to measure important and complex learning outcomes (Resnick & Resnick, 1989), provide a more direct measurement of student ability (Frederiksen, 1984; Glaser, 1991; Guthrie, 1984), and help guide improvement in instructional practices (Baron, 1991; Bennett, 1993). Of the various types of performance assessment, direct tests of writing ability have experienced the most acceptance in state and national assessment programs (Afflebach, 1985; Applebee, Langer, Jenkins, Mullins & Foertsch, 1990; Applebee, Langer, & Mullis, 1995). Advocates of direct writing assessment point out that students need more exposure to writing in the form of instruction and more frequent examinations (Breland, 1983). However, there are problems associated with using essays to measure students’ writing abilities, like objectivity of ratings, generalizability of scores across raters and tasks (Crehan, 1997). Previous generalizability studies of direct writing assessment\",\"PeriodicalId\":19116,\"journal\":{\"name\":\"National Center for Research on Evaluation, Standards, and Student Testing\",\"volume\":\"24 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2007-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"National Center for Research on Evaluation, Standards, and Student Testing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1037/e643812011-001\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"National Center for Research on Evaluation, Standards, and Student Testing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1037/e643812011-001","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7

Abstract

This study investigated the level of generalizability across a small set of high-quality assessment tasks and the validity of measuring student writing ability with a limited number of essay tasks. More specifically, the research team explored how well writing prompts could measure students' general writing ability and whether student performance on one writing task could be generalized to other, similar writing tasks. Four writing prompts were used in the study: three literature-based tasks and one task based on a short story. A total of 397 students participated, and each student was randomly assigned to complete two of the four tasks. The research team found that three to five essays were required to make a reliable judgment of student writing performance.

Examining the Generalizability of Direct Writing Assessment Tasks

Performance assessment can serve to measure important and complex learning outcomes (Resnick & Resnick, 1989), provide a more direct measurement of student ability (Frederiksen, 1984; Glaser, 1991; Guthrie, 1984), and help guide improvement in instructional practices (Baron, 1991; Bennett, 1993). Of the various types of performance assessment, direct tests of writing ability have gained the widest acceptance in state and national assessment programs (Afflerbach, 1985; Applebee, Langer, Jenkins, Mullis & Foertsch, 1990; Applebee, Langer, & Mullis, 1995). Advocates of direct writing assessment point out that students need more exposure to writing in the form of instruction and more frequent examinations (Breland, 1983). However, there are problems associated with using essays to measure students' writing abilities, such as the objectivity of ratings and the generalizability of scores across raters and tasks (Crehan, 1997). Previous generalizability studies of direct writing assessment
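The finding that three to five essays are needed follows the logic of a decision (D) study in generalizability theory: averaging a student's score over more tasks shrinks the task-related error variance, so the generalizability coefficient rises with the number of tasks. The Python sketch below illustrates that calculation; the variance components are hypothetical values chosen for illustration, not estimates from the report.

```python
# Hypothetical D-study sketch: how the generalizability (G) coefficient
# for relative decisions grows with the number of writing tasks.
# Variance components are illustrative, NOT taken from the report.

def g_coefficient(var_person: float, var_residual: float, n_tasks: int) -> float:
    """Relative G coefficient for a persons-by-tasks design:
    E(rho^2) = var_p / (var_p + var_residual / n_tasks)."""
    return var_person / (var_person + var_residual / n_tasks)

# Illustrative variance components (person; person-by-task residual).
VAR_PERSON = 0.30
VAR_RESIDUAL = 0.30

for n in range(1, 7):
    print(f"{n} task(s): G = {g_coefficient(VAR_PERSON, VAR_RESIDUAL, n):.2f}")
```

With these illustrative components, one essay yields G = 0.50, three yield 0.75, and four to five reach the commonly cited 0.80 level, which is the shape of result the abstract describes.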