The power of positive emotions? The link between young people’s positive and negative affect and performance in high-stakes examinations
Pub Date: 2022-04-27 | DOI: 10.1080/0969594X.2022.2054941 | Pages: 310–331
John Jerrim
ABSTRACT A substantial body of research suggests that young people’s emotions – both positive and negative – are linked to a wide range of future outcomes. This paper contributes to this literature by investigating the link between young people’s positive and negative emotions and their performance in high-stakes examinations. Using Programme for International Student Assessment (PISA) data from England linked to the National Pupil Database (NPD), I investigate how 15-year-olds’ positive affect, negative affect and fear of failure are associated with the grades they achieve in high-stakes examinations. I find that low levels of positive affect – i.e. pupils rarely feeling happy, lively and cheerful – are associated with a 0.10–0.15 standard deviation reduction in young people’s examination grades. On the other hand, little evidence is found of a substantive link between negative affect or fear of failure and examination performance.
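As a reading aid for the effect size quoted above, the short Python sketch below shows how a gap expressed in standard-deviation units is obtained once the outcome is standardised. It is illustrative only: the data, the group indicator and the roughly 0.12 SD gap are simulated assumptions, not the paper’s PISA–NPD analysis.

```python
# Illustrative only: reading a "0.10-0.15 standard deviation reduction".
# Simulated data; not the PISA/NPD analysis described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical indicator: 1 = pupil rarely feels happy, lively and cheerful
low_positive_affect = rng.binomial(1, 0.2, n)

# Hypothetical raw exam scores, shifted down for the low-affect group
raw_scores = rng.normal(50, 10, n) - 1.2 * low_positive_affect

# Standardise the outcome so group differences read in SD units
z_scores = (raw_scores - raw_scores.mean()) / raw_scores.std()

gap = z_scores[low_positive_affect == 1].mean() - z_scores[low_positive_affect == 0].mean()
print(f"gap in SD units: {gap:.2f}")  # roughly -0.12 by construction
```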
{"title":"The power of positive emotions? The link between young people’s positive and negative affect and performance in high-stakes examinations","authors":"John Jerrim","doi":"10.1080/0969594X.2022.2054941","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2054941","url":null,"abstract":"ABSTRACT A substantial body of research suggests that young people’s emotions – both positive and negative – are linked to a wide range of future outcomes. This paper contributes to this literature by investigating the link between young people’s positive and negative emotions and their performance in high-stakes examinations. Using Programme for International Student Assessment (PISA) data from England linked to the National Pupil Database (NPD), I investigate how 15-year-olds positive affect, negative affect and fear of failure is associated with the grades they achieve in high-stakes examinations. I find that low levels of positive affect – i.e. pupils rarely feeling happy, lively and cheerful – is associated with a 0.10–0.15 standard deviation reduction in young people’s examination grades. On the other hand, little evidence is found of a substantive link between negative affect or fear of failure and examination performance.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"80 1","pages":"310 - 331"},"PeriodicalIF":3.2,"publicationDate":"2022-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85686993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Changes in classroom assessment practices during emergency remote teaching due to COVID-19
Pub Date: 2022-04-24 | DOI: 10.1080/0969594X.2022.2067123 | Pages: 361–382
E. Panadero, J. Fraile, Leire Pinedo, C. Rodríguez-Hernández, Fernando Díez
ABSTRACT This study explores the effects on assessment practices of the shift to emergency remote teaching during the COVID-19 lockdown. A total of 936 Spanish teachers from all educational levels, ranging from early childhood to university, participated in this nationwide survey. Four aspects were explored: (1) changes in the use of assessment instruments (e.g. exams); (2) changes in assessment criteria, standards and grading; (3) changes in the delivery of feedback and use of rubrics; and (4) changes in students’ involvement in assessment (i.e. self- and peer assessment). In general, results are mixed: some areas underwent certain changes aimed at adapting to the new situation (e.g. primary education teachers lowering their grading standards), whereas many other assessment practices remained similar, especially among higher education teachers. Unfortunately, some assessment practices worsened; students’ involvement in assessment, in particular, decreased.
{"title":"Changes in classroom assessment practices during emergency remote teaching due to COVID-19","authors":"E. Panadero, J. Fraile, Leire Pinedo, C. Rodríguez-Hernández, Fernando Díez","doi":"10.1080/0969594X.2022.2067123","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2067123","url":null,"abstract":"ABSTRACT This study explores the effects of the shift to emergency remote teaching on assessment practices due to COVID-19 lockdown. A total of 936 Spanish teachers from all educational levels ranging from early childhood to university participated in this nationwide survey. Four aspects were explored: (1) changes in the use of assessment instruments (e.g. exams); (2) changes in assessment criteria, standards and grading; (3) changes in the delivery of feedback and use of rubrics; and (4) changes in students’ involvement in assessment (i.e. self- and peer assessment). In general, results are mixed, with some areas undergoing certain changes with the aim of adapting to the new situation (e.g. primary education teachers lowering their grading standards), whereas many other assessment practices have remained similar, especially among higher education teachers. Unfortunately, some of the assessment practices have worsened, such as students’ involvement in assessment which has decreased.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"95 1","pages":"361 - 382"},"PeriodicalIF":3.2,"publicationDate":"2022-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73106590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emergency assessment: rethinking classroom practices and priorities amid remote teaching
Pub Date: 2022-04-24 | DOI: 10.1080/0969594X.2022.2069084 | Pages: 534–554
Amanda Cooper, Christopher DeLuca, M. Holden, Stephen MacGregor
ABSTRACT Systemic disruptions from COVID-19 have transformed the assessment landscape in Canada and across the world. Alongside repeated shifts to emergency remote teaching, large-scale assessments and summative evaluations were cancelled in many jurisdictions, and repeated concerns were raised about ensuring equity and access to quality education. This paper investigates the rapid – and in many cases innovative – responses teachers offered to these challenges at the height of the pandemic. Drawing on prolonged semi-structured interviews with 17 secondary school teachers in Ontario, Canada, the paper provides a detailed account of Ontario’s approach to assessment during COVID-19, exemplified by participants’ lived experiences. Results highlight the notion of emergency remote assessment, the vital role of assessment in stemming widening equity and well-being gaps, and emerging consequences from this period. These data offer critical insights into the future of our forever-changed education landscape, and position classroom assessment as a priority player in this work.
{"title":"Emergency assessment: rethinking classroom practices and priorities amid remote teaching","authors":"Amanda Cooper, Christopher DeLuca, M. Holden, Stephen MacGregor","doi":"10.1080/0969594X.2022.2069084","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2069084","url":null,"abstract":"ABSTRACT Systemic disruptions from COVID-19 have transformed the assessment landscape in Canada and across the world. Alongside repeated shifts to emergency remote teaching, large-scale assessments and summative evaluations were cancelled in many jurisdictions, and repeated concerns were raised about ensuring equity and access to quality education. This paper investigates the rapid – and in many cases innovative – responses teachers offered to these challenges at the height of the pandemic. Drawing on prolonged semi-structured interviews with 17 secondary school teachers in Ontario, Canada, the paper provides a detailed account of Ontario’s approach to assessment during COVID-19, exemplified by participants’ lived experiences. Results highlight the notion of emergency remote assessment, the vital role of assessment in stemming widening equity and well-being gaps, and emerging consequences from this period. These data offer critical insights into the future of our forever-changed education landscape, and position classroom assessment as a priority player in this work.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"240 1","pages":"534 - 554"},"PeriodicalIF":3.2,"publicationDate":"2022-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73173276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fair high-stakes assessment in the long shadow of Covid-19
Pub Date: 2022-04-19 | DOI: 10.1080/0969594X.2022.2067834 | Pages: 518–533
I. Nisbet, Stuart Shaw
ABSTRACT Fairness in assessment has become increasingly topical and controversial in recent years. Assessment theoreticians are writing more about fairness, and assessment practitioners have developed processes and good practice to minimise unfairness. There is also increased scrutiny by students, parents and the wider public – not only of the fairness of assessments themselves and their outcomes, but of their use, notably for selection for college or university. This is in a context of continued awareness of inequalities in society and their impact on education and assessment. And on top of all these questions has been the impact – and the continuing long shadow – of Covid. Can there be fair assessment in such an unfair world? We consider three types of challenge to fair assessment:
• Theoretical challenges
• Challenges from thinking about social justice
• Challenges from the way that statistics were used to award assessment outcomes in 2020 (particularly in England)
{"title":"Fair high-stakes assessment in the long shadow of Covid-19","authors":"I. Nisbet, Stuart Shaw","doi":"10.1080/0969594X.2022.2067834","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2067834","url":null,"abstract":"ABSTRACT Fairness in assessment has become increasingly topical and controversial in recent years. Assessment theoreticians are writing more about fairness and assessment practitioners have developed processes and good practice to minimise unfairness. There is also increased scrutiny by students, parents and the wider public – not only of the fairness of assessments themselves and their outcomes, but of their use, notably for selection for college or university. This is in a context of continued awareness of inequalities in society and their impact on education and assessment. And on top of all these questions has been the impact – and the continuing long shadow – of Covid. Can there be fair assessment in such an unfair world? We consider three types of challenge to fair assessment: •Theoretical challenges •Challenges from thinking about social justice •Challenges from the way that statistics were used to award assessment outcomes in 2020 (particularly in England)","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"80 1","pages":"518 - 533"},"PeriodicalIF":3.2,"publicationDate":"2022-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84112471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Validity evidence for a formative writing engagement assessment in elementary grades
Pub Date: 2022-03-04 | DOI: 10.1080/0969594X.2022.2054942 | Pages: 262–284
Paul M. Rogers, Jonathan Marine, Samantha T. Ives, Seth A. Parsons, Ashlee Horton, Chase Young
ABSTRACT This article reports on the implementation of a formative assessment tool (the Writing Engagement Scale, or WES) in grades 3–5 in schools in the United States. We used confirmatory factor analysis (CFA) to collect validity evidence for the WES for our population. Results demonstrated acceptable validity and reliability. In addition, survey results indicated that teachers perceived the WES to be useful as a formative writing assessment. We make the case that the WES provides an opportunity to inform teachers’ practice and help researchers understand the dimensions of students’ engagement in writing.
{"title":"Validity evidence for a formative writing engagement assessment in elementary grades","authors":"Paul M. Rogers, Jonathan Marine, Samantha T. Ives, Seth A. Parsons, Ashlee Horton, Chase Young","doi":"10.1080/0969594X.2022.2054942","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2054942","url":null,"abstract":"ABSTRACT This article reports on the implementation of a formative assessment tool (the Writing Engagement Scale, or WES) in grades 3–5 in schools in the United States. We used confirmatory factor analysis (CFA) to collect validity evidence for the WES for our population. Results demonstrated acceptable validity and reliability. In addition, survey results indicated that teachers perceived the WES to be useful as a formative writing assessment. We make the case that the WES provides an opportunity to inform teachers’ practice and help researchers understand the dimensions of students’ engagement in writing.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"17 1","pages":"262 - 284"},"PeriodicalIF":3.2,"publicationDate":"2022-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78021923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Formative writing assessment for change – introduction to the special issue
Pub Date: 2022-03-04 | DOI: 10.1080/0969594X.2022.2089488 | Pages: 121–126
G. B. Skar, Steve Graham, Gert Rijlaarsdam
This current special issue centres on formative writing assessment with children in the elementary grades. Participants in the investigations included in this special issue represent a span from the very youngest students just learning to write to students in fifth and sixth grades, who generally have overcome the barriers of knowing how to encode writing but who face increased demands for producing discursive, audience-adapted texts. As editors, we limited papers in the special issue to studies conducted with students in this grade span because it has been under-researched compared to other grade spans. That these grades have received less attention does not reflect on the importance of early writing instruction; becoming a skilled writer takes time, and the first writing instruction is essential. Becoming a good writer is the result of many complex interactions, including but not limited to interactions between a writer’s attitude towards writing, her cognitive capacity, the kind of writing instruction she is exposed to, as well as the writer’s perception of textual norms in relation to the reader’s perception of the same norms, and thereby the reader’s textual expectations (Graham, 2018a; Rijlaarsdam et al., 2012; Skar & Aasen, 2021). To help children progress as writers, then, there is a need for tools that can elicit information about students’ writing skills in different domains (e.g. affective, cognitive, textual) and tools that help teachers transform that information into instruction. Such tools are often described as tools for formative assessment.

Formative writing assessment has proven to be effective in increasing the writing skills of students. A review by Graham (2018b) reported positive effect sizes for text response (d = 0.36), adult feedback (d = 0.87), peer feedback (d = 0.58), self-feedback (d = 0.62) and computerised feedback (d = 0.38). An earlier study by Graham et al. (2011) reported an effect size of d = 1.01 for feedback from adults or peers. So, formative writing assessment can work, and it can lead to positive change. But what is it? Graham (2018b, pp. 145–147) suggested the following definition of formative writing assessment: ‘instructional feedback in writing as information provided by another person, group of people, agency, machine, self, or experience that allows a writer, one learning to write, or a writing teacher/mentor to compare some aspect of performance to an expected, desired, or idealized performance’ and that ‘Formative feedback is derived from assessments that involve collecting information or evidence about student learning, interpreting it in terms of learners’ needs, and using it to alter what happens.’ In other words, formative writing assessment concerns taking actions based on information about a writer’s skills in order to make that writer even more skilled. One might therefore say that formative writing assessment – in the end – is all about consequences.
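As a concrete reference point for the d values quoted above, here is a minimal computation of Cohen’s d with a pooled standard deviation. It is illustrative only: the two “conditions” are simulated, not data from Graham (2018b) or Graham et al. (2011).

```python
# Illustrative only: computing Cohen's d for a feedback vs. comparison condition.
import numpy as np

def cohens_d(treatment: np.ndarray, control: np.ndarray) -> float:
    """Cohen's d using a pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    pooled_var = ((n1 - 1) * treatment.var(ddof=1) +
                  (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
    return (treatment.mean() - control.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
with_feedback = rng.normal(0.87, 1.0, 200)     # simulated scores after feedback
without_feedback = rng.normal(0.0, 1.0, 200)   # simulated comparison condition
print(f"d = {cohens_d(with_feedback, without_feedback):.2f}")
```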
{"title":"Formative writing assessment for change – introduction to the special issue","authors":"G. B. Skar, Steve Graham, Gert Rijlaarsdam","doi":"10.1080/0969594X.2022.2089488","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2089488","url":null,"abstract":"This current special issue centres on formative writing assessment with children in the elementary grades. Participants in the investigations included in this special issue represent a span from the very youngest students just learning to write to students in fifth and sixth grades who generally have overcome the barriers of knowing how to encode writing, but who face increased demands for producing discursive, audience adapted texts. As editors, we limited papers in the special issue to include studies conducted with students in this grade span because it has been under-researched compared to other grade spans. That these grades have received less attention does not reflect on the importance of early writing instruction; becoming a skilled writer takes time, and the first writing instruction is essential. Becoming a good writer is the result of many complex interactions–including but not limited to–interactions between a writer’s attitude towards writing, her cognitive capacity, the kind of writing instruction she is exposed to, as well as the writer’s perception of textual norms in relation to the reader’s perception of the same norms, and thereby the reader’s textual expectations (Graham, 2018a; Rijlaarsdam et al., 2012; Skar & Aasen, 2021). To help children progress as writers, then, there is a need for tools that can elicit information about students’ writing skills in different domains (e.g. affective, cognitive, textual) and tools that help teachers transform that information into instruction. Such tools are often described as tools for formative assessment. Formative writing assessment has proven to be effective in increasing the writing skills of students. A review by (Graham, 2018b) reported positive effect sizes for text response (d = 0.36), adult feedback (d = 0.87), peer feedback (0.58), self-feedback (d = 0.62) and computerised feedback (d = 0.38). An earlier study by Graham et al. (2011) reported an effect size of d = 1.01 for feedback from adults or peers. So, formative writing assessment can work, and it can lead to positive change. But what is it? Graham (2018b, pp. 145–147) suggested the following definition of formative writing assessment: ‘instructional feedback in writing as information provided by another person, group of people, agency, machine, self, or experience that allows a writer, one learning to write, or a writing teacher/mentor to compare some aspect of performance to an expected, desired, or idealized performance’ and that ‘Formative feedback is derived from assessments that involve collecting information or evidence about student learning, interpreting it in terms of learners’ needs, and using it to alter what happens.’ In other words, formative writing assessment concerns taking actions based on information about a writer’s skills in order to make that writer even more skilled. One might therefore say that formative writing assessment – in the end – is all about consequences. 
ASSESSMENT IN EDUCATION: PRINCIPLES","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"59 1","pages":"121 - 126"},"PeriodicalIF":3.2,"publicationDate":"2022-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84341978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The early automated writing evaluation (eAWE) framework
Pub Date: 2022-03-04 | DOI: 10.1080/0969594X.2022.2037509 | Pages: 150–182
D. McNamara, Panayiota Kendeou
ABSTRACT We propose a framework designed to guide the development of automated writing practice and formative evaluation and feedback for young children (K–5th grade) – the early Automated Writing Evaluation (eAWE) Framework. eAWE is grounded in the fundamental assumption that eAWE is needed for young developing readers, but must incorporate advanced technologies inherent to AWE, speech recognition, and games. In line with interdisciplinary views on writing to support learners in the classroom, eAWE must support a community of learners and interlace reading and writing instructional activities, combined with feedback on the use of reading and writing strategies. The eAWE Framework provides a guide for the development of tools that leverage and integrate cutting-edge technologies, some of which have only recently become widely available in educational settings. These tools can provide usable and feasible means to offer high-quality automated writing practice and feedback to a large and diverse population of students.
{"title":"The early automated writing evaluation (eAWE) framework","authors":"D. McNamara, Panayiota Kendeou","doi":"10.1080/0969594X.2022.2037509","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2037509","url":null,"abstract":"ABSTRACT We propose a framework designed to guide the development of automated writing practice and formative evaluation and feedback for young children (K-5th grade) – the early Automated Writing Evaluation (early-AWE) Framework. e-AWE is grounded on the fundamental assumption that e-AWE is needed for young developing readers, but must incorporate advanced technologies inherent to AWE, speech recognition, and games. In line with interdisciplinary views on writing to support learners in the classroom, e-AWE must support a community of learners and interlace reading and writing instructional activities combined with feedback to use reading and writing strategies. The e-AWE Framework provides a guide for the development of tools that leverage and integrate cutting-edge technologies, some of which only recently have become widely available in educational settings. These tools can continue to provide usable and feasible means to offer high-quality automated writing practice and feedback to a diverse and large number of students.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"14 1","pages":"150 - 182"},"PeriodicalIF":3.2,"publicationDate":"2022-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76672344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Writing motivation questionnaire: validation and application as a formative assessment
Pub Date: 2022-03-04 | DOI: 10.1080/0969594X.2022.2080178 | Pages: 238–261
Steve Graham, Allen G. Harbaugh-Schattenkirk, A. Aitken, K. Harris, Clarence Ng, Amber B. Ray, John M. Wilson, Jeanne Wdowin
ABSTRACT This study evaluated the validity of a multi-dimensional measure of motives for writing. Based on an earlier instrument and theoretical conceptualisations of writing beliefs, we developed the Writing Motivation Questionnaire (WMQ). A sample of 2,186 fourth- (558 girls; 521 boys) and fifth-grade students (546 girls; 561 boys) completed 28 writing motivation items assessing seven motives for writing. Two of these motives addressed intrinsic reasons for writing (curiosity, involvement); three motives assessed extrinsic reasons (grades, competition, and social recognition); and two motives examined self-regulatory reasons (emotional regulation, relief from boredom). Confirmatory factor analyses supported the hypothesised structure of the WMQ, and each of the seven motives evidenced adequate reliability for research purposes. Measurement invariance was established for grades four and five students, girls and boys, White and non-White students, children receiving or not receiving free/reduced lunch, and students receiving or not receiving special education services. The WMQ predicted students’ writing performance.
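The abstract reports that each motive subscale “evidenced adequate reliability”. One conventional check behind such a claim is Cronbach’s alpha; the sketch below computes it for a simulated four-item subscale. The items and responses are invented for illustration and are not the WMQ data.

```python
# Cronbach's alpha for one simulated motive subscale (illustrative only).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of item scores."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(2)
latent_motive = rng.normal(size=(2186, 1))                     # one motive per respondent
items = latent_motive + rng.normal(scale=0.8, size=(2186, 4))  # four noisy indicators
print(f"alpha = {cronbach_alpha(items):.2f}")                  # ~0.86 with this noise level
```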
{"title":"Writing motivation questionnaire: validation and application as a formative assessment","authors":"Steve Graham, Allen G. Harbaugh-Schattenkirk, A. Aitken, K. Harris, Clarence Ng, Amber B. Ray, John M. Wilson, Jeanne Wdowin","doi":"10.1080/0969594X.2022.2080178","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2080178","url":null,"abstract":"ABSTRACT This study evaluated the validity of a multi-dimensional measure of motives for writing. Based on an earlier instrument and theoretical conceptualisations of writing beliefs, we developed the Writing Motivation Questionnaire (WMQ). A sample of 2,186 fourth- (558 girls; 521 boys) and fifth-grade students (546 girls; 561 boys) completed 28 writing motivation items assessing seven motives for writing. Two of these motives addressed intrinsic reasons for writing (curiosity, involvement); three motives assessed extrinsic reasons (grades, competition, and social recognition); and two motives examined self-regulatory reasons (emotional regulation, relief from boredom). Confirmatory factor analyses supported the hypothesised structure of the WMQ, and each of the seven motives evidenced adequate reliability for research purposes. Measurement invariance was established for grades four and five students, girls and boys, White and non-White students, children receiving or not receiving free/reduced lunch, and students receiving or not receiving special education services. The WMQ predicted students’ writing performance.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"73 1","pages":"238 - 261"},"PeriodicalIF":3.2,"publicationDate":"2022-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86015447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting first grade students’ writing proficiency
Pub Date: 2022-03-04 | DOI: 10.1080/0969594X.2022.2057424 | Pages: 219–237
G. B. Skar, Alan Huebner
ABSTRACT This study aimed to investigate the predictability of writing development and whether scores on a writing test in the first weeks of first grade accurately predict students’ placements into different proficiency groups. Participants were 832 first grade students in Norway. Writing proficiency was measured twice, at the start and at the end of first grade (time 1 and time 2, respectively). Multilevel linear regression analysis showed that writing proficiency measures at time 1 were significant predictors of writing proficiency at time 2. The results also showed that measures at time 1 could identify, with high precision, students at risk of not meeting expectations. However, the results also revealed a substantial proportion of false positives. The results are interpreted and discussed from a formative writing assessment perspective.
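To make the screening logic concrete, here is a deliberately simplified sketch: a single-level least-squares prediction of time-2 scores from time-1 scores, a flag for the predicted bottom group, and a count of false positives. The simulated data, the 20th-percentile cutoff and the single-level fit are assumptions for illustration, not the study’s multilevel model.

```python
# Simplified screening sketch (simulated data; not the study's multilevel analysis).
import numpy as np

rng = np.random.default_rng(3)
n = 832
time1 = rng.normal(0, 1, n)
time2 = 0.6 * time1 + rng.normal(0, 0.8, n)     # moderately predictable development

slope, intercept = np.polyfit(time1, time2, 1)  # least-squares fit of time2 on time1
predicted = intercept + slope * time1

# Flag students predicted to fall in the bottom 20%, then compare with outcomes
flagged = predicted < np.quantile(predicted, 0.2)
at_risk = time2 < np.quantile(time2, 0.2)

false_positives = (flagged & ~at_risk).sum()
print(f"flagged: {flagged.sum()}, false positives among them: {false_positives}")
```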
{"title":"Predicting first grade students’ writing proficiency","authors":"G. B. Skar, Alan Huebner","doi":"10.1080/0969594X.2022.2057424","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2057424","url":null,"abstract":"ABSTRACT This study aimed to investigate the predictability of writing development and if scores on a writing test in the first weeks of first grade accurately predict students’ placements into different proficiency groups. Participants were 832 first grade students in Norway. Writing proficiency was measured twice, at the start and at the end of first grade (time 1 and time 2, respectively). Multilevel linear regression analysis showed that writing proficiency measures at time 1 were significant predictors of writing proficiency at time 2. The results also showed that measures at time 1 could identify students running the risk of not meeting expectations with high precision. However, the results also revealed a substantial proportion of false positives. The results are interpreted and discussed from a formative writing assessment perspective.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"1 1","pages":"219 - 237"},"PeriodicalIF":3.2,"publicationDate":"2022-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81999243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Writing assessment for communities of writers: rubric validation to support formative assessment of writing in Pre-K to grade 2
Pub Date: 2022-03-03 | DOI: 10.1080/0969594X.2022.2047608 | Pages: 127–149
Eithne Kennedy, G. Shiel
ABSTRACT Formative assessment is an important driver in supporting children’s writing development. This paper describes a writing rubric designed for use by teachers to formatively assess the writing of children in Pre-K to Grade 2, how the rubric was received by teachers, and its implementation in classrooms. Writing samples from 337 children in 33 classrooms in 7 schools in the Write to Read literacy improvement project were scored on conventions, organisation, ideas, word choice and voice. Agreement among raters was good, with overall weighted Kappa values at each grade level ranging from .62 to .80. Confirmatory factor analysis supported three- and five-factor models. Coaches endorsed use of the rubric for providing formative feedback to students, identifying learning needs, and differentiating instruction. They highlighted how the rubric provides a framework through which teachers and students engage with the language of writing assessment and raise expectations about writing quality.
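For readers unfamiliar with the agreement statistic reported above, the snippet below computes a quadratically weighted Cohen’s kappa between two raters using scikit-learn. The 337 simulated ratings on a five-point trait are illustrative assumptions, not the Write to Read data.

```python
# Weighted Cohen's kappa between two simulated raters (illustrative only).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(4)
true_quality = rng.integers(1, 6, 337)  # 337 scripts scored 1-5 on one trait

# Each rater mostly agrees with the underlying quality, with occasional +/-1 drift
rater_a = np.clip(true_quality + rng.choice([-1, 0, 0, 0, 1], size=337), 1, 5)
rater_b = np.clip(true_quality + rng.choice([-1, 0, 0, 0, 1], size=337), 1, 5)

kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"quadratically weighted kappa = {kappa:.2f}")
```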
{"title":"Writing assessment for communities of writers: rubric validation to support formative assessment of writing in Pre-K to grade 2","authors":"Eithne Kennedy, G. Shiel","doi":"10.1080/0969594X.2022.2047608","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2047608","url":null,"abstract":"ABSTRACT Formative assessment is an important driver in supporting children’s writing development. This paper describes a writing rubric designed for use by teachers to formatively assess the writing of children in Pre-K to Grade 2, how the rubric was received by teachers, and its implementation in classrooms. Writing samples from 337 children in 33 classrooms in 7 schools in the Write to Read literacy improvement project were scored on conventions, organisation, ideas, word choice and voice. Agreement among raters was good as overall weighted Kappa values at each grade level ranged from .62 to .80. Confirmatory factor analysis supported three- and five-factor models. Coaches endorsed use of the rubric for providing formative feedback to students, identifying learning needs, and differentiating instruction. They highlighted how the rubric provides a framework through which teachers and students engage with the language of writing assessment and raise expectations about writing quality.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"3 1","pages":"127 - 149"},"PeriodicalIF":3.2,"publicationDate":"2022-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88570541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}