Pub Date: 2022-02-28 | DOI: 10.1080/0969594X.2022.2043240
M. Matta, Sterett H. Mercer, Milena A. Keller-Margulis
ABSTRACT Written expression curriculum-based measurement (WE-CBM) is a formative assessment approach for screening and progress monitoring. To extend evaluation of WE-CBM, we compared hand-calculated and automated scoring approaches in relation to the number of screening samples needed per student for valid scores, the long-term predictive validity and diagnostic accuracy of scores, and predictive and diagnostic bias for underrepresented student groups. Second- to fifth-grade students (n = 609) completed five WE-CBM tasks during one academic year and a standardised writing test in fourth and seventh grade. Averaging WE-CBM scores across multiple samples improved validity. Complex hand-calculated metrics and automated tools outperformed simpler metrics for the long-term prediction of writing performance. No evidence of bias was observed between African American and Hispanic students. The study illustrates that the absence of test bias is a necessary condition for fair and equitable screening procedures and underscores the importance of future research that includes comparisons with majority groups.
{"title":"Evaluating validity and bias for hand-calculated and automated written expression curriculum-based measurement scores","authors":"M. Matta, Sterett H. Mercer, Milena A. Keller-Margulis","doi":"10.1080/0969594X.2022.2043240","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2043240","url":null,"abstract":"ABSTRACT Written expression curriculum-based measurement (WE-CBM) is a formative assessment approach for screening and progress monitoring. To extend evaluation of WE-CBM, we compared hand-calculated and automated scoring approaches in relation to the number of screening samples needed per student for valid scores, the long-term predictive validity and diagnostic accuracy of scores, and predictive and diagnostic bias for underrepresented student groups. Second- to fifth-grade students (n = 609) completed five WE-CBM tasks during one academic year and a standardised writing test in fourth and seventh grade. Averaging WE-CBM scores across multiple samples improved validity. Complex hand-calculated metrics and automated tools outperformed simpler metrics for the long-term prediction of writing performance. No evidence of bias was observed between African American and Hispanic students. The study will illustrate the absence of test bias as necessary condition for fair and equitable screening procedures and the importance of future research to include comparisons with majority groups.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"1 1","pages":"200 - 218"},"PeriodicalIF":3.2,"publicationDate":"2022-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74796485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-01-23 | DOI: 10.1080/0969594X.2022.2025762
Joshua Wilson, Matthew C. Myers, Andrew Potter
ABSTRACT We investigated the promise of a novel approach to formative writing assessment at scale that involved an automated writing evaluation (AWE) system called MI Write. Specifically, we investigated elementary teachers’ perceptions and implementation of MI Write and changes in students’ writing performance in three genres from Fall to Spring associated with this implementation. Teachers in Grades 3–5 (n = 14) reported that MI Write was usable and acceptable, useful, and desirable; however, teachers tended to implement MI Write in a limited manner. Multilevel repeated measures analyses indicated that students in Grades 3–5 (n = 570) tended not to increase their performance from Fall to Spring except for third graders in all genres and fourth graders’ narrative writing. Findings illustrate the importance of educators utilising scalable formative assessments to evaluate and adjust core instruction.
{"title":"Investigating the promise of automated writing evaluation for supporting formative writing assessment at scale","authors":"Joshua Wilson, Matthew C. Myers, Andrew Potter","doi":"10.1080/0969594X.2022.2025762","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2025762","url":null,"abstract":"ABSTRACT We investigated the promise of a novel approach to formative writing assessment at scale that involved an automated writing evaluation (AWE) system called MI Write. Specifically, we investigated elementary teachers’ perceptions and implementation of MI Write and changes in students’ writing performance in three genres from Fall to Spring associated with this implementation. Teachers in Grades 3–5 (n = 14) reported that MI Write was usable and acceptable, useful, and desirable; however, teachers tended to implement MI Write in a limited manner. Multilevel repeated measures analyses indicated that students in Grades 3–5 (n = 570) tended not to increase their performance from Fall to Spring except for third graders in all genres and fourth graders’ narrative writing. Findings illustrate the importance of educators utilising scalable formative assessments to evaluate and adjust core instruction.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"82 1","pages":"183 - 199"},"PeriodicalIF":3.2,"publicationDate":"2022-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72999246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-01-02 | DOI: 10.1080/0969594X.2022.2068480
Therese N. Hopfenbeck
The year we left behind, 2021, was a year like no other, with the pandemic impacting students' learning and assessment globally. While some countries have decided to open up, others have adopted a stricter approach and still use lockdowns in areas of COVID-19 outbreaks. The impact of school lockdowns on students' learning, and the assessment of their achievements following the pandemic, will be the theme of a forthcoming Special Issue in this journal (O'Leary & Hayward, to be published). What we already know is that we are still in challenging times, where uncertainty is at the centre of our lives. Even worse, the uncertainty now includes dreadful scenes from the war in Ukraine, where children, instead of going to school and preparing for their exams this Spring, are fleeing their country in fear of bombs. Wars, whether they occur in Afghanistan, Ethiopia, Syria, or Yemen, are taking away the future of a generation of children who should be living together peacefully, connecting through music and art, preparing for adulthood, and developing the skills to solve problems we have not yet discovered.

In challenging times, with a global pandemic, wars and personal losses, it is worth reminding ourselves of the reasons so many of us committed our lives to education. In Europe, the President of the assessment organisation AEA-Europe, Dr Christina Wikström, made the following statement to members in March 2022: 'We place our trust in all our members, irrespective of nationality, to always stand up for peace, democracy, and equality for all. We must also work together in our scholarly profession since scholarship is necessary for the survival and prosperity of humanity' (Wikström, 2022). It is in this spirit that we publish the first 2022 issue of Assessment in Education, knowing that education matters, particularly in times of uncertainty and challenge. As an international journal, we hope that researchers will continue to connect, collaborate, and stand up for the values which enhance learning for all students, no matter which continent they live on. We owe it to them to offer aspirations for the future, and quality education for all can provide that hope.

In the first article in this regular issue, Steinmann et al. (this issue) present findings from a study investigating the reliability of questionnaire scales. Student self-report scales have been known for their limitations, with respect to both validity and reliability (Samuelstuen & Bråten, 2007; Samuelstuen et al., 2007), but the current study specifically investigates the use of mixed wording in items used in international large-scale studies. More specifically, students were asked to report their agreement with items such as 'I usually do well in mathematics' and 'I am just not good at mathematics', mixing positive and negative statements. Steinmann et al. used data from the international IEA studies PIRLS and TIMSS 2011 to further investigate how students responded to these scales, and demonst
{"title":"Assessment and learning in times of change and uncertainty","authors":"Therese N. Hopfenbeck","doi":"10.1080/0969594X.2022.2068480","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2068480","url":null,"abstract":"The year we left behind, 2021, was a year like no other, with the pandemic impacting students’ learning and assessment globally. While some countries have decided to open up, other countries have adopted a stricter approach and still use lockdown in areas of COVID outbreak. The impact of school lockdowns for students’ learning and the assessment of their achievements following pandemic will be the theme for a forthcoming Special Issue in this journal (O’Leary & Hayward to be published). What we already know, is that we are still in challenging times, where uncertainty is in the centre of our lives. Even worse, the uncertainty now includes dreadful scenes from the war in Ukraine, where children, instead of going to school and preparing for their exams this Spring, are fleeing their country in fear of bombs. Wars, whether they occur in Afghanistan, Ethiopia, Syria, or Yemen, are taking away the future of a generation of children who should be living together peacefully, connecting through music and art, preparing for adulthood, and developing the skills to solve problems we have not yet discovered. In challenging times, with a global pandemic, wars and personal losses, it is worth reminding ourselves of the reasons so many of us committed our lives to education. In Europe, the President of the assessment organisation AEA-Europe, Dr Christina Wikström, made the following statement to members in March 2022: ‘We place our trust in all our members, irrespective of nationality, to always stand up for peace, democracy, and equality for all. We must also work together in our scholarly profession since scholarship is necessary for the survival and prosperity of humanity’ (Wikström, 2022). It is in this spirit we publish the first 2022 issue of Assessment in Education – knowing that education matters, and particularly in times of uncertainty and challenges. As an international journal, we hope that researchers will continue to connect, collaborate, and stand up for the values which enhance learning for all students, no matter which continent they live on. We owe it to them to offer aspirations for the future, and quality education for all can provide that hope. In the first article in this regular issue, Steinmann et al. (this issue) presents findings from a study investigating the reliability of questionnaire scales. Student self-report scales have been known for their limitations, both with respect to validity and reliability (Samuelstuen & Bråten, 2007; Samuelstuen et al., 2007), but the current study specifically investigates the use of mixed wording in items used in international large-scale studies. More specifically, students were asked to report their agreement on items, such as ‘I usually do well in mathematics’ and ‘I am just not good at mathematics’, mixing positive and negative statements. Steinmann et al. 
used data from the international IEA studies PIRLS and TIMSS 2011, to further investigate how students responded to these scales, and demonst","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"51 1","pages":"1 - 4"},"PeriodicalIF":3.2,"publicationDate":"2022-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86513103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-01-02 | DOI: 10.1080/0969594X.2022.2053945
E. Papanastasiou, Agni Stylianou-Georgiou
ABSTRACT A frequently used indicator of student performance is the test score. However, although tests are designed to assess students' knowledge or skills, other factors, such as test-taking strategies, can also affect test results. Therefore, the purpose of this study was to model the interrelationships among test-taking strategy instruction (with a focus on metacognition), answer-changing bias, and performance on multiple-choice tests among college students through structural equation modelling. The study, based on a sample of 1,512 students from Greece, extends the findings of previous research by proposing a model that demonstrates the interplay between these variables.
{"title":"Should they change their answers or not? Modelling Achievement through a Metacognitive Lens","authors":"E. Papanastasiou, Agni Stylianou-Georgiou","doi":"10.1080/0969594X.2022.2053945","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2053945","url":null,"abstract":"ABSTRACT Α frequently used indicator to reflect student performance is that of a test score. However, although tests are designed to assess students’ knowledge or skills, other factors can also affect test results such as test-taking strategies. Therefore, the purpose of this study was to model the interrelationships among test-taking strategy instruction (with a focus on metacognition), answer changing bias and performance on multiple-choice tests among college students through structural equation modelling. This study, which was based on a sample of 1512 students from Greece, has managed to extend the findings of previous research by proposing a model that demonstrates the interplay between these variables.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"1 1","pages":"77 - 94"},"PeriodicalIF":3.2,"publicationDate":"2022-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82298116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-01-02 | DOI: 10.1080/0969594X.2022.2056576
Jessica D. Hoffmann, Rachel Baumsteiger, Jennifer Seibyl, E. Hills, Christina M. Bradley, Christina Cipriano, M. Brackett
ABSTRACT Practical educational assessment tools for adolescents champion student voice, use technology to enhance engagement, highlight discrepancies in school experience, and provide actionable feedback. We report a series of studies that detail an iterative process for developing a new school climate assessment tool: (1) item generation that centres student voice, (2) the design of a web-based app, (3) item revisions informed by student and educator feedback, and (4) the identification and confirmation of the underlying factor structure of the assessment tool. The resulting School Climate Walkthrough provides scores on nine dimensions of school climate and 73 additional observational items. The web-based application produces instantaneous reports that display systemic differences in how various student demographic groups experience school. This process can guide future research in building the next generation of educational assessments for adolescents and disrupting harmful or exclusionary school practices.
{"title":"Building useful, web-based educational assessment tools for students, with students: a demonstration with the school climate walkthrough","authors":"Jessica D. Hoffmann, Rachel Baumsteiger, Jennifer Seibyl, E. Hills, Christina M. Bradley, Christina Cipriano, M. Brackett","doi":"10.1080/0969594X.2022.2056576","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2056576","url":null,"abstract":"ABSTRACT Practical educational assessment tools for adolescents champion student voice, use technology to enhance engagement, highlight discrepancies in school experience, and provide actionable feedback. We report a series of studies that detail an iterative process for developing a new school climate assessment tool: (1) item generation that centres student voice, (2) the design of a web-based app, (3) item revisions informed by student and educator feedback, and (4) the identification and confirmation of the underlying factor structure of the assessment tool. The resulting School Climate Walkthrough provides scores on nine dimensions of school climate and 73 additional observational items. The web-based application produces instantaneous reports that display systemic differences in how various student demographic groups experience school. This process can guide future research in building the next generation of educational assessments for adolescents and disrupting harmful or exclusionary school practices.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"34 1","pages":"95 - 120"},"PeriodicalIF":3.2,"publicationDate":"2022-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82626616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-01-02 | DOI: 10.1080/0969594X.2022.2037508
Teresa M. Ober, Maxwell R. Hong, Matthew F. Carter, Alex S. Brodersen, Daniella A. Rebouças-Ju, Cheng Liu, Ying Cheng
ABSTRACT We examined whether students were accurate in predicting their test performance in both low-stakes and high-stakes testing contexts. The sample comprised U.S. high school students enrolled in an advanced placement (AP) statistics course during the 2017–2018 academic year (N = 209; mean age = 16.6 years). We found that even two months before taking the AP exam, a high-stakes summative assessment, students were moderately accurate in predicting their actual scores (weighted κ = .62). When the same variables were entered into models predicting inaccuracy and overconfidence bias, results did not provide evidence that age, gender, parental education, number of mathematics classes previously taken, or course engagement accounted for variation in accuracy. Overconfidence bias differed between students enrolled at different schools. Results indicated that students' predictions of performance were positively associated with performance in both low- and high-stakes testing contexts. The findings shed light on ways to leverage students' self-assessment for learning.
{"title":"Are high school students accurate in predicting their AP exam scores?: Examining inaccuracy and overconfidence of students’ predictions","authors":"Teresa M. Ober, Maxwell R. Hong, Matthew F. Carter, Alex S. Brodersen, Daniella A. Rebouças-Ju, Cheng Liu, Ying Cheng","doi":"10.1080/0969594X.2022.2037508","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2037508","url":null,"abstract":"ABSTRACT We examined whether students were accurate in predicting their test performance in both low-stakes and high-stakes testing contexts. The sample comprised U.S. high school students enrolled in an advanced placement (AP) statistics course during the 2017–2018 academic year (N = 209; Mage = 16.6 years). We found that even two months before taking the AP exam, a high stakes summative assessment, students were moderately accurate in predicting their actual scores (κweighted = .62). When the same variables were entered into models predicting inaccuracy and overconfidence bias, results did not provide evidence that age, gender, parental education, number of mathematics classes previously taken, or course engagement accounted for variation in accuracy. Overconfidence bias differed between students enrolled at different schools. Results indicated that students’ predictions of performance were positively associated with performance in both low- and high-stakes testing contexts. The findings shed light on ways to leverage students’ self-assessment for learning.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"124 1","pages":"27 - 50"},"PeriodicalIF":3.2,"publicationDate":"2022-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79491546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-01-02 | DOI: 10.1080/0969594X.2022.2053946
K. Brown, Kevin Woods
ABSTRACT In England, Wales and Northern Ireland, the General Certificate of Secondary Education (GCSE) has been the qualification by which students’ attainment at age sixteen has been measured for the last thirty years. Despite the longevity of GCSEs, relatively little research has explored the views and experiences of those undertaking them. Using a systematic literature review methodology and critical appraisal frameworks, the current study synthesises the literature in this area in order to elucidate young people’s views and experiences of GCSE study and assessment. Findings suggest that although there are positive aspects of GCSE study and assessment, for some young people, GCSE study, assessment and recent reforms appear to be relatively negative experiences, characterised by low levels of enjoyment and well-being and high levels of stress and test anxiety. Findings also suggest that agency, equality and fairness and relatedness are important factors in mediating young people’s experiences of GCSE.
{"title":"Thirty years of GCSE: A review of student views and experiences","authors":"K. Brown, Kevin Woods","doi":"10.1080/0969594X.2022.2053946","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2053946","url":null,"abstract":"ABSTRACT In England, Wales and Northern Ireland, the General Certificate of Secondary Education (GCSE) has been the qualification by which students’ attainment at age sixteen has been measured for the last thirty years. Despite the longevity of GCSEs, relatively little research has explored the views and experiences of those undertaking them. Using a systematic literature review methodology and critical appraisal frameworks, the current study synthesises the literature in this area in order to elucidate young people’s views and experiences of GCSE study and assessment. Findings suggest that although there are positive aspects of GCSE study and assessment, for some young people, GCSE study, assessment and recent reforms appear to be relatively negative experiences, characterised by low levels of enjoyment and well-being and high levels of stress and test anxiety. Findings also suggest that agency, equality and fairness and relatedness are important factors in mediating young people’s experiences of GCSE.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"348 1","pages":"51 - 76"},"PeriodicalIF":3.2,"publicationDate":"2022-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86798843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-11-22 | DOI: 10.1080/0969594X.2021.2005302
Isa Steinmann, Daniel Sánchez, Saskia van Laar, J. Braeken
ABSTRACT Questionnaire scales that are mixed-worded, i.e. include both positively and negatively worded items, often suffer from issues like low reliability and more complex latent structures than intended. Part of the problem might be that some responders fail to respond consistently to the mixed-worded items. We investigated the prevalence and impact of inconsistent responders in 37 primary education systems participating in the joint PIRLS/TIMSS 2011 assessment. Using the mean absolute difference method and three mixed-worded self-concept scales, we identified between 2%‒36% of students as inconsistent responders across education systems. Consistent with expectations, these students showed lower average achievement scores and had a higher risk of being identified as inconsistent on more than one scale. We also found that the inconsistent responders biased the estimated dimensionality and reliability of the scales. The impact on external validity measures was limited and unsystematic. We discuss implications for the use and development of questionnaire scales.
{"title":"The impact of inconsistent responders to mixed-worded scales on inferences in international large-scale assessments","authors":"Isa Steinmann, Daniel Sánchez, Saskia van Laar, J. Braeken","doi":"10.1080/0969594X.2021.2005302","DOIUrl":"https://doi.org/10.1080/0969594X.2021.2005302","url":null,"abstract":"ABSTRACT Questionnaire scales that are mixed-worded, i.e. include both positively and negatively worded items, often suffer from issues like low reliability and more complex latent structures than intended. Part of the problem might be that some responders fail to respond consistently to the mixed-worded items. We investigated the prevalence and impact of inconsistent responders in 37 primary education systems participating in the joint PIRLS/TIMSS 2011 assessment. Using the mean absolute difference method and three mixed-worded self-concept scales, we identified between 2%‒36% of students as inconsistent responders across education systems. Consistent with expectations, these students showed lower average achievement scores and had a higher risk of being identified as inconsistent on more than one scale. We also found that the inconsistent responders biased the estimated dimensionality and reliability of the scales. The impact on external validity measures was limited and unsystematic. We discuss implications for the use and development of questionnaire scales.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"16 1","pages":"5 - 26"},"PeriodicalIF":3.2,"publicationDate":"2021-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83182542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-11-02 | DOI: 10.1080/0969594X.2021.1999210
J. Fernández-Ruiz, E. Panadero, Daniel García-Pérez
ABSTRACT The academic discipline is a major factor in assessment design and implementation in higher education. Unfortunately, a clear understanding of how teachers from different disciplines approach assessment is still missing; such an understanding could lead to teacher training programmes that are better designed and more focussed. The present study compared assessment design and implementation in three programmes (sport science, mathematics, and medicine), each representing a discipline, from four Spanish universities. Using a mixed-methods approach, data from syllabi (N = 385) and semi-structured interviews with teachers (N = 19) were analysed. The results showed different approaches to assessment design and implementation in each programme along two main axes: summative or formative purposes of assessment, and content-based or authentic assessment. Implications for further research are discussed.
{"title":"Assessment from a disciplinary approach: design and implementation in three undergraduate programmes","authors":"J. Fernández-Ruiz, E. Panadero, Daniel García-Pérez","doi":"10.1080/0969594X.2021.1999210","DOIUrl":"https://doi.org/10.1080/0969594X.2021.1999210","url":null,"abstract":"ABSTRACT The role of the academic discipline is a major factor in the assessment design and implementation in higher education. Unfortunately, a clear understanding of how teachers from different disciplines approach assessment is still missing; this information can lead to teacher training programmes that are better designed and more focussed. The present study compared assessment design and implementation in three programmes (sport science, mathematics, and medicine) each representing a discipline from 4 Spanish universities. Using a mixed-methods approach, data from syllabi (N = 385) and semi-structured interviews with teachers (N = 19) were analysed. The results showed different approaches to assessment design and implementation in each programme in two main axes: summative or formative purposes of assessment, and content-based or authentic assessment. Implications for further research are discussed.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"18 4 1","pages":"703 - 723"},"PeriodicalIF":3.2,"publicationDate":"2021-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83603924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-31 | DOI: 10.1080/0969594X.2021.1988510
Zi Yan, Ronnel B. King, Joseph Y. Haw
ABSTRACT Both formative assessment and growth mindset scholars aim to understand how to enhance achievement. While research on formative assessment focuses on external teaching practices, work on growth mindset emphasises internal psychological processes. This study examined the interplay between three formative assessment strategies (i.e. sharing learning progressions, providing feedback, and instructional adjustments) and growth mindset in predicting reading achievement using the PISA 2018 data. We focused specifically on samples from the West (the United States, the United Kingdom, Ireland, Canada, Australia, and New Zealand) and the East (Mainland China, Hong Kong SAR, Macau SAR, Chinese Taipei, Japan, and Korea), comprising 109,204 15-year-old students. The results showed that formative assessment strategies were positively, albeit weakly, related to a growth mindset in the East, but not in the West. In contrast, growth mindset was positively related to reading achievement only in the West, not in the East. The impacts of different formative assessment strategies on reading achievement demonstrated cross-cultural variability, but the strongest positive predictor was instructional adjustments. These findings highlight the potential synergy between formative assessment and growth mindset in enhancing academic achievement, as well as the importance of cultural contexts in understanding their roles in student learning.
{"title":"Formative assessment, growth mindset, and achievement: examining their relations in the East and the West","authors":"Zi Yan, Ronnel B. King, Joseph Y. Haw","doi":"10.1080/0969594X.2021.1988510","DOIUrl":"https://doi.org/10.1080/0969594X.2021.1988510","url":null,"abstract":"ABSTRACT Both formative assessment and growth mindset scholars aim to understand how to enhance achievement. While research on formative assessment focuses on external teaching practices, work on growth mindset emphasises internal psychological processes. This study examined the interplay between three formative assessment strategies (i.e. sharing learning progressions, providing feedback, and instructional adjustments) and growth mindset in predicting reading achievement using the PISA2018 data. We focused specifically on samples from the West (the United States, the United Kingdom, Ireland, Canada, Australia, and New Zealand) and the East (Mainland China, Hong Kong SAR, Macau SAR, Chinese Taipei, Japan and Korea) which comprised of 109,204 15-year old students. The results showed that formative assessment strategies were positively, albeit weakly, related to a growth mindset in the East, but not in the West. In contrast, growth mindset was positively related to reading achievement only in the West, but not in the East. The impacts of different formative assessment strategies on reading achievement demonstrated cross-cultural variability, but the strongest positive predictor was instructional adjustments. These findings highlight the potential synergy between formative assessment and growth mindset in enhancing academic achievement as well as the importance of cultural contexts in understanding their roles in student learning.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"16 1","pages":"676 - 702"},"PeriodicalIF":3.2,"publicationDate":"2021-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90676693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}