Student assessment literacy: indicators and domains from the literature
Pub Date: 2022-07-04 | DOI: 10.1080/0969594X.2022.2121911
C. Hannigan, D. Alonzo, C. Z. Oo
ABSTRACT Students’ engagement in assessment is theoretically and empirically supported to increase their learning outcomes, but the assessment knowledge and skills they need to actively engage in assessment remain poorly understood. We used a scoping review method to inform the screening and selection of literature, with 98 articles included in the analysis. Analysis and synthesis of the articles resulted in 45 indicators of student assessment literacy, grouped into six domains: general knowledge of assessment, development of strategies to engage in assessment, active engagement in assessment, monitoring learning progress, engagement in reflective practice, and disposition in assessment. These domains were used to develop the framework for student assessment literacy. The results of the study highlight a new conceptualisation of student assessment literacy by providing a more comprehensive definition that clearly delineates the domains and identifies the individual indicators that comprise students’ assessment knowledge and skills, including the dispositions needed to effectively engage in assessment.
{"title":"Student assessment literacy: indicators and domains from the literature","authors":"C. Hannigan, D. Alonzo, C. Z. Oo","doi":"10.1080/0969594X.2022.2121911","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2121911","url":null,"abstract":"ABSTRACT Students’ engagement in assessment is theoretically and empirically supported to increase their learning outcomes, but the assessment knowledge and skills they need to actively engage in assessment remain poorly understood. We used a scoping review method to inform the screening and selection of literature, with 98 included in the analysis. Analysis and synthesis of the articles resulted in 45 indicators of student assessment literacy, grouped into six domains: general knowledge of assessment, development of strategies to engage in assessment, active engagement in assessment, monitoring learning progress, engagement in reflective practice; and disposition in assessment. These domains were used to develop the framework for student assessment literacy. The results of the study highlight a new conceptualisation of student assessment literacy by providing a more comprehensive definition that clearly defines the domains and identifies individual indicators that comprise students’ assessment knowledge and skills, including dispositions needed to effectively engage in assessment.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"87 1","pages":"482 - 504"},"PeriodicalIF":3.2,"publicationDate":"2022-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74153447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Global perspectives on the impact of situational factors on student testing
Pub Date: 2022-07-04 | DOI: 10.1080/0969594X.2022.2130242
Therese N. Hopfenbeck
The current regular issue contains several articles specifically looking into the testing situation for students across the globe, with empirical studies from New Zealand, Spain, Israel, and England. While some of the articles look at students’ specific situations and how they are accommodated to do their best in their testing situation, other articles look at how the different stakes of a test influence students’ effort and motivation.

In the first article in this issue, Zhao et al. (2022) examine students’ conceptions of tests and test-taking motivation using an experimental design with a sample of 479 students from senior secondary education in New Zealand. Students were assigned to one of three different test conditions (none, country or self) and responded to self-reports about their motivation and anxiety under the different conditions. It might not come as a surprise that students’ effort was notably lower when the country was at stake than when the self was at stake. The question raised by the authors is whether New Zealand’s rankings in international large-scale assessments are valid. These discussions are not new, and more empirical studies are needed to investigate how students’ motivation might vary across the globe and under different conditions, particularly as similar empirical studies in Scandinavia have demonstrated high effort from students during international tests (Eklöf & Hopfenbeck, 2019; Hopfenbeck & Kjærnsli, 2016).

The second article, by De La Fuente Fernández and Pascual (2022), investigates the current influence of university entrance exams on the teaching of chemistry in upper secondary education in Spain. A total of 447 chemistry teachers responded to a survey, demonstrating that the content taught in schools is closer to what is required to pass the university entrance exam than to what the teachers themselves believed was important for students to learn. In other words, a clear washback effect of these admission tests was found. In addition, the study found significant differences between regions with respect to the curriculum taught, findings which are discussed in relation to future directions and possibilities.

Saka et al. (2022) assessed differential prediction and differential validity in higher education admission policy for students with a variety of disabilities. More specifically, they investigated students who were either granted or denied test accommodations on the Israeli Psychometric Entrance Test (PET). The sample comprised 124,501 records of first-year students from six universities and more than 2000 academic departments. The results demonstrated that the accommodation policy was generally fair towards students with disabilities, but the authors also found that the failure of applicants to provide adequate documentation of their disability could result in technical rejection, which could in turn lead to the under-prediction of their academic performance.

A similar approach was taken by Rodeiro and Macinska (2022), who investigated the claim that students granted test accommodations are given an unfair advantage rather than a level playing field.
{"title":"Global perspectives on the impact of situational factors on student testing","authors":"Therese N. Hopfenbeck","doi":"10.1080/0969594X.2022.2130242","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2130242","url":null,"abstract":"The current regular issue contains several articles specifically looking into the testing situation for students across the globe, with empirical studies from New Zealand, Spain, Israel, and England. While some of the articles look at students ́ specific situation and how they are accommodated to do their best in their testing situation, other articles look at how the different stakes of a test influence students’ effort and motivation. In the first article in this issue, Zhao et al. (2022) examines students’ conceptions of tests and test-taking motivation using an experimental design including a sample of 497 students from senior secondary education in New Zealand. Students were assigned to one of three different test conditions (none, country or self) and responded to self-reports about their motivation and anxiety depending upon the different conditions. It might not come as a surprise that students ́ efforts were notably lower when the country was at stake versus the self-at-stake conditions. The question raised by the authors is whether New Zealand’s rankings in international large-scale assessments are valid. These discussions are not new, and more empirical studies are needed to investigate how students’ motivation might vary across the globe, and under different conditions, particularly as similar empirical studies in Scandinavia have demonstrated high effort from students during international tests (Eklöf & Hopfenbeck, 2019; Hopfenbeck & Kjærnsli, 2016). The second article by De La Fuente Fernández and Pascual (2022) investigates the current influence of university entrance exams on the teaching of chemistry in upper secondary education in Spain. A total of 447 chemistry teachers responded to a survey, demonstrating that the content taught in schools is closer to what is required to pass the university entrance exam, than to what the teachers themselves believed was important for students to learn. In other words, a clear washback effect of these admission tests was found. In addition, it found significant differences between regions with respect to curriculum taught, findings which are discussed in relation to future directions and possibilities. Saka et al. (2022) has assessed differential prediction and differential validity in higher education admission policy for students who have a variety of disabilities. More specifically, they investigated students who were either granted or denied test accommodations on the Israeli Psychometric Entrance Test (PET). The sample comprised 124,501 records of first year students from six universities and more than 2000 academic departments. The results demonstrated that the accommodation policy was generally fair towards students with disabilities, but the authors also found that the failure of applicants to provide adequate documentation of their disability could result in technical rejection, which could in turn lead to the under-prediction of their academic performance. 
A similar approach was tak","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"25 1","pages":"395 - 396"},"PeriodicalIF":3.2,"publicationDate":"2022-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88447386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

New Zealand students’ test-taking motivation: an experimental study examining the effects of stakes
Pub Date: 2022-07-04 | DOI: 10.1080/0969594X.2022.2101043
Anran Zhao, Gavin T. L. Brown, Kane Meissel
ABSTRACT Students’ test-taking motivation confounds test performance. This study examines students’ conceptions of tests and test-taking motivation when different test consequences are at play. In a between-subjects experimental design, a sample of 479 New Zealand senior secondary school students were randomly assigned to one of three vignette consequence conditions (i.e. none, country, or self). Students self-reported their conceptions of tests and test-taking motivation, which were analysed with confirmatory factor analysis, structural equation modelling, and latent mean analyses across test-consequence conditions. Students’ general belief about tests was a positive indicator of their perceived effort expenditure (β = .25). Both effort and anxiety increased significantly as the consequences increased, with medium-to-large effect sizes (ranging between 0.42 and 1.30). Effort and motivation were notably lower in the country-at-stake condition than in the self-at-stake condition, raising doubts about the validity of New Zealand’s rankings in international large-scale assessments.
{"title":"New Zealand students’ test-taking motivation: an experimental study examining the effects of stakes","authors":"Anran Zhao, Gavin T. L. Brown, Kane Meissel","doi":"10.1080/0969594X.2022.2101043","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2101043","url":null,"abstract":"ABSTRACT Students’ test-taking motivation confounds test performance. This study examines students’ conceptions of tests and test-taking motivation when different test consequences are at play. In a between-subjects experimental design, a sample of 479 New Zealand senior secondary school students were randomly assigned to one of the three vignette consequence conditions (i.e. none, country or self). Students self-reported conceptions of tests and test-taking motivation which were analysed with confirmatory factor analysis, structural equation modelling and latent mean analyses across test-consequence conditions. Students’ general belief about tests was a positive indicator of their perceived effort expenditure (β = .25). Both effort and anxiety increased significantly as consequence increased by medium-to-large effect size (ranges between 0.42 and 1.30). Effort and motivation were notably lower when the country was at stake versus the self at stake condition, raising doubts about the validity of New Zealand rankings in international large-scale assessments.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"18 1","pages":"397 - 421"},"PeriodicalIF":3.2,"publicationDate":"2022-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81489140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Washback of Spanish university entrance examination on chemistry teaching in upper secondary education
Pub Date: 2022-07-04 | DOI: 10.1080/0969594X.2022.2118664
Almudena de la Fuente Fernández, M. C. Calvo Pascual
ABSTRACT The washback effect is particularly relevant when it comes to high-stakes tests, such as university entrance exams. This article presents a study aimed at determining the current influence of these exams on the teaching of chemistry in upper secondary education. The study was based on a questionnaire distributed among chemistry teachers (447 participants) from across Spain. The results show that the chemistry content taught in upper secondary education is closer to what is requested in university entrance exams than to what teachers themselves believe to be important for students to learn. For the aspects examined, significant differences between regions were observed; in fact, relevant changes in the curriculum taught and in the teaching methodology were apparent in those regions that incorporated content in their tests distinct from the traditional approach. Based on the findings, some implications and suggestions are drawn for future university entrance exams.
{"title":"Washback of Spanish university entrance examination on chemistry teaching in upper secondary education","authors":"Almudena de la Fuente Fernández, M. C. Calvo Pascual","doi":"10.1080/0969594X.2022.2118664","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2118664","url":null,"abstract":"ABSTRACT The washback effect is particularly relevant when it comes to high-stakes tests, such as university entrance exams. This article presents a research study performed with the aim of determining the current influence of these exams on the teaching of chemistry in upper secondary education. The study was based on a questionnaire distributed among chemistry teachers ‒447 participants‒ from across Spain. The results show that the chemistry content taught in upper secondary education is closer to what is requested in university entrance exams than to what teachers themselves believe to be important for students to learn. For the aspects examined, significant differences between regions were observed; in fact, relevant changes in the curriculum taught and the teaching methodology were apparent in those regions that incorporated content in their tests which was distinct from the traditional approach. Based on the findings, some implications and suggestions are drawn for future university entrance exams.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"81 1","pages":"422 - 440"},"PeriodicalIF":3.2,"publicationDate":"2022-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88974100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Equal opportunity or unfair advantage? The impact of test accommodations on performance in high-stakes assessments
Pub Date: 2022-07-04 | DOI: 10.1080/0969594X.2022.2121680
Carmen Vidal Rodeiro, Sylwia Macinska
ABSTRACT There has been controversy around the practice of providing accommodations, with some suggesting that they may give an unfair advantage rather than level the playing field. If that were the case, the assessment results of students with accommodations could be inflated, leading to a detrimental effect on the assessment’s validity. This research investigated this claim by comparing the performance of students who completed high-stakes examinations with and without test accommodations. To account for group differences that could affect performance, students were matched on background characteristics. The results revealed that students with accommodations performed similarly to, or slightly worse than, students without accommodations, suggesting that, in most cases, the accommodations worked as intended and helped level the playing field.
{"title":"Equal opportunity or unfair advantage? The impact of test accommodations on performance in high-stakes assessments","authors":"Carmen Vidal Rodeiro, Sylwia Macinska","doi":"10.1080/0969594X.2022.2121680","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2121680","url":null,"abstract":"ABSTRACT There has been controversy around the practice of providing accommodations, with some suggesting that they may give an unfair advantage rather than level the playing field. If that were the case, the assessment results of students with accommodations could be inflated, leading to a detrimental effect on the assessment’s validity. This research investigated this claim by comparing the performance of students who completed high-stakes examinations with and without test accommodations. To account for group differences that could affect performance, students were matched on background characteristics. The results revealed that students with accommodations performed similarly to or slightly worse than students without accommodations, suggesting that, in most cases, the accommodations worked as intended and helped levelling the playing field.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"26 1","pages":"462 - 481"},"PeriodicalIF":3.2,"publicationDate":"2022-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91395975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Accountability and assessment in U.S. education: let’s not allow another crisis go to waste!
Pub Date: 2022-05-26 | DOI: 10.1080/0969594X.2022.2068503
H. Braun, Scott F. Marion
ABSTRACT State education systems in the U.S. experienced major disruptions due to the COVID-19 pandemic. Results from assessments administered during, and at the conclusion of, the 2020–21 school year indicate substantial ‘unfinished learning’, with the losses generally greater among disadvantaged and marginalized students. States’ assessment systems are strongly tilted toward meeting Federal accountability requirements, especially state-wide comparability of end-of-year test results, which severely limits innovation. The two-year pause in accountability due to the pandemic presents an opportunity to radically rethink school accountability, allowing states greater flexibility in developing creative interventions and more balanced assessment systems to better support student learning. A new trade-off between comparability and local flexibility is long overdue, especially given the poor record of the current system in promoting learning and closing achievement gaps. We offer some examples of how the trade-off can be accomplished and the potential benefits that would ensue.
{"title":"Accountability and assessment in U.S. education: let’s not allow another crisis go to waste!","authors":"H. Braun, Scott F. Marion","doi":"10.1080/0969594X.2022.2068503","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2068503","url":null,"abstract":"ABSTRACT State education systems in the U.S. experienced major disruptions due to the COVID-19 pandemic. Results from assessments administered during, and at the conclusion of, the 2020-21 school year indicate substantial ‘unfinished learning’, with the losses generally greater among disadvantaged and marginalized students. States’ assessment systems are strongly tilted toward meeting Federal accountability requirements, especially state-wide comparability of end-of year test results, which severely limits innovation. The two-year pause in accountability due to the pandemic presents an opportunity to radically rethink school accountability, allowing states greater flexibility in developing creative interventions and more balanced assessment systems to better support student learning. A new trade-off between comparability and local flexibility is long overdue, especially given the poor record of the current system in promoting learning and closing achievement gaps. We offer some examples of how the trade-off can be accomplished and the potential benefits that would ensue.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"19 1","pages":"555 - 574"},"PeriodicalIF":3.2,"publicationDate":"2022-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84439662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Assessing the role of students’ emotions on achievement
Pub Date: 2022-05-04 | DOI: 10.1080/0969594X.2022.2110667
Therese N. Hopfenbeck
In this regular issue, Street et al. (2022) present a study exploring the effect of students’ perceived task difficulty on the mathematics self-efficacy–performance relationship. The study adds important knowledge because it empirically investigates students’ self-efficacy for tasks of different difficulty levels (easy, medium and hard) alongside performance on a national mathematics test. The longitudinal study of 95 Norwegian students in grades 8 and 9 demonstrated differential relationships between self-efficacy for different levels of task difficulty and national test performance. Further, the study found that grade 8 national test performance predicted grade 9 self-efficacy for medium and hard tasks but not for easy tasks. The authors emphasise the importance of supporting students’ engagement with challenging tasks to strengthen both their performance and self-efficacy.

The second article in this regular issue also investigates students’ perceptions of emotions, both positive and negative, and how they are linked to a range of outcomes (Jerrim, 2022). More specifically, the author uses data from the Programme for International Student Assessment (PISA) in England and links the dataset to the National Pupil Database (NPD) to investigate students’ positive affect, negative affect and fear of failure. One of the results reported is that low levels of positive affect, such as rarely feeling happy, lively or cheerful, are associated with a 0.10–0.15 standard deviation reduction in young people’s examination grades. Although little evidence is found for a link between negative affect or fear of failure and examination performance, the article raises some overall concerns: students in England reported lower overall levels of life satisfaction than their peers in almost all other developed countries (OECD, 2019). Given the long-term impact of such emotions on students’ mental health, the author emphasises that well-being, mental health and young people’s overall emotional state have become a major political issue in England. The author argues, though, that policy and practice should focus upon these issues independently of their impact on results in high-stakes examinations or ILSA studies, simply because student well-being is an important concern in its own right.

In the third article, the authors investigated item variance in the PISA 2018 cognitive domains of reading, mathematics and science literacy (Marcq & Braeken, 2022). As the authors point out, International Large-Scale Assessment studies such as PISA mainly report average country achievement scores, while items are overlooked and rarely studied. Of particular interest is their key finding indicating that, ‘across domains and countries, it mattered more for the correctness of an item response which items were responded to by a pupil (27–35%) than which pupil responded to these items (10–12%) or which school the pupil attended (5–7%)’. The present article
{"title":"Assessing the role of students’ emotions on achievement","authors":"Therese N. Hopfenbeck","doi":"10.1080/0969594X.2022.2110667","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2110667","url":null,"abstract":"In this regular issue, Street et al. (2022) present a study where they have explored the effect of students’ perceived task difficulty on the mathematics self-efficacy – performance relationship. The study adds important knowledge as it investigates students’ self-efficacy of different levels of task difficulty empirically, such as easy, medium difficulty and hard tasks and performances on a national mathematics test. The longitudinal study of 95 Norwegian students, in grade 8 and 9, demonstrated differential relationships between self-efficacy for different levels of task difficulty and national test performance. Further, the study found that grade 8 national test performances predicted grade 9 self-efficacy for medium and hard tasks but not for easy tasks. The authors emphasise the importance of supporting students’ engagement with challenging tasks to strengthen both their performance and self-efficacy. The second article in this regular issue also investigates students’ perceptions of emotions, both positive and negative and how they are linked to a range of outcomes (Jerrim, 2022). More specifically, the author uses data from the Programme for International Student Assessment (PISA) in England and links the dataset to the National Pupil Database (NPD) to investigate students’ positive affect, negative affect and their fear of failure. One of the results reported is that low levels of positive affect, such as rarely feeling happy, lively or cheerful, is associated with a 0.10–0.15 standard deviation reduction in young people’s examination grades. Although little evidence is found for a link between negative affect or fear of failure and examination performance, the article raises some overall concerns: Students in England reported lower overall levels of life-satisfaction than their peers in almost all other developed countries (OECD, 2019). Knowing the impact of such emotions on students’ mental health long term, the author emphasises that well-being, mental health and young people’s overall emotional state have become a major political issue in England. The author argues, though, that policy and practice should focus upon these issues independently of its impact on results in high-stakes examinations or ILSA studies, but simply because students wellbeing is an important concern in its own right. In the third article, the authors investigated the item variance in PISA 2018 cognitive domains of reading, mathematics and science literacy (Marcq & Braeken, 2022). As the authors point out, International Large-Scale Assessment studies, such as PISA, mainly report average country achievement scores, while items are overlooked and rarely studied. Of particular interest is their key finding indicating ‘across domains and countries, it mattered more for the correctness of an item response which items were responded to by a pupil (27–35%) than which pupil responded to these items (10–12%) or which school the pupil attended (5–7%)’. 
The present article ","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"32 1","pages":"285 - 287"},"PeriodicalIF":3.2,"publicationDate":"2022-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86748542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Differential relationships between mathematics self-efficacy and national test performance according to perceived task difficulty
Pub Date: 2022-05-04 | DOI: 10.1080/0969594X.2022.2095980
K. S. Street, G. Stylianides, L. Malmberg
ABSTRACT We explore the effect of students’ perceived task difficulty on the mathematics self-efficacy – performance relationship. Specifically, we expand on previous reciprocal effects studies through including students’ self-efficacy for different levels of task difficulty in an empirical investigation. We examined students’ self-efficacy for easy, medium difficulty, and hard tasks and performance on a national mathematics test in a longitudinal study of 95 Norwegian students from grade 8 to grade 9. We found differential relationships between self-efficacy for different levels of task difficulty and national test performance. In support of the ‘skill development’ model, grade 8 national test performance predicted grade 9 self-efficacy for medium and hard, but not easy, tasks. While mastery experiences are likely to arise more easily on easier tasks, such experiences are likely to matter more on harder tasks. Our findings highlight the importance of supporting students’ engagement with challenging tasks to strengthen both their performance and self-efficacy.
{"title":"Differential relationships between mathematics self-efficacy and national test performance according to perceived task difficulty","authors":"K. S. Street, G. Stylianides, L. Malmberg","doi":"10.1080/0969594X.2022.2095980","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2095980","url":null,"abstract":"ABSTRACT We explore the effect of students’ perceived task difficulty on the mathematics self-efficacy – performance relationship. Specifically, we expand on previous reciprocal effects studies through including students’ self-efficacy for different levels of task difficulty in an empirical investigation. We examined students’ self-efficacy for easy, medium difficulty, and hard tasks and performance on a national mathematics test in a longitudinal study of 95 Norwegian students from grade 8 to grade 9. We found differential relationships between self-efficacy for different levels of task difficulty and national test performance. In support of the ‘skill development’ model, grade 8 national test performance predicted grade 9 self-efficacy for medium and hard, but not easy, tasks. While mastery experiences are likely to arise more easily on easier tasks, such experiences are likely to matter more on harder tasks. Our findings highlight the importance of supporting students’ engagement with challenging tasks to strengthen both their performance and self-efficacy.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"17 1","pages":"288 - 309"},"PeriodicalIF":3.2,"publicationDate":"2022-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79359887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

The blind side: Exploring item variance in PISA 2018 cognitive domains
Pub Date: 2022-05-04 | DOI: 10.1080/0969594X.2022.2097199
Kseniia Marcq, J. Braeken
ABSTRACT Communication of International Large-Scale Assessment (ILSA) results is dominated by reporting average country achievement scores that conceal individual differences between pupils, schools, and items. Educational research primarily focuses on examining differences between pupils and schools, while differences between items are overlooked. Using a variance components model on the Programme for International Student Assessment (PISA) 2018 cognitive domains of reading, mathematics, and science literacy, we estimated how much of the response variation can be attributed to differences between pupils, schools, and items. The results show that uniformly across domains and countries, it mattered more for the correctness of an item response which items were responded to by a pupil (27–35%) than which pupil responded to these items (10–12%) or which school the pupil attended (5–7%). Given the findings, we argue that differences between items in ILSAs constitute a source of substantial untapped potential for secondary research.
{"title":"The blind side: Exploring item variance in PISA 2018 cognitive domains","authors":"Kseniia Marcq, J. Braeken","doi":"10.1080/0969594X.2022.2097199","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2097199","url":null,"abstract":"ABSTRACT Communication of International Large-Scale Assessment (ILSA) results is dominated by reporting average country achievement scores that conceal individual differences between pupils, schools, and items. Educational research primarily focuses on examining differences between pupils and schools, while differences between items are overlooked. Using a variance components model on the Programme for International Student Assessment (PISA) 2018 cognitive domains of reading, mathematics, and science literacy, we estimated how much of the response variation can be attributed to differences between pupils, schools, and items. The results show that uniformly across domains and countries, it mattered more for the correctness of an item response which items were responded to by a pupil (27–35%) than which pupil responded to these items (10–12%) or which school the pupil attended (5–7%). Given the findings, we argue that differences between items in ILSAs constitute a source of substantial untapped potential for secondary research.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"1 1","pages":"332 - 360"},"PeriodicalIF":3.2,"publicationDate":"2022-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73960715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

The Education and Assessment System in Lithuania
Pub Date: 2022-05-04 | DOI: 10.1080/0969594X.2022.2103516
I. Raudienė, Lina Kaminskienė, Liying Cheng
ABSTRACT This article presents a historical and contemporary account of Lithuania’s national public education assessment system and its transformation since the country’s declaration of independence from the Soviet Union in 1990. We explore how the external examination system has developed in relation to ongoing curriculum reforms over the last 30 years, and how external examinations and standardised testing have taken priority over classroom assessment throughout this period. What becomes clear is that certain political decisions, guided by increased accountability demands, have significantly impacted classroom assessment practices, school cultures, and the mindsets of stakeholders about the role and function of assessment in Lithuania. Finally, we deploy our national and international expertise to recommend some changes to the current education system to make assessment an effective tool to improve student learning.
{"title":"The Education and Assessment System in Lithuania","authors":"I. Raudienė, Lina Kaminskienė, Liying Cheng","doi":"10.1080/0969594X.2022.2103516","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2103516","url":null,"abstract":"ABSTRACT This article presents a historical and contemporary account of Lithuania’s national public education assessment system and its transformation since the country’s declaration of independence from the Soviet Union in 1990. We explore how the external examination system has developed in relation to ongoing curriculum reforms over the last 30 years, and how external examinations and standardised testing have taken priority over classroom assessment throughout this period. What becomes clear is that certain political decisions, guided by increased accountability demands, have significantly impacted classroom assessment practices, school cultures, and the mindsets of stakeholders about the role and function of assessment in Lithuania. Finally, we deploy our national and international expertise to recommend some changes to the current education system to make assessment an effective tool to improve student learning.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"38 1","pages":"383 - 394"},"PeriodicalIF":3.2,"publicationDate":"2022-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87251817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}