The impact of students’ test-taking effort on growth estimates in low-stakes educational assessments
Pub Date: 2020-11-16 | DOI: 10.1080/13803611.2021.1977152
S. Yildirim-Erbasli, O. Bulut
ABSTRACT This study investigated the impact of students’ test-taking effort on their growth estimates in reading. The sample consisted of 7,602 students (Grades 1 to 4) in the United States who participated in the fall and spring administrations of a computer-based reading assessment. First, a new response dataset was created by flagging both rapid-guessing and slow-responding behaviours and recoding these non-effortful responses as missing. Second, students’ academic growth (i.e., daily increase in ability levels) from fall to spring was calculated twice: once from the original responses and once from the new dataset that excluded non-effortful responses. The results indicated that students’ growth estimates changed significantly after recoding non-effortful responses as missing, and the difference in the growth estimates varied by grade level. Overall, students’ test-taking effort appeared to be influential in the estimation of reading growth. Implications for practice are discussed.
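As an illustration of the recoding step described in this abstract, here is a minimal Python sketch assuming hypothetical response-time thresholds and column names (the study derives its own flagging rules, which are not reproduced here): responses faster than a rapid-guessing cut-off or slower than a slow-responding cut-off are set to missing before ability estimation, and daily growth is the fall-to-spring ability difference divided by the days between administrations.

```python
import numpy as np
import pandas as pd

# Hypothetical response-time thresholds (seconds); illustrative only.
RAPID_GUESS_MAX = 3.0      # responses faster than this -> rapid guessing
SLOW_RESPONSE_MIN = 120.0  # responses slower than this -> slow responding

def recode_noneffortful(responses: pd.DataFrame) -> pd.DataFrame:
    """Recode non-effortful responses as missing before ability estimation.

    Expects columns: student_id, item_id, score, rt_seconds.
    """
    out = responses.copy()
    out["score"] = out["score"].astype("float64")  # allow missing values
    noneffort = (out["rt_seconds"] < RAPID_GUESS_MAX) | (
        out["rt_seconds"] > SLOW_RESPONSE_MIN
    )
    out.loc[noneffort, "score"] = np.nan
    return out

def daily_growth(theta_fall: float, theta_spring: float, days: int) -> float:
    """Daily increase in ability from the fall to the spring administration."""
    return (theta_spring - theta_fall) / days

# Example: a student gaining 0.9 logits over 180 days grows 0.005 logits/day.
print(daily_growth(theta_fall=-0.2, theta_spring=0.7, days=180))
```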
{"title":"The impact of students’ test-taking effort on growth estimates in low-stakes educational assessments","authors":"S. Yildirim-Erbasli, O. Bulut","doi":"10.1080/13803611.2021.1977152","DOIUrl":"https://doi.org/10.1080/13803611.2021.1977152","url":null,"abstract":"ABSTRACT This study investigated the impact of students’ test-taking effort on their growth estimates in reading. The sample consisted of 7,602 students (Grades 1 to 4) in the United States who participated in the fall and spring administrations of a computer-based reading assessment. First, a new response dataset was created by flagging both rapid-guessing and slow-responding behaviours and recoding these non-effortful responses as missing. Second, students’ academic growth (i.e., daily increase in ability levels) from fall to spring was calculated based on their original responses and responses in the new dataset excluding non-effortful responses. The results indicated that students’ growth estimates changed significantly after recoding non-effortful responses as missing. Also, the difference in the growth estimates varied depending on the grade level. Overall, students’ test-taking effort appeared to be influential in the estimation of students’ reading growth. Implications for practice were discussed.","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2020-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43302422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Does test-taking motivation predict test results in a high-stakes testing context?
Pub Date: 2020-11-16 | DOI: 10.1080/13803611.2021.1949355
G. Silm, O. Must, Karin Täht, M. Pedaste
ABSTRACT Test-taking motivation (TTM) has been associated with test performance in low-stakes testing contexts. However, there have been few studies of TTM in high-stakes testing contexts, and those studies have yielded contradictory results. Our aim was to explore the relationship between test-taking effort and test performance in a real-life high-stakes testing context (n = 1,515). We collected time-based and self-reported data about test-taking effort and used a structural equation model (SEM) to predict test performance. The motivational indicators added about 15% of predictive power to the SEM, in which gender and previous performance were controlled for. Altogether, the SEM predicted 69% of the variance in test results. We compared the findings to previous studies and concluded that the possible effect of TTM should be considered in various testing contexts, whether low-stakes or high-stakes.
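A minimal sketch of the kind of SEM described here, using the third-party semopy package on synthetic data; the variable names, indicators, and effect sizes below are illustrative assumptions, not the study’s actual model specification.

```python
import numpy as np
import pandas as pd
from semopy import Model  # third-party SEM package; pip install semopy

rng = np.random.default_rng(0)
n = 1515  # matches the study's sample size

# Synthetic stand-ins for the study's variables (names are illustrative).
effort = rng.normal(size=n)
data = pd.DataFrame({
    "time_effort": effort + rng.normal(scale=0.5, size=n),
    "self_report_effort": effort + rng.normal(scale=0.5, size=n),
    "gender": rng.integers(0, 2, size=n),
    "prior_performance": rng.normal(size=n),
})
data["performance"] = (
    0.5 * effort + 0.3 * data["prior_performance"]
    + 0.1 * data["gender"] + rng.normal(scale=0.5, size=n)
)

# Latent effort measured by two indicators; performance regressed on
# effort with gender and prior performance as controls.
desc = """
effort_latent =~ time_effort + self_report_effort
performance ~ effort_latent + gender + prior_performance
"""
model = Model(desc)
model.fit(data)
print(model.inspect())  # parameter estimates, incl. the effort path
```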
{"title":"Does test-taking motivation predict test results in a high-stakes testing context?","authors":"G. Silm, O. Must, Karin Täht, M. Pedaste","doi":"10.1080/13803611.2021.1949355","DOIUrl":"https://doi.org/10.1080/13803611.2021.1949355","url":null,"abstract":"ABSTRACT Test-taking motivation (TTM) has been associated with test performance in low-stakes testing contexts. However, there have been few studies about TTM in high-stakes testing contexts, and these have contradictory results. Our aim was to explore the relationship between test-taking effort and test performance in a real-life high-stakes testing context (n = 1,515). We collected time-based and self-reported data about test-taking effort and used a structural equation model (SEM) to predict test performance. We found that the motivational indicators added about 15% of predictive power to the SEM model, where gender and previous performance had been controlled for. Altogether, the SEM model predicted 69% of the variance in test results. We compared the findings to previous studies and concluded that the possible effect of TTM should be considered in various testing contexts, whether low-stakes or high-stakes.","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2020-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/13803611.2021.1949355","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45569696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Differential rapid responding across language and cultural groups
Pub Date: 2020-08-17 | DOI: 10.1080/13803611.2021.1963941
Hongwen Guo, Kadriye Ercikan
ABSTRACT Rapid response behaviour, a type of test disengagement, cannot be interpreted as a true indicator of the targeted constructs and may compromise score accuracy as well as the validity of score interpretations. Rapid responding may arise from multiple factors across diverse populations. In this study, using Programme for International Student Assessment (PISA) 2018 Science data, we examined the comparability of rapid response behaviours across nine language and cultural groups. Statistical methods were developed to flag rapid responses on different item types and to test for differential rapid-responding rates between groups. Results showed that rapid response rates, and their association with performance, differed among the studied groups. However, regardless of students’ group membership, a rapid response led to a chance-level correct rate and had a negative impact on performance. The results also indicated that differential test engagement had limited impact on the performance rankings of the studied groups.
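As a sketch of testing for differential rapid-responding rates between two groups, a two-proportion z-test could look like the following; the counts are invented for illustration, and the paper’s actual statistical methods may differ.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: flagged rapid responses and total responses per group.
rapid = [420, 310]       # rapid responses in groups A and B
totals = [12000, 11500]  # total item responses in groups A and B

# Two-sided test of equal rapid-responding rates across the two groups.
stat, pvalue = proportions_ztest(count=rapid, nobs=totals)
print(f"z = {stat:.2f}, p = {pvalue:.4f}")
```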
{"title":"Differential rapid responding across language and cultural groups","authors":"Hongwen Guo, Kadriye Ercikan","doi":"10.1080/13803611.2021.1963941","DOIUrl":"https://doi.org/10.1080/13803611.2021.1963941","url":null,"abstract":"ABSTRACT Rapid response behaviour, a type of test disengagement, cannot be interpreted as a true indicator of the targeted constructs and may compromise score accuracy as well as score validity for interpretation. Rapid responding may be due to multiple factors for diverse populations. In this study, using Programme for International Student Assessment (PISA) 2018 Science data, we examined the comparability of rapid response behaviours for nine different language and cultural groups. Statistical methods were developed to flag rapid responses on different item types and to test differential rapid responding rates between groups. Results showed that rapid response rates and their association with performance were different among the studied groups. However, regardless of students’ group membership, a rapid response led to a chance correct rate, and it had a negative impact on performance. The results also indicated limited impact of different test engagement on performance ranking for the studied groups.","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41439984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Six insights regarding test-taking disengagement
Pub Date: 2020-08-17 | DOI: 10.1080/13803611.2021.1963942
S. Wise
ABSTRACT There has been increasing concern about the presence of disengaged test taking in international assessment programmes and its implications for the validity of inferences made regarding a country’s level of educational attainment. This issue has received growing research interest over the past 20 years, with notable advances both in the measurement of disengagement and in our understanding of its distortive impact on individual and aggregated scores. In this paper, I discuss six important insights yielded by this research and their implications for assessment programmes.
{"title":"Six insights regarding test-taking disengagement","authors":"S. Wise","doi":"10.1080/13803611.2021.1963942","DOIUrl":"https://doi.org/10.1080/13803611.2021.1963942","url":null,"abstract":"ABSTRACT There has been increasing concern about the presence of disengaged test taking in international assessment programmes and its implications for the validity of inferences made regarding a country’s level of educational attainment. This issue has received a growing research interest over the past 20 years, with notable advances in both the measurement of disengagement as well as our understanding of its distortive impact on both individual and aggregated scores. In this paper, I discuss six important insights yielded by this research and their implications for assessment programmes.","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46329911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cross-cultural study of test effort in PISA
Pub Date: 2020-08-17 | DOI: 10.1080/13803611.2021.1963943
Gavin T. L. Brown, Kane Meissel
{"title":"Cross-cultural study of test effort in PISA","authors":"Gavin T. L. Brown, Kane Meissel","doi":"10.1080/13803611.2021.1963943","DOIUrl":"https://doi.org/10.1080/13803611.2021.1963943","url":null,"abstract":"","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44595437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Within-item response processes as indicators of test-taking effort and motivation
Pub Date: 2020-08-17 | DOI: 10.1080/13803611.2021.1963940
Erik Lundgren, Hanna Eklöf
ABSTRACT The present study used process data from a computer-based problem-solving task as behavioural indicators of test-taking effort, and explored how behavioural item-level effort related to overall test performance and self-reported effort. Variables were extracted from raw process data and clustered. Four distinct clusters were obtained, characterised as high effort, medium effort, low effort, and planner. Regression modelling indicated that among students who failed to solve the task, the level of effort invested before giving up positively predicted overall test performance. Among students who solved the task, level of effort was instead weakly negatively related to test performance. A low level of behavioural effort before giving up the task was also related to lower self-reported effort. The results suggest that effort invested before giving up provides information about test-takers’ motivation to spend effort on the test. We conclude that process data could augment existing methods of assessing test-taking effort.
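A minimal sketch of the extract-and-cluster step on synthetic process-data features, using k-means with four clusters to mirror the four reported profiles; the feature set and clustering algorithm here are assumptions, not the study’s actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical per-student features extracted from item-level process data;
# the study's actual feature set is richer than this.
n_students = 500
features = np.column_stack([
    rng.gamma(2.0, 30.0, n_students),  # time on task (seconds)
    rng.poisson(15, n_students),       # number of interactions with the item
    rng.uniform(0, 1, n_students),     # share of time before the first action
])

# Standardise, then partition into four behavioural profiles, mirroring the
# four clusters reported (high/medium/low effort, planner).
X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))  # number of students per behavioural profile
```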
{"title":"Within-item response processes as indicators of test-taking effort and motivation","authors":"Erik Lundgren, Hanna Eklöf","doi":"10.1080/13803611.2021.1963940","DOIUrl":"https://doi.org/10.1080/13803611.2021.1963940","url":null,"abstract":"ABSTRACT The present study used process data from a computer-based problem-solving task as indications of behavioural level of test-taking effort, and explored how behavioural item-level effort related to overall test performance and self-reported effort. Variables were extracted from raw process data and clustered. Four distinct clusters were obtained and characterised as high effort, medium effort, low effort, and planner. Regression modelling indicated that among students that failed to solve the task, level of effort invested before giving up positively predicted overall test performance. Among students that solved the task, level of effort was instead weakly negatively related to test performance. A low level of behavioural effort before giving up the task was also related to lower self-reported effort. Results suggest that effort invested before giving up provides information about test-takers’ motivation to spend effort on the test. We conclude that process data could augment existing methods of assessing test-taking effort.","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45210217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How does the number of actions on constructed-response items relate to test-taking effort and performance?
Pub Date: 2020-08-17 | DOI: 10.1080/13803611.2021.1963939
M. Ivanova, M. Michaelides, Hanna Eklöf
ABSTRACT Collecting process data in computer-based assessments provides opportunities to describe examinee behaviour during a test-taking session. In this context, the number of actions students take while interacting with an item is a variable that has been gaining attention. The present study investigates how the number of actions performed on constructed-response items relates to self-reported effort, performance, and item cluster position in the test. The theory of planned behaviour was used as an interpretative framework. Data from two item clusters of the 2015 Swedish Programme for International Student Assessment (PISA) Science administration were utilised. Results showed that the number of actions was significantly related to performance on the items, self-reported test-taking effort, and cluster position. Latent variable models were examined separately for performance-level groups. Overall, the number of actions performed on constructed-response items may be a useful behavioural indicator for gauging test-taking engagement.
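As a small illustration of the number-of-actions indicator, the following sketch counts logged actions per student per item from hypothetical log records; the event names and record structure are invented for illustration and do not reproduce the PISA log format.

```python
import pandas as pd

# Hypothetical log records: one row per logged action on an item.
log = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2, 3],
    "item_id": ["S1", "S1", "S2", "S1", "S2", "S1"],
    "event": ["click", "keypress", "keypress", "click", "click", "click"],
})

# Number of actions per student per constructed-response item.
n_actions = (
    log.groupby(["student_id", "item_id"])
       .size()
       .rename("n_actions")
       .reset_index()
)
print(n_actions)
```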
{"title":"How does the number of actions on constructed-response items relate to test-taking effort and performance?","authors":"M. Ivanova, M. Michaelides, Hanna Eklöf","doi":"10.1080/13803611.2021.1963939","DOIUrl":"https://doi.org/10.1080/13803611.2021.1963939","url":null,"abstract":"ABSTRACT Collecting process data in computer-based assessments provides opportunities to describe examinee behaviour during a test-taking session. The number of actions taken by students while interacting with an item is in this context a variable that has been gaining attention. The present study aims to investigate how the number of actions performed on constructed-response items relates to self-reported effort, performance, and item cluster position in the test. The theory of planned behaviour was used as an interpretative framework. Data from two item clusters of the 2015 Swedish Programme for International Student Assessment (PISA) Science administration were utilised. Results showed that the number of actions was significantly related to performance on the items, self-reported test-taking effort, and cluster position. Latent variable models were examined separately for performance-level groups. Overall, the number of actions performed on constructed-response items as a behavioural indicator in testing situations may be useful in gauging test-taking engagement.","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46284404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Manipulating the consequences of tests: how Shanghai teens react to different consequences
Pub Date: 2020-08-17 | DOI: 10.1080/13803611.2021.1963938
Anran Zhao, Gavin T. L. Brown, Kane Meissel
ABSTRACT Students’ test-taking motivation has been found to be a predictor of performance. This study tests whether Shanghai students’ conceptions of tests and test-taking motivation differ when the consequences of tests have different foci (i.e., none, country, or personal). A between-subjects experiment with vignette instructions systematically assigned 1,003 Shanghai senior secondary school students to one of three vignettes. Students’ conceptions of tests and test-taking motivation scales were evaluated using factor analyses. Invariance testing suggested invariant relationships between the two constructs across the three groups. Students’ general conception of tests meaningfully predicted their reported effort (β = .18). Latent mean analyses suggested that students’ reported effort, anxiety, and importance did not differ significantly between the country-stakes and personal-stakes groups, but were higher than when no consequences were attached. This study suggests that Shanghai students’ test-taking attitudes may contribute to high effort and, consequently, high performance on international large-scale assessments.
{"title":"Manipulating the consequences of tests: how Shanghai teens react to different consequences","authors":"Anran Zhao, Gavin T. L. Brown, Kane Meissel","doi":"10.1080/13803611.2021.1963938","DOIUrl":"https://doi.org/10.1080/13803611.2021.1963938","url":null,"abstract":"ABSTRACT Students’ test-taking motivation has been found to be a predictor of performance. This study tests whether Shanghai students’ conceptions of tests and test-taking motivation differ when the consequence of tests have different foci (i.e., none, country, or personal). A between-subjects experiment with vignette instructions systematically assigned 1,003 Shanghai senior secondary school students to one of the three vignettes. Students’ conceptions of tests and test-taking motivation scales were evaluated using factor analyses. Invariance testing suggested invariant relationships between the two constructs across the three groups. Students’ general conception of tests meaningfully predicted their reported effort (β = .18). Latent mean analyses suggested that students’ reported effort, anxiety, and importance were not significantly different between country at stakes and personal stakes groups, but higher than when no consequences were attached. This study suggests that Shanghai students’ test-taking attitudes may contribute to high effort and consequently high performance on international large-scale assessments.","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48958720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A systematic review of the impact of technology-mediated parental engagement on student outcomes
Pub Date: 2020-05-18 | DOI: 10.1080/13803611.2021.1924791
B. See, S. Gorard, N. El-Soufi, Binwei Lu, N. Siddiqui, Lan Dong
ABSTRACT There is considerable evidence that the level of parental involvement is closely associated with children’s school outcomes. Schools are increasingly using digital technology to engage parents, but the impact of such technology on students’ learning behaviour is still unclear. This paper reviews and synthesises international evidence from 29 studies to establish whether technology-mediated parental engagement can improve student outcomes. The review finds promising evidence that school–parent communication via phone, texts, or emails can improve children’s attainment, attendance, and homework completion, but such communications have to be two-way, personalised, and positive. The evidence for home computers and other portable devices is inconclusive. There is no evidence so far that online technological devices and digital media are effective for improving school outcomes. Current research on the use of such technology is weak, and research in this field needs a more careful and scientific approach to improve the evidence base.
{"title":"A systematic review of the impact of technology-mediated parental engagement on student outcomes","authors":"B. See, S. Gorard, N. El-Soufi, Binwei Lu, N. Siddiqui, Lan Dong","doi":"10.1080/13803611.2021.1924791","DOIUrl":"https://doi.org/10.1080/13803611.2021.1924791","url":null,"abstract":"ABSTRACT There is considerable evidence that the level of parental involvement is closely associated with children’s school outcomes 1 . Schools are increasingly using digital technology to engage parents, but the impact of such technology on students’ learning behaviour is still unclear. This paper reviews and synthesises international evidence from 29 studies to establish whether technology-mediated parental engagement can improve student outcomes. While the review suggests promising evidence in school–parent communication via phone, texts, or emails on children’s attainment, attendance, and homework completion, such communications have to be two-way, personalised, and positive. The evidence for home computers and other portable devices is inconclusive. There is no evidence so far that online technological devices and digital media are effective for improving school outcomes. Current research on the use of such technology is weak. Research in this field needs to consider a more careful and scientific approach to improve the evidence base.","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2020-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/13803611.2021.1924791","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45009226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Professional development for classroom management: a review of the literature
Pub Date: 2020-05-18 | DOI: 10.1080/13803611.2021.1934034
S. Wilkinson, Jennifer Freeman, Brandi Simonsen, Sandra Sears, S. Byun, Xin Xu, Hao-Jan Luh
ABSTRACT The ability of teachers to manage their classrooms is critical to achieving positive educational outcomes for students. Many teachers receive limited pre-service training in classroom management, creating a need for effective in-service professional development (PD). This literature review summarises the results of 74 empirical studies examining the effects of PD on teachers’ classroom management behaviours. It identifies the characteristics of the existing literature base, the most frequent components of effective PD, and teacher and student outcomes related to PD. The results support a prior review suggesting that effective PD (i.e., PD producing desired changes in teacher and student behaviour) is predominantly studied at the elementary school level and, beyond generic in-service training, most frequently includes didactic (direct) instruction, coaching, and performance feedback. These results have important implications for developing effective PD opportunities in classroom management for in-service educators.
{"title":"Professional development for classroom management: a review of the literature","authors":"S. Wilkinson, Jennifer Freeman, Brandi Simonsen, Sandra Sears, S. Byun, Xin Xu, Hao-Jan Luh","doi":"10.1080/13803611.2021.1934034","DOIUrl":"https://doi.org/10.1080/13803611.2021.1934034","url":null,"abstract":"ABSTRACT The ability of teachers to manage their classrooms is critical to achieving positive educational outcomes for students. Many teachers receive limited pre-service training in classroom management, creating a need for effective in-service professional development (PD). This literature review summarizes the results of 74 empirical studies examining the effects of PD on teachers’ classroom management behaviours. It identifies the characteristics of the existing literature base, the most frequent components of effective PD, and teacher and student outcomes related to PD. The results support a prior review that also suggested effective PD (i.e., desired changes in teacher and student behaviour) is predominantly studied at the elementary school level and, in addition to generic in-service trainings, most frequently includes didactic (direct) instruction, coaching, and performance feedback. These results have important implications for developing effective PD opportunities in the area of classroom management for in-service educators.","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2020-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/13803611.2021.1934034","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44096222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}