Exploring Educational Approaches to Addressing Misleading Visualizations
Jihyun Rho, Martina A. Rau
Pub Date: 2025-02-08 | DOI: 10.1007/s10648-025-09988-0
Misleading data visualizations have become a significant issue in our information-rich world because of their negative impact on informed decision-making. Consequently, it is crucial to understand the factors that make viewers vulnerable to misleading data visualizations and to explore instructional supports that help viewers counteract their negative effects. Drawing on the framework of graph comprehension, this article examines how poorly designed data visualizations can deceive viewers. A systematic review identified 26 articles that met our inclusion criteria. We identified two primary factors leading viewers to misinterpret misleading data visualizations: the graphical and the contextual elements within the visualizations themselves. Further, we identified two types of interventions aimed at reducing the negative impact of misleading data visualizations. One type provides external aids that help viewers recognize the misleading graphical and contextual elements within a visualization. The other aims to enhance viewers’ ability to engage with visualizations through additional interactions that prompt reflection. Based on these findings, we identify under-investigated areas, specifically interventions that teach viewers how to interact with data visualizations. We conclude by proposing directions for future research on interventions that strengthen viewers’ ability to move beyond their first (potentially false) impression of a data visualization through further interaction with it.
Specialized Purpose of Each Type of Student Engagement: A Meta-Analysis
Johnmarshall Reeve, Geetanjali Basarkod, Hye-Ryen Jang, Rafael Gargurevich, Hyungshim Jang, Sung Hyeon Cheon
Pub Date: 2025-02-05 | DOI: 10.1007/s10648-025-09989-z
Students involve themselves in learning activities multidimensionally: behaviorally, cognitively, emotionally, and agentically. This multidimensional involvement predicts important outcomes, but each type of engagement might also have its own specialized purpose or function. To investigate this possibility, we proposed and tested the specialized purpose hypothesis: each type of engagement has its own specialized function targeted toward a specific purpose, such as boosting achievement, social support, motivation, or well-being. To test this hypothesis, we conducted four meta-analyses using multilevel random-effects models. Each meta-analysis tested whether type of engagement differentially predicted students’ achievement (meta-analysis #1), social support (meta-analysis #2), motivation (meta-analysis #3), or well-being (meta-analysis #4). The database included 652 effect sizes from 62 studies within 54 articles involving 32,403 P-16 student-participants (M_age = 16.8 years; 51.2% female). All 62 studies measured all four types of engagement, so we could compare the relative strength of association between each type of engagement and each correlate. Behavioral engagement was the strongest predictor of achievement. Agentic engagement was the strongest predictor of social support. Cognitive engagement did not show a specialized relation with any outcome. Emotional engagement was strongly associated with both motivation and well-being. These findings generally support the specialized purpose hypothesis, but they also raise important and challenging questions for future theory and research about how to better conceptualize and measure each type of engagement.
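The four meta-analyses above pool effect sizes with multilevel random-effects models. As a simplified, hypothetical illustration of the underlying idea — a basic DerSimonian-Laird random-effects pool, not the authors' actual multilevel specification — the pooling step looks roughly like this:

```python
import math

def random_effects_pool(effects, variances):
    """Basic DerSimonian-Laird random-effects pooling.

    A simplification for illustration; the paper itself uses
    multilevel random-effects models. Inputs are hypothetical."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect mean
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance estimate
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2, q

# Hypothetical effect sizes (e.g., engagement-achievement correlations)
pooled, se, tau2, q = random_effects_pool(
    [0.42, 0.35, 0.50, 0.28], [0.010, 0.015, 0.008, 0.020])
print(round(pooled, 3))  # -> 0.415
```

When Q does not exceed its degrees of freedom, the between-study variance estimate is truncated to zero and the pooled estimate coincides with the fixed-effect mean, as happens with these hypothetical inputs.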
Using Decoding Measures to Identify Reading Difficulties: A Meta-analysis on English as a First Language Learners and English Language Learners
Miao Li, Shuai Zhang, Yuting Liu, Catherine Snow, Huan Zhang, Bing Han
Pub Date: 2025-01-30 | DOI: 10.1007/s10648-025-09987-1
Students with or at risk of reading difficulties (RD) benefit from accurate early identification and intervention. Previous research has employed various decoding measures to screen students for RD, but the criteria for identification have been inconsistent. Assessing RD is especially challenging in English language learners (ELLs), as vocabulary deficits can affect decoding. Additionally, few research syntheses have examined whether researchers use different measures to screen ELLs and English-as-a-first-language learners (EL1s) for RD, and whether these differences result in distinct decoding profiles between ELLs with RD and EL1s with RD. To address these gaps, this study uses a meta-analysis to examine the decoding measures used in RD assessments and whether outcomes differ for ELLs and EL1s. The findings show that real-word reading assessments identify students with more pronounced decoding deficits than nonword reading assessments do. Despite the use of different RD screening measures for ELLs and EL1s, the gap between ELLs with and without RD was similar to that between EL1s with and without RD. These results suggest that real-word measures, which are influenced by word knowledge, provide a more comprehensive assessment of RD than nonword measures for both ELLs and EL1s. We encourage future researchers to use consistent decoding measures when screening for RD in both populations to maximize the comparability of findings.
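Gaps such as the one between ELLs with and without RD are typically expressed as standardized mean differences. A minimal sketch of Hedges' g with its small-sample correction, using hypothetical score values rather than data from the review:

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Hedges' g: standardized mean difference with a small-sample
    correction. All values passed below are hypothetical."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp            # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)    # small-sample correction factor
    return d * j

# Hypothetical standard scores: readers with RD vs. without RD
print(round(hedges_g(85, 100, 15, 15, 40, 40), 2))  # -> -0.99
```

A negative g here simply reflects the ordering of the two means: the RD group scores about one pooled standard deviation below the comparison group.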
Interventions to Teacher Well-Being and Burnout: A Scoping Review
Pauliina Avola, Tiina Soini-Ikonen, Anne Jyrkiäinen, Viivi Pentikäinen
Pub Date: 2025-01-28 | DOI: 10.1007/s10648-025-09986-2
Teacher burnout, stress, and turnover are increasing globally, underscoring the need to explore ways to reduce burnout and support teacher well-being. This scoping review identifies the contents, characteristics, and results of interventions to increase teacher well-being and reduce burnout. The search was conducted in two databases (Education Research Complete and ERIC). Of 958 studies, 46 addressed interventions to support teacher well-being or reduce teacher burnout. The data covered 7369 participants in 15 countries. Of the 46 studies, 14 used mixed methods, four used qualitative approaches, and 28 used quantitative approaches. The interventions primarily focused on improving individual well-being, with some incorporating communal activities. Qualitative content analysis revealed a broad spectrum of intervention activities, including physical activity, mindfulness and meditation, professional development, therapy-based techniques, gratitude practices, and mixes of multiple activities. The PERMA-H model of positive psychology is applied to unify the heterogeneous field of teacher well-being intervention research. The model’s components were broadly consistent with the interventions’ contents, emphasising engagement (E), positive emotions (P), relationships (R), and health (H). The gratitude, therapy-based, and physical activity interventions, as well as most mindfulness and meditation, professional development, and mixed-activity interventions, contributed positively to teacher well-being. Overall, the review highlights the diverse methods and theoretical frameworks employed to address teacher well-being, which the PERMA-H model can unify.
Examining the Effects of Family-Implemented Literacy Interventions for School-Aged Children: A Meta-Analysis
Katlynn Dahl-Leonard, Colby Hall, Eunsoo Cho, Philip Capin, Garrett J. Roberts, Karen F. Kehoe, Christa Haring, Delanie Peacott, Alisha Demchak
Pub Date: 2025-01-25 | DOI: 10.1007/s10648-025-09985-3
There is considerable research evaluating the effects of family members implementing shared book reading interventions, especially during early childhood. Less is known, however, about the effects of family members providing instruction to help their school-aged children develop literacy skills, including both the code-focused and the meaning-focused skills that facilitate reading comprehension. The purpose of this meta-analysis was to describe and evaluate recent research examining the effects of at-home, family-implemented literacy interventions for school-aged children. A total of 25 interventions across 22 studies (12 with group designs and 10 with single-case experimental designs) were analyzed. The average effect on combined literacy outcomes was g = 0.36 (p < .01; Q = 191.83; I² = 36.17) for group design studies and g = 1.50 (p < .01; Q = 114.58; I² = 38.58) for single-case experimental design studies. Notably, for group design studies, effects varied by outcome type: the mean effect for code-focused outcomes (i.e., phonological awareness, decoding/word reading, spelling, text reading) was g = 0.28 (p < .01), and the mean effect for meaning-focused outcomes (i.e., vocabulary, listening comprehension, reading comprehension) was g = 0.41 (p < .01). Overall, these findings support the implementation of family-delivered literacy interventions to improve literacy outcomes for school-aged children. At the same time, this meta-analysis revealed a paucity of research examining the effects of family-implemented literacy interventions, especially for older children, indicating a need for more research on this topic.
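The abstract reports Q and I² alongside each pooled g. I² expresses the share of total variation across effect sizes that reflects heterogeneity rather than sampling error; a minimal sketch of the standard formula, with hypothetical values rather than the review's data:

```python
def i_squared(q, df):
    """Higgins' I^2: the percentage of total variation across effect
    sizes due to heterogeneity rather than chance. q is Cochran's Q,
    df = k - 1 for k effect sizes. Values used below are hypothetical."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0

# Hypothetical: 25 effect sizes (df = 24) with Q = 37.5
print(round(i_squared(37.5, 24), 1))  # -> 36.0

# When Q is no larger than its degrees of freedom, I^2 is 0
print(i_squared(20.0, 24))  # -> 0.0
```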
The Virtual Reality in Your Head: How Immersion and Mental Imagery Are Connected to Knowledge Retention
Alex Barrett, Nuodi Zhang, Shiyao Wei
Pub Date: 2025-01-24 | DOI: 10.1007/s10648-025-09984-4
Immersive learning is predominantly constrained to technology-based interventions but has the potential for more diverse applications. This study reports on an experiment investigating the learning affordances of psychological immersion evoked by narrative absorption. A total of 228 participants were randomly assigned to one of three forms of media: an image, a word list, or a narrative, all containing identical items to be memorized for immediate and delayed free-recall memory tests. Immersion, extraneous cognitive load, and mental imagery were also measured. ANOVA and correlation analyses showed that the narrative was significantly more immersive and evoked mental imagery at higher levels than both the list and the image. Importantly, memory recall decayed more between the immediate and delayed tests for those exposed to the list or the image than for those who read the narrative. This implies the utility of immersive narratives for spontaneous mental image generation, which leads to improved knowledge retention. Other implications for immersive learning theory are discussed, and practical approaches for incorporating narrative immersion in learning are suggested.
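The group comparison described above rests on a one-way between-subjects ANOVA. A minimal sketch of the F statistic it computes, using hypothetical recall scores rather than the study's data:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way between-subjects ANOVA, the kind of
    test used to compare media conditions. Scores below are hypothetical."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group variability vs. within-group (error) variability
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical delayed-recall scores per condition
narrative, word_list, image = [8, 9, 10], [6, 7, 8], [5, 6, 7]
print(round(one_way_anova_f([narrative, word_list, image]), 2))  # -> 7.0
```

A large F indicates that the spread among condition means is big relative to the spread of scores within each condition.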
Advancing Self-Reports of Self-Regulated Learning: Validating New Measures to Assess Students’ Beliefs, Practices, and Challenges
Allyson F. Hadwin, Ramin Rostampour, Philip H. Winne
Pub Date: 2025-01-22 | DOI: 10.1007/s10648-024-09977-9
Self-report measures are essential sources of information about learners’ studying perceptions. These perceptions also guide self-regulated learning (SRL) decisions and strategies in future studying. However, the development of self-report methods has not kept pace with other multi-modal methodological advancements in the field of self-regulated learning. The purpose of this study was to test the psychometric adequacy and predictive utility of four complementary SRL-grounded measures examining students’ perceptions of SRL during studying. Participants were two samples (N = 220; N = 473) of post-secondary students enrolled in various academic disciplines. Exploratory and confirmatory factor analyses confirmed the measurement adequacy of (a) a 4-factor SRL self-efficacy measure, (b) a 4-factor SRL importance measure, (c) a 6-factor SRL practices measure, and (d) a 6-factor academic challenges measure. Analyses of the predictive validity of factors within each measure revealed that (a) prioritizing and feeling confident about planning and foundational academic behaviors positively predicted academic performance, and (b) SRL practices were either positively associated with academic performance or negatively associated with academic challenges. Despite being underrepresented in most measures of SRL, task understanding practices were important for predicting academic performance beyond other SRL practices. Overall, the findings indicate that students’ self-reports about SRL beliefs and practices can predict academic outcomes.
Far Transfer of Metacognitive Regulation: From Cognitive Learning Strategy Use to Mental Effort Regulation
Joachim Wirth, Xenia-Lea Weber-Reuter, Corinna Schuster, Jens Fleischer, Detlev Leutner, Ferdinand Stebner
Pub Date: 2025-01-18 | DOI: 10.1007/s10648-024-09983-x
Training of self-regulated learning is most effective if it supports learning strategies in combination with metacognitive regulation, and learners can transfer their acquired metacognitive regulation skills to different tasks that require the use of the same learning strategy (near transfer). However, whether learners can transfer metacognitive regulation skills acquired in combination with a specific learning strategy to the regulation of a different learning strategy (far transfer) is still under debate. While there is empirical evidence that learners can transfer metacognitive regulation between different learning strategies of the same type (e.g., from one cognitive learning strategy to another), whether transfer also occurs between learning strategies of different types is an open question. Here, we conducted an experimental field study with 5th and 6th grade students (N = 777). Students were cluster-randomized and assigned to one of three groups: two experimental groups receiving different training on the metacognitive regulation of a cognitive learning strategy and one control group receiving no training. After training, students worked on two different tasks; after each task, we measured their metacognitive regulation of a resource management strategy, that is, investing mental effort. Results (based on data from 368 students due to pandemic conditions) indicated far transfer of metacognitive regulation: after training, students in the training groups were better able to metacognitively regulate their mental effort than students in the control group. Although effect sizes were small, our results support the hypothesis of far transfer of metacognitive regulation.
Pub Date : 2025-01-14
DOI: 10.1007/s10648-024-09981-z
Martin Brunner, Sophie E. Stallasch, Cordula Artelt, Oliver Lüdtke
There is a need for robust evidence about which educational interventions work in preschool to foster children’s cognitive and socio-emotional learning (SEL) outcomes. Lab-based individually randomized experiments can develop and refine such interventions, and field-based randomized experiments (e.g., cluster randomized trials) evaluate their effectiveness in real-world daycare center settings. Applying reliable estimates of design parameters in the context of a priori power analyses is essential to ensure that the sample size of these studies is adequate to support strong statistical conclusions regarding the strength of the intervention effect. However, there is little knowledge on relevant design parameters with preschool children. We therefore utilized a systematic collection of individual participant data from four German probability samples (554 ≤ N ≤ 2928) with preschool children (aged two to six years) to estimate and meta-analyze design parameters. These parameters are relevant for planning single-level (e.g., in non-clustered lab-based settings), two-level (children nested in daycare centers), and three-level (children nested in groups, with groups nested in daycare centers) randomized intervention studies targeting cognitive and SEL outcomes assessed with three methods (standardized tests, parent ratings, and educator ratings). The design parameters depict between-group and -center differences as well as the proportion of variance in the outcomes explained by different covariate sets (socio-demographic characteristics, baseline measures, and their combination) at the child, group, and center level. In conclusion, this paper provides a rich source of design parameters, recommendations, and illustrations to support a priori power analyses for randomized intervention studies in early childhood education research.
{"title":"An Individual Participant Data Meta-Analysis to Support Power Analyses for Randomized Intervention Studies in Preschool: Cognitive and Socio-Emotional Learning Outcomes","authors":"Martin Brunner, Sophie E. Stallasch, Cordula Artelt, Oliver Lüdtke","doi":"10.1007/s10648-024-09981-z","DOIUrl":"https://doi.org/10.1007/s10648-024-09981-z","url":null,"abstract":"<p>There is a need for robust evidence about which educational interventions work in preschool to foster children’s cognitive and socio-emotional learning (SEL) outcomes. Lab-based individually randomized experiments can develop and refine such interventions, and field-based randomized experiments (e.g., cluster randomized trials) evaluate their effectiveness in real-world daycare center settings. Applying reliable estimates of design parameters in the context of a priori power analyses is essential to ensure that the sample size of these studies is adequate to support strong statistical conclusions regarding the strength of the intervention effect. However, there is little knowledge on relevant design parameters with preschool children. We therefore utilized a systematic collection of individual participant data from four German probability samples (554 ≤ <i>N</i> ≤ 2928) with preschool children (aged two to six years) to estimate and meta-analyze design parameters. These parameters are relevant for planning single-level (e.g., in non-clustered lab-based settings), two-level (children nested in daycare centers), and three-level (children nested in groups, with groups nested in daycare centers) randomized intervention studies targeting cognitive and SEL outcomes assessed with three methods (standardized tests, parent ratings, and educator ratings). 
The design parameters depict between-group and -center differences as well as the proportion of variance in the outcomes explained by different covariate sets (socio-demographic characteristics, baseline measures, and their combination) at the child, group, and center level. In conclusion, this paper provides a rich source of design parameters, recommendations, and illustrations to support a priori power analyses for randomized intervention studies in early childhood education research.</p>","PeriodicalId":48344,"journal":{"name":"Educational Psychology Review","volume":"82 1","pages":""},"PeriodicalIF":10.1,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142974836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
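The abstract above concerns design parameters (intraclass correlations and the variance explained by covariates at each level) as inputs to a priori power analyses for cluster randomized trials. As a minimal sketch of how such parameters enter a power calculation, the following computes the minimum detectable effect size (MDES) for a two-level design (children nested in daycare centers) with a standard normal approximation of the usual t-based formula. The function name and all numeric inputs below are illustrative placeholders, not estimates from the paper.

```python
from statistics import NormalDist

def mdes_two_level(J, n, icc, r2_center, r2_child, alpha=0.05, power=0.80):
    """Minimum detectable (standardized) effect size for a balanced
    two-level cluster randomized trial.

    J          - number of centers (half assigned to treatment)
    n          - children per center
    icc        - intraclass correlation (between-center variance share)
    r2_center  - outcome variance explained by covariates at center level
    r2_child   - outcome variance explained by covariates at child level

    Uses standard-normal critical values, a reasonable approximation
    when J is moderately large.
    """
    z = NormalDist().inv_cdf
    multiplier = z(1 - alpha / 2) + z(power)
    # Variance of the treatment-effect estimator; the factor 4 is
    # 1 / (p * (1 - p)) for equal allocation p = 0.5 across centers.
    var = 4 * (icc * (1 - r2_center) / J
               + (1 - icc) * (1 - r2_child) / (J * n))
    return multiplier * var ** 0.5

# Example with illustrative values: 60 centers, 15 children each,
# ICC = .15, covariates explaining 50% of center-level and 30% of
# child-level outcome variance.
mdes = mdes_two_level(J=60, n=15, icc=0.15, r2_center=0.50, r2_child=0.30)
print(f"MDES \u2248 {mdes:.3f} SD")
```

As the formula makes visible, reliable values for `icc`, `r2_center`, and `r2_child`, exactly the quantities the paper meta-analyzes, are what determine whether a planned sample size can support strong statistical conclusions.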
Pub Date : 2025-01-10
DOI: 10.1007/s10648-024-09978-8
Tino Endres, Lisa Bender, Stoo Sepp, Shirong Zhang, Louise David, Melanie Trypke, Dwayne Lieck, Juliette C. Désiron, Johanna Bohm, Sophia Weissgerber, Juan Cristobal Castro-Alonso, Fred Paas
Assessing cognitive demand is crucial for research on self-regulated learning; however, discrepancies in translating essential concepts across languages can hinder the comparison of research findings. Different languages often emphasize various components and interpret certain constructs differently. This paper aims to develop a translingual set of items distinguishing between intentionally invested mental effort and passively perceived mental load as key differentiations of cognitive demand in a broad range of learning situations, as they occur in self-regulated learning. Using a mixed-methods approach, we evaluated the content, criterion, convergent, and incremental validity of this scale in different languages. To establish content validity, we conducted qualitative interviews with bilingual participants who discussed their understanding of mental effort and load. These participants translated and back-translated established and new items from the cognitive-demand literature into English, Dutch, Spanish, German, Chinese, and French. To establish criterion validity, we conducted preregistered experiments using the English, Chinese, and German versions of the scale. Within those experiments, we validated the translated items using established demand manipulations from the cognitive load literature with first-language participants. In a within-subjects design with eight measurements (N = 131), we demonstrated the scale’s criterion validity by showing sensitivity to differences in task complexity, extraneous load manipulation, and motivation for complex tasks. We found evidence for convergent and incremental validity shown by medium-size correlations with established cognitive load measures. We offer a set of translated and validated items as a common foundation for translingual research. As best practice, we recommend four items within a reference point evaluation.
{"title":"Developing the Mental Effort and Load–Translingual Scale (MEL-TS) as a Foundation for Translingual Research in Self-Regulated Learning","authors":"Tino Endres, Lisa Bender, Stoo Sepp, Shirong Zhang, Louise David, Melanie Trypke, Dwayne Lieck, Juliette C. Désiron, Johanna Bohm, Sophia Weissgerber, Juan Cristobal Castro-Alonso, Fred Paas","doi":"10.1007/s10648-024-09978-8","DOIUrl":"https://doi.org/10.1007/s10648-024-09978-8","url":null,"abstract":"<p>Assessing cognitive demand is crucial for research on self-regulated learning; however, discrepancies in translating essential concepts across languages can hinder the comparison of research findings. Different languages often emphasize various components and interpret certain constructs differently. This paper aims to develop a translingual set of items distinguishing between intentionally invested mental effort and passively perceived mental load as key differentiations of cognitive demand in a broad range of learning situations, as they occur in self-regulated learning. Using a mixed-methods approach, we evaluated the content, criterion, convergent, and incremental validity of this scale in different languages. To establish content validity, we conducted qualitative interviews with bilingual participants who discussed their understanding of mental effort and load. These participants translated and back-translated established and new items from the cognitive-demand literature into English, Dutch, Spanish, German, Chinese, and French. To establish criterion validity, we conducted preregistered experiments using the English, Chinese, and German versions of the scale. Within those experiments, we validated the translated items using established demand manipulations from the cognitive load literature with first-language participants. 
In a within-subjects design with eight measurements (<i>N</i> = 131), we demonstrated the scale’s criterion validity by showing sensitivity to differences in task complexity, extraneous load manipulation, and motivation for complex tasks. We found evidence for convergent and incremental validity shown by medium-size correlations with established cognitive load measures. We offer a set of translated and validated items as a common foundation for translingual research. As best practice, we recommend four items within a reference point evaluation.</p>","PeriodicalId":48344,"journal":{"name":"Educational Psychology Review","volume":"33 9 1","pages":""},"PeriodicalIF":10.1,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142939943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}