Pub Date: 2025-01-10 | DOI: 10.1007/s10648-024-09978-8
Tino Endres, Lisa Bender, Stoo Sepp, Shirong Zhang, Louise David, Melanie Trypke, Dwayne Lieck, Juliette C. Désiron, Johanna Bohm, Sophia Weissgerber, Juan Cristobal Castro-Alonso, Fred Paas
Assessing cognitive demand is crucial for research on self-regulated learning; however, discrepancies in translating essential concepts across languages can hinder the comparison of research findings. Languages often emphasize different components of these constructs and interpret them differently. This paper aims to develop a translingual set of items distinguishing between intentionally invested mental effort and passively perceived mental load as key differentiations of cognitive demand in a broad range of learning situations, as they occur in self-regulated learning. Using a mixed-methods approach, we evaluated the content, criterion, convergent, and incremental validity of this scale in different languages. To establish content validity, we conducted qualitative interviews with bilingual participants who discussed their understanding of mental effort and load. These participants translated and back-translated established and new items from the cognitive-demand literature into English, Dutch, Spanish, German, Chinese, and French. To establish criterion validity, we conducted preregistered experiments using the English, Chinese, and German versions of the scale. Within those experiments, we validated the translated items using established demand manipulations from the cognitive load literature with first-language participants. In a within-subjects design with eight measurements (N = 131), we demonstrated the scale’s criterion validity by showing sensitivity to differences in task complexity, extraneous load manipulation, and motivation for complex tasks. We found evidence for convergent and incremental validity, shown by medium-sized correlations with established cognitive load measures. We offer a set of translated and validated items as a common foundation for translingual research. As best practice, we recommend four items within a reference point evaluation.
Title: Developing the Mental Effort and Load–Translingual Scale (MEL-TS) as a Foundation for Translingual Research in Self-Regulated Learning
Journal: Educational Psychology Review
Pub Date: 2025-01-09 | DOI: 10.1007/s10648-024-09982-y
Peter A. Edelsbrunner, Bianca A. Simonsmeier, Michael Schneider
Knowledge is an important predictor and outcome of learning and development. Its measurement is challenged by the fact that knowledge can be integrated and homogeneous or fragmented and heterogeneous, and this can change through learning. These characteristics of knowledge are at odds with current standards for test development, which demand high internal consistency (e.g., Cronbach's alpha greater than .70). To provide an initial empirical base for this debate, we conducted a meta-analysis of the Cronbach's alphas of knowledge tests derived from an available data set. Based on 285 effect sizes from 55 samples, the estimated typical alpha of domain-specific knowledge tests in publications was α = .85, 90% CI [.82, .87]. Alpha was this high despite a low mean item intercorrelation of .22 because the tests were relatively long on average and because bias in the test construction or publication process led to an underrepresentation of low alphas. Alpha was higher in tests with more items and with open answers, higher at younger ages, increased after interventions and throughout development, and was higher for knowledge in languages and mathematics than in science and social sciences/humanities. Generally, alphas varied strongly across knowledge tests and populations with different characteristics, reflected in a 90% prediction interval of [.35, .96]. We suggest this range as a guideline for the alphas that researchers can expect for knowledge tests with 20 items, and we provide guidelines for shorter and longer tests. We discuss implications for our understanding of domain-specific knowledge and how fixed cut-off values for the internal consistency of knowledge tests bias research findings.
Title: The Cronbach’s Alpha of Domain-Specific Knowledge Tests Before and After Learning: A Meta-Analysis of Published Studies
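The abstract's central arithmetic, that a mean item intercorrelation of only .22 still produces α ≈ .85 when tests are long, follows from the standardized-alpha (Spearman-Brown-style) relation between test length and mean inter-item correlation. A minimal illustration; the function name and the choice of test lengths are ours, not the paper's:

```python
def alpha_from_mean_r(k: int, mean_r: float) -> float:
    """Standardized Cronbach's alpha for a k-item test whose items
    intercorrelate at mean_r (Spearman-Brown-style formula)."""
    return k * mean_r / (1 + (k - 1) * mean_r)

# With the meta-analysis's mean item intercorrelation of .22,
# alpha rises steeply with test length:
for k in (10, 20, 40):
    print(k, round(alpha_from_mean_r(k, 0.22), 2))
```

At k = 20 the formula lands almost exactly on the reported typical α of .85, which is why the authors can attribute the high published alphas to test length rather than to tightly interrelated items.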
Pub Date: 2025-01-06 | DOI: 10.1007/s10648-024-09979-7
Renata A. Mendes, Natalie J. Loxton, Nicholas G. Browning, Rebecca K. Lawrence
Psychological interventions offer a unique approach to enhancing the educational experience for university students. Unlike traditional teaching methods, these interventions directly address cognitive, emotional, and behavioural factors without requiring changes to course content, delivery methods, or involvement from the teaching team. This systematic review evaluated psychological interventions that were designed to reduce statistics anxiety, boost statistics self-efficacy, and/or foster positive attitudes toward statistics among university students enrolled in statistics courses. All included studies followed a longitudinal design with at least pre- and post-intervention assessments, comprising single group studies, randomised controlled trials, and non-randomised control studies. The protocol of this systematic review was registered with PROSPERO. Search terms were entered into five databases. The screening, assessment of risk of bias, and data extraction processes were conducted by two independent reviewers. Meta-analysis was not conducted due to the heterogeneity across the included studies. Therefore, a narrative synthesis was used to describe the results of 11 studies (1786 participants), encompassing studies targeting statistics anxiety, attitudes, self-efficacy, or a combination of these outcomes. Findings revealed that although no intervention was definitively effective in reducing statistics anxiety, some showed promise, especially those combining exposure with coping strategies. Moreover, the review identified interventions that effectively improved self-efficacy and attitudes, discussed some important methodological considerations, and provided suggestions for future psychological interventions. Finally, further empirical research is necessary to address existing limitations and fully understand the effectiveness of these interventions, particularly regarding statistics anxiety.
Title: The Effect of Psychological Interventions on Statistics Anxiety, Statistics Self-Efficacy, and Attitudes Toward Statistics in University Students: A Systematic Review
Pub Date: 2025-01-01 | Epub Date: 2025-06-25 | DOI: 10.1007/s10648-025-10034-2
Christina A Bauer, Aashna Poddar, Eddie Brummelman, Andrei Cimpian
As societies worldwide grapple with substantial educational inequities, understanding their underlying causes remains a priority. Here, we introduce the Brilliance-Belonging Model, a novel theoretical framework that illuminates how cultural beliefs about exceptional intellectual ability create inequities through their impact on students' sense of belonging. The model identifies two types of widespread cultural beliefs about ability: field-specific ability beliefs (FABs) and brilliance stereotypes. FABs are cultural beliefs about the extent to which success in an educational context requires exceptional intellectual ability or "brilliance" (e.g., math more so than language). In contrast, brilliance stereotypes are cultural beliefs that associate exceptional intellectual ability with some groups more than others (e.g., individuals from high vs. low socioeconomic status backgrounds). According to the Brilliance-Belonging Model, students from groups targeted by negative brilliance stereotypes are perceived, by themselves and others, as not belonging in contexts where brilliance-oriented FABs are common. These perceptions compromise students' psychological safety and lead to disempowering treatment by others, resulting in persistent gaps in achievement and representation. Such effects are amplified by the competitive climates to which brilliance-oriented FABs give rise, where pressure to demonstrate intellectual superiority creates particular challenges for students from intellectually stigmatized groups, who often value cooperation over competition. By revealing how cultural beliefs about intellectual ability shape educational outcomes through their effects on belonging, the Brilliance-Belonging Model provides a roadmap for interventions aimed at fostering a sustained sense of belonging among diverse students.
Title: The Brilliance-Belonging Model: How Cultural Beliefs About Intellectual Ability Undermine Educational Equity
Journal: Educational Psychology Review, 37(3), Article 64. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12198075/pdf/
Pub Date: 2025-01-01 | Epub Date: 2025-06-27 | DOI: 10.1007/s10648-025-10042-2
Anique B H de Bruin, Eva M Janssen, Julia Waldeyer, Ferdinand Stebner
The effort monitoring and regulation (EMR) model integrates self-regulated learning and cognitive load theory to examine how students monitor, regulate, and optimize effort during learning. Since its introduction in 2020, it has inspired research that explores how to correct learners' misinterpretations of effort and metacognitive biases and how instructional interventions can improve learning strategies. The current topical collection titled Cognitive Load: Challenges in Self-regulation includes seven empirical papers, two review papers, one meta-analysis, and a discussion paper. These contributions build on the EMR model by testing its assumptions, linking it to motivation, and refining our understanding of the basis of effort ratings in learning. Among other findings, the findings in the topical collection (1) show that feedback valence can affect participants' perceived task effort and their willingness to invest effort via feelings of challenge and threat, (2) provide the first evidence of far metacognitive transfer, and (3) propose a novel categorization of effort based on the underlying psychological sources when experiencing and allocating mental effort. In this editorial introduction, we summarize the topical collection papers, connect their findings to the EMR model, and finally reflect on how these novel insights can further develop the model.
Title: Cognitive Load and Challenges in Self-regulation: An Introduction and Reflection on the Topical Collection
Journal: Educational Psychology Review, 37(3), Article 65. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12204937/pdf/
Pub Date: 2024-12-23 | DOI: 10.1007/s10648-024-09980-0
Felix Krieglstein, Maik Beege, Lukas Wesenberg, Günter Daniel Rey, Sascha Schneider
In research practice, it is common to measure cognitive load after learning using self-report scales. This approach can be considered risky because it is unclear on what basis learners assess cognitive load, particularly when the learning material contains varying levels of complexity. This raises questions that have yet to be answered by educational psychology research: Does measuring cognitive load during and after learning lead to comparable assessments of cognitive load depending on the sequence of complexity? Do learners rely on their first or last impression of the complexity of the learning material when reporting the cognitive load of the entire learning material after learning? To address these issues, three learning units were created, differing in terms of intrinsic cognitive load (low, medium, or high complexity) as verified by a pre-study (N = 67). In the main study (N = 100), the three learning units were studied in two sequences (increasing vs. decreasing complexity), and learners were asked to report cognitive load after each learning unit and, after learning, as an overall assessment. The results demonstrated that the first impression of complexity is the most accurate predictor of the overall cognitive load associated with the learning material, indicating a primacy effect. This finding contrasts with previous studies on problem-solving tasks, which have identified the most complex task as the primary determinant of the overall assessment. This study suggests that, during learning, the assessment of the overall cognitive load is influenced primarily by the timing of measurement.
Title: The Distorting Influence of Primacy Effects on Reporting Cognitive Load in Learning Materials of Varying Complexity
Pub Date: 2024-12-21 | DOI: 10.1007/s10648-024-09976-w
Xiaoliang Zhu, Yixin Tang, Jiaqi Lu, Minyuan Song, Chunliang Yang, Xin Zhao
Mathematical ability is a crucial component of human cognitive function, defined as the ability to acquire, process, and store mathematical information. While many studies have documented a close relationship between elementary school children’s inhibitory control and their mathematical ability, the existing empirical evidence remains controversial, with other studies showing a null correlation between these two constructs. This preregistered three-level meta-analysis aims to further elucidate the relationship between inhibitory control and mathematical ability in elementary school children by differentiating various types of inhibitory control and domains of mathematical ability, and by exploring potential moderators. This meta-analysis synthesized 241 effect sizes extracted from 86 samples, involving data from a total of 14,223 primary school children with a mean age of 8.67 years. The results showed a moderate positive correlation between inhibitory control and mathematical ability (r = 0.19). Mathematical ability was more strongly correlated with interference inhibition (r = 0.21) than response inhibition (r = 0.14). The relation between inhibitory control and mathematical ability was not moderated by domains of mathematical ability, inhibitory control task, age, gender, developmental status, socioeconomic status, or sample region. These findings provide novel insights into the cognitive underpinnings of mathematical ability in elementary school children. Practical implications are discussed.
Title: Inhibitory Control and Mathematical Ability in Elementary School Children: A Preregistered Meta-Analysis
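For readers unfamiliar with how study-level correlations such as the reported r = 0.19 are aggregated, a common first step is Fisher's z transformation with sample-size weighting. The sketch below is a naive fixed-effect pooling, far simpler than the paper's three-level model (which also handles dependent effect sizes), and the example inputs are hypothetical:

```python
import math

def pool_correlations(rs, ns):
    """Fixed-effect pooling of Pearson correlations via Fisher's z,
    weighting each study by n - 3 (the inverse of z's sampling variance).
    A simplification of the three-level meta-analytic model."""
    zs = [math.atanh(r) for r in rs]      # Fisher z transform of each r
    ws = [n - 3 for n in ns]              # inverse-variance weights
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)               # back-transform to the r metric

# Hypothetical study correlations and sample sizes:
print(round(pool_correlations([0.21, 0.14, 0.22], [120, 80, 200]), 3))
```

The pooled estimate always falls between the smallest and largest input correlations, pulled toward the larger studies; random-effects and multilevel extensions add between-study variance to the weights.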
Pub Date: 2024-12-02 | DOI: 10.1007/s10648-024-09969-9
Héfer Bembenutty, Anastasia Kitsantas, Maria K. DiBenedetto, Allan Wigfield, Jeffrey A. Greene, Ellen L. Usher, Mimi Bong, Timothy J. Cleary, Ernesto Panadero, Carol A. Mullen, Peggy P. Chen
This tribute celebrates the unwavering dedication and contributions of Dale H. Schunk to educational psychology. His research has fundamentally transformed how school-based practitioners support student learning. By pioneering effective teaching strategies and interventions, he has called educators to create dynamic learning environments that cultivate students’ self-efficacy beliefs and self-regulated learning. Beyond his scholarly achievements, Schunk’s commitment to mentoring students and faculty alike has impacted the academic community. His profound influence continues to reshape the landscape of educational psychology, igniting ongoing research and driving innovation to enhance teaching and learning practices among learners. This tribute is a testament to Schunk’s enduring legacy and profound impact on educational psychology.
{"title":"Harnessing Motivation, Self-Efficacy, and Self-Regulation: Dale H. Schunk’s Enduring Influence","authors":"Héfer Bembenutty, Anastasia Kitsantas, Maria K. DiBenedetto, Allan Wigfield, Jeffrey A. Greene, Ellen L. Usher, Mimi Bong, Timothy J. Cleary, Ernesto Panadero, Carol A. Mullen, Peggy P. Chen","doi":"10.1007/s10648-024-09969-9","DOIUrl":"https://doi.org/10.1007/s10648-024-09969-9","url":null,"abstract":"<p>This tribute celebrates the unwavering dedication and contributions of Dale H. Schunk to educational psychology. His research has fundamentally transformed how school-based practitioners support student learning. By pioneering effective teaching strategies and interventions, he has called educators to create dynamic learning environments that cultivate students’ self-efficacy beliefs and self-regulated learning. Beyond his scholarly achievements, Schunk’s commitment to mentoring students and faculty alike has impacted the academic community. His profound influence continues to reshape the landscape of educational psychology, igniting ongoing research and driving innovation to enhance teaching and learning practices among learners. This tribute is a testament to Schunk’s enduring legacy and profound impact on educational psychology.</p>","PeriodicalId":48344,"journal":{"name":"Educational Psychology Review","volume":"82 1","pages":""},"PeriodicalIF":10.1,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142758511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Scoping Review of the Associations Between Sense of Belonging and Academic Outcomes in Postsecondary Education
Pub Date: 2024-12-02 DOI: 10.1007/s10648-024-09974-y
Carlton J. Fong, Semilore F. Adelugba, Melissa Garza, Giovanna Lorenzi Pinto, Cassandra Gonzales, Pedram Zarei, Christopher S. Rozek
Given the theorized importance of college belonging for academic success, we conducted a scoping review of studies examining the relationships between sense of belonging and academic achievement and persistence among postsecondary students. The review included 69 reports (78 unique samples) published between 2003 and 2023. We observed an unexpected degree of heterogeneity in the associations between belonging and academic outcomes (GPA, persistence, and intent to persist): most associations were positive but small, with several small negative associations. Across a few studies, associations between belonging and academic achievement were larger for marginalized college students, such as racially/ethnically minoritized students (compared with students in the racial majority) or women (compared with men) in historically exclusionary settings such as STEM disciplines. We identified gaps in the literature, including underreporting of student identities in sample characteristics (gender identity, sexual identity, social class, religious identity, disability status, and first-generation status, among others) and a lack of attention to contextual factors such as institution type (e.g., predominantly White institutions, community colleges, minority-serving institutions). In all, our findings provide an updated mapping of the literature and point to a much-needed refinement of how individual and institutional factors may moderate the associations between belonging and academic outcomes in postsecondary settings.
On Being Accepted: Interrogating How University Cultural Scripts Shape Personal and Political Facets of Belonging
Pub Date: 2024-11-19 DOI: 10.1007/s10648-024-09970-2
Rebecca Covarrubias
Belonging is personal and political. As a fundamental human need, belonging is about self-acceptance and about feeling "accepted" by others. Yet this process of acceptance is inextricably tied to structures of power that work to include and exclude. Structures of whiteness within higher education systems, for example, relegate low-income, first-generation-to-college students of color to the margins and undermine their capacity and desire to belong. This makes the task of developing institutional practices that foster belonging complex, and it prompts important questions about what "acceptance" looks like. In what ways can practices of acceptance attend to existing power structures? Under what conditions can acceptance occur without solely expecting students to assimilate or to silence important parts of themselves? How can practices of acceptance recognize the diverse belonging needs of marginalized students and the politics surrounding those needs? To answer these questions, I draw on frameworks that reveal the paradoxes of belonging: the push and pull of being accepted in spaces that marginalize the self. Specifically, drawing from a place-belongingness and politics of belonging framework, I first provide a foundation for understanding the personal and political components of belonging for marginalized students. I then review harmful institutional practices of "acceptance" and discuss more transformative practices that sustain students' cultural identities. Illuminating the personal and political facets of what it means to be accepted provides a pathway for reimagining who can, wants, and gets to belong.