Reading and Writing Relations Are Not Uniform: They Differ by the Linguistic Grain Size, Developmental Phase, and Measurement
Y. Kim, Alissa Wolters, J. Lee
Pub Date: 2023-07-01 | DOI: 10.3102/00346543231178830
We conducted a meta-analysis to investigate reading-writing relations. Beyond the overall relation, we systematically investigated moderation of the relation as a function of linguistic grain size (word reading and spelling versus reading comprehension and written composition), measurement of reading comprehension (e.g., multiple choice, open-ended, cloze) and of written composition (e.g., writing quality, writing productivity, writing fluency, writing syntax), and developmental phase of reading and writing (grade levels as a proxy). A total of 395 studies (k = 2,265, N = 120,669) met inclusion criteria. Overall, reading and writing were strongly related (r = .72). However, the relation differed depending on the subskills of reading and writing: word reading and spelling were strongly related (r = .82), whereas reading comprehension and written composition were moderately related (r = .44). In addition, the word reading-spelling relation was stronger for primary-grade students (r = .82) than for university students/adults (r = .69). The relation of reading comprehension with written composition differed depending on how each was measured: reading comprehension measured by multiple-choice and open-ended tasks had a stronger relation with writing quality than reading comprehension measured by oral retell tasks, and reading comprehension had moderate relations with writing quality, writing vocabulary, writing syntax, and writing conventions but weak relations with writing productivity and writing fluency. Relations tended to be stronger when reliability was higher, and the relation between word reading and spelling was stronger for alphabetic languages (r = .83) than for Chinese (r = .71). These results add important nuances about the nature of relations between reading and writing.

Impact Evaluations of Teacher Preparation Practices: Challenges and Opportunities for More Rigorous Research
Zid Mancenido
Pub Date: 2023-05-24 | DOI: 10.3102/00346543231174413
Many teacher education researchers have expressed concerns about the lack of rigorous impact evaluations of teacher preparation practices. I summarize these various concerns as they relate to issues of internal validity, measurement, and external validity. I then assess the prevalence of these issues by reviewing 166 impact evaluations of teacher preparation practices published in peer-reviewed journals between 2002 and 2019. Although I find that very few studies address issues of internal validity, measurement, and external validity, I highlight some innovative approaches and present a checklist of considerations to assist future researchers in designing more rigorous impact evaluations.

The Active Ingredient in Reading Comprehension Strategy Intervention for Struggling Readers: A Bayesian Network Meta-analysis
Peng Peng, W. Wang, Marissa J. Filderman, Wenxiu Zhang, Lifeng Lin
Pub Date: 2023-05-20 | DOI: 10.3102/00346543231171345
Based on 52 studies with samples mostly from English-speaking countries, the current study used Bayesian network meta-analysis to investigate the effectiveness of different reading comprehension strategy combinations for improving reading comprehension among students with reading difficulties in 3rd through 12th grade. We focused on commonly researched strategies: main idea, inference, text structure, retell, prediction, self-monitoring, and graphic organizers. Results showed that (1) instruction in more strategies did not necessarily have stronger effects on reading comprehension; (2) no single reading comprehension strategy produced the strongest effect; (3) main idea, text structure, and retell, taught together as the primary strategies, seemed the most effective; and (4) the effects of strategies held only when background knowledge instruction was included. These findings suggest that strategy instruction for students with reading difficulties follows an ingredient-interaction model; that is, no single strategy works best, and it is not the case that “the more we teach, the better the outcomes.” Instead, different strategy combinations may produce different effects on reading comprehension. Main idea, text structure, and retell together may best optimize the cognitive load during reading comprehension, and background knowledge instruction should be combined with strategy instruction to facilitate knowledge retrieval so as to reduce the cognitive load of using strategies.

The Effect of Digital Game-Based Learning Interventions on Cognitive, Metacognitive, and Affective-Motivational Learning Outcomes in School: A Meta-Analysis
Nathalie Barz, Manuela Benick, Laura Dörrenbächer-Ulrich, F. Perels
Pub Date: 2023-05-09 | DOI: 10.3102/00346543231167795
Digital game-based learning (DGBL) interventions can be superior to traditional instruction methods, but previous meta-analyses covered long time spans and a wide variety of target groups, limiting the transfer of their results to specific populations. The aim of this meta-analysis is therefore a theory-based examination of the effects of DGBL interventions, compared with traditional instruction, on different learning outcomes (cognitive, metacognitive, affective-motivational) in the school context, using studies published between 2015 and 2020 and meta-analytic techniques including moderator analyses. Random-effects models revealed a significant medium effect for overall learning (g = .54) and for cognitive learning outcomes (g = .67), a small effect for affective-motivational learning outcomes (g = .32), and no significant effect for metacognitive learning outcomes. There was no evidence of publication bias, and meta-regression models did not reveal moderating personal, environmental, or confounding factors. The findings partially support the positive impact of DGBL interventions in school, and practical implications are discussed.

The Relation Between Need for Cognition and Academic Achievement: A Meta-Analysis
Qing Liu, J. Nesbit
Pub Date: 2023-03-19 | DOI: 10.3102/00346543231160474
Need for cognition is conceptualized as an individual’s intrinsic motivation to engage in and enjoy effortful cognitive activities. Over the past three decades, there has been increasing interest in how need for cognition impacts and correlates with learning performance. This meta-analysis summarized 136 independent effect sizes (N = 53,258) for the association between need for cognition and academic achievement and investigated the moderating effects of variables related to research context, methodology, and instrumentation. The overall effect size, weighted by inverse variance and using a random-effects model, was found to be small, r = .20, with a 95% confidence interval ranging from .18 to .22. The association between need for cognition and learning performance was moderated by grade level, geographic region, exposure to intervention, and outcome measurement tool. The implications of these findings for practice and future research are discussed.

A Meta-Analysis and Quality Review of Mathematics Interventions Conducted in Informal Learning Environments with Caregivers and Children
Gena Nelson, H. Carter, Peter Boedeker, E. Knowles, Claire Buckmiller, Jessica Eames
Pub Date: 2023-03-19 | DOI: 10.3102/00346543231156182
The purposes of this study were to conduct a meta-analysis and to review the reporting quality of math interventions implemented in informal learning environments (e.g., the home) by children’s caregivers. The meta-analysis included 25 preschool to third-grade math interventions with 83 effect sizes, which yielded a statistically significant summary effect (g = 0.26, 95% CI [0.07, 0.45]) on children’s math achievement. Significant moderators of the treatment effect included the intensity of caregiver training and the type of outcome measure: average effects were larger for interventions whose caregiver training included follow-up support and for outcomes that were comprehensive early numeracy measures. Studies met 58.0% of reporting quality indicators, and analyses revealed that the quality of reporting has improved in recent years. The results of this study offer several recommendations for researchers and practitioners, particularly given the growing evidence base of math interventions conducted in informal learning environments.

How Consistent Are Meanings of “Evidence-Based”? A Comparative Review of 12 Clearinghouses that Rate the Effectiveness of Educational Programs
M. Wadhwa, Jingwen Zheng, Thomas D. Cook
Pub Date: 2023-02-21 | DOI: 10.3102/00346543231152262
Clearinghouses set standards of scientific quality to vet existing research to determine how “evidence-based” an intervention is. This paper examines 12 educational clearinghouses to describe their effectiveness criteria, to estimate how consistently they rate the same program, and to probe why their judgments differ. All the clearinghouses value random assignment, but they differ in how they treat its implementation, how they weight quasi-experiments, and how they value ancillary causal factors like independent replication and persisting effects. A total of 1,359 programs were analyzed across 10 clearinghouses; 83% of them were assessed by a single clearinghouse and, of those rated by more than one, only about 30% received similar ratings. This high level of inconsistency seems to be due mostly to clearinghouses disagreeing about whether a high program rating requires effects that are replicated and/or temporally persisting. Clearinghouses exist to identify “evidence-based” programs, but the inconsistency in their recommendations of the same program suggests that identifying “evidence-based” interventions is still more of a policy aspiration than a reliable research practice.

Marketing and School Choice: A Systematic Literature Review
E. Greaves, Deborah Wilson, A. Nairn
Pub Date: 2023-02-15 | DOI: 10.3102/00346543221141658
School-choice programs may increase schools’ incentives for marketing rather than improving their educational offering. This article systematically reviews the literature on the marketing activities of primary and secondary schools worldwide. The 81 articles reviewed show that schools’ marketing has yet to be tackled by marketing academics or other social scientists outside the education field. Market-oriented U.S. charter schools and their international equivalents have stimulated recent research, but geographical gaps remain, particularly in countries with long-established school-choice policies and in rural areas. Schools deploy a range of marketing techniques, with the intensity of activity directly correlated to the level of local competition and their position in the local hierarchy. Studies have analyzed schools’ use of market scanning, specific words and images in brochures, branding, segmentation, and targeting. These marketing activities are rarely accompanied by substantive curricular change, however, and may even contribute to social division through targeting or deceptive marketing activity.

Deeper than Wordplay: A Systematic Review of Critical Quantitative Approaches in Education Research (2007–2021)
Lolita A. Tabron, Amanda K. Thomas
Pub Date: 2023-02-13 | DOI: 10.3102/00346543221130017 | Vol. 93, pp. 756–786
Although the critical research canon is often associated with qualitative scholars, a growing number of critical scholars are refusing positivist-informed quantitative analyses. However, as more education scholars have engaged in critical approaches to quantitative inquiry, instances of conflation have surfaced: the terms quantitative criticalism, QuantCrit, and critical quantitative are used interchangeably throughout the literature, and even within the same chapter or article. The purpose of our systematic literature review is twofold: (a) to understand how critical approaches to quantitative inquiry emerged as a new paradigm within quantitative methods and (b) to determine whether there is any real distinction between quantitative criticalism, QuantCrit, and critical quantitative inquiries or whether the terms are simply interchangeable wordplay. We show how critical quantitative approaches represent definite shifts within the quantitative research paradigm, highlight relevant assumptions, and share strategies and future directions for applied practice in this emergent field.