Title: Is my opinion important in evaluating lecturers? Students’ perceptions of student evaluations of teaching (SET) and their relationship to SET scores
Pub Date: 2022-02-17 | DOI: 10.1080/13803611.2021.2022318
Noemí Suárez Monzón, Vanessa Gómez Suárez, Diego Gudberto Lara Paredes
ABSTRACT Previous studies have identified a positive relationship between students’ perceptions of student evaluations of teaching (SET) and the grades that students provide in SET, controlling for other bias factors. Research by Spooren and Christiaens in 2017 at the University of Antwerp supported this finding. In this study, the methodology used by Spooren and Christiaens was replicated at the Technological Indoamerica University in Ecuador, in a close conceptual replication. In the replication, 967 undergraduate participants answered the questionnaires used by the original authors. The replication sample was very similar to that of the original study in size, seniority, and gender, but not in the academic disciplines studied. Most of the students agreed that the evaluation was relevant and could improve teaching practices. Results show a statistically significant but small positive relation between perceptions of SET and SET scores (0.20 for the Belgian university and 0.27 for the Ecuadorian university).
{"title":"Is my opinion important in evaluating lecturers? Students’ perceptions of student evaluations of teaching (SET) and their relationship to SET scores","authors":"Noemí Suárez Monzón, Vanessa Gómez Suárez, Diego Gudberto Lara Paredes","doi":"10.1080/13803611.2021.2022318","DOIUrl":"https://doi.org/10.1080/13803611.2021.2022318","url":null,"abstract":"ABSTRACT Previous studies have identified a positive relationship between students’ perceptions of student evaluations of teaching (SET) and the grades that students provide in SET, controlling for other bias factors. The research by Spooren and Christiaens in 2017 at the University of Antwerp supported this finding. In this study, the methodology used by Spooren and Christiaens was replicated at the Technological Indoamerica University in Ecuador, in a close conceptual replication. In the replicated study, 967 undergraduate participants answered the questionnaires used by the original authors. The replication study sample was very similar in size, seniority, and gender to the original study but not in academic disciplines studied. Most of the students agreed that the evaluation was relevant and could improve teaching practices. Results show a statistically significant but small positive relation among perceptions of SET and SET scores (0.20 for the Belgian university and 0.27 for the Ecuadorian university).","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48888494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Effects of early mathematics intervention for low-SES pre-kindergarten and kindergarten students: a replication study
Pub Date: 2022-02-07 | DOI: 10.1080/13803611.2021.2022316
P. Starkey, Alice E. Klein, Ben Clarke, S. Baker, Jaime Thomas
ABSTRACT A socioeconomic status (SES)-related achievement gap in mathematics emerges prior to school entry, and increases in elementary school. This gap makes implementation of demanding mathematics standards (e.g., the Common Core State Standards) an ongoing challenge. Early educational intervention is a strategy for addressing this challenge. A randomised controlled trial was conducted in public American preschools to (1) replicate the efficacy of an intervention, Pre-K Mathematics, for low-SES children, and (2) test the combined impact of this intervention and a Common-Core-aligned kindergarten intervention, Early Learning in Mathematics. Forty-one clusters of pre-kindergarten and kindergarten classrooms, containing a sample of 389 low-SES children from an agricultural region, were randomly assigned to treatment and control conditions. The original impact findings were replicated: Child mathematics outcomes in pre-kindergarten were positive and significant. Gains were maintained in kindergarten. Thus, the gap can be reduced and gains maintained by sustained early intervention.
{"title":"Effects of early mathematics intervention for low-SES pre-kindergarten and kindergarten students: a replication study","authors":"P. Starkey, Alice E. Klein, Ben Clarke, S. Baker, Jaime Thomas","doi":"10.1080/13803611.2021.2022316","DOIUrl":"https://doi.org/10.1080/13803611.2021.2022316","url":null,"abstract":"ABSTRACT A socioeconomic status (SES)-related achievement gap in mathematics emerges prior to school entry, and increases in elementary school. This gap makes implementation of demanding mathematics standards (e.g., the Common Core State Standards) an ongoing challenge. Early educational intervention is a strategy for addressing this challenge. A randomised controlled trial was conducted in public American preschools to (1) replicate the efficacy of an intervention, Pre-K Mathematics, for low-SES children, and (2) test the combined impact of this intervention and a Common-Core-aligned kindergarten intervention, Early Learning in Mathematics. Forty-one clusters of pre-kindergarten and kindergarten classrooms, containing a sample of 389 low-SES children from an agricultural region, were randomly assigned to treatment and control conditions. The original impact findings were replicated: Child mathematics outcomes in pre-kindergarten were positive and significant. Gains were maintained in kindergarten. Thus, the gap can be reduced and gains maintained by sustained early intervention.","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45455738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: The development and validation of the Feedback in Learning Scale (FLS): a replication study
Pub Date: 2022-02-01 | DOI: 10.1080/13803611.2021.2022320
D. Foung, Lucas Kohnke
ABSTRACT Replication studies are uncommon in education, and replications of validation studies are rarer still. This study aimed to replicate, reproduce, and expand the study by Jellicoe and Forsythe, published in 2019, that validated the Feedback in Learning Scale. We followed the original procedures, conducting a full validation process, and found only 87% agreement between our model parameters and those of the original study. The differences arose from the number of factors retained and from the fit indices of alternative models. Fuller details of the methods used in the original study would have helped us better ensure replicability. We also suggest that feedback in higher education (the context for our study) might be more effective if it were less personal and more task-related than workplace feedback (the context from which the Feedback in Learning Scale was derived).
{"title":"The development and validation of the Feedback in Learning Scale (FLS): a replication study","authors":"D. Foung, Lucas Kohnke","doi":"10.1080/13803611.2021.2022320","DOIUrl":"https://doi.org/10.1080/13803611.2021.2022320","url":null,"abstract":"ABSTRACT Replication studies are uncommon in education, and replications of validation studies are rarer. This study aimed to replicate, reproduce, and expand the study by Jellicoe and Forsythe published in 2019 that validated the Feedback in Learning Scale. We followed the original procedures, conducting a full validation process. We found only an 87% agreement between our model parameters and those of the original study. The differences were derived from the number of factors retained and the fit indices of alternative models. Fuller details of the methods used in the original study would have helped us to better ensure replicability. We also suggest that feedback in higher education (the context for our study) might be more effective if it were less personal and more task-related than workplace feedback (the context from which the Feedback in Learning Scale was derived).","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41979180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Conceptual replications, research, and the “what works” agenda in education
Pub Date: 2022-01-31 | DOI: 10.1080/13803611.2021.2022314
K. Morrison
ABSTRACT Conceptual replications have received increased attention in the educational research agenda. This article argues for clarity in, and justification of, the definition, scope, and boundaries of a conceptual replication and what it can and cannot do. It argues for clear justifications when changing components from those of the original study. The article raises issues concerning internal validity and construct validity that arise from the elision of replication with applicability and generalisability in a conceptual replication, and questions how far the “concept” needs, and can obtain, greater separation from context. It indicates limits to the power of conceptual replications to falsify and verify the original study, and argues for greater specificity, precision, accuracy, and attention to contexts, conditions, and causality and their influence on outcomes. Implications are drawn for preparing, planning, conducting, analysing, judging, and reporting “fair” conceptual replications in education, identifying 10 “rules” for a fair conceptual replication.
{"title":"Conceptual replications, research, and the “what works” agenda in education","authors":"K. Morrison","doi":"10.1080/13803611.2021.2022314","DOIUrl":"https://doi.org/10.1080/13803611.2021.2022314","url":null,"abstract":"ABSTRACT Conceptual replications have received increased coverage in the educational research agenda. This article argues for clarity in, and justification of, the definition, scope, and boundaries of a conceptual replication and what it can and cannot do. It argues for clear justifications when changing components from those of the original study. The article raises issues concerning internal validity and construct validity which arise from the elision of replication with applicability and generalisability in a conceptual replication, and questions how far the “concept” needs, and can obtain, greater separation from context. It indicates limits to the power of conceptual replications to falsify and verify the original study, and argues for greater specificity, precision, accuracy, and attention to contexts, conditions, and causality and their influence on outcomes. Implications are drawn for preparing, planning, conducting, analysing, judging, and reporting “fair” conceptual replications in education, identifying 10 “rules” for a fair conceptual replication.","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43433836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Replication studies: an essay in praise of ground-up conceptual replications in the science of learning
Pub Date: 2022-01-31 | DOI: 10.1080/13803611.2021.2022308
John F. Brown
ABSTRACT This paper discusses adapting Churches’ approach to large-scale teacher/researcher conceptual replications of major “science of learning” findings, in order to increase teachers’ engagement with empirical research on the science of learning and to build research networks for gathering such data. The project described here demonstrated the feasibility of teacher-led randomised controlled trials for conceptually replicating the effects of cognitive science findings on learning, as specified by researchers. It also indicated high levels of interest among teachers in applying more of the science of learning in their practice. The approach gave teachers freedom to design interventions, choose research methods, and measure outcomes, even though such freedom is in tension with scientific research that relies on constraining sources of variation. The paper discusses how a balance can be struck between the objectives of teachers and researchers engaged in replicating cognitive science findings, and how to promote teacher engagement in conceptual replication research.
{"title":"Replication studies: an essay in praise of ground-up conceptual replications in the science of learning","authors":"John F. Brown","doi":"10.1080/13803611.2021.2022308","DOIUrl":"https://doi.org/10.1080/13803611.2021.2022308","url":null,"abstract":"ABSTRACT This paper discusses adapting Churches’ approach to large-scale teacher/researcher conceptual replications of major “science of learning” findings, to increase teachers’ engagement with empirical research on, and building research networks for, gathering data on the science of learning. The project here demonstrated the feasibility of teacher-led randomised controlled trials for conceptually replicating the effects of cognitive science on learning, as specified by researchers. It also indicated high levels of interest by teachers in applying more science of learning in their practice. The approach gave freedom to teachers to design interventions, choose research methods, and measure outcomes, even though such freedom would be in tension with some scientific research which relies on constraining the sources of variation. This paper discusses how a balance can be struck between the objectives of teachers and researchers engaged in replicating cognitive science findings, and promoting teacher engagement in conceptual replication research.","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47534372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: The role of analytical variability in secondary data replications: a replication of Kim et al. (2014)
Pub Date: 2022-01-31 | DOI: 10.1080/13803611.2021.2022319
C. Bokhove
ABSTRACT An article by Kim et al. from 2014 examined individual- and school-level variables affecting the information and communication technology (ICT) literacy level of Korean elementary school students, finding differential gender effects. In this secondary data replication, we used data from the 2018 International Computer and Information Literacy Study, focusing on the data from Korea as the main replication. As many characteristics of the original study as possible, such as its variables and analytical strategy, were modelled in the analysis. Additional analyses covered 13 countries and jurisdictions, varied centring techniques for variables, and different treatments of missing data. The replication and analyses were pre-registered via the Open Science Framework. The main analysis did not replicate the main gender finding. It was also clear that, despite the care taken in a rigorous replication, analytical variability still plays a large role in replicating findings, particularly with secondary datasets. We discuss the implications of this for secondary data replications.
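The “varied centring techniques” mentioned in the abstract refer to choices such as grand-mean versus group-mean centring of student-level predictors in a multilevel model. A minimal sketch of the two options (not drawn from Bokhove’s analysis; the data and column names are hypothetical):

```python
# Illustrative only: grand-mean vs group-mean centring of a student-level
# predictor before fitting a multilevel model. Data and names are made up.
import pandas as pd

df = pd.DataFrame({
    "school": ["A", "A", "B", "B", "B"],
    "ses":    [0.2, 0.8, -0.5, 0.1, 0.4],
})

# Grand-mean centring: subtract the overall mean of the predictor.
df["ses_grand"] = df["ses"] - df["ses"].mean()

# Group-mean centring: subtract each school's own mean, isolating
# within-school variation from between-school differences.
df["ses_group"] = df["ses"] - df.groupby("school")["ses"].transform("mean")

print(df)
```

Which centring is used changes what the school-level coefficients mean, which is one way such analytical choices can shift the conclusions of a secondary data replication.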
{"title":"The role of analytical variability in secondary data replications: a replication of Kim et al. (2014)","authors":"C. Bokhove","doi":"10.1080/13803611.2021.2022319","DOIUrl":"https://doi.org/10.1080/13803611.2021.2022319","url":null,"abstract":"ABSTRACT An article by Kim et al. from 2014 examined individual- and school-level variables affecting the information and communication technology (ICT) literacy level of Korean elementary school students, finding differential gender effects. In this secondary data replication, we used data from the 2018 International Computer and Information Literacy Study, focusing on data from Korea as main replication. As many characteristics of the study as possible, such as variables and analytical strategy, were modelled in the analysis. Additional analyses included 13 countries and jurisdictions, varied centring techniques for variables, and missing data treatment. The replication and analyses were pre-registered via the Open Science Framework. The main analysis did not replicate the main gender finding. However, it was also clear that, despite care taken in a rigorous replication, analytical variability still plays a large role in replications of findings, and with secondary datasets. We discuss the implications of this for secondary data replications.","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41841730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: How should educational research respond to the replication “crisis” in the social sciences? Reflections on the papers in the Special Issue
Pub Date: 2022-01-31 | DOI: 10.1080/13803611.2021.2022309
D. Wiliam
For anyone who understands the logic of null-hypothesis significance testing, the so-called “replication crisis” in the behavioural sciences (Bryan et al., 2021) would not have come as much of a surprise. Since the pioneering work of Carlo Bonferroni (1935) – and subsequent work in the 1950s and 1960s by Henry Scheffé (1953), John Tukey (1953/1994), and Olive Jean Dunn (1961) – statisticians have repeatedly pointed out the logically obvious fact that the probability of making a Type I error (mistakenly rejecting the null hypothesis) increases when multiple comparisons are made. And yet, studies in leading psychology and education journals commonly present dozens if not hundreds of comparisons of means, correlations, or other statistics, and then go on to claim that any statistic that has a probability of less than 0.05 is “significant”. However, as Gelman and Loken (2013) point out, even when researchers do not engage in such “fishing expeditions”, if decisions about the analysis are made after the data are collected – “hypothesizing after results are known” or “HARKing” (Kerr, 1998) – then the probability of Type I errors is increased. At each stage in the analysis, the researcher is presented with many choices – what Gelman and Loken call “the garden of forking paths” after a short story by Argentinian author Jorge Luis Borges (1941/1964) – that can profoundly influence the results obtained. Some of these, such as cleaning data or eliminating outliers, seem innocent, but nevertheless, because these decisions are taken after the results are seen, they are inconsistent with the assumptions of null-hypothesis significance testing. Other, more egregious, examples include outcome switching, collecting additional data, or changing the analytical approach when the desired level of statistical significance is not reached. A good example of how these issues play out in practice is provided by Bokhove (2022) in his replication of a study on gender differences in computer literacy, where he found that different, reasonable analytical choices lead to very different conclusions.
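To make the multiple-comparisons point concrete, here is a minimal numerical sketch (not part of Wiliam’s editorial) of how the familywise Type I error rate grows with the number of tests, and of the Bonferroni-style adjustment:

```python
# With m independent tests each run at alpha, the chance of at least one
# false positive is 1 - (1 - alpha)**m; Bonferroni tests each at alpha / m.
alpha, m = 0.05, 20

familywise = 1 - (1 - alpha) ** m        # about 0.64 for 20 comparisons
per_test_threshold = alpha / m           # 0.0025 under a Bonferroni correction

print(f"P(at least one Type I error across {m} tests) = {familywise:.2f}")
print(f"Bonferroni per-test threshold = {per_test_threshold:.4f}")
```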
{"title":"How should educational research respond to the replication “crisis” in the social sciences? Reflections on the papers in the Special Issue","authors":"D. Wiliam","doi":"10.1080/13803611.2021.2022309","DOIUrl":"https://doi.org/10.1080/13803611.2021.2022309","url":null,"abstract":"For anyone who understands the logic of null-hypothesis significance testing, the so-called “replication crisis” in the behavioural sciences (Bryan et al., 2021) would not have come as much of a surprise. Since the pioneering work of Carlo Bonferroni (1935) – and subsequent work in the 1950s by Henry Scheffé (1953), John Tukey (1953/1994), and Olive Jean Dunn (1961) – statisticians have repeatedly pointed out the logically obvious fact that the probability of making a Type I error (mistakenly rejecting the null hypothesis) increases when multiple comparisons are made. And yet, studies in leading psychology and education journals commonly present dozens if not hundreds of comparisons of means, correlations, or other statistics, and then go on to claim that any statistic that has a probability of less than 0.05 is “significant”. However, as Gelman and Loken (2013) point out, even when researchers do not engage in such “fishing expeditions”, if decisions about the analysis are made after the data are collected – “hypothesizing after results are known” or “HARKing” (Kerr, 1998) – then the probability of Type 1 errors is increased. At each stage in the analysis, the researcher is presented with many choices – what Gelman and Loken call “the garden of forking paths” after a short story by Argentinian author Jorge Luis (Borges, 1941/1964) – that can profoundly influence the results obtained. Some of these, such as cleaning data, or eliminating outliers, seem innocent, but nevertheless, because these decisions are taken after the results are seen, they are inconsistent with the assumptions of nullhypothesis significance testing. Other, more egregious, examples include outcome switching, collecting additional data, or changing the analytical approach when the desired level of statistical significance is not reached. A good example of how these issues play out in practice is provided by Bokhove (2022) in his replication of a study on gender differences in computer literacy, where he found that different, reasonable, analytical choices lead to very different conclusions.","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43997167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Replication research in education: a guide to designing, conducting, and analysing studies
Pub Date: 2022-01-31 | DOI: 10.1080/13803611.2021.2022312
A. Davis, Matthew C. Makel
In an emerging research space, it is a tall order to compile a litany of literature, synthesise complex concepts, and incorporate rapidly evolving perspectives. In Replication Research in Education: A Guide to Designing, Conducting, and Analysing Studies, Keith Morrison makes this daunting task doable. This textbook unpacks replication research at a high level while diving deeper into what one might consider when designing one’s own study. Along the way, the reader will find useful bullet-point summaries at the chapter openings, meditative reminders of important terms in the middle, and handy tables and figures worth bookmarking throughout. The nomenclature for “replication” is dense. As Morrison describes, “replication” is an ambiguous term, stemming from its Latin root – replicare – meaning to repeat, to unroll, or to fold back. More circularly, replication:
{"title":"Replication research in education: a guide to designing, conducting, and analysing studies","authors":"A. Davis, Matthew C. Makel","doi":"10.1080/13803611.2021.2022312","DOIUrl":"https://doi.org/10.1080/13803611.2021.2022312","url":null,"abstract":"In an emerging research space, it is a tall order to compile a litany of literature, synthesise complex concepts, and incorporate rapidly evolving perspectives. In Keith Morrison’s Replication Research in Education: A Guide to Designing, Conducting, and Analysing Studies, he makes this daunting task doable. This textbook unpacks replication research at a high level while diving deeper into what one might consider when designing their own study. Along the way, the reader will find useful bullet point summaries at the chapter openings, meditative reminders of important terms in the middle, as well as handy tables and figures worth bookmarking throughout. The nomenclature for “replication” is dense. As Morrison describes, “replication” is an ambiguous term, stemming from its Latin roots – replicare –meaning to repeat, to unroll, or to fold back. More circularly, replication:","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44817154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Replication research in education: is the tide turning?
Pub Date: 2022-01-31 | DOI: 10.1080/13803611.2021.2022311
Rashida Banerjee
The education community must aim to conduct, encourage, and support rigorous research that is transparent, actionable, and focused on consequential outcomes. The ultimate goal of an educational scientific endeavour is to provide empirical evidence to improve education practice and policy, and to share that evidence in a way that can be used by educators, families, policymakers, researchers, and the community. Reichow (2016) operationalised evidence-based practice as a five-step process involving:
{"title":"Replication research in education: is the tide turning?","authors":"Rashida Banerjee","doi":"10.1080/13803611.2021.2022311","DOIUrl":"https://doi.org/10.1080/13803611.2021.2022311","url":null,"abstract":"The education community must aim to conduct, encourage, and support rigorous research that is transparent, actionable, and focused on consequential outcomes. The ultimate goal of an educational scientific endeavour is to provide empirical evidence to improve education practice and policy and share that evidence in a way that can be used by educators, families, policymakers, researchers, and the community. Reichow (2016) operationalised evidence-based practice as a five-step process involving:","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42471575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Promoting conceptual replications in educational research
Pub Date: 2022-01-31 | DOI: 10.1080/13803611.2021.2022313
Jinfa Cai
Despite the abundant and frequent calls for replication studies from research communities (e.g., Shavelson & Towne, 2002) and funding agencies (e.g., Institute of Education Sciences [IES] & National Science Foundation [NSF], 2013), the number of such studies remains stubbornly small. For example, in an analysis of all articles published since 1900 in the top-10 psychological journals, Makel et al. (2012) found that less than 1% were replication studies. Moreover, from the top 100 education journals, as ranked by a 5-year impact factor, Makel and Plucker (2014) found that only 0.13% of articles were replication studies, with most successful replications being authored by the same individuals who had carried out the initial studies. Among all research articles published in the Journal for Research in Mathematics Education (JRME) from its inception in 1970 through 2016, only about 3% clearly intended to replicate prior studies (Cai et al., 2018). And, at the IES in the United States, the majority of funded grant applications have not explicitly stated an intent to conduct a replication (Chhin et al., 2018). There are many reasons for the limited number of published replication studies. One major reason is a lack of clarity with respect to the nature of replications and their significance. There has, however, been increasing recognition of the importance of replication studies. Replication studies provide new knowledge and can help researchers, practitioners, and policymakers gain insights about which interventions improve (or do not improve) education outcomes, for whom, and under what conditions (Cai et al., 2018; NSF & IES, 2018). Perry et al. (2022) found that despite their small number, the rate of replication studies in education has gradually increased from 2011 to 2020. Fortunately, some journals have published special issues on replication studies, such as this special issue of Educational Research and Evaluation (ERE) and that in JRME (Cai et al., 2018). In addition, funding agencies such as the NSF and IES in the United States have explicitly called for grant proposals for replication studies (NSF & IES, 2018).
{"title":"Promoting conceptual replications in educational research","authors":"Jinfa Cai","doi":"10.1080/13803611.2021.2022313","DOIUrl":"https://doi.org/10.1080/13803611.2021.2022313","url":null,"abstract":"Despite the abundant and frequent calls for replication studies from research communities (e.g., Shavelson & Towne, 2002) and funding agencies (e.g., Institute of Education Sciences [IES] & National Science Foundation [NSF], 2013), the number of such studies remains stubbornly small. For example, in an analysis of all articles published since 1900 in the top-10 psychological journals, Makel et al. (2012) found that less than 1% were replication studies. Moreover, from the top100 education journals, as ranked by a 5-year impact factor, Makel and Plucker (2014) found that only 0.13% of articles were replication studies, with most successful replications being authored by the same individuals who had carried out the initial studies. Among all research articles published in the Journal for Research in Mathematics Education (JRME) from its inception in 1970 through 2016, only about 3% clearly intended to replicate prior studies (Cai et al., 2018). And, at the IES in the United States, the majority of funded grant applications have not explicitly stated an intent to conduct a replication (Chhin et al., 2018). There are many reasons for the limited number of published replication studies. One major reason is a lack of clarity with respect to the nature of replications and their significance. There has, however, been increasing recognition of the importance of replication studies. Replication studies provide new knowledge and can help researchers, practitioners, and policymakers gain insights about which interventions improve (or do not improve) education outcomes, for whom, and under what conditions (Cai et al., 2018; NSF & IES, 2018). Perry et al. (2022) found that despite their small number, the rate of replication studies in education has gradually increased from 2011 to 2020. Fortunately, some journals have published special issues on replication studies, such as this special issue of Educational Research and Evaluation (ERE) and that in JRME (Cai et al., 2018). In addition, funding agencies such as the NSF and IES in the United States have explicitly called for grant proposals for replication studies (NSF & IES, 2018).","PeriodicalId":47025,"journal":{"name":"Educational Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42392375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}