Pub Date: 2024-08-01 | Epub Date: 2023-08-23 | DOI: 10.1177/0193841X231197253
Madeline Sands, Robert Aunger
This paper describes a process evaluation of a 'wise' intervention that took place in six acute care units in two medical-surgical teaching hospitals in the United States during 2016-2017. 'Wise' interventions are short, inexpensive interventions that depend on triggering specific psychological mechanisms to achieve behaviour change. This study sought to increase nurses' hand hygiene compliance (HHC) before entering a patient's room. The intervention centred on the use of a threat to professional identity to prompt improved HHC. Through questionnaires administered to intervention participants and the implementation facilitator, together with independent observation of intervention delivery, we examined whether the steps in the Theory of Change occurred as expected. We found that aspects of the implementation (including mode of delivery, use of incentives, and how nurses were recruited and complied with the intervention) affected reach and likely effectiveness. While components of the intervention's mechanisms of impact, such as the element of surprise, were successful, they ultimately did not translate into performance of the target behaviour. Performance was also not affected by use of an implementation intention, as repeated performance of HHC over years of nursing has likely already established well-ingrained practices. Context did have an effect: the safety culture of the units, the involvement of the Nurse Managers, the level of accountability for HHC in each unit, and the hospitals themselves all influenced levels of engagement. These conclusions should have implications for those interested in the applicability of 'wise' interventions and those seeking to improve HHC in hospitals.
Title: Process Evaluation of an Acute-Care Nurse-Centred Hand Hygiene Intervention in US Hospitals. Evaluation Review, pp. 663-691. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11193912/pdf/
Pub Date: 2024-08-01 | Epub Date: 2023-08-07 | DOI: 10.1177/0193841X231193483
Emily Cardon, Leonard Lopoo
Background: While randomized controlled trials (RCTs) are typically considered the gold standard of program evaluation, they are infrequently chosen by public sector leaders, defined as government and nonprofit decision-makers, when an impact evaluation is required. Objectives: This study provides descriptive evidence on RCT aversion among public sector leaders and attempts to understand what factors affect their likelihood of choosing RCTs for impact evaluations. Research Design: The authors ask whether public sector leaders follow preference patterns similar to those found among non-public sector leaders when choosing either an RCT or a quasi-experimental design, and use a survey experiment to determine which factors affect the RCT choice. Subjects: The study sample includes 2050 public sector leaders and a comparison group of 2060 respondents who do not lead public sector organizations. Measures: The primary outcome measure is selecting an RCT as the preferred evaluation option. Results: When asked to make a decision about an impact evaluation, the majority of people do not choose an RCT. While also averse to RCTs, public sector leaders are about 13% more likely than the general population to prefer an RCT to a quasi-experimental evaluation. Public sector leaders are less likely to use RCTs for evaluations of more intense interventions, potentially because such interventions are perceived to be superior to the options available for the control group. Conclusion: Funders should be aware that when given a choice, public sector leaders prefer other options to RCTs. Greater awareness of the benefits of RCTs could increase their use in the public sector.
Title: Randomized Controlled Trial Aversion among Public Sector Leadership: A Survey Experiment. Evaluation Review, pp. 579-609.
Pub Date: 2024-06-01 | Epub Date: 2024-01-31 | DOI: 10.1177/0193841X241228335
Danielle V Handel, Eric A Hanushek
Recent attention to the causal identification of spending impacts provides improved estimates of spending outcomes in a variety of circumstances, but the estimates are substantially different across studies. Half of the variation in estimated funding impact on test scores and over three-quarters of the variation of impacts on school attainment reflect differences in the true parameters across study contexts. Unfortunately, the inability to describe the circumstances underlying effective school spending impedes any attempts to generalize from the extant results to new policy situations. The evidence indicates that how funds are used is crucial to the outcomes, but such factors as targeting of funds or court interventions fail to explain the existing pattern of results.
Title: Contexts of Convenience: Generalizing from Published Evaluations of School Finance Policies. Evaluation Review, pp. 461-494.
Pub Date: 2024-06-01 | Epub Date: 2024-01-23 | DOI: 10.1177/0193841X241227481
Julia H Littell
Systematic reviews and meta-analyses are viewed as potent tools for generalized causal inference. These reviews are routinely used to inform decision makers about expected effects of interventions. However, the logic of generalization from research reviews to diverse policy and practice contexts is not well developed. Building on sampling theory, concerns about epistemic uncertainty, and principles of generalized causal inference, this article presents a pragmatic approach to generalizability assessment for use with systematic reviews and meta-analyses. This approach is applied to two systematic reviews and meta-analyses of effects of "evidence-based" psychosocial interventions for youth and families. Evaluations included in systematic reviews are not necessarily representative of populations and treatments of interest. Generalizability of results is limited by high risks of bias, uncertain estimates, and insufficient descriptive data from impact evaluations. Systematic reviews and meta-analyses can be used to test generalizability claims, explore heterogeneity, and identify potential moderators of effects. These reviews can also produce pooled estimates that are not representative of any larger sets of studies, programs, or people. Further work is needed to improve the conduct and reporting of impact evaluations and systematic reviews, and to develop practical approaches to generalizability assessment and guide applications of interventions in diverse policy and practice contexts.
Title: The Logic of Generalization From Systematic Reviews and Meta-Analyses of Impact Evaluations. Evaluation Review, pp. 427-460.
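The pooling and heterogeneity diagnostics that the abstract above refers to can be illustrated with a minimal random-effects meta-analysis. The sketch below uses the standard DerSimonian-Laird estimator on made-up effect sizes (it is not drawn from the reviews analyzed in the article); tau² summarizes between-study variance and I² the share of observed variation beyond chance.

```python
import numpy as np

def dl_pool(effects, variances):
    """Random-effects pooling via the DerSimonian-Laird estimator.

    Returns the pooled effect, its standard error, the between-study
    variance tau^2, and the I^2 heterogeneity statistic.
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                       # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled mean
    q = np.sum(w * (effects - fixed) ** 2)    # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # DL between-study variance
    w_star = 1.0 / (variances + tau2)         # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0  # variation beyond chance
    return pooled, se, tau2, i2

# Hypothetical effect sizes (standardized mean differences) and variances
effects = [0.10, 0.60, 0.25, 0.80, 0.05]
variances = [0.01, 0.02, 0.01, 0.03, 0.02]
pooled, se, tau2, i2 = dl_pool(effects, variances)
print(f"pooled={pooled:.3f}, SE={se:.3f}, tau^2={tau2:.4f}, I^2={i2:.2f}")
```

A large I² here signals that most of the spread across these hypothetical studies reflects real differences in effects rather than sampling error, which is exactly the situation in which a single pooled estimate may not represent any larger set of studies, programs, or people.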
Pub Date: 2024-06-01 | Epub Date: 2024-02-03 | DOI: 10.1177/0193841X241229885
Rebecca Maynard
This chapter begins with an overview of recent developments that have encouraged and facilitated greater use of research syntheses, including Meta-Analysis, to guide public policy and practice in education, workforce development, and social services. It discusses the role of Meta-Analysis for improving knowledge of the effectiveness of programs, policies, and practices and the applicability and generalizability of that knowledge to conditions other than those represented by the study samples and settings. The chapter concludes with recommendations for improving the potential of Meta-Analysis to accelerate knowledge development through changing how we design, conduct, and report findings of individual studies to maximize their usefulness in Meta-Analysis as well as how we produce and report Meta-Analysis findings. The chapter includes references to resources supporting the recommendations.
Title: Improving the Usefulness and Use of Meta-Analysis to Inform Policy and Practice. Evaluation Review, pp. 515-543. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11003195/pdf/
Pub Date: 2024-06-01 | Epub Date: 2024-01-18 | DOI: 10.1177/0193841X241228332
Tom Ling
Assessing the transferability of lessons from social research or evaluation continues to raise challenges. Efforts to identify transferable lessons can be based on two different forms of argumentation. The first draws upon statistics and causal inferences. The second involves constructing a reasoned case based on weighing up different data collected along the causal chain from design to delivery. Both approaches benefit from designing research based upon existing evidence and ensuring that the descriptions of the programme, context, and intended beneficiaries are sufficiently rich. Identifying transferable lessons should not be thought of as a one-off event but involves contributing to the iterative learning of a scientific community. To understand the circumstances under which findings can be confidently transferred, we need to understand: (1) How far and why outcomes of interest have multiple, interacting and fluctuating causes. (2) The program design and implementation capacity. (3) Prior knowledge and causal landscapes (and how far these are included in the theory of change). (4) New and relevant knowledge; what can we learn in our 'disputatious community of truth seekers'.
Title: Transferability of Lessons From Program Evaluations: Iron Laws, Hiding Hands and the Evidence Ecosystem. Evaluation Review, pp. 410-426.
Pub Date: 2024-06-01 | Epub Date: 2024-02-01 | DOI: 10.1177/0193841X241227480
Burt S Barnow, Sanjay K Pandey, Qian Eric Luo
This paper describes how mixed methods can improve the value and policy relevance of impact evaluations, paying particular attention to how mixed methods can be used to address external validity and generalization issues. We briefly review the literature on the rationales for using mixed methods; provide documentation of the extent to which mixed methods have been used in impact evaluations in recent years; describe how we developed a list of recent impact evaluations using mixed methods and the process used to conduct full-text reviews of these articles; summarize the findings from our analysis of the articles; discuss three exemplars of using mixed methods in impact evaluations; and discuss how mixed methods have been used for studying and improving external validity and potential improvements that could be made in this area. We find that mixed methods are rarely used in impact evaluations, and we believe that increased use of mixed methods would be useful because they can reinforce findings from the quantitative analysis (triangulation), and they can also help us understand the mechanism by which programs have their impacts and the reasons why programs fail.
Title: How Mixed-Methods Research Can Improve the Policy Relevance of Impact Evaluations. Evaluation Review, pp. 495-514.
Pub Date: 2024-04-30 | DOI: 10.1177/0193841x241248864
Pamela R. Buckley, Katie Massey Combs, Karen M. Drewelow, Brittany L. Hubler, Marion Amanda Lain
As evidence-based interventions are scaled, fidelity of implementation (and thus effectiveness) often wanes. Validated fidelity measures can improve researchers' ability to attribute outcomes to the intervention and help practitioners feel more confident in implementing the intervention as intended. We aim to provide a model for the validation of fidelity observation protocols to guide future research studying evidence-based interventions scaled up under real-world conditions. We describe a process to build evidence of validity for items within the Session Review Form, an observational tool measuring fidelity to interactive drug prevention programs such as the Botvin LifeSkills Training program. Following Kane's (2006) assumptions framework, which requires that validity evidence be built across four areas (scoring, generalizability, extrapolation, and decision), confirmatory factor analysis supported the hypothesized two-factor structure measuring quality of delivery (seven items assessing how well the material is implemented) and participant responsiveness (three items evaluating how well the intervention is received), and measurement invariance tests suggested the structure held across grade levels and schools serving different student populations. These findings provide some evidence supporting the extrapolation assumption, though additional research is warranted since a more complete overall depiction of the validity argument is needed to evaluate fidelity measures.
Title: Validity Evidence for an Observational Fidelity Measure to Inform Scale-Up of Evidence-Based Interventions. Evaluation Review, published online.
Pub Date: 2024-04-16 | DOI: 10.1177/0193841x241246833
Bruno Arpino, Silvia Bacci, Leonardo Grilli, Raffaele Guetto, Carla Rampichini
We consider estimating the effect of a treatment on a given outcome measured on subjects tested both before and after treatment assignment in observational studies. A vast literature compares the competing approaches of modelling the post-test score conditionally on the pre-test score versus modelling the difference, namely, the gain score. Our contribution lies in analyzing the merits and drawbacks of the two approaches in a multilevel setting. This is relevant in many fields, such as education, where students are nested within schools. The multilevel structure raises peculiar issues related to contextual effects and the distinction between individual-level and cluster-level treatments. We compare the two approaches through a simulation study. For individual-level treatments, our findings align with existing literature. However, for cluster-level treatments, the scenario is more complex, as the cluster mean of the pre-test score plays a key role. Its reliability crucially depends on the cluster size, leading to potentially unsatisfactory estimators with small clusters.
Title: Conditioning on the Pre-Test versus Gain Score Modelling: Revisiting the Controversy in a Multilevel Setting. Evaluation Review, published online.
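The contrast between the two estimators compared in the abstract above can be sketched with a simple single-level simulation. This is an illustration only, not the article's design: the article's simulations are multilevel and focus on cluster-level treatments, and all parameter values below are arbitrary. Under randomized assignment, both the gain-score estimator and conditioning on the pre-test (ANCOVA) recover the true effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000
true_effect = 0.5
rho = 0.6  # persistence of the pre-test score in the post-test

pre = rng.normal(size=n)
treat = rng.binomial(1, 0.5, size=n)            # randomized assignment
post = rho * pre + true_effect * treat + rng.normal(size=n)

# Gain-score estimator: difference in mean gains between arms
gain = post - pre
gain_est = gain[treat == 1].mean() - gain[treat == 0].mean()

# Conditioning on the pre-test (ANCOVA): regress post on treatment and pre
X = np.column_stack([np.ones(n), treat, pre])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
ancova_est = beta[1]                            # coefficient on treatment

print(f"gain-score estimate: {gain_est:.3f}")
print(f"ANCOVA estimate:     {ancova_est:.3f}")
```

With randomization both estimates land near 0.5; the interesting divergences arise precisely in the observational and multilevel settings the article analyzes, where baseline imbalance and unreliable cluster means of the pre-test come into play.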
Pub Date: 2024-04-01 | Epub Date: 2023-06-06 | DOI: 10.1177/0193841X231181747
Mehmet Akif Destek, İbrahim Halil Oğuz, Nuh Okumuş
The adoption of growth strategies based on foreign trade, especially in the previous century when liberal policies began to dominate, is one of the main reasons for the increase in output and, indirectly, for environmental concerns. At the same time, there are conflicting claims about the environmental effects of liberal policies and thus of globalization. This study analyzes the effects of global collaborations involving 11 transition economies that have completed the transition process on the environmentally sustainable development of these nations. To this end, the effects of financial and trade globalization indices on carbon emissions are investigated, with the de facto and de jure indicator distinctions used to differentiate the consequences of the two types of globalization. In addition, the effects of real GDP, energy efficiency, and use of renewable energy on environmental pollution are examined. The CS-ARDL estimation technique, which allows for cross-sectional dependency among the observed countries, is used to separate the short- and long-run influences of the explanatory variables, and the CCE-MG estimator is used as a robustness check. According to the empirical findings, economic growth and increasing energy intensity raise carbon emissions, while increased renewable energy consumption improves environmental quality. Trade globalization does not have a significant impact on the environment. On the other hand, increases in both the de facto and de jure financial globalization indices raise carbon emissions, with de jure financial globalization causing more environmental damage. The harmful impact of de jure financial globalization on environmental quality suggests that the reduced investment restrictions and international investment agreements of transition countries have been implemented in a manner that facilitates the relocation of investments from pollution-intensive industries to these countries.
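CS-ARDL and CCE-MG are specialized panel estimators that separate short- and long-run effects; as a much simpler stand-in, a two-way fixed-effects regression on a synthetic balanced panel can sketch the kind of emissions-on-globalization specification involved. Everything here is invented for illustration (coefficients, data, and variable names); this is not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic balanced panel: 11 countries x 30 years (illustrative only).
n_c, n_t = 11, 30
country = np.repeat(np.arange(n_c), n_t)
year = np.tile(np.arange(n_t), n_c)

# Hypothetical de facto / de jure financial globalization indices.
fg_defacto = rng.normal(size=n_c * n_t)
fg_dejure = rng.normal(size=n_c * n_t)
co2 = 0.3 * fg_defacto + 0.8 * fg_dejure + rng.normal(0, 0.5, n_c * n_t)

def demean_two_way(v):
    """Within transformation: subtract country means, then year means.
    Exact for a balanced panel."""
    v = v - np.bincount(country, v)[country] / np.bincount(country)[country]
    v = v - np.bincount(year, v)[year] / np.bincount(year)[year]
    return v

# OLS on the demeaned data recovers the slope coefficients.
X = np.column_stack([demean_two_way(fg_defacto), demean_two_way(fg_dejure)])
y = demean_two_way(co2)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # close to [0.3, 0.8]
```

Unlike this sketch, CS-ARDL additionally models dynamics (lagged dependent variables and cross-sectional averages) so that short-run and long-run coefficients can be distinguished, which is why the paper uses it rather than static fixed effects.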
{"title":"Do Trade and Financial Cooperation Improve Environmentally Sustainable Development: A Distinction Between <i>de facto</i> and <i>de jure</i> Globalization.","authors":"Mehmet Akif Destek, İbrahim Halil Oğuz, Nuh Okumuş","doi":"10.1177/0193841X231181747","DOIUrl":"10.1177/0193841X231181747","url":null,"abstract":"<p><p>The adoption of growth strategies based on foreign trade, especially in the previous century when liberal policies began to dominate, is one of the main reasons for the increase in output and, indirectly, for environmental concerns. At the same time, there are conflicting claims about the environmental effects of liberal policies and thus of globalization. This study analyzes the effects of global collaborations involving 11 transition economies that have completed the transition process on the environmentally sustainable development of these nations. To this end, the effects of financial and trade globalization indices on carbon emissions are investigated, with the de facto and de jure indicator distinctions used to differentiate the consequences of the two types of globalization. In addition, the effects of real GDP, energy efficiency, and use of renewable energy on environmental pollution are examined. The CS-ARDL estimation technique, which allows for cross-sectional dependency among the observed countries, is used to separate the short- and long-run influences of the explanatory variables, and the CCE-MG estimator is used as a robustness check. According to the empirical findings, economic growth and increasing energy intensity raise carbon emissions, while increased renewable energy consumption improves environmental quality. Trade globalization does not have a significant impact on the environment. On the other hand, increases in both the de facto and de jure financial globalization indices raise carbon emissions, with de jure financial globalization causing more environmental damage. The harmful impact of de jure financial globalization on environmental quality suggests that the reduced investment restrictions and international investment agreements of transition countries have been implemented in a manner that facilitates the relocation of investments from pollution-intensive industries to these countries.</p>","PeriodicalId":47533,"journal":{"name":"Evaluation Review","volume":" ","pages":"251-273"},"PeriodicalIF":0.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9957122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}