Digital Evaluation Stories: A Case Study of Implementation for Monitoring and Evaluation in an Australian Community not-for-Profit
Pub Date: 2023-02-20 | DOI: 10.1177/10982140221138031
Samantha Abbato
The merit of narrative film methods to support participatory approaches and professional development has been increasingly demonstrated by research in several fields, including education. However, the use of digital storytelling and other film methods in evaluation remains largely uncharted territory. This article provides a case study of a digital storytelling evaluation initiative in monitoring and evaluation (M&E) in an Australian community not-for-profit. The aim is to offer practical insights for evaluators and organizations considering digital storytelling and other film narrative methods for participant-centered evaluation. Embedding digital evaluation stories into M&E evolved through collaboration between the external evaluation team and organizational leadership, requiring capacity building in evaluation, digital and qualitative methods, and new systems and processes. Benefits include transformation into a participant-centered evaluation and learning culture. Several challenges are discussed, including the extent of organizational change required, the associated time, energy, and cost, and the positive bias of visual narratives.
{"title":"Digital Evaluation Stories: A Case Study of Implementation for Monitoring and Evaluation in an Australian Community not-for-Profit","authors":"Samantha Abbato","doi":"10.1177/10982140221138031","DOIUrl":"https://doi.org/10.1177/10982140221138031","url":null,"abstract":"The merit of narrative film methods to support participatory approaches and professional development has been increasingly demonstrated by research in several fields and education. However, the use of digital storytelling and other film methods in evaluation remains largely unchartered territory. This article provides a case study of a digital storytelling evaluation initiative in monitoring and evaluation (M&E) in an Australian community not-for-profit. The aim is to offer practical insights for evaluators and organizations considering digital storytelling and other film narrative methods for participant-centered evaluation. Embedding digital evaluation stories into M&E evolved through collaboration between the external evaluation team and organizational leadership, requiring capacity building in evaluation, digital and qualitative methods, and new systems and processes. Benefits include transformation into a participant-centered evaluation and learning culture. Several challenges are discussed, including the extent of organizational change required, the associated time, energy, and cost, and the positive bias of visual narratives.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47377110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Challenges and Lessons Learned Conducting a Formative Evaluation of a Multicomponent Care Delivery Intervention
Pub Date: 2023-02-13 | DOI: 10.1177/10982140221116096
Rosalind E. Keith, Shannon Heitkamp, J. Little, Victoria Peebles, Rumin Sarwar, Dana M. Petersen, A. O'Malley
Formative evaluation provides stakeholders with timely feedback to support an intervention's improvement during implementation to maximize its effectiveness. We describe the qualitative methods that guided one study within a formative evaluation of a multicomponent care delivery intervention. We then describe the challenges and lessons learned that emerged from this study, organizing them by the study's four overarching challenges: (1) addressing multiple research questions, (2) working with a large interdisciplinary team, (3) triangulating qualitative results with quantitative results, and (4) studying implementation in real-world delivery settings. Overall, the evaluation generated important findings to support improvement of the intervention during implementation. We hope that sharing the lessons learned will increase the rigor and efficiency with which formative evaluations of complex care delivery interventions are conducted and the likelihood that they will improve implementation in real time. We also hope the lessons learned will enhance the satisfaction of the researchers working on these evaluations.
{"title":"Challenges and Lessons Learned Conducting a Formative Evaluation of a Multicomponent Care Delivery Intervention","authors":"Rosalind E. Keith, Shannon Heitkamp, J. Little, Victoria Peebles, Rumin Sarwar, Dana M. Petersen, A. O'Malley","doi":"10.1177/10982140221116096","DOIUrl":"https://doi.org/10.1177/10982140221116096","url":null,"abstract":"Formative evaluation provides stakeholders with timely feedback to support an intervention's improvement during implementation to maximize its effectiveness. We describe the qualitative methods that guided one study within a formative evaluation of a multicomponent care delivery intervention. We then describe the challenges and lessons learned that emerged from this study, organizing them by the study's four overarching challenges: (1) addressing multiple research questions, (2) working with a large interdisciplinary team, (3) triangulating qualitative results with quantitative results, and (4) studying implementation in real-world delivery settings. Overall, the evaluation generated important findings to support improvement of the intervention during implementation. We hope that sharing the lessons learned will increase the rigor and efficiency with which formative evaluations of complex care delivery interventions are conducted and the likelihood that they will improve implementation in real time. We also hope the lessons learned will enhance the satisfaction of the researchers working on these evaluations.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48626015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Importance of Context for Determining Causal Mechanisms in Program Evaluation: The Case of Medical Male Circumcision for HIV Prevention Among the Luo in Western Kenya
Pub Date: 2023-02-13 | DOI: 10.1177/10982140211062267 | American Journal of Evaluation 44(1): 221-235
M. Kabare, Jeremy Northcote
The importance of considering wider contexts when evaluating the success or failure of programs has been increasingly acknowledged with the shift towards culturally responsive evaluation. But one of the important advantages of contextual approaches has been mostly overlooked—that they can provide more “realist” evaluations of why programs fail or succeed. The careful identification of causal mechanisms involved in program delivery is important for avoiding spurious conclusions about the effectiveness of programs. Drawing on findings from a mixed-methods study conducted in Western Kenya among the Luo to evaluate the impacts of an HIV prevention program involving voluntary medical male circumcision (VMMC), it is shown that the VMMC program was one of several variables that contributed to the desired outcome, acting not so much as the cause but as a catalyst that accelerated a behavioral change to which the surrounding context was already amenable, and toward which it was already contributing, even before the program was introduced. The need for context evaluations is particularly obvious when programs are part of broader campaigns involving scale-up from one context to another.
{"title":"The Importance of Context for Determining Causal Mechanisms in Program Evaluation: The Case of Medical Male Circumcision for HIV Prevention Among the Luo in Western Kenya","authors":"M. Kabare, Jeremy Northcote","doi":"10.1177/10982140211062267","DOIUrl":"https://doi.org/10.1177/10982140211062267","url":null,"abstract":"The importance of considering wider contexts when evaluating the success or failure of programs has been increasingly acknowledged with the shift towards culturally responsive evaluation. But one of the important advantages of contextual approaches has been mostly overlooked—that they can provide more “realist” evaluations for why programs fail or succeed. The careful identification of causal mechanisms involved in program delivery is important for avoiding spurious conclusions about the effectiveness of programs. Drawing on findings from a mixed-methods study conducted in Western Kenya among the Luo to evaluate the impacts of a HIV prevention program involving voluntary male medical circumcision (VMMC), it is shown that the VMMC program was one of several variables that contributed to the desired outcome, being not so much the cause but a catalyst for accelerating the desired behavioral change that the surrounding context was already amenable to and contributing to, even before the program was introduced. The need for context evaluations is particularly obvious when programs are part of broader campaigns involving scale-up from one context to another.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"221 - 235"},"PeriodicalIF":1.7,"publicationDate":"2023-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45108186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Decolonizing Community Development Evaluation in Rakhine State, Myanmar
Pub Date: 2023-02-13 | DOI: 10.1177/10982140221146140
Leanne M. Kelly, Phyo Pyae Thida (aka Sophia) Htwe
This paper unpacks our efforts as external evaluators to work toward decolonizing our evaluation practice. Undertaking this writing exercise as a form of reflective practice demonstrated that decolonization is much more complex than simply translating materials, organizing locals to collect data, and building participants’ capacity around Western modalities. While this complexity is clear in the decolonization literature, practice-based examples that depict barriers and thought processes are rarely presented. Through this paper, we deconstruct our deeply held beliefs around what constitutes good evaluation to assess the effectiveness of our decolonizing approach. Through sharing our critical consciousness-raising dialoguing, this paper reports our progress thus far and provides information and provocations to support others attempting to decolonize their practice.
{"title":"Decolonizing Community Development Evaluation in Rakhine State, Myanmar","authors":"Leanne M. Kelly, Phyo Pyae Thida (aka Sophia) Htwe","doi":"10.1177/10982140221146140","DOIUrl":"https://doi.org/10.1177/10982140221146140","url":null,"abstract":"This paper unpacks our efforts as external evaluators to work toward decolonizing our evaluation practice. Undertaking this writing exercise as a form of reflective practice demonstrated that decolonization is much more complex than simply translating materials, organizing locals to collect data, and building participants’ capacity around Western modalities. While this complexity is clear in the decolonization literature, practice-based examples that depict barriers and thought processes are rarely presented. Through this paper, we deconstruct our deeply held beliefs around what constitutes good evaluation to assess the effectiveness of our decolonizing approach. Through sharing our critical consciousness-raising dialoguing, this paper reports our progress thus far and provides information and provocations to support others attempting to decolonize their practice.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41593026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cultural Competence: 10-Year Comparison of Program Evaluators’ Perceptions
Pub Date: 2023-02-13 | DOI: 10.1177/10982140221122767
Krystall Dunaway, K. Gardner, Karly Grieve
As part of its Guiding Principles for Evaluators, the American Evaluation Association (AEA) requires that evaluators develop cultural competencies. Using a successive-independent-samples design, the researchers sought to compare perceptions of cultural competence across a 10-year period. Qualitative data were collected via online surveys of 168 program evaluators in 2009 and 110 program evaluators in 2019. Content analysis was used, and content categories were identified and quantified for both data collections. The data reflect that, from 2009 to 2019, there has been an increased recognition of what cultural competence entails and a closer alignment between what the Guiding Principles for Evaluators promote and what evaluators demonstrate. However, the data also indicate that preferences may have evolved past the current cultural competence paradigm, as well as the term “cultural competence” itself. These findings and their implications are discussed in further detail.
{"title":"Cultural Competence: 10-Year Comparison of Program Evaluators’ Perceptions","authors":"Krystall Dunaway, K. Gardner, Karly Grieve","doi":"10.1177/10982140221122767","DOIUrl":"https://doi.org/10.1177/10982140221122767","url":null,"abstract":"As part of its Guiding Principles for Evaluators, the American Evaluation Association (AEA) requires that evaluators develop cultural competencies. Using a successive-independent-samples design, the researchers sought to compare perceptions of cultural competence across a duration of 10 years. Qualitative data were collected via online surveying, which included 168 program evaluators in 2009 and 110 program evaluators in 2019. Content analysis was utilized, and content categories were identified and quantified for both data collections. The data reflect that, from 2009 to 2019, there has been an increased recognition of what cultural competence entails and a closer alignment between what the Guiding Principles for Evaluators promotes and what evaluators demonstrate. However, the data also indicate that perhaps preferences have evolved past the current cultural competence paradigm as well as the term “cultural competence” itself. These findings and implications are discussed in further detail.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49366630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Oral History of Evaluation: The Influence of Edmund Wyatt Gordon on Evaluation
Pub Date: 2023-02-05 | DOI: 10.1177/10982140221148432 | American Journal of Evaluation 44(1): 175-189
M. Mark, R. Hopson, Valerie J. Caracelli, R. Miller
Since 2003, the Oral History Project Team has conducted interviews with individuals who have made substantial contributions to evaluation theory and practice. The previous interviews were conducted with individuals who have a major identification within the field of evaluation and whose professional development has been intertwined with the history of evaluation as a distinct field. Over a similar period, some members of the field of evaluation have worked to highlight more of the field’s history, especially in pointing out the contributions of individuals from traditionally underrepresented groups, including those who were early in addressing how perceptions and realities of race and class affect our programs and their evaluations. This is especially the case in educational evaluation, where a “collective ignorance” about the scholarship of African Americans has sparked efforts to more fully represent voices that can enlighten and enrich our scholarship and our recorded history (e.g., Hood, 2001; Hood & Hopson, 2008). In keeping with this endeavor, the present interview extends the previous scope of the oral history project to celebrate the life and work of Dr. Edmund Wyatt Gordon, a leading intellectual in the field of education. Dr. Gordon is a centenarian who remains actively engaged in research at The Edmund W. Gordon Institute for Urban and Minority Education (IUME) within Teachers College at Columbia University. This center, founded by Dr. Gordon in 1974, was renamed in his honor in 2021 to recognize his contributions to educational justice, equity, and education. Long-time members of the Oral History Project Team (Robin Lin Miller, Melvin M. Mark, Valerie J. Caracelli) along with Rodney K. Hopson conducted three interviews with Dr. Gordon between October 2021 and December 2021. The interview transcripts have been combined and edited for clarity, length, and content. Dr. Gordon reviewed and approved the final product prior to its submission to the American Journal of Evaluation.
{"title":"The Oral History of Evaluation: The Influence of Edmund Wyatt Gordon on Evaluation","authors":"M. Mark, R. Hopson, Valerie J. Caracelli, R. Miller","doi":"10.1177/10982140221148432","DOIUrl":"https://doi.org/10.1177/10982140221148432","url":null,"abstract":"Since 2003, the Oral History Project Team has conducted interviews with individuals who have made substantial contributions to evaluation theory and practice. The previous interviews were conducted with individuals who have a major identification within the field of evaluation and whose professional development has been intertwined with the history of evaluation as a distinct field. Over a similar period some members in the field of evaluation have worked to highlight more of the field’s history, especially in pointing out the contributions of individuals from traditionally underrepresented groups, including those who were early in addressing how perceptions and realities of race and class affect our programs and their evaluations. This is especially the case in educational evaluation, where a “collective ignorance” about the scholarship of African Americans has sparked efforts to more fully represent voices that can enlighten and enrich our scholarship and our recorded history (e.g., Hood, 2001; Hood & Hopson, 2008). In keeping with this endeavor, the present interview extends the previous scope of the oral history project to celebrate the life and work of Dr. Edmund Wyatt Gordon, a leading intellectual in the field of education. Dr. Gordon is a centenarian who remains actively engaged in research at The Edmund W. Gordon Institute for Urban and Minority Education (IUME) within Teachers College at Columbia University. This center, founded by Dr. Gordon in 1974, was renamed in his honor in 2021 to recognize his contributions in educational justice, equity, and education. Long-time members of the Oral History Project Team (Robin Lin Miller, Melvin M. Mark, Valerie J. Caracelli) along with Rodney K. Hopson conducted three interviews with Dr. Gordon between October 2021 and December 2021. The interview transcripts have been combined and edited for clarity, length, and content. Dr. Gordon reviewed and approved the final product prior to its submission to the American Journal of Evaluation.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"175 - 189"},"PeriodicalIF":1.7,"publicationDate":"2023-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42021756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating Collective Impact Initiatives: A Systematic Scoping Review
Pub Date: 2023-02-01 | DOI: 10.1177/10982140221130266 | American Journal of Evaluation 44(1): 406-423
S. Panjwani, Taylor Graves-Boswell, W. Garney, Daenuka Muraleetharan, Mandy N. Spadine, Sara A Flores
Collective impact (CI) is a structured approach that helps drive multi-sector collaborations to address social problems through systems changes. While the CI approach is gaining popularity, practitioners experience challenges in evaluating its implementation and intended outcomes. We conducted a systematic scoping review to understand evaluation methods specific to CI initiatives, identify challenges or limitations with these evaluations, and provide recommendations for the design of CI evaluations. Eighteen studies met the inclusion criteria. Process evaluations were the most frequently used evaluation design. Most studies collected cross-sectional data to evaluate their efforts. The complexity of CI was most frequently cited as the greatest evaluation challenge. Study recommendations primarily focused on improvements during the evaluation planning phase. Careful consideration during the planning of CI evaluations, context-specific data collection methods, and intentional, effective communication of results could help sufficiently capture and assess this systems-level approach to addressing social problems.
{"title":"Evaluating Collective Impact Initiatives: A Systematic Scoping Review","authors":"S. Panjwani, Taylor Graves-Boswell, W. Garney, Daenuka Muraleetharan, Mandy N. Spadine, Sara A Flores","doi":"10.1177/10982140221130266","DOIUrl":"https://doi.org/10.1177/10982140221130266","url":null,"abstract":"Collective impact (CI) is a structured approach that helps drive multi-sector collaborations to address social problems through systems changes. While the CI approach is gaining popularity, practitioners experience challenges in evaluating its implementation and intended outcomes. We conducted a systematic scoping review to understand evaluation methods specific to CI initiatives, identify challenges or limitations with these evaluations, and provide recommendations for the design of CI evaluations. Eighteen studies met the inclusion criteria. Process evaluations were the most frequently used evaluation design. Most studies collected cross-sectional data to evaluate their efforts. The complexity of CI was most frequently cited as the greatest evaluation challenge. Study recommendations primarily focused on improvements during the evaluation planning phase. Taking careful consideration in the planning of CI evaluations, developing context-specific data collection methods, and communicating results intentionally and effectively could prove useful to sufficiently capture and assess this systems-level approach to address social problems.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"406 - 423"},"PeriodicalIF":1.7,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41379577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Outcome Trajectory Evaluation (OTE): An Approach to Tackle Research-for-Development’s Long-Causal-Chain Problem
Pub Date: 2023-01-31 | DOI: 10.1177/10982140221122771 | American Journal of Evaluation 44(1): 335-352
B. Douthwaite, C. Proietti, V. Polar, G. Thiele
This paper develops a novel approach called Outcome Trajectory Evaluation (OTE) in response to the long-causal-chain problem confronting the evaluation of research-for-development (R4D) projects. OTE strives to tackle four issues resulting from the common practice of evaluating R4D projects based on a theory of change developed at the start. The approach was developed iteratively while conducting four evaluations of policy-related outcomes claimed by the CGIAR, a global R4D organization. The first step is to use a middle-range theory (MRT), based on “grand” social science theory, to help delineate and understand the trajectory that generated the set of outcomes being evaluated. The second step is then to identify the project's contribution to that trajectory. Other types of theory-driven evaluation are single step: they model how projects achieve outcomes without first considering the overarching causal mechanism—the outcome trajectory—from which the outcomes emerged. The use of an MRT allowed us to accrue learning from one evaluation to the next.
{"title":"Outcome Trajectory Evaluation (OTE): An Approach to Tackle Research-for-Development’s Long-Causal-Chain Problem","authors":"B. Douthwaite, C. Proietti, V. Polar, G. Thiele","doi":"10.1177/10982140221122771","DOIUrl":"https://doi.org/10.1177/10982140221122771","url":null,"abstract":"This paper develops a novel approach called Outcome Trajectory Evaluation (OTE) in response to the long-causal-chain problem confronting the evaluation of research for development (R4D) projects. OTE strives to tackle four issues resulting from the common practice of evaluating R4D projects based on theory of change developed at the start. The approach was developed iteratively while conducting four evaluations of policy-related outcomes claimed by the CGIAR, a global R4D organization. The first step is to use a middle-range theory (MRT), based on “grand” social science theory, to help delineate and understand the trajectory that generated the set of outcomes being evaluated. The second step is to then identify project contribution to that trajectory. Other types of theory-driven evaluation are single step: they model how projects achieve outcomes without first considering the overarching causal mechanism—the outcome trajectory—from which the outcomes emerged. The use of an MRT allowed us to accrue learning from one evaluation to the next.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"335 - 352"},"PeriodicalIF":1.7,"publicationDate":"2023-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43901607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Mechanistic Rewards of Data and Theory Integration for Theory-Based Evaluation
Pub Date: 2023-01-23 | DOI: 10.1177/10982140221122764
Corrado Matta, J. Lindvall, A. Ryve
In this article, we discuss the methodological implications of data and theory integration for Theory-Based Evaluation (TBE). TBE is a family of approaches to program evaluation that use program theories as instruments to answer questions about whether, how, and why a program works. Some of the groundwork about TBE has expressed the idea that a proper program theory should specify the intervening mechanisms underlying the program outcome. In the present article, we discuss in what way data and theory integration can help evaluators in constructing and refining mechanistic program theories. The paper argues that a mechanism is both a network of entities and activities and a network of counterfactual relations. Furthermore, we argue that although data integration typically provides information about different parts of a program, it is the integration of theory that provides the most important mechanistic insights.
{"title":"The Mechanistic Rewards of Data and Theory Integration for Theory-Based Evaluation","authors":"Corrado Matta, J. Lindvall, A. Ryve","doi":"10.1177/10982140221122764","DOIUrl":"https://doi.org/10.1177/10982140221122764","url":null,"abstract":"In this article, we discuss the methodological implications of data and theory integration for Theory-Based Evaluation (TBE). TBE is a family of approaches to program evaluation that use program theories as instruments to answer questions about whether, how, and why a program works. Some of the groundwork about TBE has expressed the idea that a proper program theory should specify the intervening mechanisms underlying the program outcome. In the present article, we discuss in what way data and theory integration can help evaluators in constructing and refining mechanistic program theories. The paper argues that a mechanism is both a network of entities and activities and a network of counterfactual relations. Furthermore, we argue that although data integration typically provides information about different parts of a program, it is the integration of theory that provides the most important mechanistic insights.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42816075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How Many Cases per Cluster? Operationalizing the Number of Units per Cluster Relative to Minimum Detectable Effects in Two-Level Cluster Randomized Evaluations with Linear Outcomes
Pub Date: 2023-01-23 | DOI: 10.1177/10982140221134618 | American Journal of Evaluation 44(1): 153-168
E. Hedberg
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters, each with constituent individual units of observation (e.g., student units that attend schools, with the schools assigned to treatment). One consideration in these designs is how many units are needed per cluster to achieve adequate statistical power. Typically, researchers state that beyond “about 30 units per cluster,” additional units yield little benefit to statistical precision. To avoid rules of thumb not grounded in statistical theory and practical considerations, and instead provide guidance on this question, the ratio of the minimum detectable effect size (MDES) to the larger MDES obtained with one fewer unit per cluster is related to the key parameters of the cluster randomized design. Formulas for this subsequent difference effect size ratio (SDESR) at a given number of units are provided, as are formulas for finding the number of units that yields an assumed SDESR. In general, the point of diminishing returns occurs at smaller numbers of units for larger values of the intraclass correlation.
{"title":"How Many Cases per Cluster? Operationalizing the Number of Units per Cluster Relative to Minimum Detectable Effects in Two-Level Cluster Randomized Evaluations with Linear Outcomes","authors":"E. Hedberg","doi":"10.1177/10982140221134618","DOIUrl":"https://doi.org/10.1177/10982140221134618","url":null,"abstract":"In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters each with constituent individual units of observations (e.g., student units that attend schools, which are assigned to treatment). One consideration of these designs is how many units are needed per cluster to achieve adequate statistical power. Typically, researchers state that “about 30 units per cluster” is the most that will yield benefit towards statistical precision. To avoid rules of thumb not grounded in statistical theory and practical considerations, and instead provide guidance for this question, the ratio of the minimum detectable effect size (MDES) to the larger MDES with one less unit per cluster is related to the key parameters of the cluster randomized design. Formulas for this subsequent difference effect size ratio (SDESR) at a given number of units are provided, as are formulas for finding the number of units for an assumed SDESR. In general, the point of diminishing returns occurs with smaller numbers of units for larger values of the intraclass correlation.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"153 - 168"},"PeriodicalIF":1.7,"publicationDate":"2023-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42152247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}