Pub Date: 2022-11-01 | DOI: 10.1177/10982140221134246
R. Woodland
For those of us who teach program evaluation, it can be an exciting prospect to enter the summer with a new textbook to consider for inclusion in our fall courses. I had the good fortune to review Evaluation in Today's World: Respecting Diversity, Improving Quality, and Promoting Usability by Veronica Thomas and Patricia Campbell. It is an accessible, comprehensive, and provocative text that is appropriate for a number of courses typically taught in a program evaluation certificate sequence or other graduate curricula. The book is organized into 16 chapters, each of which includes learning goals and is replete with helpful visuals, case studies, suggested reflection and discussion activities, and commentaries from evaluation scholars. The first half of the book explores the context and foundations of social justice, cultural competence, and program evaluation, while the second half presents specifics for how to conduct socially just evaluation. For helpful reference, the book also includes the American Evaluation Association's Guiding Principles (AEA, 2018) and the Joint Committee on Standards for Educational Evaluation Program Evaluation Standards (Yarbrough et al., 2010), as well as a glossary of all the bolded terms included in the chapters.

In the book, the reader encounters what one would expect to see in standard evaluation textbooks, including the historical evolution of the field, influential scholars, and an overview of types of evaluation. However, what makes this text particularly compelling is that typical evaluation topics are explicated through the lens of social justice. Indeed, the book's title matches its intent. Evaluation in today's world means thinking and doing evaluation in what is unquestionably a racialized society where grave inequities exist and undemocratic relationships persist among people.
The authors situate social justice at the heart of evaluation and assert that “evaluators have an ethical obligation to eliminate, or at least mitigate, racial (and other) biases” in our work (p. 42). They acknowledge that evaluators cannot “solve the racism problem,” but entreat us to “at least elevate this harsh reality in the discourse on the eradication of social problems that derive from a national legacy of structural racism, exploitation, and bigotry,” and warn, “evaluations that ignore these factors obscure the impact of social forces on social problems” (p. 218).
Title: Book Review: Evaluation in Today's World: Respecting Diversity, Improving Quality, and Promoting Usability | American Journal of Evaluation, 44(1), pp. 308–311
Pub Date: 2022-10-22 | DOI: 10.1177/10982140211062884
L. Wingate, Kelly N. Robertson, Michael FitzGerald, Lana J. Rucks, Takara Tsuzaki, C. Clasen, J. Schwob
In this study, we investigated the impact of the evaluation capacity building (ECB) efforts of an organization by examining the evaluation plans included in funding proposals over a 14-year period. Specifically, we sought to determine the degree to which and how evaluation plans in proposals to one National Science Foundation (NSF) program changed over time and the extent to which the organization dedicated to ECB in that program may have influenced those changes. Independent raters used rubrics to assess the presence of six essential evaluation plan elements. Statistically significant correlations indicate that proposal evaluation plans improved over time, with noticeable differences before and after ECB efforts were integrated into the program. The study adds to the limited literature on using artifacts of evaluation practice rather than self-reports to assess ECB impact.
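The study's core move is simple to sketch: score each proposal's evaluation plan with a rubric, then correlate scores with submission year. The following is an illustrative sketch only, with invented data and invented element names; the article's actual rubrics, scales, and statistics are not reproduced here.

```python
# Hypothetical sketch of the approach: rate each proposal's evaluation
# plan on six essential elements, then test whether mean rubric scores
# rise over the 14-year window. All values below are invented.

def pearson_r(xs, ys):
    """Plain Pearson correlation, no external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each proposal: (submission year, mean rubric score across the six
# elements, e.g. questions, design, instruments, analysis, reporting,
# use) -- purely illustrative numbers.
proposals = [(2005, 0.8), (2007, 1.1), (2009, 1.6),
             (2011, 1.9), (2014, 2.2), (2018, 2.6)]

years = [p[0] for p in proposals]
scores = [p[1] for p in proposals]
r = pearson_r(years, scores)
print(f"year-score correlation: r = {r:.2f}")  # strongly positive here
```

A real analysis would also test significance and compare the periods before and after the ECB organization entered the program, which is where the paper locates its evidence of influence.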
Title: Thinking Outside the Self-Report: Using Evaluation Plans to Assess Evaluation Capacity Building | American Journal of Evaluation, 43(1), pp. 515–538
Pub Date: 2022-10-19 | DOI: 10.1177/10982140211017663
J. Altschuld, H. Hung, Yi-Fang Lee
Surveys are frequently employed in needs assessment to collect information about gaps (the needs) between "what is" and "what should be" conditions. Double-scale Likert-type instruments are routinely used for this purpose. Although in accord with the discrepancy definition of need, the quality of such measures is being questioned, to the point of suggesting that the results are not to be trusted. Eight factors supporting that proposition are described, with explanations of how they operate. Literature-based examples are provided for improving double-scale surveys, especially as they relate to attenuating the effects of these factors. Lastly, lessons learned are offered, with a call for more research into this issue in assessing needs.
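The discrepancy definition of need that double-scale instruments operationalize can be stated in one line: need = "what should be" rating minus "what is" rating, item by item. A minimal sketch, with invented items and ratings (nothing here is taken from the article):

```python
# Illustrative sketch of the discrepancy definition of need: each item is
# rated twice on the same Likert scale -- current status ("what is") and
# desired status ("what should be") -- and items are ranked by the gap.

def need_scores(ratings):
    """ratings: {item: (mean_is, mean_should_be)} on a common scale.
    Returns (item, discrepancy) pairs sorted largest need first."""
    gaps = {item: should - is_ for item, (is_, should) in ratings.items()}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical double-scale survey means on a 1-5 scale.
ratings = {
    "staff training":  (2.1, 4.6),
    "data systems":    (3.0, 4.2),
    "client outreach": (3.8, 4.0),
}
for item, gap in need_scores(ratings):
    print(f"{item}: discrepancy = {gap:.1f}")
```

The article's point is precisely that this arithmetic is fragile: response-style and scale-design factors can inflate or deflate the two ratings differently, so the ranking above may not be trustworthy without the improvements the authors describe.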
Title: What Is and What Should Be Needs Assessment Scales: Factors Affecting the Trustworthiness of Results | American Journal of Evaluation, 43(1), pp. 607–619
Pub Date: 2022-09-20 | DOI: 10.1177/10982140211071017
R. Shand, Stephen M. Leach, Fiona M. Hollands, Florence Chang, Yilin Pan, B. Yan, D. Dossett, Samreen Nayyer-Qureshi, Yixin Wang, Laura Head
We assessed whether an adaptation of value-added analysis (VAA) can provide evidence on the relative effectiveness of interventions implemented in a large school district. We analyzed two datasets, one documenting interventions received by underperforming students, and one documenting interventions received by students in schools benefiting from discretionary funds to invest in specific programs. Results from the former dataset identified several interventions that appear to be more or less effective than the average intervention. Results from the second dataset were counterintuitive. We conclude that, under specific conditions, program VAA can provide evidence to help guide district decision-makers to identify outlier interventions and inform decisions about scaling up or disinvesting in such interventions, with the caveat that if those conditions are not met, the results could be misleading.
Title: Program Value-Added: A Feasible Method for Providing Evidence on the Effectiveness of Multiple Programs Implemented Simultaneously in Schools | American Journal of Evaluation, 43(1), pp. 584–606
Pub Date: 2022-09-01 | DOI: 10.1177/10982140211072420
Kim M. Siegal
Social return on investment (SROI), an evaluation method that compares monetized social value generated to costs invested, is in ascendance. Conceptually akin to cost–benefit analysis, it shares some of its challenges; however, these are heightened due to the expressed promise of using SROI to compare programs and inform philanthropic and public investment decisions. In this paper, I describe the landscape of SROI studies to date, including a review of a representative sample of SROI evaluations, which have been vetted by Social Value International. I also draw on the experience of an organization that has used SROI in earnest as a decision-making tool to provide an assessment of both the methods that underpin it and the ways in which it is applied. I conclude by offering some recommendations to consider to get the most value from this evaluation method while avoiding some potential pitfalls.
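The SROI ratio itself is simple arithmetic: monetized social value divided by investment. A minimal sketch with invented figures (the adjustment terms and their values are illustrative assumptions, not from the article), showing why the headline ratio hides judgment calls:

```python
# Sketch of the basic SROI calculation. The deadweight and attribution
# adjustments are where SROI studies diverge most -- exactly the
# comparability problem the article raises. All numbers are invented.

def sroi_ratio(gross_value, deadweight_share, attribution_share, investment):
    """Net value = gross monetized outcomes, minus deadweight (outcomes
    that would have occurred anyway), scaled by the share attributable
    to the program; the ratio divides net value by total investment."""
    net_value = gross_value * (1 - deadweight_share) * attribution_share
    return net_value / investment

# e.g., $500k in claimed gross outcomes, 20% deadweight, 75% attribution,
# on a $100k investment:
ratio = sroi_ratio(500_000, 0.20, 0.75, 100_000)
print(f"SROI = {ratio:.1f}:1")
```

Note that halving the attribution assumption halves the ratio, which is why comparing SROI figures across programs evaluated by different teams is so fraught.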
Title: The Tentative Promise of Social Return on Investment | American Journal of Evaluation, 43(1), pp. 438–457
Pub Date: 2022-09-01 | DOI: 10.1177/10982140221111272
J. Hall, Laura R. Peck
We are excited to present the third issue of volume 43 of the American Journal of Evaluation (AJE). This is the first issue that we have stewarded as Interim Co-Editors-in-Chief. This issue contains six articles and a Method Note. It also includes a section on economic evaluation, with a note from the Section Editor, Brooks Bowden. While each article is distinct in its content and methodological focus, as a collective these articles give practical guidance on how evaluation practice can be more inclusive and strategically modified to address the complexity and social issues in our world. It is our aim to reflect as much of the diversity of the evaluation field as possible in each issue, and we believe this issue offers something for most evaluation scholars and practitioners.

The first article in this issue is authored by Melvin M. Mark, former Editor of AJE. In his article, Mark argues for the necessity of planning for change, as program modifications will inevitably occur. Recognizing that not all program changes can be predetermined, he suggests that evaluators can work with stakeholders to make informed decisions about possible adaptations. Building on these and related arguments, he reviews various forms of program modifications and then offers a range of options for how evaluators can plan for such modifications: a priori planning for potential adaptations. Mark outlines the general steps for a priori planning, providing concrete examples of how evaluators can incorporate these steps into their practice. The practical questions included in this piece will prove helpful for evaluators, along with stakeholders, in generating ideas for possible program adaptations.

In the second article, Jennifer J. Esala, Liz Sweitzer, Craig Higson-Smith, and Kirsten L. Anderson discuss human rights issues in the context of advocacy evaluation in the Global South. These authors highlight a number of urgent issues not adequately covered in the literature on advocacy evaluation in the Global South. Evaluators and others interested in advocacy evaluation in Global South contexts will find this piece particularly informative because it provides a literature review focused on how work
Title: From the Interim Co-Editors: Thinking Inclusively and Strategically to Address the Complexity of Our World | American Journal of Evaluation, 43(1), pp. 312–313
Pub Date: 2022-09-01 | DOI: 10.1177/10982140221126774
Title: Corrigendum to Evaluator Education Curriculum: Which Competencies Ought to be Prioritized in Master's and Doctoral Programs? | American Journal of Evaluation, 43(1), p. 458
Pub Date: 2022-08-29 | DOI: 10.1177/10982140211055643
Anna Planas-Lladó, Xavier Úcar
Empowerment is a concept that has become increasingly used over recent years. However, little research has been undertaken into how empowerment can be evaluated, particularly in the case of young people. The aim of this article is to present an inventory of dimensions and indicators of youth empowerment. The article describes the various phases in the construction and validation of the inventory. These phases were (1) a contrast of the inventory of dimensions and indicators against specialized published writings on youth empowerment; (2) the validation of the resulting inventory by experts; and (3) a contrast with young people through four participatory evaluation processes and six life stories. The tool is scientifically and practically useful and enables the impact of youth empowerment programmes to be evaluated; it also serves to plan and implement socio-educational processes aimed at influencing the empowerment of young people.
Title: Evaluating Youth Empowerment: The Construction and Validation of an Inventory of Dimensions and Indicators | American Journal of Evaluation, advance online publication
Pub Date: 2022-07-12 | DOI: 10.1177/1098214020977692
Kyle Cox, Ben Kelcey
Analysis of the differential treatment effects across targeted subgroups and contexts is a critical objective in many evaluations because it delineates for whom and under what conditions particular programs, therapies or treatments are effective. Unfortunately, it is unclear how to plan efficient and effective evaluations that include these moderated effects when the design includes partial nesting (i.e., disparate grouping structures across treatment conditions). In this study, we develop statistical power formulas to identify requisite sample sizes and guide the planning of evaluations probing moderation under two-level partially nested designs. The results suggest that the power to detect moderation effects in partially nested designs is substantially influenced by sample size, moderation effect size, and moderator variance structure (i.e., varies within groups only or within and between groups). We implement the power formulas in the R-Shiny application PowerUpRShiny and demonstrate their use to plan evaluations.
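The general logic of a power analysis for a moderation (treatment-by-moderator interaction) effect can be illustrated by Monte Carlo simulation. The sketch below is NOT the authors' formulas: it ignores partial nesting entirely and uses a flat four-cell design with a simple difference-in-differences z-test, purely to show how sample size and moderation effect size drive power. All parameter values are invented.

```python
import random
import statistics

def simulate_power(n_per_cell=50, moderation_effect=0.5,
                   sims=2000, seed=1):
    """Crude Monte Carlo power estimate for a treatment-by-moderator
    interaction in a flat (non-nested) 2x2 design: treatment/control
    crossed with a binary moderator, outcomes ~ N(cell mean, 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        # Cell means: treatment effect 0.2 when moderator = 0, and
        # 0.2 + moderation_effect when moderator = 1 (invented values).
        cells = {
            ("c", 0): 0.0, ("c", 1): 0.0,
            ("t", 0): 0.2, ("t", 1): 0.2 + moderation_effect,
        }
        means = {k: statistics.mean(rng.gauss(m, 1)
                                    for _ in range(n_per_cell))
                 for k, m in cells.items()}
        # Interaction estimate = difference-in-differences of cell means.
        est = ((means[("t", 1)] - means[("c", 1)])
               - (means[("t", 0)] - means[("c", 0)]))
        se = (4 / n_per_cell) ** 0.5      # four cells, unit variance
        if abs(est / se) > 1.96:          # two-sided z-test, alpha = .05
            hits += 1
    return hits / sims

power = simulate_power()
print(f"estimated power: {power:.2f}")
```

The paper's contribution is closed-form formulas that handle the harder case, where one arm is grouped and the other is not, so the moderator's variance structure (within-group only vs. within and between groups) enters the calculation; the simulation above only conveys the underlying logic.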
Title: Statistical Power for Detecting Moderation in Partially Nested Designs | American Journal of Evaluation, 44(1), pp. 133–152
Pub Date: 2022-06-16 | DOI: 10.1177/10982140211056913
Rebecca J. Macy, A.L. Eckhardt, Christopher J. Wretman, Ran Hu, Jeongsuk Kim, Xinyi Wang, Cindy Bombeeck
The increasing number of anti-trafficking organizations and growing funding for anti-trafficking services have greatly outpaced evaluative efforts, resulting in critical knowledge gaps; these gaps have been underscored by recent calls for greater evaluation capacity in the anti-trafficking field. In response to these calls, this paper reports on the development and feasibility testing of an evaluation protocol to generate practice-based evidence for an anti-trafficking transitional housing program. Guided by formative evaluation and evaluability frameworks, our practitioner-researcher team had two aims: (1) develop an evaluation protocol, and (2) test the protocol with a feasibility trial. To the best of our knowledge, this is one of only a few reports concerning anti-trafficking housing program evaluations, particularly one with many foreign-national survivors as evaluation participants. In addition to presenting evaluation findings, the team documents the decisions and strategies involved in conceptualizing, designing, and conducting the evaluation to offer approaches for future evaluations.
Title: Developing Evaluation Approaches for an Anti-Human Trafficking Housing Program | American Journal of Evaluation, 43(1), pp. 539–558