Transforming the Paradigm for LGBTQ+ Evaluation: Advancing a Praxis of LGBTQ+ Inclusion and Liberation in Evaluation
Pub Date: 2022-06-09 | DOI: 10.1177/10982140211067206 | American Journal of Evaluation, 44, 7–28
Gregory Phillips, D. Felt, Esrea Pérez-Bill, Megan M. Ruprecht, Erik Elías Glenn, Peter Lindeman, R. Miller
Lesbian, gay, bisexual, transgender, queer, intersex, Two-Spirit, and other sexual and gender minority (LGBTQ+) individuals encounter numerous obstacles to equity across health and healthcare, education, housing, employment, and other domains. Such barriers are even greater for LGBTQ+ individuals who are also Black, Indigenous, and People of Color (BIPOC), as well as those who are disabled, and those who are working-class, poor, and otherwise economically disadvantaged, among other intersecting forms of oppression. Given this, an evaluation cannot be equitable for LGBTQ+ people without meaningfully including our experiences and voices. Unfortunately, all evidence indicates that evaluation has systematically failed to recognize the presence and value of LGBTQ+ populations. Thus, we propose critical action steps and the articulation of a new paradigm of LGBTQ+ Evaluation. Our recommendations are grounded in transformative, equitable, culturally responsive, and decolonial frameworks, as well as our own experiences as LGBTQ+ evaluators and accomplices. We conclude by inviting others to participate in the articulation and enactment of this new paradigm.
{"title":"Transforming the Paradigm for LGBTQ+ Evaluation: Advancing a Praxis of LGBTQ+ Inclusion and Liberation in Evaluation","authors":"Gregory Phillips, D. Felt, Esrea Pérez-Bill, Megan M. Ruprecht, Erik Elías Glenn, Peter Lindeman, R. Miller","doi":"10.1177/10982140211067206","DOIUrl":"https://doi.org/10.1177/10982140211067206","url":null,"abstract":"Lesbian, gay, bisexual, transgender, queer, intersex, Two-Spirit, and other sexual and gender minority (LGBTQ+) individuals encounter numerous obstacles to equity across health and healthcare, education, housing, employment, and other domains. Such barriers are even greater for LGBTQ+ individuals who are also Black, Indigenous, and People of Color (BIPOC), as well as those who are disabled, and those who are working-class, poor, and otherwise economically disadvantaged, among other intersecting forms of oppression. Given this, an evaluation cannot be equitable for LGBTQ+ people without meaningfully including our experiences and voices. Unfortunately, all evidence indicates that evaluation has systematically failed to recognize the presence and value of LGBTQ+ populations. Thus, we propose critical action steps and the articulation of a new paradigm of LGBTQ+ Evaluation. Our recommendations are grounded in transformative, equitable, culturally responsive, and decolonial frameworks, as well as our own experiences as LGBTQ+ evaluators and accomplices. We conclude by inviting others to participate in the articulation and enactment of this new paradigm.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"7 - 28"},"PeriodicalIF":1.7,"publicationDate":"2022-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43246592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Systematic Review of Meta-Evaluations: Lessons for Evaluation and Impact Analysis
Pub Date: 2022-06-09 | DOI: 10.1177/10982140211018276 | American Journal of Evaluation, 43, 394–411
J. Qian-Khoo, K. Hiruy, Rebecca Hutton, Jo Barraket
Impact evaluation and measurement are highly complex and can pose challenges for both social impact providers and funders. Measuring the impact of social interventions requires the continuous exploration and improvement of evaluation approaches and tools. This article explores the available evidence on meta-evaluation—the “evaluation of evaluations”—as an analytical tool for improving impact evaluation and analysis in practice. It presents a systematic review of 15 meta-evaluations with an impact evaluation/analysis component. These studies, taken from both the scholarly and gray literature, were analyzed thematically, yielding insights about the potential contribution of meta-evaluation in improving the methodological rigor of impact evaluation and organizational learning among practitioners. To conclude, we suggest that meta-evaluation is a viable way of examining impact evaluations used in the broader social sector, particularly market-based social interventions.
{"title":"A Systematic Review of Meta-Evaluations: Lessons for Evaluation and Impact Analysis","authors":"J. Qian-Khoo, K. Hiruy, Rebecca Hutton, Jo Barraket","doi":"10.1177/10982140211018276","DOIUrl":"https://doi.org/10.1177/10982140211018276","url":null,"abstract":"Impact evaluation and measurement are highly complex and can pose challenges for both social impact providers and funders. Measuring the impact of social interventions requires the continuous exploration and improvement of evaluation approaches and tools. This article explores the available evidence on meta-evaluation—the “evaluation of evaluations”—as an analytical tool for improving impact evaluation and analysis in practice. It presents a systematic review of 15 meta-evaluations with an impact evaluation/analysis component. These studies, taken from both the scholarly and gray literature, were analyzed thematically, yielding insights about the potential contribution of meta-evaluation in improving the methodological rigor of impact evaluation and organizational learning among practitioners. To conclude, we suggest that meta-evaluation is a viable way of examining impact evaluations used in the broader social sector, particularly market-based social interventions.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"43 1","pages":"394 - 411"},"PeriodicalIF":1.7,"publicationDate":"2022-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43935959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From the Interim Co-Editors
Pub Date: 2022-06-01 | DOI: 10.1177/10982140221098626 | American Journal of Evaluation, 43, 156–157
J. Hall, Laura R. Peck
After the untimely passing of the American Journal of Evaluation’s (AJE’s) Editor-in-Chief, George Julnes, the two of us—Jori Hall and Laura Peck—agreed to step in and serve as Interim Co-Editors-in-Chief while the American Evaluation Association (AEA) secured a new, permanent lead for our journal. We are grateful to George and to Rachael Lawrence, AJE’s most recent Managing Editor, for ushering the six articles and the teaching note that appear in this issue through the publication process. The articles reflect the diversity of the field of program evaluation, with attention to the concept of the counterfactual (Reichardt), evaluation policy (Kinarsky & Christie), program dosage in evaluation (Hewawitharana et al.), policy advocacy (Albert et al.), evaluation capacity (Hudib & Cousins), qualitative data collection and analysis (LaChenaye & McCarthy), and evaluation competencies (Montrosse-Moorhead et al.). In addition, this issue presents an “In Memoriam” section, dedicated to and reflecting on the scholarly life of George Julnes.

In our experiences with George, we believed him to be an exceptional editor for our journal because of his passion for pushing the boundaries of the field of program evaluation. He truly valued the diversity of approaches and perspectives that operate in our field and aimed to ensure that all of those approaches and perspectives earned attention in our journal. It is for that reason that he established, for example, the Experimental Methodology Section (which Laura edits) and the Economic Evaluation Section (which Brooks Bowden edits); and that he reconceptualized the Ethics, Values and Culture Section (formerly known as professional values and ethics, which Jill Anne Chouinard and Fiona Cram edit). He diversified the journal’s editorial team to ensure global representation of the many varied parts of our field, as represented by the inclusion of the International Developments Section (which Deborah Rugg and Zenda Ofir edit) and the appointment of Apollo Nkwake as an Associate Editor to bring more attention to evaluation scholars and practitioners in the Global South. George maintained the Method Note (which Tarek Azzam and Dana Wanzer edit), Teaching and Learning (which Anne Vo and Phung Pham edit), and Book Review (which Leslie Cooksy edits) Sections as having ongoing importance. In addition to Jori, Leah Neubauer, Gregory Phillips, II, and Justus Randolph served as Associate Editors with George, and we have been grateful for their continued service during this transition.

In his first “From the Editor” note, kicking off volume 40, George stated his aspirations for the journal. He desired AJE to be “(1) a top source for the most important and relevant information for members of the evaluation community and (2) an influential voice representing the expertise and values of evaluators in policy discussions that affect the evaluation community” (Julnes 2019, 158). It is our hope—during our time as Interim Co-Editors-in-Chief—[…]
{"title":"From the Interim Co-Editors","authors":"J. Hall, Laura R. Peck","doi":"10.1177/10982140221098626","DOIUrl":"https://doi.org/10.1177/10982140221098626","url":null,"abstract":"After the untimely passing of the American Journal of Evaluation’s (AJE’s) Editor-in-Chief, George Julnes, the two of us—Jori Hall and Laura Peck—agreed to step in and serve as Interim Co-Editors-in-Chief while the American Evaluation Association (AEA) secured a new, permanent lead for our journal. We are grateful to George and Rachael Lawrence, AJE’s most recent Managing Editor, for ushering through the publication process of the six articles and the teaching note that appear in this issue. The articles reflect the diversity of the field of program evaluation with attention to the concept of the counterfactual (Reichardt), evaluation policy (Kinarsky & Christie), program dosage in evaluation (Hewawitharana et al.), policy advocacy (Albert et al.), evaluation capacity (Hudib & Cousins), qualitative data collection and analysis (LaChenaye & McCarthy), and evaluation competencies (Montrosse-Moorhead et al.). In addition, this issue presents an “In Memoriam” section, dedicated to and reflecting on the scholarly life of George Julnes. In our experiences with George, we believed him to be an exceptional editor for our journal because of his passion for pushing the boundaries of the field of program evaluation. He truly valued the diversity of approaches and perspectives that operate in our field and aimed to ensure that all of those approaches and perspectives earned attention in our journal. It is for that reason that he established, for example, the Experimental Methodology Section (which Laura edits) and the Economic Evaluation Section (which Brooks Bowden edits); and that he reconceptualized the Ethics, Values and Culture Section (formerly known as professional values and ethics, which Jill Anne Chouinard and Fiona Cram edit). He diversified the journal’s editorial team to ensure global representation of the many varied parts of our field, as represented by the inclusion of the International Developments Section (which Deborah Rugg and Zenda Ofir edit) and the appointment of Apollo Nkwake as an Associate Editor to bring more attention to evaluation scholars and practitioners in the Global South. George maintained the Method Note (which Tarek Azzam and Dana Wanzer edit), Teaching and Learning (which Anne Vo and Phung Pham edit), and Book Review (which Leslie Cooksy edits) Sections as having ongoing importance. In addition to Jori, Leah Neubauer, Gregory Phillips, II and Justus Randolph served as Associate Editors with George, and we have been grateful for their continued service during this transition. Upon his first “From the Editor” note, kicking off volume 40, George stated his aspirations for the journal. He desired AJE to be “(1) a top source for the most important and relevant information for members of the evaluation community and (2) an influential voice representing the expertise and values of evaluators in policy discussions that affect the evaluation community” (Julnes 2019, 158). 
It is our hope—during our time as Interim Co-Editors-in-C","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"43 1","pages":"156 - 157"},"PeriodicalIF":1.7,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47802016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conversations with George
Pub Date: 2022-06-01 | DOI: 10.1177/10982140221078750 | American Journal of Evaluation, 43, 295–297
S. Rallis
George Julnes has long been a friend and colleague to whom I often turned for provocative, engaging, theoretically grounded, complex (albeit convoluted at times), and fun conversations. I could count on George to question any point and to offer alternative perspectives. Thus, when I became editor of the American Journal of Evaluation (AJE) in 2014, I asked George to serve as editor of the section that we decided to title Professional Ethics and Values. A few words from what George wrote introducing the section in December 2014 illustrate his ability to raise critical questions:
{"title":"Conversations with George","authors":"S. Rallis","doi":"10.1177/10982140221078750","DOIUrl":"https://doi.org/10.1177/10982140221078750","url":null,"abstract":"George Julnes has long been a friend and colleague to whom I often turned for provocative, engaging, theoretically grounded, complex (albeit convoluted at times), and fun conversations. I could count on George to question any point and to offer alternative perspectives. Thus, when I became editor of the American Journal of Evaluation (AJE) in 2014, I asked George to serve as editor of the section that we decided to title Professional Ethics and Values. A few words from what George wrote introducing the section in December 2014 illustrate his ability to raise critical questions:","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"43 1","pages":"295 - 297"},"PeriodicalIF":1.7,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48119815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remembering George Julnes: My Friend, Colleague, and Pragmatic Theorist
Pub Date: 2022-06-01 | DOI: 10.1177/10982140221078747 | American Journal of Evaluation, 43, 301–303
D. Rog
{"title":"Remembering George Julnes: My Friend, Colleague, and Pragmatic Theorist","authors":"D. Rog","doi":"10.1177/10982140221078747","DOIUrl":"https://doi.org/10.1177/10982140221078747","url":null,"abstract":"","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"43 1","pages":"301 - 303"},"PeriodicalIF":1.7,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44903239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Teaching Evaluation Through Community-Engaged Learning Courses
Pub Date: 2022-05-09 | DOI: 10.1177/10982140221100448 | American Journal of Evaluation, 44, 270–281
Corrie B. Whitmore
This paper describes a framework for educating future evaluators and users of evaluation through community-engaged, experiential learning courses and offers practical guidance about how such a class can be structured. This approach is illustrated via a reflective case narrative describing how an introductory, undergraduate class at a mid-size, public university in the northwest partnered with a community agency. In the class, students learned and practiced evaluation principles in the context of a Parents as Teachers home visiting program, actively engaged in course assignments designed to support the program's evaluation needs, and presented meta-evaluative findings and recommendations for future evaluation work to the community partner to conclude the semester. This community-engaged approach to teaching evaluation anchors student learning in an applied context, promotes social engagement, and enables students to contribute to knowledge about effective human action, as outlined in the American Evaluation Association's Mission.
{"title":"Teaching Evaluation Through Community-Engaged Learning Courses","authors":"Corrie B. Whitmore","doi":"10.1177/10982140221100448","DOIUrl":"https://doi.org/10.1177/10982140221100448","url":null,"abstract":"This paper describes a framework for educating future evaluators and users of evaluation through community-engaged, experiential learning courses and offers practical guidance about how such a class can be structured. This approach is illustrated via a reflective case narrative describing how an introductory, undergraduate class at a mid-size, public university in the northwest partnered with a community agency. In the class, students learned and practiced evaluation principles in the context of a Parents as Teachers home visiting program, actively engaged in course assignments designed to support the program's evaluation needs, and presented meta-evaluative findings and recommendations for future evaluation work to the community partner to conclude the semester. This community-engaged approach to teaching evaluation anchors student learning in an applied context, promotes social engagement, and enables students to contribute to knowledge about effective human action, as outlined in the American Evaluation Association's Mission.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"270 - 281"},"PeriodicalIF":1.7,"publicationDate":"2022-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43901136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Tribute to George Julnes from a Devoted Mentee
Pub Date: 2022-04-19 | DOI: 10.1177/10982140221079190 | American Journal of Evaluation, 43, 304–305
J. Randolph
In this tribute, I describe my wonderful experience having George Julnes as a long-time evaluation mentor and I pass on some of the sage wisdom that he passed on to me.
{"title":"A Tribute to George Julnes from a Devoted Mentee","authors":"J. Randolph","doi":"10.1177/10982140221079190","DOIUrl":"https://doi.org/10.1177/10982140221079190","url":null,"abstract":"In this tribute, I describe my wonderful experience having George Julnes as a long-time evaluation mentor and I pass on some of the sage wisdom that he passed on to me.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"43 1","pages":"304 - 305"},"PeriodicalIF":1.7,"publicationDate":"2022-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44685947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
George Julnes: Scholar of Evaluation and of Life
Pub Date: 2022-04-19 | DOI: 10.1177/10982140221078753 | American Journal of Evaluation, 43, 293–294
M. Mark
{"title":"George Julnes: Scholar of Evaluation and of Life","authors":"M. Mark","doi":"10.1177/10982140221078753","DOIUrl":"https://doi.org/10.1177/10982140221078753","url":null,"abstract":"","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"43 1","pages":"293 - 294"},"PeriodicalIF":1.7,"publicationDate":"2022-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42366335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Letter From the Interim Editor
Pub Date: 2022-03-01 | DOI: 10.1177/10982140221075078 | American Journal of Evaluation, 43, 4
Rachael B. Lawrence
{"title":"A Letter From the Interim Editor","authors":"Rachael B. Lawrence","doi":"10.1177/10982140221075078","DOIUrl":"https://doi.org/10.1177/10982140221075078","url":null,"abstract":"","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"43 1","pages":"4 - 4"},"PeriodicalIF":1.7,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48411838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Section Editor’s Note: Insights into the Generalizability of Findings from Experimental Evaluations
Pub Date: 2022-03-01 | DOI: 10.1177/10982140221075092 | American Journal of Evaluation, 43, 66–69
Laura R. Peck
As noted in my Editor’s Note to the Experimental Methodology Section of the American Journal of Evaluation’s (2020) Volume 40, Issue 4, experimental evaluations—where research units, such as people, schools, classrooms, and neighborhoods, are randomly assigned to a program or to a control group—are often criticized for having limited external validity. In evaluation parlance, external validity refers to the ability to generalize results to other people, places, contexts, or times beyond those on which the evaluation focused. Evaluations—whether using an experimental design or not—are commonly conducted in a single site or a selected set of sites, either because that site is of particular interest or for convenience. Those special circumstances can mean that those sites—or the people within them—are not representative of a broader population of interest. In turn, the evaluation results may be useful only for assessing those people and places and not for predicting how a similar intervention might generate similar results for other people in other places. The good news, however, is that research and design innovations over the past several years have focused on how to overcome this criticism, making experimental evaluations’ results more useful for informing policy and program decisions (e.g., Bell & Stuart, 2016; Tipton & Olsen, 2018).

Efforts to improve the external validity of experiments fall into two camps: design and analysis. Improving external validity through design means explicitly engaging a sample that is representative of a clearly identified target population. Although doing so is not common, particularly at the national level, some experiments have been successful at engaging a representative set of sites. The U.S. Department of Labor’s National Job Corps Study (e.g., Schochet, Burghardt & McConnell, 2006), the U.S. Department of Health and Human Services’ Head Start Impact Study (Puma et al., 2010), and the U.S. Social Security Administration’s Benefit Offset National Evaluation (Gubits et al., 2018) are three major evaluations that successfully recruited a nationally representative sample so that the evaluation results would be nationally generalizable.

A simple random selection of sites is the most straightforward way to ensure this representativeness and the generalizability of an evaluation’s results. In practice, however, that can be anything but simple. Even if an evaluation team randomly samples a site to participate, that site still needs to agree to participate; and if it does not, then the sample is no longer random.
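The closing point, that a randomly drawn site can still decline and thereby leave the realized sample non-random, lends itself to a small worked illustration. The sketch below is not taken from the editor's note or the studies it cites; the site characteristic, sample sizes, and refusal mechanism are hypothetical assumptions, used only to show how self-selection by sites can pull the participating sample away from the population it was drawn to represent.

```python
# Illustrative simulation (hypothetical numbers, not from the cited studies):
# a simple random sample of sites is drawn, but sites may refuse to take part,
# and refusal is related to an outcome-relevant site characteristic.
import random

random.seed(2022)

# Hypothetical population of 500 sites, each described by one outcome-relevant
# characteristic (say, average participant earnings in $1,000s).
population = [random.gauss(30, 8) for _ in range(500)]
pop_mean = sum(population) / len(population)

# Design step: a simple random sample of 40 sites.
drawn = random.sample(population, 40)

# Assumed refusal mechanism: sites with higher values are more likely to agree.
agreed = [x for x in drawn if random.random() < min(1.0, x / 40)]

print(f"Population mean:      {pop_mean:.1f}")
print(f"Randomly drawn sites: {sum(drawn) / len(drawn):.1f} (n={len(drawn)})")
print(f"Sites that agreed:    {sum(agreed) / len(agreed):.1f} (n={len(agreed)})")
```

With a refusal mechanism like this assumed one, the participating sites tend to show a higher mean than the population they were drawn from, even though the initial draw was random; that gap is the generalizability threat the note describes.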
{"title":"Section Editor’s Note: Insights into the Generalizability of Findings from Experimental Evaluations","authors":"Laura R. Peck","doi":"10.1177/10982140221075092","DOIUrl":"https://doi.org/10.1177/10982140221075092","url":null,"abstract":"As noted in my Editor’s Note to the Experimental Methodology Section of the American Journal of Evaluation’s (2020) Volume 40, Issue 4, experimental evaluations—where research units, such as people, schools, classrooms, and neighborhoods are randomly assigned to a program or to a control group—are often criticized for having limited external validity. In evaluation parlance, external validity refers to the ability to generalize results to other people, places, contexts, or times beyond those on which the evaluation focused. Evaluations—whether using an experimental design or not—are commonly conducted in a single site or a selected set of sites, either because that site is of particular interest or for convenience. Those special circumstances can mean that those sites—or the people within them—are not representative of a broader population of interest. In turn, the evaluation results may be useful only for assessing those people and places and not for predicting how a similar intervention might generate similar results for other people in other places. The good news, however, is that research and design innovations over the past several years have focused on how to overcome this criticism, making experimental evaluations’ results more useful for informing policy and program decisions (e.g., Bell & Stuart, 2016; Tipton & Olsen, 2018). Efforts for improving the external validity of experiments fall into two camps: design and analysis. Improving external validity through design means explicitly engaging a sample that is representative of a clearly identified target population. Although doing so is not common, particularly at the national level, some experiments have been successful at engaging a representative set of sites. The U.S. Department of Labor’s National Job Corps Study (e.g., Schochet, Burghardt & McConnell, 2006), the U.S. Department of Health and Human Services’ Head Start Impact Study (Puma et al., 2010), and the U.S. Social Security Administration’s Benefit Offset National Evaluation (Gubits et al., 2018) are three major evaluations that successfully recruited a nationally representative sample so that the evaluation results would be nationally generalizable. A simple, random selection of sites is the most straightforward way to ensure this representativeness and the generalizability of an evaluation’s results. In practice, however, that can be anything but simple. Even if an evaluation team randomly samples a site to participate, that site still needs to agree to participate; and if it does not, then the sample is no longer random.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"43 1","pages":"66 - 69"},"PeriodicalIF":1.7,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44824541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}