Theoretical and conceptual frameworks across local evaluation efforts in a nationwide consortium
Christina A. Christie, Carmel R. Wright. New Directions for Evaluation, 2022(174), 69-78. https://doi.org/10.1002/ev.20505

This paper describes the theoretical and conceptual frameworks used to guide the site-level evaluations of Building Infrastructure Leading to Diversity (BUILD) programs, part of the Diversity Program Consortium (DPC), funded by the National Institutes of Health. We aim to provide an understanding of which theories informed the evaluation work of the DPC and how the frameworks guiding BUILD site-level evaluations are conceptually aligned with one another and with the consortium-level evaluation.
Advice from local/site evaluators: How to manage "up" within a large-scale initiative
Melanie Hwalek, Matt Honoré, Shavonnea Brown. New Directions for Evaluation, 2022(174), 79-95. https://doi.org/10.1002/ev.20504

Building Infrastructure Leading to Diversity (BUILD), an initiative of the National Institutes of Health (NIH), provides grants to undergraduate institutions to implement and study innovative approaches to engaging and retaining students from diverse backgrounds in biomedical research. The NIH awarded BUILD grants to 10 higher education institutions in multiple states, including funding for local evaluations. This chapter presents findings from an online survey and interviews with 15 local evaluators from nine of the 10 BUILD sites. Participants shared their perspectives on the role of professional local evaluators in national evaluations, ideal national-local multisite evaluation partnerships, and the ways that funders can support these partnerships to maximize impact. They argued for customized technical assistance and other support for local evaluations; the importance of including local results in national evaluation findings; the value of local evaluators' subject-matter expertise; and the potential for funders to act as central organizing entities in national-local evaluation partnerships.
Evaluation policy and the federal workforce
D. Epstein, E. Zielewski, Erika Liliedahl. New Directions for Evaluation, 2022, 85-100. https://doi.org/10.1002/ev.20487

The federal evaluation workforce plays a central role in the development and execution of evaluation policy. This workforce performs critical functions that include identifying where evidence should be built in particular policy areas, determining key research questions to inform their agency's mission, and shaping the field more broadly through federal investments in evaluation. Other roles include designing evaluation studies, overseeing contracts to conduct evaluations, performing internal evaluations, and communicating results to decision-makers. For the most part, these are highly skilled and trained career staff responsible for overseeing and executing technical projects in a challenging bureaucratic and political environment. This chapter describes the role of the federal evaluation workforce in the executive branch and its importance in developing and executing evaluation policy. It also describes recent changes, including the passage of the Foundations for Evidence-Based Policymaking Act of 2018, that have affected the roles, responsibilities, and opportunities for this vital workforce.
The funders' perspective: Lessons learned from the National Institutes of Health Diversity Program Consortium evaluation
Kenneth D. Gibbs, Christa Reynolds, Sabrina Epou, Alison Gammie. New Directions for Evaluation, 2022(174), 105-117. https://doi.org/10.1002/ev.20502

Advancing diversity in the biomedical research workforce is critical to the ability of the National Institutes of Health (NIH) to achieve its mission. The NIH Diversity Program Consortium is a unique, 10-year program that builds upon longstanding training and research capacity-building activities to promote workforce diversity. It was designed to rigorously evaluate approaches to enhancing diversity in the biomedical research workforce at the student, faculty, and institutional level. In this chapter we describe (a) the program's origins, (b) the consortium-wide evaluation, including plans, measures, challenges, and solutions, and (c) how lessons learned from this program are being leveraged to strengthen NIH research-training and capacity-building activities and evaluation efforts.
The importance of implementation: Putting evaluation policy to work
L. Fierro, Alana R. Kinarsky, C. Echeverria-Estrada, Nadia Sabat Bass, Christina A. Christie. New Directions for Evaluation, 2022, 49-62. https://doi.org/10.1002/ev.20490

Federal agencies are increasingly expected to write and implement guidance for program evaluation, also known as evaluation policies. The Foundations for Evidence-Based Policymaking Act required such policies for some federal agencies, and guidance from the White House Office of Management and Budget outlined an expectation that all agencies develop evaluation policies. Before these expectations, many federal agencies were already developing such policies to suit organizational needs and contexts. This chapter details findings from interviews with stakeholders at ten federal agencies and offices that developed and implemented evaluation policies before the enactment of the Foundations for Evidence-Based Policymaking Act. These organizations represent early adopters of evaluation policies, and their experience can inform future guidance, the implementation of evaluation frameworks, and capacity building in government. The study provides insight into the breadth and depth of the strategies they used as well as their experiences with implementation.
Evaluation policy: An introduction
Nicholas R. Hart, M. Mark. New Directions for Evaluation, 2022, 9-16. https://doi.org/10.1002/ev.20492

Evaluation policy involves the dictates that guide the planning, conduct, and use of evaluation in any organization. It is – or at least should be – a central concern to those involved with evaluation. Evaluation policy shapes what evaluation practice looks like, while enabling or constraining what it can accomplish. This chapter offers a brief and selective history of evaluation policy in the United States. A description of the American Evaluation Association's activities regarding evaluation policy follows. The chapter then sets the stage for the other contributions in the current set of papers about evaluation policy.
Editors' notes
M. Mark, Nicholas R. Hart. New Directions for Evaluation, 2022. https://doi.org/10.1002/ev.20493

This volume surveys the landscape regarding evaluation policy. According to Trochim et al. (2009, p. 16), evaluation policy includes "any rule or principle that a group or organization uses to guide its decisions and actions when doing evaluation." Evaluation policy involves rules or principles that govern evaluation itself. Evaluation policies can be quite important because they are likely to "enable and constrain the potential contributions evaluation can make" (Mark et al., 2009, p. 3). The current issue of NDE expands on and updates an earlier issue, New Directions for Evaluation (NDE, issue no. 123) (Trochim et al., 2009). Much has changed since the 2009 issue, including more widespread development of explicit evaluation policies in agencies and organizations; empirical studies of evaluation policies; important legislation at the U.S. federal level, particularly the Foundations for Evidence-Based Policymaking Act of 2018 (the Evidence Act), which was signed into law in 2019; and ongoing changes in practices related to and emanating from evaluation policies, including those mandated by the Evidence Act. The current issue reviews many of these empirical, legislative, and practice developments, bringing readers up to date on evaluation policy and pointing the way to productive future directions. Most chapters in the issue focus primarily on the U.S. federal government. However, the volume gives attention to implications for the broader evaluation community.

The first chapter, by the issue editors, Nick Hart and Mel Mark, introduces the reader to the idea of evaluation policy, offers a brief history, examines the role of the American Evaluation Association (AEA), and sets the stage for the chapters that follow. Chapter 2 consists of the AEA's Evaluation Roadmap for a More Effective Government, prepared by the Association's Evaluation Policy Task Force. In Chapter 3, Hind Al Hudib and Bradley Cousins draw on their research examining the written evaluation policies of a sample of international development agencies, a sample that, although global in scope, includes agencies of the U.S. federal government. Al Hudib and Cousins expand Trochim's (2009) definition of evaluation policy, review the components found in evaluation policies, and examine likely linkages between aspects of an evaluation policy and evaluation capacity building. Chapter 4, by Leslie Ann Fierro, Alana Kinarsky, Carlos Echeverria-Estrada, Nadia Bass, and Christina Ann Christie, presents results from an interview study examining the initial implementation of evaluation policies at the U.S. federal level. Chapter 5, by Kathryn Newcomer, Karol Olejniczak, and Nicholas Hart, focuses on learning agendas, also known as evidence-building plans. Learning agendas are a requirement of the Evidence Act, but some federal agencies and other organizations had previously created this kind of strategic plan for evaluation and evidence. Newcomer and her colleagues …
Gauging treatment impact: The development of exposure variables in a large-scale evaluation study
Nicole M. G. Maccalla, Dawn Purnell, Heather E. McCreath, Robert A. Dennis, Teresa Seeman. New Directions for Evaluation, 2022(174), 57-68. https://doi.org/10.1002/ev.20509

While guidance on how to design rigorous evaluation studies abounds, prescriptive guidance on how to include critical process and context measures through the construction of exposure variables is lacking. Capturing nuanced intervention dosage information within a large-scale evaluation is particularly complex. The Building Infrastructure Leading to Diversity (BUILD) initiative is part of the Diversity Program Consortium, which is funded by the National Institutes of Health. It is designed to increase participation in biomedical research careers among individuals from underrepresented groups. This chapter articulates methods employed in defining BUILD student and faculty interventions, tracking nuanced participation in multiple programs and activities, and computing the intensity of exposure. Defining standardized exposure variables (beyond simple treatment group membership) is crucial for equity-focused impact evaluation. Both the process and the resulting nuanced dosage variables can inform the design and implementation of large-scale, outcome-focused evaluation studies of diversity training programs.
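As a concrete illustration of the kind of constructed exposure variable the chapter describes, the sketch below turns a hypothetical participation log into a weighted exposure score and a coarse dose tier for each student. The activity names, hour weights, and tier cut points are illustrative assumptions for this example only, not the coding scheme actually used in the BUILD evaluation.

```python
import pandas as pd

# Hypothetical participation log: one row per student per activity.
# Column names and values are illustrative, not DPC data.
records = pd.DataFrame({
    "student_id": ["s1", "s1", "s2", "s2", "s3"],
    "activity":   ["summer_research", "mentoring", "mentoring",
                   "research_course", "summer_research"],
    "hours":      [120, 10, 8, 45, 60],
})

# Illustrative intensity weights: an hour of research immersion is
# counted more heavily than an hour of mentoring meetings.
weights = {"summer_research": 1.0, "research_course": 0.75, "mentoring": 0.5}
records["weighted_hours"] = records["hours"] * records["activity"].map(weights)

# Exposure variable: total weighted hours per student, plus a coarse dose tier
# (cut points are arbitrary for the example).
exposure = (records.groupby("student_id")["weighted_hours"]
            .sum()
            .rename("exposure_score")
            .reset_index())
exposure["dose_tier"] = pd.cut(exposure["exposure_score"],
                               bins=[0, 25, 75, float("inf")],
                               labels=["low", "medium", "high"])
print(exposure)
```

A continuous score plus an ordered tier of this sort is one way to move an analysis beyond simple treatment-group membership, since students in the same program can differ substantially in how much of the intervention they actually received.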
The future of evaluation policy
M. Mark, Nicholas R. Hart. New Directions for Evaluation, 2022, 117-124. https://doi.org/10.1002/ev.20488

We highlight some key issues regarding evaluation policy, including themes that emerged across chapters of this volume. These topics include what an evaluation policy is, the kind of content that evaluation policies can have, learning agendas (which are an increasingly common component of evaluation policies, especially at the U.S. federal level), the processes by which evaluation policies are developed and implemented, the role of relationships in evaluation policies, and the consequences of evaluation policy. We briefly highlight how the chapters in this volume offer guidance to those involved with developing, implementing, or revising an evaluation policy, especially, but not only, in the U.S. federal context in the wake of legislation signed into law in 2019. Looking to the future, we also share suggestions for further advances with respect to advocacy, accountability, research, and practice related to evaluation policies.
A meta-analysis approach for evaluating the effectiveness of complex multisite programs
Catherine M. Crespi, Krystle P. Cobian. New Directions for Evaluation, 2022(174), 47-56. https://doi.org/10.1002/ev.20508

The National Institutes of Health (NIH) created the Building Infrastructure Leading to Diversity (BUILD) initiative to incentivize undergraduate institutions to create innovative approaches to increasing diversity in biomedical research, with the ultimate goal of diversifying the NIH-funded research enterprise. Initiatives such as BUILD involve designing and implementing programs at multiple sites that share common objectives. Evaluation of initiatives like this often includes statistical analyses that combine data across sites to estimate the program's impact on particular outcomes. Meta-analysis is a statistical technique for combining effect estimates from different studies to obtain a single overall effect estimate and to estimate heterogeneity across studies. However, it has not been commonly applied to evaluate the impact of a program across multiple different sites. In this chapter, we use the BUILD Scholar program, one component of the broader initiative, to demonstrate the application of meta-analysis to combine effect estimates from different sites of a multisite initiative. We analyze three student outcomes using a typical "single-stage" modeling approach and a meta-analysis approach. We show how a meta-analysis approach can provide more nuanced information about program impacts on student outcomes and thus can help support a robust evaluation.
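To make the pooling step concrete, the sketch below applies a standard DerSimonian-Laird random-effects meta-analysis to hypothetical per-site effect estimates, producing a pooled effect, its standard error, and a between-site heterogeneity estimate (tau^2). The site effects, standard errors, and function name are illustrative assumptions, not results or code from the BUILD evaluation.

```python
import numpy as np

def random_effects_meta(effects, ses):
    """DerSimonian-Laird random-effects pooling of per-site estimates.

    effects : per-site effect estimates (e.g., adjusted mean differences
              between program participants and comparison students)
    ses     : their standard errors
    Returns (pooled effect, pooled SE, tau^2).
    """
    effects = np.asarray(effects, dtype=float)
    var = np.asarray(ses, dtype=float) ** 2

    # Fixed-effect (inverse-variance) pooling, used to compute Cochran's Q
    w_fixed = 1.0 / var
    pooled_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
    q = np.sum(w_fixed * (effects - pooled_fixed) ** 2)

    # DerSimonian-Laird estimate of between-site variance
    k = len(effects)
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - (k - 1)) / c)

    # Random-effects pooling: sites are re-weighted by 1 / (var + tau^2)
    w_re = 1.0 / (var + tau2)
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se_pooled, tau2


# Hypothetical per-site estimates for one student outcome at five sites
site_effects = [0.30, 0.12, 0.45, 0.05, 0.25]
site_ses = [0.10, 0.15, 0.20, 0.12, 0.18]
est, se, tau2 = random_effects_meta(site_effects, site_ses)
print(f"pooled effect = {est:.3f} (SE {se:.3f}), tau^2 = {tau2:.3f}")
```

Unlike a single-stage model that assumes one common program effect, the random-effects approach lets sites differ: the tau^2 term both widens the pooled standard error and quantifies how much of the variation is between sites, which is one way a meta-analytic summary can give the more nuanced picture the authors describe.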