Pub Date: 2022-12-20 | DOI: 10.1177/10982140211056539
Minji Cho, Ann Marie Castleman, Haley Umans, Mike Osiemo Mwirigi
Evaluation scholars have committed decades of work to the development of evaluator competencies. The 2018 American Evaluation Association (AEA) Evaluator Competencies may be useful for evaluators to identify their strengths and weaknesses to improve their practice; however, few empirically validated self-assessment tools based on the competencies exist. Two studies were conducted to develop a validated tool. The first study (N = 170) developed the Evaluator Competencies Assessment Tool (ECAT), a self-assessment tool based on the 2018 AEA Evaluator Competencies. This study provided evidence for structural validity via confirmatory factor analysis. The second study (N = 142) reconfirmed structural validity with a new sample and examined variables associated with evaluator competencies through correlation and t-test analyses. Having a mentor, years of evaluation experience, age, evaluation training, and education level were positively related to evaluator competencies. The ECAT can be used to foster self-reflection for practitioners to improve evaluation competence.
{"title":"Measuring Evaluator Competencies: Developing and Validating the Evaluator Competencies Assessment Tool","authors":"Minji Cho, Ann Marie Castleman, Haley Umans, Mike Osiemo Mwirigi","doi":"10.1177/10982140211056539","DOIUrl":"https://doi.org/10.1177/10982140211056539","url":null,"abstract":"Evaluation scholars have committed decades of work to the development of evaluator competencies. The 2018 American Evaluation Association (AEA) Evaluator Competencies may be useful for evaluators to identify their strengths and weaknesses to improve their practice; however, a few empirically validated self-assessment tools based on the competencies exist. Two studies were conducted to develop a validated tool. The first study (N = 170) developed the Evaluator Competencies Assessment Tool (ECAT), a self-assessment tool based on the AEA, 2018 Evaluator Competencies. This study provided evidence for structural validity via confirmatory factor analysis. The second study (N = 142) reconfirmed structural validity with a new sample and examined variables that are associated with evaluator competencies through correlation and t-test analyses. Having a mentor, years of evaluation experience, age, evaluation training, and education level were positively related to evaluator competencies. The ECAT can be used to foster self-reflection for practitioners to improve evaluation competence.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"474 - 494"},"PeriodicalIF":1.7,"publicationDate":"2022-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47459902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-20 | DOI: 10.1177/10982140211056935
E. Casey, J. Vanslyke, B. Beadnell, N. Tatiana Masters, Kirstin McFarland
Principles-focused evaluation (PFE) can complement existing formative and outcome evaluation plans by identifying Effectiveness Principles (EPs), an operationalization of the values and standards that guide practitioners during program implementation. To date, however, few examples of PFE are available in the literature. This description of the application of PFE to the Washington State Rape Prevention and Education (RPE) sexual violence prevention program provides an example of how this flexible approach can augment an existing evaluation plan to distill shared evaluation components across different organizations implementing diverse prevention programming. Specifically, we describe the process used by a team of practitioners, funders, evaluation consultants, and state-level sexual violence prevention technical assistance providers to identify EPs, operationalize indicators for each EP, and develop and test an EP measurement approach. In this process, the seven very different RPE-funded organizations, each serving a unique community, were able to identify and endorse shared, core EPs. This description illustrates PFE's promise for augmenting a shared evaluation approach and identifying common guiding tenets across uniquely situated organizations in a larger community of practice.
{"title":"The Process of Applying Principles-Focused Evaluation to the Sexual Violence Prevention Field: Implications for Practice in Other Social Services Fields","authors":"E. Casey, J. Vanslyke, B. Beadnell, N. Tatiana Masters, Kirstin McFarland","doi":"10.1177/10982140211056935","DOIUrl":"https://doi.org/10.1177/10982140211056935","url":null,"abstract":"Principles focused evaluation (PFE) can complement existing formative and outcome evaluation plans by identifying Effectiveness Principles (EPs), an operationalization of values and standards that guide practitioners during program implementation. To date, however, few examples of PFE are available in the literature. This description of the application of PFE to the Washington State Rape Prevention and Education (RPE) sexual violence prevention program provides an example of how this flexible approach can augment an existing evaluation plan to distill shared evaluation components across different organizations implementing diverse prevention programming. Specifically, we describe the process used by a team of practitioners, funders, evaluation consultants and state-level sexual violence prevention technical assistance providers to identify EPs, operationalize indicators for each EP, and develop and test an EP measurement approach. In this process, the seven very different RPE-funded organizations, each serving a unique community, were able to identify and endorse shared, core EPs. This description illustrates PFE's promise for augmenting a shared evaluation approach and identifying common guiding tenets across uniquely situated organizations in a larger community of practice.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"374 - 393"},"PeriodicalIF":1.7,"publicationDate":"2022-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44700944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-20 | DOI: 10.1177/10982140221138604
Kate L. Nolt, L. Leviton
Evidence-based programs and grassroots programs are often adapted during implementation. Adaptations are often hidden, ignored, or punished. Although some adaptations stem from a lack of organizational capacity, evaluators report that other adaptations happen in good faith or are efforts to better fit the local context. Program implementers, the facilitators who must adapt during implementation, do not always report adaptations because they fear losing funding if the program is not implemented with fidelity. Program personnel, including program evaluators, need this information to improve the effectiveness of programs and to determine whether an adaptation is still consistent with the theory of change. Evaluators also need this information for generalizing results to varied settings and populations. Following the PRECEDE–PROCEED model, we recommend a hybrid approach to fidelity and adaptation. We argue in favor of advance planning to accommodate potential adaptations. Such planning also establishes evaluation criteria for determining whether adaptations are helpful, harmful, or appropriate to the context. We illustrate some types of adaptations that can occur, why they may be needed, and how to structure transparent reporting about adaptations to program developers and funding organizations.
{"title":"Fidelity and Adaptation of Programs: Does Adaptation Thwart Effectiveness?","authors":"Kate L. Nolt, L. Leviton","doi":"10.1177/10982140221138604","DOIUrl":"https://doi.org/10.1177/10982140221138604","url":null,"abstract":"Evidence-based programs and grassroots programs are often adapted during implementation. Adaptations are often hidden, ignored, or punished. Although some adaptations stem from lack of organizational capacity, evaluators report other adaptations happen in good faith or are efforts to better fit the local context. Program implementers, facilitators who need to adapt during implementation, do not always report adaptations because they fear losing funding if the program is not implemented with fidelity. Program personnel including program evaluators need this information to improve effectiveness of programs, and to determine whether an adaptation is still consistent with the theory of change. Evaluators also need this information for generalizing results to varied settings and populations. Following the PRECEDE–PROCEED model, we recommend a hybrid approach to fidelity and adaptation. We argue in favor of advance planning to accommodate potential adaptations. Such planning also establishes evaluation criteria for determining whether adaptations are helpful, harmful, and appropriate to the context. We illustrate some types of adaptations that can occur, why they may be needed, and how to structure transparent reporting about adaptations to program developers and funding organizations.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"322 - 334"},"PeriodicalIF":1.7,"publicationDate":"2022-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45018945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-19 | DOI: 10.1177/10982140221123010
Julia Espinosa-Fajardo, Pablo Rodríguez-Bilella, Esteban Tapella
Over the last three decades, the promotion of stakeholder involvement in evaluation has been gaining relevance in Latin America and internationally, across varied agencies, institutions, and civic organizations. The 2030 Agenda and the Global Evaluation Agenda have also recognized the centrality of participation in evaluation. This article explores stakeholder involvement in evaluation based on collaborative work with stakeholders from 15 evaluative experiences. It shows what characterizes participatory evaluation in the region today and the principles that underpin this practice.
{"title":"Principles for Stakeholder Involvement in Evaluation in Latin America","authors":"Julia Espinosa-Fajardo, Pablo Rodríguez-Bilella, Esteban Tapella","doi":"10.1177/10982140221123010","DOIUrl":"https://doi.org/10.1177/10982140221123010","url":null,"abstract":"In the last three decades, the promotion of stakeholder involvement in evaluation has been gaining relevance in the Latin American and internationally, across varied agencies, institutions, and civic organizations. The 2030 Agenda and the Global Evaluation Agenda have also recognized the centrality of participation in evaluation. This article explores stakeholder involvement in evaluation based on collaborative work with stakeholders from 15 evaluative experiences. It shows what characterizes participatory evaluation in the region today and the principles of this practice.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2022-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48929468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-19 | DOI: 10.1177/10982140221116094
J. Pann, E. DiLuzio, A. Coghlan, Scott D. Hughes
This article explores the utility of mindfulness in the field of evaluation. Mindfulness is a translation of the ancient Indian word, Sati, which means awareness, attention, and remembering. While definitions vary, a practical definition of mindfulness is present-moment awareness in an open and nonjudgmental manner. Mindfulness-based interventions have been employed by a wide variety of professions. Although it has received limited attention in the writings of evaluators, we argue that mindfulness can improve the practice of evaluation and support the development of the professional practice and interpersonal domains of American Evaluation Association (AEA) evaluator competencies. We review several mindfulness-based practices and how they can be used by evaluators in their work. Thus, we posit that far from being an esoteric concept, mindfulness practices can serve the pragmatic end of improving our discipline. We also discuss the limits of mindfulness and propose recommendations for future efforts.
{"title":"Supporting Evaluation Practice Through Mindfulness","authors":"J. Pann, E. DiLuzio, A. Coghlan, Scott D. Hughes","doi":"10.1177/10982140221116094","DOIUrl":"https://doi.org/10.1177/10982140221116094","url":null,"abstract":"This article explores the utility of mindfulness in the field of evaluation. Mindfulness is a translation of the ancient Indian word, Sati, which means awareness, attention, and remembering. While definitions vary, a practical definition of mindfulness is present-moment awareness in an open and nonjudgmental manner. Mindfulness-based interventions have been employed by a wide variety of professions. Although it has received limited attention in the writings of evaluators, we argue that mindfulness can improve the practice of evaluation and support the development of the professional practice and interpersonal domains of American Evaluation Association (AEA) evaluator competencies. We review several mindfulness-based practices and how they can be used by evaluators in their work. Thus, we posit that far from being an esoteric concept, mindfulness practices can serve the pragmatic end of improving our discipline. We also discuss the limits of mindfulness and propose recommendations for future efforts.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"293 - 307"},"PeriodicalIF":1.7,"publicationDate":"2022-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47726105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-04 | DOI: 10.1177/10982140221136486
S. Tucker, L. Stevahn, J. King
This article compares the purposes and content of the four foundational documents of the American Evaluation Association (AEA): the Program Evaluation Standards, the AEA Public Statement on Cultural Competence in Evaluation, the AEA Evaluator Competencies, and the AEA Guiding Principles. This reflection on alignment is an early effort in the third step of professionalization: defining how to use and recognize evaluator competencies. The analysis intentionally focuses on content and reflects on the implications of the differences and similarities across documents. The comparison reveals important questions of interest at both the micro level (individual evaluator) and the macro level (evaluation). The article concludes with challenges, learnings, and proposed next steps of AEA's Professionalization and Competencies Working Group.
{"title":"Professionalizing Evaluation: A Time-Bound Comparison of the American Evaluation Association's Foundational Documents","authors":"S. Tucker, L. Stevahn, J. King","doi":"10.1177/10982140221136486","DOIUrl":"https://doi.org/10.1177/10982140221136486","url":null,"abstract":"This article compares the purposes and content of the four foundational documents of the American Evaluation Association (AEA): the Program Evaluation Standards, the AEA Public Statement on Cultural Competence in Evaluation, the AEA Evaluator Competencies, and the AEA Guiding Principles. This reflection on alignment is an early effort in the third step of professionalization: defining how to use and recognize evaluator competencies. The analysis intentionally focuses on content and reflects on the implications of the differences and similarities across documents. The comparison reveals important questions of interest at both the micro level (individual evaluator) and the macro level (evaluation). The article concludes with challenges, learnings, and proposed next steps of AEA's Professionalization and Competencies Working Group.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"495 - 512"},"PeriodicalIF":1.7,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44055415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-01 | DOI: 10.1177/10982140221134238
J. Hall, Laura R. Peck
The fourth issue of volume 43 is our first issue as Co-Editors-in-Chief of the American Journal of Evaluation. We have arrived at this point on our journey as Editors of AJE reflecting on our capacity as evaluators. While we are seasoned evaluators with decades of experience between us, we find it is necessary to reexamine our role and capacity as evaluators and ask ourselves reflective questions such as: What authority do we have as evaluators to address issues of power and privilege in the context of an evaluation? How do we determine if our evaluation approaches address vulnerable communities and sensitive topics respectfully? What analytic capacity do we have to produce valid and actionable evidence? And what is within our capacity, as evaluators, to generate positive change for individuals, communities, and society? The articles we have assembled for this issue provide informed thinking on these and related topics based on the evaluation literature and other fields of study. Together, the discourse provided in the seven articles and three method notes in this issue will undoubtedly open up possibilities to reflect on and enhance your evaluative capacity as it has ours. The lead article in this issue, Critical Evaluation Capital (CEC): A New Tool for Applying Critical Race Theory to the Evaluand by Alice E. Ginsberg, centers issues of power in evaluation practice by presenting a tool to support critical evaluation approaches that challenge the notion of objectivity, consider evaluation a value-laden enterprise, and position the role of the evaluator as an agent for change. Informed by the lens of critical race theory and community cultural wealth, Ginsberg's tool enhances the capacity of evaluators to pay attention to different types of power within the context of an evaluand. Specifically, the CEC tool converts issues of power into several overlapping categories of "capital." Each category is defined and provides thought-provoking questions useful to explore our authority as evaluators and the role of power and privilege in an evaluation context. To conclude the article, Ginsberg retroactively applies the CEC tool to an evaluation. By
{"title":"From the Co-Editors: Building Evaluative Capacity to Examine Issues of Power, Address Sensitive Topics, and Generate Actionable Data","authors":"J. Hall, Laura R. Peck","doi":"10.1177/10982140221134238","DOIUrl":"https://doi.org/10.1177/10982140221134238","url":null,"abstract":"The fourth issue of volume 43 is our fi rst issue as Co-Editors-in-Chief of the American Journal of Evaluation. We have arrived at this point on our journey as Editors of AJE re fl ecting on our capacity as evaluators. While we are seasoned evaluators with decades of experience between us, we fi nd it is necessary to reexamine our role and capacity as evaluators and ask ourselves re fl ective questions such as What authority do we have as evaluators to address issues of power and privilege in the context of an evaluation? How do we determine if our evaluation approaches address vulnerable communities and sensitive topics respectfully? What analytic capacity do we have to produce valid and actionable evidence? And, what is within our capacity, as evaluators, to generate positive change for individuals, communities, and society? The articles we have assembled for this issues provide informed thinking on these and related topics based on the evaluation literature and other fi elds of study. Together, the discourse provided in the seven articles and three method notes in this issue will undoubtedly open up possibilities to re fl ect on and enhance your evaluative capacity as it has ours. The lead article in this issue, Critical Evaluation Capital (CEC): A New Tool for Applying Critical Race Theory to the Evaluand by Alice E. Ginsberg, centers issues of power in evaluation practice by presenting a tool to support critical evaluation approaches that challenge the notion of objectivity, consider evaluation a value-laden enterprise, and position the role of the evaluator as an agent for change. Informed by the lens of critical race theory and community cultural wealth, Ginsberg ’ s tool enhances the capacity of evaluators to pay attention to different types of power within the context of an evaluand. Speci fi cally, the CEC tool converts issues of power into several overlapping categories of “ capital. ” Each category is de fi ned and provides thought-provoking questions useful to explore our authority as evaluators and the role of power and privilege in an evaluation context. To conclude the article, Ginsberg retroactively applies the CEC tool to an evaluation. By","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"43 1","pages":"464 - 467"},"PeriodicalIF":1.7,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48654353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-11-30 | DOI: 10.1177/10982140221108662
A. Boyce, Tiffany L. S. Tovey, Onyinyechukwu Onwuka, J. R. Moller, Tyler Clark, Aundrea Smith
More evaluators have anchored their work in equity-focused, culturally responsive, and social justice ideals. Although we have a sense of the approaches that guide evaluators as to how they should attend to culture, diversity, equity, and inclusion (DEI), we have not yet established an empirical understanding of how evaluators measure DEI. In this article, we report an examination of how evaluators and principal investigators (PIs) funded by the National Science Foundation's Advanced Technological Education (ATE) program define and measure DEI within their projects. Evaluators gathered the most evidence related to diversity and less evidence related to equity and inclusion. On average, PIs' projects engaged in activities designed to increase DEI, with the strongest focus on diversity. We believe there continues to be room for improvement and urge that engagement with these important topics move from the margins to the center of our field's education, theory, and practice.
{"title":"Exploring NSF-Funded Evaluators’ and Principal Investigators’ Definitions and Measurement of Diversity, Equity, and Inclusion","authors":"A. Boyce, Tiffany L. S. Tovey, Onyinyechukwu Onwuka, J. R. Moller, Tyler Clark, Aundrea Smith","doi":"10.1177/10982140221108662","DOIUrl":"https://doi.org/10.1177/10982140221108662","url":null,"abstract":"More evaluators have anchored their work in equity-focused, culturally responsive, and social justice ideals. Although we have a sense of approaches that guide evaluators as to how they should attend to culture, diversity, equity, and inclusion (DEI), we have not yet established an empirical understanding of how evaluators measure DEI. In this article, we report an examination of how evaluators and principal investigators (PIs) funded by the National Science Foundation's Advanced Technological Education (ATE) program define and measure DEI within their projects. Evaluators gathered the most evidence related to diversity and less evidence related to equity and inclusion. On average, PIs’ projects engaged in activities designed to increase DEI, with the highest focus on diversity. We believe there continues to be room for improvement and implore the movement of engagement with these important topics from the margins to the center of our field's education, theory, and practice.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"50 - 73"},"PeriodicalIF":1.7,"publicationDate":"2022-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45738847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-11-10 | DOI: 10.1177/10982140211008978
Colombe Lemire, M. Rousseau, C. Dionne
Implementation fidelity is the degree to which the core elements of a program or intervention are used as intended. The scientific literature reveals gaps in defining and assessing implementation fidelity in early intervention: a lack of common definitions and conceptual frameworks, as well as their limited application. Through a critical review of the scientific literature, this article aims to identify information that can be used to develop a common language and guidelines for assessing implementation fidelity. An analysis of 46 theoretical and empirical papers about early intervention implementation, published between 1998 and 2018, identified four conceptual frameworks in addition to that of Dane and Schneider. Following analysis of the conceptual frameworks, a four-component conceptualization of implementation fidelity (adherence, dosage, quality, and participant responsiveness) is proposed.
{"title":"A Comparison of Fidelity Implementation Frameworks Used in the Field of Early Intervention","authors":"Colombe Lemire, M. Rousseau, C. Dionne","doi":"10.1177/10982140211008978","DOIUrl":"https://doi.org/10.1177/10982140211008978","url":null,"abstract":"Implementation fidelity is the degree of compliance with which the core elements of program or intervention practices are used as intended. The scientific literature reveals gaps in defining and assessing implementation fidelity in early intervention: lack of common definitions and conceptual framework as well as their lack of application. Through a critical review of the scientific literature, this article aims to identify information that can be used to develop a common language and guidelines for assessing implementation fidelity. An analysis of 46 theoretical and empirical papers about early intervention implementation, published between 1998 and 2018, identified four conceptual frameworks, in addition to that of Dane and Schneider. Following analysis of the conceptual frameworks, a four-component conceptualization of implementation fidelity (adherence, dosage, quality and participant responsiveness) is proposed.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"236 - 252"},"PeriodicalIF":1.7,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47289679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-11-03 | DOI: 10.1177/10982140211061018
Sarah Mason
Evaluators often lament that the general public does not understand what we do. Yet there is limited empirical research on what the general public actually knows and thinks about program evaluation. This article seeks to expand our understanding in this domain by capturing views about evaluation from a demographically representative sample of the U.S. population. It also explores different strategies for describing program evaluation to the general public. Using an experimental design, it builds on previous research by Mason and Hunt, testing a set of hypotheses about how to enhance communication about evaluation. Findings suggest that public understanding of evaluation is indeed low, although two specific communication strategies (using well-known examples of social programs and including a "why" statement that describes the purpose of evaluation) can strengthen understanding among members of the public.
{"title":"Just Give Me an Example! Exploring Strategies for Building Public Understanding of Evaluation","authors":"Sarah Mason","doi":"10.1177/10982140211061018","DOIUrl":"https://doi.org/10.1177/10982140211061018","url":null,"abstract":"Evaluators often lament that the general public does not understand what we do. Yet, there is limited empirical research on what the general public does know—and think—about program evaluation. This article seeks to expand our understanding in this domain by capturing views about evaluation from a demographically representative sample of the U.S population. This article also explores different strategies for describing program evaluation to the general public. Using an experimental design, it builds on previous research by Mason and Hunt, testing a set of hypotheses about how to enhance communication about evaluation. Findings suggest that public understanding of evaluation is indeed low, although two specific communication strategies—using well-known examples of social programs and including a why statement that describes the purpose of evaluation—can strengthen understanding among members of the public.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"549 - 567"},"PeriodicalIF":1.7,"publicationDate":"2022-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46576725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}