The Garden of Evaluation Approaches
Bianca Montrosse‐Moorhead, Daniela Schröter, L. W. Becho
Pub Date: 2024-06-01 | DOI: 10.1177/10982140231216667
Evaluation competency frameworks across the globe regard evaluation approaches as important to know and use in practice. Prior classifications have been developed to aid in understanding important differences among varying approaches. Nevertheless, there is an opportunity for a new classification of evaluation approaches, in particular one that is practitioner-oriented, intended to guide decision-making in practice, and inclusive of all scholarship. The evaluation garden presented in this article begins to map approaches against eight dimensions of practice and situates them in their philosophical orientations and methodological dispositions. This allows approaches to be compared, yields a more nuanced understanding of where they overlap and differ, and shows how and where they can be intentionally combined. The goal is to offer a visual classification that addresses prior criticisms, that is of use to a wide range of audiences, and that helps evaluation practitioners more easily integrate evaluation approaches in practice.
From the Co-Editors: Evolving Evaluation Theory, Methods, and Practice
Rodney Hopson, Laura R. Peck
Pub Date: 2024-04-22 | DOI: 10.1177/10982140241246127
From the Section Editors: Teaching & Learning Section Vision: Innovate, Evaluate, Disseminate
Daniela Schröter, R. Woodland
Pub Date: 2024-04-22 | DOI: 10.1177/10982140241243036
This editorial introduces the new editorial team and vision of the Teaching and Learning of Evaluation (T&L) Section. With deep expertise in evaluation theory, methodology, and practice, Schröter and Woodland bring a vision of advancing the pedagogy, andragogy, and heutagogy of evaluation through innovative practices and inclusivity. This note outlines the section's focus on systematically examining T&L in evaluation, showcasing articles that inform instructional innovations, curriculum development, and educational research. The editors invite contributions from diverse perspectives, encompassing formal and informal settings, transdisciplinary boundaries, and various dimensions of T&L environments in evaluation. Highlighting recent articles aligned with the section's goals, the editors invite feedback and involvement from readers and contributors to further enhance the T&L section's impact. As they embark on this journey, Schröter and Woodland express their commitment to fostering a vibrant and inclusive community dedicated to advancing evaluation through transformative T&L practices.
A Protocol for Participatory Data Use
Jane Buckley, Elyse Postlewaite, T. Archibald, M. Linver, Jennifer Brown Urban
Pub Date: 2024-03-22 | DOI: 10.1177/10982140241234835
The purpose of this paper is to offer both theoretical and practical support to evaluation professionals preparing to facilitate the utilization phase of evaluation with a program or organization team. The Systems Evaluation Protocol for Participatory Data Use (SEPPDU) presented here is rooted in a partnership approach to evaluation and is therefore designed to structure conversations and facilitate thinking around data interpretation and decision making. The SEPPDU is presented in three main parts: (a) summarizing evaluation results, (b) interpreting results, and (c) planning for action. This paper describes specific and practical tips for facilitating each part based on field experience in a variety of settings.
Application of Multi-Attribute Utility Analysis as a Methodological Framework in Academic–Clinical Partnership Evaluation
Sara E. North
Pub Date: 2024-03-22 | DOI: 10.1177/10982140231218693
Multi-attribute utility analysis (MAUA) provides a decision-making framework that facilitates comparative analysis of multiple real-world decision alternatives with unique, complex attributes. Utility analysis as a measure of effectiveness has been minimally used by educational researchers to date, despite its clear relevance to complex decision-making. To illustrate its viability, the application of MAUA was modeled for two example academic programs with diverse partnership priorities as a form of assessing academic–clinical partnership alignment. The simulated application indicates that MAUA may be successfully utilized as an evidence-based methodological framework. The presented example illustrates the wide-ranging potential of this approach in different contexts, as predicted and recommended by experts in the field. Evaluators are encouraged to collaborate in new ways and strive to produce tangible, solution-oriented approaches that address key challenges and demonstrate the value of sound evaluation practices.
Book Review: Evaluation in Rural Communities by Allyson Kelley
Jeremy Braithwaite
Pub Date: 2024-03-18 | DOI: 10.1177/10982140241240146
Reclaiming Logic Modeling for Evaluation: A Theory of Action Framework
R. Woodland, Rebecca Mazur
Pub Date: 2024-03-14 | DOI: 10.1177/10982140231224495
Logic modeling, the process that explicates how programs are constructed and theorized to bring about change, is considered standard evaluation practice. However, logic modeling is often experienced as a transactional, jargon-laden, discrete task undertaken to produce a document that complies with the expectations of an external entity, the consequences of which have minimal or even negative influence on the quality of program evaluation. This article presents the Logic Modeling Theory of Action Framework (LMTAF), which elucidates the needs, resources, and central activities of logic modeling and describes its potential evaluation-related benefits. The LMTAF situates evaluators as the primary intended users of logic modeling, and logic modeling as a fundamental element of each stage of the program evaluation life cycle. We aim to reassert the value of logic modeling for evaluation and provide evaluation practitioners a useful touchstone for reflective practice and future action.
Mapping Evaluation Use: A Scoping Review of Extant Literature (2005–2022)
Michelle Searle, Amanda Cooper, Paisley Worthington, Jennifer Hughes, R. Gokiert, Cheryl Poth
Pub Date: 2024-03-13 | DOI: 10.1177/10982140241234841
Factors influencing evaluation use have been a primary concern for evaluators. However, little is known about current conceptualizations of evaluation use, including what counts as use, what efforts encourage use, and how to measure use. This article identifies enablers of and constraints on evaluation use based on a scoping review of literature published since 2009 (n = 47). A comprehensive examination mapping the factors influencing evaluation use identified in the extant literature informs further study and captures its evolution over time. Five factors were identified that influence evaluation use: (1) resources; (2) stakeholder characteristics; (3) evaluation characteristics; (4) the social and political environment; and (5) evaluator characteristics. Also examined is a synthesis of practical and theoretical implications as well as implications for future research. Importantly, our work builds upon two previous and impactful scoping reviews to provide a contemporary assessment of the factors influencing evaluation use and to inform consequential evaluator practice.
Challenges and Adjustments in a Multisite School-Based Randomized Field Trial
Debbie L. Hahs-Vaughn, Christine Depies DeStefano, Christopher D. Charles, Mary Little
Pub Date: 2024-03-11 | DOI: 10.1177/10982140241236390
Randomized experiments are a strong design for establishing impact evidence because the random assignment mechanism theoretically allows confidence in attributing group differences to the intervention. The growth of randomized experiments within educational studies has been widely documented. However, randomized experiments within education have been criticized for implementation challenges and for ignoring context. Additionally, limited guidance exists for programs that are tasked with both implementation and evaluation within the same funding period. This study draws on a research team's experiences and examines opportunities and challenges in conducting a multisite randomized evaluation of an internship program for teacher candidates. We discuss how problems were collaboratively addressed and how the design was adjusted to align with local realities, and we demonstrate how the research team, in consultation with local stakeholders, addressed methodological and program implementation problems in the field. Recommendations for future research are provided.
Program Plan Evaluation: A Participatory Approach to Bridge Plan Evaluation and Program Evaluation
Huey T. Chen, L. Morosanu, Victor H. Chen
Pub Date: 2024-03-05 | DOI: 10.1177/10982140241231906
Most program evaluation efforts concentrate on assessments of program implementation and program outcomes. However, one area that has not received sufficient attention in the literature is evaluation of the program plan itself. Since the quality of the plan and the planning process can influence program implementation and outcomes, there is a need to expand program evaluation efforts to cover program plans and thus bridge plan evaluation and program evaluation. This paper draws on the program evaluation literature to illustrate two approaches to participatory program plan evaluation, ex-ante (proactive) and ex-post (reactive), including a conceptual framework that identifies the requirements, barriers, and strategies for evaluating program plans. Concrete examples are provided to illustrate the application of these two approaches.