While effective in imparting the skills and competencies required for donor‐centric evaluations, the present system of evaluation education in the Global South adds little to the development of Indigenous evaluation theory and practice. As education is the primary tool for building evaluators’ capacity to construct knowledge situated in local epistemologies and culture, deconstructing the colonial character of education is the first step toward the decolonization of evaluation practice. The chapter first discusses the importance of disrupting the colonial episteme as a core feature of the decolonization process. Next, it explores the coloniality of the present education system in Global South evaluation and its implications for the evaluation field. The chapter then proposes five key strategic directions for decolonizing evaluation education and reinstating the voice and agency of Global South communities in the evaluation process: (1) transforming evaluation education to prioritize the learning needs of field‐based organizations, (2) strengthening access to evaluation education for grassroots communities, (3) acknowledging the primacy of local languages in building transformative knowledge, (4) reimagining evaluation educators, and (5) recognizing internal colonialism and social justice in the evaluation curriculum.
Dighe, S. (2023). Moving beyond methods training: Key directions for decolonizing evaluation education in the Global South. New Directions for Evaluation. https://doi.org/10.1002/ev.20538
The International Program for Development Evaluation Training (IPDET) ran in its first chapter from 2001 to 2016 in Ottawa, Canada. In 2018, it began its second chapter in Bern, Switzerland, and continues today – an almost unheard‐of longevity for a summer short‐term training program. Over its first 16 years, IPDET trained more than 4,000 people in evaluation from more than 80 countries. During the period covered in this chapter, IPDET consisted of a basic two‐week core program in development evaluation followed by two weeks of mix‐and‐match 2‐ and 3‐day workshops offering more in‐depth, specialized evaluation training. Workshop topics were updated annually to remain current and included, for example, Cost‐Benefit Analytic Tools for Development Evaluation, Logic Models in Evaluation, Sampling Techniques I and II, Monitoring and Evaluating Governance in Africa, and Assessing the Outcomes and Impacts of Complex Programs. IPDET graduates have made many contributions to the field, such as establishing national evaluation associations, establishing and leading monitoring and evaluation units, producing country evaluation plans and national evaluation policies, and advancing evaluation in non‐profits, foundations, and the private sector. This reflective chapter examines IPDET's successes by identifying good practices for short‐term evaluation training programs. We review nine major factors contributing to IPDET's longevity in increasing the availability and diversity of evaluators worldwide and examine research on good training practices for short‐term adult evaluation training. Based on IPDET's experience, we suggest additional good practices for evaluation training programs.
Morra Imas, L. G., & Rist, R. (2023). What we can learn from the International Program for Development Evaluation Training (IPDET). New Directions for Evaluation. https://doi.org/10.1002/ev.20540
This chapter focuses on the lived experiences of Ukrainian evaluators currently working amidst political, economic, and social crises. Eight Ukrainian evaluators and evaluation users – drawing on experience from various democracy, human rights, and governance (DRG) and DRG‐related programs – were interviewed via Zoom or provided written responses to the chapter authors, offering insights into how they operate today and how their priorities and perspectives on evaluation have or have not changed. Their responses were summarized by the two main authors. The chapter also provides the background of a DRG program that had to pivot during COVID‐19 and again after Russia's invasion of Ukraine.
Rybiy, O., & Kovats, R. (2022). Evaluation amidst crisis: Voices of Ukrainian DRG evaluators. New Directions for Evaluation. https://doi.org/10.1002/ev.20522
Evaluators who take a complexity‐aware approach must consider tradeoffs related to theoretical parsimony, falsifiability, and measurement validity. These tradeoffs may be particularly pronounced in ex‐post evaluation designs, in which program theory development and monitoring frameworks are often completed before the evaluator is engaged. In this chapter, we argue that theory‐based evaluation (TBE) approaches can address unique ex‐post evaluation challenges that complexity‐aware evaluation (CAE) alone cannot, and that these two sets of approaches are complementary. We outline strategies that evaluators may use to conduct rigorous ex‐post evaluations of democracy, human rights, and governance (DRG) interventions that merge CAE's inductive approaches with a theory‐testing structure, and we illustrate these strategies with two case studies of ex‐post evaluation using process tracing (PT).
Krueger, K., & Wright, M. (2022). Theory amidst complexity: Using process tracing in ex‐post evaluations. New Directions for Evaluation. https://doi.org/10.1002/ev.20524
Over the past five decades, democracy, human rights, and governance (DRG) support has gained momentum as a critical sector within international assistance structures. DRG programs support actors such as civil society organizations, independent media, political parties, and governments, who work to build and sustain democratic processes such as free and fair elections or robust human rights protections. Dealing as they do with whose voice matters and whose priorities are addressed, DRG programs are inherently political, focusing on the allocation of power. The role of DRG in the public mind has changed in recent years, influenced by 16 straight years of global democratic crisis and decline (Repucci & Slipowitz, 2022), along with movements that challenge the allocation of political power, such as Black Lives Matter and #MeToo in the United States, and Decolonize/Localize Aid (#ShiftThePower) in international aid. Discussions about whether and how we reform democratic institutions are now commonplace headlines rather than niche topics limited to political scientists and DRG practitioners. Accordingly, this issue is relevant to anyone concerned with how our political institutions can manage political conflict, and our role as evaluators in understanding and contributing to that process. Indeed, in the American Evaluation Association’s Guiding Principles, evaluators are charged to “contribute to the common good and advancement of an equitable and just society,” including advancing a democratic society (American Evaluation Association, n.d.). DRG program evaluators have worked to develop methods and approaches appropriate to the political nature of DRG work. Much of this work takes place in the “black box,” a term that refers to the complex and often poorly specified inner workings of programs that transform inputs into outcomes. 
DRG programming environments are characterized by shifting power dynamics, ideological conflicts, the competitive allocation of scarce resources, and the interplay between formal and informal institutions. Designing and evaluating programs in these environments is complex, and thus “complexity” has often been a rallying call for DRG evaluators struggling to juggle the dynamism of highly politicized contexts and often highly politicized goals. There are three types of individual responses to the challenge of complexity: those who choose to ignore it, those who acknowledge its relevance yet concede that it is not practical to operationalize, and those who fully embrace it. DRG actors – funders, implementers, change agents, and evaluators – can be found in all three categories. Some use complexity to justify ignoring certain research methods, or to hide behind it and point to a fuzzy future when change will manifest. Others fully embrace the idea but are stymied by the challenge of making “complexity” programmatically or evaluatively useful. Still others use the term as obfuscating shorthand for a host of interrelated cultural, social, r
Oakley, A. A., & Krueger, K. (2022). Editors’ note. New Directions for Evaluation. https://doi.org/10.1002/ev.20532
Complexity manifests in no clearer arena than women's political leadership and overall empowerment. The nature of the leadership journey, composed of multiple events and milestones, paired with environmental factors such as COVID‐19, adds to this complexity. This chapter tackles the challenges of measuring program impact in advancing women's leadership by discussing the Vital Voices Global Partnership (VVGP) approach. Using an adapted contribution mapping and analysis tool, VVGP identified contributions at three levels: programmatic, individual, and community. The tool allowed VVGP to collect evidence about its contributions to improved political leadership and about the overall VVGP model's theory of change (TOC).
Garcia Diaz Villamil, A., Santos Legaspi, R., & Akoto, O. D. A. (2022). Evaluating a woman's leadership journey and impact by adapting contribution mapping and analysis tools. New Directions for Evaluation. https://doi.org/10.1002/ev.20521
The African Evaluation Association (AfrEA) plays a unique role as the pan‐African umbrella organization of African Voluntary Organizations for Professional Evaluation (VOPEs). This chapter, presented in a self‐interview format with the former Chief of the AfrEA Secretariat, discusses an AfrEA initiative called Made in Africa Evaluation (MAE). Part of MAE considers how Indigenous African proverbs contribute to improved evaluation practices in the African context. The interview discusses the approach to the use of proverbs with reference to democracy, human rights, and governance (DRG) programs.
Badjo, S.-Y. A. (2022). Made in Africa: Understanding Indigenous African approaches to democracy, human rights, and governance evaluation through the study of proverbs. New Directions for Evaluation. https://doi.org/10.1002/ev.20526
Democracy, human rights, and governance (DRG) programs work in environments where actors actively undermine program goals; in settings of conflict or war; in situations where program management is physically distant from program implementation; in contexts where target communities have little or no internet or mobile phone connectivity; and in places where stakeholders are subject to intimidation and violence. As a result, DRG program monitoring and evaluation activities must apply principles from the Equitable Evaluation Framework™ to meaningfully engage program staff and participants, mitigate risks, innovate approaches for collecting and analyzing data, communicate results with careful attention to digital, physical, and psychosocial safety, and measure sustainable, culturally relevant changes at the individual and community levels.
Guidrey, M., Bango, E., & Ayoob, A. (2022). Equitable evaluation in remote and sensitive spaces. New Directions for Evaluation. https://doi.org/10.1002/ev.20525
This chapter examines good practices in implementing effective Monitoring, Evaluation, and Learning (MEL) systems within complex international development Democracy, Human Rights, and Governance (DRG) programs, which are characterized by non‐linearity, limited evidence for theories of change, and the contextually and politically contingent nature of outcomes. The chapter presents three cases of MEL systems in complex projects implemented by Pact across distinct and diverse operating contexts – Zimbabwe, Cambodia, and Somalia – to illustrate the MEL approaches that enabled continuous adaptation in those projects. The authors analyze the cases to answer two questions: (1) What are the key elements of effective adaptive management‐focused MEL systems in complex environments? (2) What practical guidance exists for designing and enabling complexity‐responsive, adaptive management‐focused MEL systems? The case studies illustrate three key elements: (1) information gathering that closely links context, research, and performance data; (2) systems for reflection that offer scheduled learning moments of varying frequency and intensity, as well as multiple feedback mechanisms; and (3) enabling structures that promote adaptive mindsets and attitudes within project teams.
Serpe, L., Ingram, M. C., & Byom, K. (2022). Nimble adaptation: Tailoring monitoring, evaluation, and learning methods to provide actionable data in complex environments. New Directions for Evaluation. https://doi.org/10.1002/ev.20523
What is democracy, human rights, and governance (DRG) program evaluation? This chapter defines the subfield, outlines major challenges to undertaking evaluation in highly politicized programming environments, and situates its current state with respect to its historical trajectory. The author describes the pendulum swings DRG evaluation has undergone in methodology, paradigms, and interests, organized into four generations: the Cowboys, the Technocratic Disenchantment, the Messy Middle, and the Complexity Crew. The chapter describes how the concept of complexity came to dominate the subfield, emerging from the importance placed on context and contextual relevance for programs focused on political goals. The author puts forward a framework for operationalizing complexity more consistently within the DRG evaluation sector and concludes with a discussion of current issues and debates in the field.
Oakley, A. A. (2022). “Politics is more difficult than physics”: Complexity and the challenge of democracy, human rights, and governance program evaluation. New Directions for Evaluation. https://doi.org/10.1002/ev.20531