Focus groups are valuable tools with which evaluators can help stakeholders clarify programme theories. In 1987, R.K. Merton, often credited with the birth of focus groups, wrote about how these were 'being mercilessly misused'. In the 1940s, his team had conceived focus groups as tools for developing middle-range theory, but through their astonishing success focus groups have metamorphosed and are often an 'unchallenged' choice in many evaluation approaches, while their practice presents a philosophically diverse picture. This article examines what knowledge focus group data generate and how they support theory development. It starts with an overview of the history of focus groups, establishing a relationship between their emergence as a data collection method and the evaluation profession. Using realist evaluation as an example, the article explores how qualitative data can support programme and middle-range theory development and suggests practical lessons for conducting focus groups in realist evaluation.
In Dutch healthcare, new market mechanisms have been introduced on an experimental basis in an attempt to contain costs and improve quality. Informed by a constructivist approach, we demonstrate that such experiments are not neutral testing grounds. Drawing from semi-structured interviews and policy texts, we reconstruct an experiment on free pricing in dental care that turned into a critical example of market failure, influencing developments in other sectors. Our analysis, however, shows that (1) different market logics and (2) different experimental logics were reproduced simultaneously during the course of the experiment. We furthermore reveal how (3) evaluation and political life influenced which logics were reproduced and came to be taken as the lessons learned. We use these insights to discuss the role of evaluation in learning from policy experimentation and close with four questions that evaluators could ask to better understand what is learned from policy experiments, how, and why.
International development organizations increasingly use advocacy as a strategy to pursue effectiveness. However, establishing the effectiveness of advocacy is problematic and dependent on the interpretations of the stakeholders involved, as well as the interactions between them. This article challenges the idea of objective and rational evaluation, showing that advocacy evaluation is an inherently political process in which space for interactions around methods, processes and results defines how effectiveness is interpreted, measured and presented. In addition, this article demonstrates how this space for interaction contributes to the quality and accuracy of evaluating advocacy effectiveness by providing room to explore and address the multiplicities of meaning around identifying, measuring and presenting outcomes.
This article describes a theory-driven evaluation of one component of an intervention to improve the quality of health care at Ugandan public health centres. Patient-centred services have been advocated widely, but such approaches have received little attention in Africa. A cluster randomized trial is evaluating population-level outcomes of an intervention with multiple components, including 'patient-centred services'. A process evaluation was designed within this trial to articulate and evaluate the implementation and programme theories of the intervention. This article evaluates one hypothesized mechanism of change within the programme theory: the impact of the Patient Centred Services component on health-worker communication. The theory-driven approach extended to the evaluation of the outcome measures. The study found that care seekers consulting health workers at intervention health centres rated the proximal outcome of patient-centred communication 10 percent higher (p < 0.008) than those consulting health workers at control health centres. This finding will strengthen interpretation of more distal trial outcomes.
The use of evaluation results is at the core of evaluation theory and practice. Major debates in the field have emphasized the importance of both the evaluator's role and the evaluation process itself in fostering evaluation use. A recent systematic review of interventions aimed at influencing policy-making or organizational behavior through knowledge exchange offers a new perspective on evaluation use. We propose here a framework for better understanding the embedded relations between evaluation context, choice of an evaluation model and use of results. The article argues that the evaluation context presents conditions that affect both the appropriateness of the evaluation model implemented and the use of results.
Models that shift more responsibility onto researchers for incorporating research results into decision-making have gained considerably in popularity during the past two decades. This shift has created a new area of research to identify the best ways to transfer academic results into the organizational and political arenas. However, evaluating the utilization of information coming out of a knowledge transfer (KT) initiative remains an enormous challenge. This article demonstrates how logic analysis has proven to be a useful evaluation method to assess the utilization potential of KT initiatives. We present the case of the evaluation of the Research Collective on the Organization of Primary Care Services, an innovative experiment in knowledge synthesis and transfer. The conclusions focus not only on the utilization potential of results coming out of the Research Collective, but also on the theoretical framework used, in order to facilitate its application to the evaluation of other knowledge transfer initiatives.
Implementation evaluations, also called process evaluations, involve studying the development of programmes and identifying and understanding their strengths and weaknesses. Undertaking an implementation evaluation offers insights into evaluation objectives, but does not help the researcher develop a research strategy. During the implementation analysis of the UNAIDS drug access initiative in Chile, the strategic analysis model developed by Crozier and Friedberg was used. However, a major incompatibility was noted between the procedure put forward by Crozier and Friedberg and the specific characteristics of the programme being evaluated. In this article, an adapted strategic analysis model for programme evaluation is proposed.

