Overview

A realist evaluation is a theory-led approach to evaluation that seeks to understand what works, for whom, in what circumstances and in what respects, and so why an intervention is more likely to succeed in some settings than in others.

Pawson and Tilley’s (1997) starting point for setting out the realist approach to evaluation is to argue that the ‘traditional’ experimental evaluation is flawed because any attempt to reduce an intervention to a set of variables, and to control for differences by using an intervention group and a control group, strips out context. They propose a different, ‘realist’ model of explanation in which ‘causal outcomes follow from mechanisms acting in contexts’ (Pawson and Tilley 1997, p. 58). Context-Mechanism-Outcome configurations (CMOs) are thus key to impact evaluation.

A mechanism explains what it is about a programme that makes it work. Mechanisms are not variables but accounts that cover individual agency and social structures. They are mid-level theories that spell out the potential of human resources and reasoning (Pawson and Tilley 1997).

Causal mechanisms and their effects are not fixed but are contingent on context. A programme will only be effective if the contextual conditions surrounding it are conducive (Pawson and Tilley 1994).

What is involved?

There is still much debate about exactly how to undertake a realist evaluation; however, it is possible to set out some key elements of any realist evaluation.

The starting point is mid-level theory building; ‘empirical work in programme evaluation can only be as good as the theory which underpins it’ (Pawson and Tilley 1997, p. 83).

Identifying programme mechanisms is key. Wong and colleagues (2013b) suggest that one way to identify a programme mechanism is to reconstruct, in the imagination, the reasoning of participants or stakeholders. They also note that mechanisms cannot be seen or measured directly (because they happen in people’s heads or at different levels of reality from the one being observed). There will potentially be many mechanisms and the role of the realist researcher is to identify the ‘main mechanisms’. The ‘causes’ of outcomes are not simple, linear or deterministic. This is partly because programmes often work through multiple mechanisms and partly because a mechanism is not inherent to the intervention, but is a function of the participants and the context.

Mechanisms are context‐sensitive and the evaluation must develop an ‘understanding of how a particular context acts on a specific program mechanism to produce outcomes – how it modifies the effectiveness of an intervention’ (Wong et al. 2013b, p. 9). Pulling these elements together, the scientific realist evaluator always constructs their explanation around the three vital ingredients of context, mechanism and outcome, which Pawson and Tilley refer to as context-mechanism-outcome configurations.

Although both quantitative and qualitative data are used in realist evaluation, there is generally more emphasis on the iterative gathering of qualitative data that allows for theory to be developed and explored.

The standard realist data matrix would make comparisons of variations in outcome patterns across groups, but those groups would not be experimental and control groups. Instead, they would be defined by CMO configurations, with the evaluator running a systematic range of comparisons across a series of studies to understand which combination of context and mechanism is most effective (Pawson and Tilley 1994).
Download a Realist Evaluation case study here
Download a longer briefing on Realist Evaluation here

Useful resources

The RAMESES II project, funded by the NIHR, developed quality and reporting standards, resources, and training materials for realist evaluation. These are available online here.

There is a Supplementary Guide on realist evaluation, issued as part of the Magenta Book 2020, available here.

Pawson and Tilley’s influential and widely cited book on scientific realist evaluation is a good starting point for exploring scientific realism:

Pawson, R. and Tilley, N. (1997) Realistic evaluation. London: Sage.