As evaluators, we face an ongoing challenge: providing clear-cut findings to inform decision-making whilst exercising caution when making causal claims (X caused Y). Many outcome evaluations cannot confidently make causal claims because they cannot establish a counterfactual, for ethical and/or practical reasons. Process Tracing is one alternative approach to demonstrating causal links.
There are two main ways to approach causal attribution in evaluation: counterfactual approaches, which estimate what would have happened without the intervention, and theory-based approaches, which examine the evidence for a theorised causal pathway from cause (X) through mechanism (M) to outcome (Y).
M represents a causal ‘mechanism’. Mechanisms are described in many different ways in the literature, but the most practical definition is that they exist inside people’s heads: mechanisms are the cognitive and emotional reactions people have to the cause (X). It’s these reactions that create change, not the intervention itself. Mechanisms are therefore the force that produces the outcome (Y). In a Theory of Change diagram, mechanisms sit between outputs and outcomes.
One reason we have previously argued that behavioural science can improve evaluation is because it offers important insight into mechanisms.
Process Tracing is a Theory-Based Evaluation approach and therefore makes causal claims by examining evidence for theorised causal pathways – the links between causes, mechanisms, and outcomes. It can make more rigorous causal claims than some other theory-based approaches, such as Contribution Analysis, but it also requires more data collection and analysis.
First, evaluators generate a set of hypotheses and counterfactual hypotheses about causes, mechanisms, and outcomes. Ideally, they should specify the evidence needed to confirm or disconfirm each causal pathway before any data are collected. For example:
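As an illustration of the kind of pre-specification involved, one row of a hypothesis table might be structured as follows. This is a minimal sketch; the content (a hypothetical flexible-working intervention) and field names are ours, not drawn from the article:

```python
from dataclasses import dataclass

@dataclass
class CausalHypothesis:
    """One theorised link in the causal pathway, with the evidence
    needed to confirm or disconfirm it specified before data collection."""
    cause: str                          # X: the intervention
    mechanism: str                      # M: the cognitive/emotional reaction
    outcome: str                        # Y: the change of interest
    confirmatory_evidence: list[str]    # indicators that would support the link
    disconfirmatory_evidence: list[str] # indicators that would count against it
    data_sources: list[str]

# Hypothetical example row
hypothesis = CausalHypothesis(
    cause="Flexible-working policy introduced",
    mechanism="Staff feel trusted and in control of their time",
    outcome="Reduced sickness absence",
    confirmatory_evidence=[
        "Interviewees spontaneously link the policy to feeling trusted",
    ],
    disconfirmatory_evidence=[
        "Absence falls equally in teams not covered by the policy",
    ],
    data_sources=["Staff interviews", "HR absence records"],
)
```

Writing the confirmatory and disconfirmatory indicators down in this structured form, before fieldwork, is what allows the later tests of causality to be applied cleanly.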
Next, evaluators collect data to ‘trace’ these hypotheses and counterfactual hypotheses within a single case study (e.g., one workplace) or multiple case studies (e.g., multiple workplaces).
Finally, causal claims are made by applying four tests to the evidence:

- Straw-in-the-wind: passing the test mildly supports a hypothesis and failing mildly counts against it, but neither is decisive.
- Hoop: the evidence is necessary but not sufficient, so failing the test eliminates a hypothesis, while passing merely keeps it in play.
- Smoking gun: the evidence is sufficient but not necessary, so passing the test confirms a hypothesis, while failing does not eliminate it.
- Doubly decisive: the evidence is both necessary and sufficient, so passing the test confirms a hypothesis and eliminates its rivals.
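The logic behind the four standard tests (straw-in-the-wind, hoop, smoking gun, doubly decisive) turns on whether a piece of evidence is necessary and/or sufficient for the hypothesis. A minimal sketch of that classification; the function name is ours, not from the article:

```python
def process_tracing_test(necessary: bool, sufficient: bool) -> str:
    """Classify a piece of evidence by whether passing the test is
    necessary and/or sufficient for the hypothesis to hold."""
    if necessary and sufficient:
        return "doubly decisive"   # passing confirms; failing eliminates
    if necessary:
        return "hoop"              # failing eliminates; passing only keeps it alive
    if sufficient:
        return "smoking gun"       # passing confirms; failing doesn't eliminate
    return "straw in the wind"     # weakly suggestive either way

print(process_tracing_test(necessary=True, sufficient=False))  # prints "hoop"
```

Seen this way, the planning task is to seek out evidence in the top row of the 2×2 (necessary, sufficient, or both) rather than accumulating straw-in-the-wind indicators.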
Process Tracing can be complex to execute, so the best tip we can offer is to plan thoroughly how you will conduct the four tests of causality before commencing data collection. You don’t want to waste time and resources collecting ‘straw in the wind’ evidence when a little planning could ensure you collect data that enables stronger causal claims.
Start with a detailed Theory of Change to develop your hypotheses and counterfactual hypotheses. Create a table like the one shown above, listing the indicators and data sources for the confirmatory and disconfirmatory evidence. You can then design your data collection instruments knowing that you’re not just accumulating data in the hope of clear-cut findings. Instead, you’re efficiently gathering data for testing purposes, so that you can support or refute the larger claims that are most helpful to decision-makers.
This article was issued under our former global brand name: Kantar Public.