Difference-in-differences
Consider the case of a road repair program whose objective is to improve the population's access to labour markets. An outcome indicator to measure this would be the employment rate within the population. Simply observing employment rates before and after the road repair program is unlikely to tell us the attributable impact, because several other factors, such as education levels and access to networks (known as confounders), also influence the employment rate (Gertler et al., 2012, p. 130). And although randomised controlled trials (RCTs) and other experimental methods are considered the gold-standard design for inferring causal attribution, an RCT is not possible here because the treatment was not administered through random assignment.
In this scenario, a difference-in-differences approach becomes useful.
The difference-in-differences design is a quasi-experimental design that compares changes in outcomes over time between a population enrolled in a program (the treatment group) and a population that is not (the comparison group). The comparison group is usually constructed using 'matching methods', i.e. matching comparison units to treatment units on a set of key parameters, such as gender and education level, to adjust for confounding effects.
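As a rough illustration of such matching, the sketch below pairs each treated unit with its nearest untreated unit on two covariates. The data, column names, and coding are hypothetical and purely for illustration; in practice more covariates and more careful matching (e.g. propensity scores) would be used.

```python
# A minimal sketch of constructing a matched comparison group.
# All data and column names below are hypothetical.
import pandas as pd
from sklearn.neighbors import NearestNeighbors

treated = pd.DataFrame({
    "gender":    [0, 1, 1],        # 0 = male, 1 = female (illustrative coding)
    "education": [10, 12, 8],      # years of schooling
})
untreated = pd.DataFrame({
    "gender":    [0, 0, 1, 1, 1],
    "education": [9, 12, 11, 12, 7],
})

# For each treated unit, find the most similar untreated unit on the covariates.
nn = NearestNeighbors(n_neighbors=1).fit(untreated[["gender", "education"]])
_, idx = nn.kneighbors(treated[["gender", "education"]])
comparison_group = untreated.iloc[idx.ravel()]
print(comparison_group)
```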
First, we calculate the difference in before-and-after outcomes for the enrolled group; this first difference controls for factors that are constant within that group over time. Next, we calculate the difference in before-and-after outcomes for the non-enrolled group; this captures the factors that vary over time for everyone. Taking the difference of these two differences removes both the time-invariant differences between the groups and the common time trend, allowing us to measure the impact while eliminating these sources of bias.
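The calculation itself is simple arithmetic. The sketch below uses made-up employment rates (not real program data) to work through the two differences, and also shows that the same estimate can be recovered as the coefficient on the treatment-by-post interaction in a simple OLS regression.

```python
# A minimal difference-in-differences calculation on hypothetical data.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical employment rates (%) before and after the road repair program.
df = pd.DataFrame({
    "group":    ["treatment", "treatment", "comparison", "comparison"],
    "period":   ["before", "after", "before", "after"],
    "emp_rate": [52.0, 61.0, 50.0, 54.0],
})

# Difference 1: before-to-after change in the treatment group.
treat = df[df.group == "treatment"].set_index("period").emp_rate
diff_treat = treat["after"] - treat["before"]   # 9.0

# Difference 2: before-to-after change in the comparison group.
comp = df[df.group == "comparison"].set_index("period").emp_rate
diff_comp = comp["after"] - comp["before"]      # 4.0

# Difference-in-differences: the estimated program impact.
did = diff_treat - diff_comp                    # 5.0 percentage points
print(f"DiD estimate: {did:.1f} percentage points")

# The same number appears as the interaction coefficient in an OLS regression.
df["treated"] = (df.group == "treatment").astype(int)
df["post"] = (df.period == "after").astype(int)
model = smf.ols("emp_rate ~ treated * post", data=df).fit()
print(model.params["treated:post"])             # 5.0
```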
The diff-in-diff approach thus compares trends between the treatment and comparison groups rather than outcome levels. This can be understood with the help of the following diagram.