The announcement of the annual Nobel prize in economics is big news in the world of academic economists. This year, the award is also relevant for those of us trying to promote the use of impact evaluation to improve government policy. That’s because this year’s accolade has been awarded to David Card, Joshua Angrist and Guido Imbens for their contributions to an evidence-based approach to economics: their research on the effects of minimum wages, immigration and education on labour market outcomes uses natural experiments to better understand the impact of policy.
A key objective of empirical economists is to understand the causal effect of programmes on outcomes, isolated from other factors. Only sound evidence can provide policymakers with reliable information for designing effective interventions that make a difference to people’s lives in terms of employment, poverty or other indicators. To generate this type of evidence, we rely on impact evaluations, which compare changes in outcomes of those supported by a programme against changes in outcomes for a similar group who did not receive support. The former is usually referred to as a treatment group, while the latter as a control group. The most straightforward way of achieving this split is by controlling the process and randomly allocating participants across groups. This is an experimental design, also known as a randomised controlled trial (RCT), which is the gold standard in policy evaluation.
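For readers who like to see the mechanics, the logic of an RCT can be sketched in a few lines of code: randomly split participants into treatment and control groups, then compare average outcomes across the two. This is a minimal illustration with made-up data (the outcome function, the baseline scores and the “true effect” of 5 are all hypothetical), not an implementation of any real evaluation.

```python
import random
import statistics

def run_rct(units, outcome_fn, seed=0):
    """Randomly allocate units to treatment and control, then estimate
    the programme effect as the difference in mean outcomes."""
    rng = random.Random(seed)
    shuffled = units[:]
    rng.shuffle(shuffled)  # random allocation is what makes the groups comparable
    half = len(shuffled) // 2
    treatment, control = shuffled[:half], shuffled[half:]
    effect = (statistics.mean(outcome_fn(u, treated=True) for u in treatment)
              - statistics.mean(outcome_fn(u, treated=False) for u in control))
    return effect

# Hypothetical outcome: a baseline employment score, plus a true
# programme effect of 5 for treated units.
def outcome(u, treated):
    return u["baseline"] + (5 if treated else 0)

units = [{"baseline": 40 + i % 10} for i in range(200)]
print(round(run_rct(units, outcome), 1))  # close to the true effect of 5
```

Because allocation is random, the two groups have similar baselines on average, so the difference in means recovers something close to the true effect; with a small sample the estimate would be noisier.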
Yet, it is not always possible or desirable to split and randomly allocate places, people or businesses into treatment and control groups. Many programmes are offered on a first-come, first-served basis. Some circumstances require swift policy action, for example, supporting people and businesses in response to a big and unexpected shock such as the COVID-19 pandemic. The Nobel laureates have repeatedly shown that robust evaluations are still possible even when policies and events occur naturally, which is why these are referred to as ‘natural experiments’ or ‘quasi-experimental’ designs. The principle, however, is similar: identifying those who were supported and comparing them with a similar group who did not receive support, but without controlling the process of who benefits from the intervention. Instead, this approach exploits specific circumstances of the programme, for example the timing of implementation, differences in the type of support provided, or eligibility rules, to establish a good comparison group.
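One widely used quasi-experimental design that exploits the timing of implementation is difference-in-differences: compare the change over time in the treated group with the change over time in a comparison group, so that any common trend cancels out. The sketch below uses purely illustrative numbers (both groups trend upward by 2; the programme adds a further 3 to the treated group), not data from any real programme.

```python
import statistics

def diff_in_diff(outcomes):
    """outcomes maps (group, period) -> list of outcome values, with
    group in {"treated", "comparison"} and period in {"before", "after"}.
    The estimate is the treated group's change over time minus the
    comparison group's change over time."""
    mean = lambda g, p: statistics.mean(outcomes[(g, p)])
    treated_change = mean("treated", "after") - mean("treated", "before")
    comparison_change = mean("comparison", "after") - mean("comparison", "before")
    return treated_change - comparison_change

# Illustrative data: a shared upward trend of 2, plus a programme
# effect of 3 for the treated group after implementation.
data = {
    ("treated", "before"): [10, 12, 11],
    ("treated", "after"): [15, 17, 16],
    ("comparison", "before"): [9, 11, 10],
    ("comparison", "after"): [11, 13, 12],
}
print(diff_in_diff(data))  # → 3
```

The key assumption is that, absent the programme, both groups would have followed the same trend; the comparison group’s change stands in for what would have happened to the treated group anyway.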
Natural experiments ultimately depend on the specific circumstances of each programme, and in many cases we are not able to obtain a good comparison group. That’s why we encourage the use of RCTs whenever possible. But if that cannot be done, the work of this year’s Nobel laureates shows us how we can exploit the natural conditions of the policy to conduct a robust evaluation.
The quest for refining how we assess the impact of policies, including evaluations from natural experiments, has increased the quantity and quality of evidence available in many policy areas. The What Works Centre for Local Economic Growth uses the Maryland Scientific Methods Scale (SMS) to score policy evaluations based on the robustness of the research method. Quasi-experimental methods usually score high (SMS 3 or 4 out of 5). As such, our evidence reviews and toolkits include insights from many natural experiments. If you work in local government and are interested in developing partnerships to support high-quality evaluations, please get in touch with our Head of Evidence, Victoria Sutherland.