Rigorous evaluation plays an important role in understanding the effectiveness of local growth policies. What Works Growth aims to make impact evaluation easier for policymakers by providing advice and support.
This briefing summarises the lessons learned from our ‘demonstrator evaluations’ of the Eat Out to Help Out scheme, Enterprise Zones, the Growth Vouchers Programme, and Local Major transport schemes. These projects involved working with partners to demonstrate that evaluation of local growth policies is possible and to assess the applicability of different evaluation methodologies to common evaluation questions.
The lessons relate to three broad topics – understanding the intervention, analysis, and data. The briefing also provides some cross-cutting recommendations.
Lessons
Understanding the intervention
Understanding the intervention’s aims and objectives
Understanding the intervention’s aims and objectives is a crucial first step in establishing a clear causal question that an impact evaluation can try to answer. But this can be challenging, particularly if they are too vague, broad or unrealistic.
Lesson 1: If the policy aims and objectives are too vague it can be difficult to figure out which outcomes should be the focus of evaluation.
Lesson 2: Broad aims and objectives complicate the analysis because of the need for data on many outcomes and the additional difficulty of assessing multiple hypotheses about potential policy impacts.
Lesson 3: Even if aims and objectives are clearly specified, evaluation will be difficult if these are unrealistic – either because the policy does not affect the outcome of interest, or because effects are likely to be swamped by other factors.
Understanding selection into treatment and intensity
Impact evaluation uses comparison groups to establish causality. The standard approach is to identify a group of individuals, businesses or areas that are similar to those receiving the treatment but did not receive it. This group is known as the comparison group, while the group that receives the treatment is the treatment group. Comparing changes in outcomes between these two groups gives an estimate of the treatment’s impact. The main challenge is finding a comparison group that is genuinely similar to the treatment group.
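The comparison-group logic above can be sketched with a minimal difference-in-differences calculation. All figures here are hypothetical and chosen for illustration only; they are not drawn from any of the demonstrator evaluations.

```python
# Hypothetical average outcomes (e.g. employment) before and after a policy,
# for a treated group and a similar comparison group.
treated_before, treated_after = 100.0, 112.0
comparison_before, comparison_after = 98.0, 104.0

# Change over the policy period in each group.
treated_change = treated_after - treated_before          # 12.0
comparison_change = comparison_after - comparison_before  # 6.0

# The comparison group's change proxies what would have happened to the
# treated group without the policy; the difference is the estimated impact.
estimated_impact = treated_change - comparison_change
print(estimated_impact)  # 6.0
```

The calculation is only valid if the two groups would have followed similar trends in the absence of the policy, which is why understanding selection into treatment (Lessons 4 and 5) matters so much.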
Lesson 4: Understanding the eligibility criteria and policy selection rules is crucial for choosing the right comparison group.
Lesson 5: Information on the policy can be used to construct comparison groups.
Lesson 6: Variation in how the policy is applied influences the interpretation of findings. If it is unclear what ‘being treated’ means, then we can only interpret the treatment effects as the ‘average’ across different treatments.
Lesson 7: Sometimes there can be variation in the intensity of treatment. This variation can be used in the evaluation, enabling a shift from a simple treated versus untreated approach to understanding how effects vary with treatment intensity.
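As a sketch of Lesson 7, variation in treatment intensity can be used to estimate how effects scale with ‘dose’ rather than a single treated-versus-untreated gap. The data below is invented for illustration; a real evaluation would need many more observations and controls.

```python
import numpy as np

# Hypothetical areas with varying treatment intensity (e.g. spend per firm,
# with 0 for untreated areas) and the change in the outcome of interest.
intensity = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
outcome_change = np.array([1.0, 2.1, 2.9, 4.2, 5.0])

# A simple linear fit of outcome change on intensity: the slope estimates
# how the effect varies with treatment dose.
slope, intercept = np.polyfit(intensity, outcome_change, 1)
print(round(slope, 2))
```

Here the slope, not a binary treatment indicator, carries the estimate of the policy effect; Lesson 9 below flags the extra methodological care this approach requires.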
Analysis
Once we have information about the policy, the next step involves selecting appropriate methods to estimate the causal effect. Choosing the appropriate methods is challenging because each comes with specific assumptions, data requirements, and limitations that can affect the reliability and accuracy of results.
Lesson 8: Evaluators must identify and apply the appropriate methods. Our guide to scoring the evidence provides more information on each method.
Lesson 9: When the analysis focuses on the intensity rather than the occurrence of treatment, evaluators must consider how strongly the policy affects different groups, which may have implications for the appropriate method.
Lesson 10: Methods need to be carefully adapted to ensure that evaluations are tailored to the specific context. Replicating the same analysis as in previous impact evaluations can be misleading without carefully understanding the policy context and assessing whether methods are valid in the new policy setting.
Lesson 11: Less robust methods can be misleading in real world settings because they may fail to account for all the factors influencing the results of the policy.
Lesson 12: Methods are developing constantly, and impact evaluation can take advantage of these methodological improvements.
Lesson 13: Including deadweight effects (outcomes that would have occurred even without the policy) and displacement effects (gains in treated areas that are offset by losses elsewhere) in the analysis is important.
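A small worked example shows why Lesson 13 matters. The figures are entirely hypothetical, but they illustrate how ignoring deadweight and displacement can overstate a policy’s net benefits.

```python
# Hypothetical job figures for a local business-support scheme.
gross_jobs = 100   # jobs observed at supported firms after the policy
deadweight = 30    # jobs that would have been created anyway
displaced = 20     # jobs drawn away from unsupported firms nearby

# Net additional jobs attributable to the policy.
net_additional_jobs = gross_jobs - deadweight - displaced
print(net_additional_jobs)  # 50
```

In this example, reporting the gross figure would double the policy’s apparent impact.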
Data
The availability of suitable, timely data on the outcomes of interest (such as employment, productivity or wages) and, for area-level evaluations, at the appropriate spatial level, is essential if impact evaluation is to draw valid conclusions. Without the right data, it becomes challenging to measure the true impact of interventions, identify causal relationships, and address potential biases.
Lesson 14: Outcome data that is unavailable or measured with error can cause problems when evaluating local growth policies and result in over- or under-estimation of the policy’s benefits.
Lesson 15: Impact evaluation of policies whose outcomes are measured at the area-level requires that data on relevant characteristics and on outcomes is at the appropriate spatial level. Data with low spatial granularity can lead to bias in the estimated impacts and limit evaluators’ ability to detect smaller, more localised effects.
Lesson 16: Choosing between primary and secondary data, or a combination of both, involves important trade-offs.
Recommendations
The briefing makes six cross-cutting recommendations:
- Collect detailed information on the policy.
- Choose your method carefully.
- Consider displacement effects when needed and possible.
- Define appropriate outcomes and the relevant unit of observation for analysis.
- Collect information on treatment and outcomes using geospatial tools.
- Consider time and cost of the data collection.