UNDERSTANDING IMPACT EVALUATION

Designing local growth policies to facilitate robust programme-level impact evaluation

Programme-level evaluation assesses the overall impact of the programme on its intended outcomes. Because evaluation options are greatest when evaluation is considered during the policy development process, this briefing outlines four features of programme design that would help make programme-level impact evaluation possible.

Focus on a narrow range of interventions and outcomes

Impact evaluation considers whether an intervention leads to changes in intended outcomes. ‘What works’ questions are often phrased in the format: Did intervention A lead to outcome X? One implication is that narrowly defined interventions and outcomes help facilitate impact evaluation at the programme level.

Local growth policies that focus on a narrow range of interventions and outcomes would make data collection and analysis easier, reduce the risk of spurious findings from multiple hypothesis testing, and make it easier to construct a comparison group.
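
To illustrate the multiple hypothesis testing point: the more outcomes a programme tests, the higher the chance of at least one spurious ‘significant’ result. The sketch below uses purely illustrative outcome counts and assumes independent tests.

```python
# Illustrative only: family-wise error rate when testing many outcomes
# at a 5% significance level, assuming the tests are independent.
alpha = 0.05

for n_outcomes in [1, 5, 10, 20]:
    # Probability of at least one false positive across all the tests
    fwer = 1 - (1 - alpha) ** n_outcomes
    # Bonferroni-adjusted threshold that holds the family-wise rate near 5%
    bonferroni_alpha = alpha / n_outcomes
    print(f"{n_outcomes:>2} outcomes: chance of a spurious result ≈ {fwer:.0%}, "
          f"Bonferroni threshold = {bonferroni_alpha:.4f}")
```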

Establish a comparison group

Impact evaluation uses comparison to establish causality. The standard approach is to create a group of individuals, businesses or areas that are similar to those being treated, but that did not receive treatment. Changes in outcomes can then be compared between the treatment group and the comparison group. The main challenge is how to ensure the comparison group is similar to the treatment group.
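
One common way to formalise this comparison of changes is a difference-in-differences regression. Below is a minimal sketch, assuming a hypothetical panel dataset with columns named outcome, treated (1 for the treatment group) and post (1 for periods after the programme starts); the file name and column names are illustrative, not from any specific programme.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per area (or business) per period.
# Assumed columns: 'outcome', 'treated' (1 = treatment group), 'post' (1 = after programme start).
df = pd.read_csv("programme_panel.csv")  # illustrative file name

# Difference-in-differences: the coefficient on treated:post estimates how much
# more outcomes changed in the treatment group than in the comparison group.
# In practice, standard errors would usually be clustered by area.
model = smf.ols("outcome ~ treated + post + treated:post", data=df).fit()
print(model.summary().tables[1])
```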

Programme design influences whether or not it is possible to establish a comparison group. Examples include allocating funds to some areas only (provided allocation is not linked to need or other factors that will affect outcomes), rolling funding out in phases, allocating funding at different intensities (for example, some areas receive higher funding per head than others on a random basis), or using a cut-off for funding eligibility. However, all of these choices can be politically difficult. Alternatively, if programmes focus on a narrow set of interventions and outcomes, it may be possible to evaluate at the individual or business level, where it is generally easier to establish a comparison group.
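
Where funding is allocated using an eligibility cut-off, a regression discontinuity design compares areas just above and just below the threshold. A minimal sketch with an illustrative dataset, cut-off and bandwidth (the column names score and outcome are assumptions):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("areas.csv")   # illustrative: one row per area, with 'score' and 'outcome'
cutoff = 30.0                   # illustrative eligibility threshold
bandwidth = 5.0                 # only compare areas close to the cut-off

# Centre the eligibility score on the cut-off and flag funded areas.
df["running"] = df["score"] - cutoff
df["funded"] = (df["score"] >= cutoff).astype(int)

# Local linear regression either side of the cut-off: the coefficient on
# 'funded' estimates the jump in outcomes at the eligibility threshold.
near = df[df["running"].abs() <= bandwidth]
model = smf.ols("outcome ~ funded + running + funded:running", data=near).fit()
print(model.params["funded"])
```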

Consider statistical power

As impact evaluation relies on statistical methods, the number of observations matters. More observations increase statistical ‘power’ – the likelihood of obtaining statistically significant results that accurately reflect whether the policy has positive, negative or no effects.

Focusing local growth policies on a narrow set of interventions and outcomes makes it more likely that evaluation at the individual or business level will be feasible, and this can generate a large number of observations. If programme-level evaluation needs to be at the area level, the number of observations will depend on the number of areas included.
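
To make this concrete, the sketch below uses the statsmodels power calculator with purely illustrative numbers to show how the power to detect a given effect falls as the number of observations per group shrinks.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.3   # illustrative standardised effect size (Cohen's d)

# Power of a simple two-group comparison at the 5% significance level,
# for different numbers of observations per group.
for n_per_group in [20, 50, 200, 1000]:
    power = analysis.power(effect_size=effect_size, nobs1=n_per_group,
                           alpha=0.05, ratio=1.0)
    print(f"n = {n_per_group:>4} per group: power ≈ {power:.2f}")
```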

Check the programme is big enough to have a detectable effect

The importance of the policy in affecting outcomes, relative to everything else that affects those outcomes, will also matter. Where a policy plays only a small role, it is less likely that an effect will be detected.

This could be addressed by focusing local growth policies on outcomes that can be measured at the individual or business level, where effects can be easier to detect. If the programme focuses on outcomes measured at the area level, concentrating larger investments in fewer areas may increase the likelihood that an effect can be detected, but fewer areas also mean fewer observations and therefore lower statistical power.
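
One way to sanity-check this trade-off is to calculate the minimum detectable effect for a given number of areas. The sketch below again uses statsmodels with illustrative numbers, assuming an equally sized comparison group.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Smallest standardised effect detectable with 80% power at the 5% level,
# for different numbers of treated areas (with an equally sized comparison group).
for n_areas in [10, 30, 100, 500]:
    mde = analysis.solve_power(effect_size=None, nobs1=n_areas,
                               alpha=0.05, power=0.8, ratio=1.0)
    print(f"{n_areas:>3} treated areas: minimum detectable effect ≈ {mde:.2f} s.d.")
```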

Other considerations

Programme evaluation may not be the priority. For example, Labour’s manifesto set out ambitions in relation to devolution and long-term (single) settlements for councils. This suggests that local decision-making about how funds are spent will be prioritised. As a result, the requirement that policy is ‘focused on a narrow range of interventions and outcomes’ is unlikely to be met (especially if a ‘single pot’ approach is adopted), making programme-level evaluation difficult for central government to implement.

When programme-level evaluation is not feasible, it is important to explore other options for counterfactual impact evaluation (for example, intervention-level, place-level or project-level evaluation) and to ensure that policy design makes these possible.
