EVALUATION SUPPORT FOR LOCAL AREAS

We help places to evaluate their local growth policies.

At What Works Growth, we work with places to address the barriers to doing high-quality evaluation. We provide training, resources, and bespoke support, all free of charge.

Our training is aimed at people in local policymaking and designed to be practical, using real-life examples and activities from economic development.

Browse our training offer

What is impact evaluation? Why is it important? When is it appropriate? Key concepts to help you understand and think through impact evaluation.

Understanding impact evaluation

What makes a good impact evaluation? A robust impact evaluation is at the heart of understanding what really works in local economic growth policy.

An 8-step guide to better evaluation

We support central government policymakers and analysts working on local growth to use evidence effectively and deliver high-quality evaluation.

Central government support

What kind of evaluation?

The focus of all What Works Centres is to support evidence-based policymaking – the idea that decisions on the allocation of public resources, or on setting public rules, should be informed by systematically accumulated knowledge and impartial analysis. Ideally, this leads to better and more cost-effective policy.

Evaluation is one way of generating this knowledge. It looks at an intervention after it has been implemented, to try to understand what effect it had, how, and why. This contrasts with analysis carried out before a policy is implemented (‘appraisal’), which tries to predict what its effects might be.

Different types of evaluations answer different questions. At What Works Growth, we focus mainly on impact evaluations. 

Impact evaluations assess whether a given intervention – such as a business support scheme – has made a difference to a given outcome – such as business survival. The key questions are whether a policy affects the outcomes it was designed to influence, how large those effects are, and how they compare with other policies trying to achieve the same thing.

Impact evaluation is crucial for cost-effective policymaking: if money is spent on interventions that do not affect the intended outcomes, public funds are wasted and people’s lives are not improved by policy.

We offer training, resources, and bespoke support to help places design, deliver or commission impact evaluations. Our training and guidance also support places looking to improve monitoring and process evaluation.


Impact evaluation

Impact evaluation asks the question ‘what difference did the intervention make?’ It focuses on understanding whether the outcomes observed (for example, whether new businesses survive for two years after start-up) can be attributed to (i.e. they were caused by) the project or programme being evaluated (for example, a training programme for new entrepreneurs). This is known as ‘causal impact’ (i.e. the project or programme is the cause of the outcomes).

Impact evaluations establish causality by using comparison. Ideally, to identify the effect of a policy, evaluators would have access to a parallel world where they could compare the same individuals, businesses, or places under two different scenarios – one where the policy happens (known as treatment) and one where it does not. They could then see what would have happened in the absence of the policy and whether the policy made a difference. Since it isn’t possible to observe what would have happened without the treatment, evaluators need to construct another comparison, often referred to as a control group, comparison group, or counterfactual.

The idea behind a counterfactual is to try to mimic that parallel world. This requires constructing or finding a comparison group which is as similar as possible to the group who receive the intervention. Outcomes can then be compared between the ‘treatment group’ and the ‘comparison group’.

If the comparison group is almost identical, with no meaningful differences between those who receive the policy and those who do not, then the only remaining difference between the groups is the policy itself. Any difference in outcomes can therefore be attributed to the policy.
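
To make the logic of this comparison concrete, here is a minimal sketch in Python using entirely made-up numbers for a hypothetical business support scheme (the group names and figures are illustrative, not drawn from any real evaluation): the estimated impact is simply the difference in average outcomes between the treatment group and the comparison group.

```python
# Illustrative sketch only: made-up outcomes for a hypothetical business
# support scheme. The outcome is whether a business survived two years
# after start-up (1) or not (0).
treatment_group = [1, 1, 0, 1, 1, 0, 1, 1]   # businesses that received support
comparison_group = [1, 0, 0, 1, 0, 1, 0, 1]  # similar businesses that did not

def survival_rate(outcomes):
    """Share of businesses in a group that survived."""
    return sum(outcomes) / len(outcomes)

# If the groups are genuinely comparable, the difference in survival rates
# is an estimate of the policy's causal impact.
estimated_impact = survival_rate(treatment_group) - survival_rate(comparison_group)
print(f"Estimated impact on two-year survival: {estimated_impact:+.2f}")
```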

The main challenge of impact evaluation is constructing a counterfactual that is credible – in other words, one that is genuinely similar. A big part of this is done by accounting for “observables” – characteristics of people, businesses or places which are known to the evaluator, which affect the likelihood of being targeted by the policy, and which independently affect the outcomes being measured. More robust methods use statistical techniques to account for so-called “unobservables” – characteristics of people, businesses or places which cannot be observed by the evaluator. Different impact evaluation methodologies take different approaches to dealing with this problem.
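
As one illustration of how such methods work, the sketch below uses made-up figures to show a simple difference-in-differences calculation, a common way of netting out unobservable differences between groups that do not change over time: each group’s change in outcomes is measured, and the comparison group’s change stands in for what would have happened to the treated group without the policy.

```python
# Illustrative difference-in-differences sketch with made-up figures,
# not drawn from any real evaluation.
# Outcome: average employment per business, measured before and after the policy.
treatment_before, treatment_after = 10.0, 13.0    # businesses receiving the policy
comparison_before, comparison_after = 9.0, 10.5   # similar businesses not receiving it

# Change over time within each group. Fixed differences between the groups
# (time-invariant 'unobservables') cancel out when we take these changes.
treatment_change = treatment_after - treatment_before     # 3.0
comparison_change = comparison_after - comparison_before  # 1.5

# The comparison group's change stands in for what would have happened
# to the treated businesses without the policy.
did_estimate = treatment_change - comparison_change
print(f"Difference-in-differences estimate: {did_estimate:.1f} extra jobs per business")
```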


Bespoke support for impact evaluations 

We provide expert help and advice on impact evaluation design and delivery.

All our support is free, and can include:

  • early conversations to find the right evaluation design for the project;
  • review of evaluation specification and ‘invitation to tender’ documents; 
  • sitting on advisory or steering groups; and
  • peer review of findings.

We may also be able to help deliver your impact evaluation if it is well designed and fills a gap in the evidence base.
