Our evidence reviews summarise findings on the effectiveness of local growth policies. They are a valuable resource and cover a wide range of policy areas. They draw on evaluations of business advice to support SMEs, results from trials of different ways to support apprentices to complete training, and the impact of rail investment on area businesses, among many others.
These reviews begin with an initial search that often finds over 1,000 studies and ends with between 16 and 80 studies that meet our evidence standards. Although they can be based on a small number of studies, these evidence reviews tell us quite a bit about what we can expect from local growth interventions.
However, we don’t have enough robust impact evaluations to help us answer many of the questions that policymakers have. And we certainly don’t have enough robust impact evaluations of local growth policies in the UK.
Addressing this lack of evidence isn’t easy, but it can, and should, be done.
Learning from what works for what works evaluations
Our new publication, Lessons and recommendations from What Works Growth’s evaluations of local growth policies, draws on a number of evaluations that What Works Growth has carried out over the years: the Eat Out to Help Out scheme, Enterprise Zones, the Growth Vouchers Programme, and Local Major transport schemes.
The briefing draws out a number of lessons and recommendations to help deliver more robust impact evaluation. A lot of this involves practical points around elements like data collection and selection criteria that should help evaluation leads in government departments and in combined authorities.
But there are overarching lessons as well.
Evaluating the effectiveness of local growth policies is possible
As I noted above, robust impact evaluation isn’t easy. But that doesn’t mean it can’t be done.
The evaluations that fed into this publication demonstrate that evaluation of local growth policies in the UK is possible. Not only is it possible, but it can be done even when there are hiccups along the way with data collection or when methods need to be adapted. The lessons learned here are designed to help future evaluations run more smoothly.
The demonstrator evaluations also often required existing evaluation methodologies to be adapted to a new setting, a different geographical scale, or a different situation. Future evaluations can build on these methods to improve the robustness of evaluations, and continue to grow the evidence base around the effectiveness of interventions.
Useful lessons for all policymakers when designing projects
Many of the lessons have wider applicability as well, whether or not local authorities are in a position to commission impact evaluation. These include:
- Projects are better when they have clear aims and outcomes—that’s why we’ve delivered logic model training to officers from over 120 LAs and MCAs.
- Policymakers should beware claims around impact from less robust evaluation methods—we cover this on our web content around impact evaluation.
- Policymakers can, and should, think through whether an intervention will have the desired effect before the policy is agreed and implemented—we recommend considering deadweight and displacement early on.
Whether you are commissioning an impact evaluation or drawing on evidence summaries to make policies more evidence-informed, we all benefit from an increase in robust impact evaluations of local growth policies.