Levelling up: how will we know what works?

Yesterday’s white paper on levelling up is quite a doorstop, but you don’t have to read far to find reference to ‘what works’, evidence and evaluation. A transformed approach to data and evaluation is one of the five ‘pillars’ underpinning the Government’s approach to levelling up. For example, the work already underway includes new subnational statistics developed by ONS, which will address a significant barrier to policy-relevant evaluation if they provide more accessible and timely data on local outcomes such as productivity. Needless to say, we think this is great news.

Alongside data improvements, the Government also wants to improve incentives for evaluation of place-based policies, learning from existing good practice and collaborating with partners. This will be crucial to the success of levelling up. At present there are far too many aspects of local growth and levelling-up policy where our knowledge of what works trails well behind other policy areas. Here are two things we think could make a real difference.

First, evaluate individual projects – like high street improvements, pedestrianisation and new bus services – as well as the central programmes through which they are funded

Government is rightly committed to evaluating the impact of national funding programmes such as the Levelling Up Fund (LUF). This will tell us whether, overall, such funds have improved the places that received them.

But we also need to learn about which types of spending are effective, which are not, and why, so that central and local government can invest in the best possible ways to improve places across the country. Without this project-level evaluation, we cannot learn about which interventions are most likely to improve local outcomes: we will only ever know whether a pot of money makes a difference on average.

There are challenges. It isn’t possible, or proportionate, to evaluate centrally every project funded through, for example, the LUF. But selecting only some places for central evaluation has implications for programme design. An alternative is to leave project-level evaluation to individual places. But past examples demonstrate that when impact evaluations are optional, places often lack the resources or incentives to do them; and when they are mandated but not properly resourced, this creates problems of its own.

But more can be done. For some types of project in some places, locally led impact evaluation is possible and would benefit everyone; options for ensuring such projects are evaluated can be built into the funding design. And in other cases, where many places are delivering similar projects, government can coordinate and support evaluation so that limited capacity, a lack of local resources and small scale in individual locations are not barriers to evaluating popular interventions for which more evidence on impact is needed. Approaches like these require extra resource and effort, but the alternative is spending billions without taking the opportunity to learn about what works.

Second, incentivise local partners to find out ‘what works’

In 2011 the Department for Education commissioned the Education Endowment Foundation (EEF) to fund and deliver evaluations of education programmes in schools. It has been hugely successful in overcoming the barriers to education evaluation faced by local partners.

Incentives for local authorities and other local partners to assess the effectiveness of their levelling-up interventions are limited, and they face a number of barriers: prioritisation of delivery over evaluation; limited evaluation capacity; and fear of negative findings and their political and funding consequences. We can see the impact of these barriers in a recent mapping exercise we carried out: a call for evidence, which went to all local and combined authorities and local enterprise partnerships among others, produced not a single impact evaluation that met our evidence standards.

These same barriers – delivery priorities, limited capacity, implications of negative findings – are experienced by schools, but the incentive of fully funded interventions has been sufficient to overcome them, allowing the EEF to deliver over 150 education impact evaluations, involving thousands of schools. A model which learns from the success of the EEF, and funds pilot interventions as well as evaluations, could incentivise local bodies as it has schools.

The Government views levelling up as a decades-long project which will require ongoing and coordinated investment. These are precisely the circumstances in which impact evaluation is essential. We’re looking forward to being part of a renewed effort to find out, and act on, what works.