Using our evidence reviews

We are currently finalising our first evidence review, on Employment Training policy, which will be available on 4th April. During the review we have received several queries about how policymakers can use our reviews to inform their decision-making. Below are some responses to those queries.

Focusing on Impact

The Centre’s reviews consider a specific type of evidence – impact evaluation. This type of evidence seeks to identify and understand the causal effect of policy interventions and to establish their cost-effectiveness. To put it another way, impact evaluations ask ‘did the policy work?’ and ‘did it represent good value for money?’

Our focus on impact reflects the fact that we often simply do not know the answers to these and other basic questions that might reasonably be asked when designing a new policy. Figuring out what we do know will enable policymakers to design better policies and to undertake further evaluations that start filling the gaps in our knowledge. Of course, we don’t always find this evidence – the employment training review, for example, turned up depressingly little good information on programmes’ value for money.

Our evidence reviews do not focus on process evaluation – which looks in detail at how programmes operate day to day. This is obviously important, but it falls outside the Centre’s direct remit. That said, impact evaluations also help us to have more informed discussions about nuts-and-bolts process and delivery issues.

Identifying common programme characteristics

Each individual evidence review will set out a number of ‘Best Bets’, which outline what tends to work in a given policy area based on the best available impact evaluations. By looking at the details of the policies evaluated, these ‘Best Bets’ highlight the common characteristics of programmes and projects that have positive effects. For example, employment training policies that offer in-firm training tend to perform better than those that offer classroom-based training.

The Best Bets cannot provide definitive guidance as to what will or won’t work in any specific context. But they should provide useful guidance when a policy is being designed, and may raise a note of caution if policymakers decide to try something that has not worked so well in the past.

Complementing local knowledge

These Best Bets will not generally address ‘what works where’ or ‘what will work for a particular individual’. In some cases evaluations do break out results by area type or for different groups. But even when they do, detailed local knowledge and context remain crucial.

So the evidence from these impact evaluations should be regarded as a complement to, not a substitute for, local, on-the-ground practitioner knowledge. Any policy intervention, whether it’s focused on employment training, business support or land remediation, will need to be tailored and targeted – and an accurate diagnosis of the specific local challenges a policy seeks to address is the first step to understanding how the evidence applies in any given situation.

Helping to fill the evidence gaps

The local economic growth evidence base is nowhere near complete. There are many gaps in our knowledge that could be filled by relatively simple changes to policy design, better information gathering and improved evaluation.

The gaps highlight the need for more policy experimentation – specifically, for experiments that identify how different elements of policy design contribute to better (or worse) outcomes, and that establish the value for money of different approaches. More experimentation also requires evaluation to be embedded in the policy design process, rather than treated as a last-minute add-on, which means thinking differently about the policy-making cycle as a whole.

Our report, due at the beginning of April, will include more suggestions for how to use the evidence.