Most local growth policies – in the UK and elsewhere – are not evaluated. This makes learning ‘what works’ difficult and means public resources may not be spent in the most cost-effective way.
Whilst it is easy to criticise the lack of impact evaluation, there are sometimes good reasons for this. For example, impact evaluation relies on statistical power, which requires a large number of observations. Some policies aren’t large enough to generate that many observations – for example, if they only work with a small number of businesses or individuals, or affect a relatively small area. Another common challenge is that data on outcomes isn’t available.
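To see why small programmes struggle with statistical power, it helps to run the numbers. The sketch below uses the standard two-sample size formula for a simple difference-in-means comparison; the function name and the conventional 5% significance and 80% power thresholds are our illustrative choices, not something prescribed by any particular evaluation guide.

```python
from math import ceil
from statistics import NormalDist

def per_group_sample_size(effect_size, alpha=0.05, power=0.8):
    """Rough sample size needed in EACH of the treatment and comparison
    groups to detect a standardised effect (Cohen's d) of `effect_size`,
    assuming a two-sided difference-in-means test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = NormalDist().inv_cdf(power)           # value for the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A policy producing only a small effect (d = 0.2) needs roughly 400
# treated units plus 400 comparison units -- more than many small
# local programmes can supply.
print(per_group_sample_size(0.2))  # → 393
print(per_group_sample_size(0.5))  # → 63 (a medium effect is much easier to detect)
```

Local growth policies typically produce small effects on outcomes like employment or wages, so the first case is the realistic one: a programme reaching a few dozen businesses simply cannot support a credible impact estimate.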
Starting early is key to programme-level evaluation
Whilst some issues are structural – i.e. impact evaluation would never be feasible for that policy, programme or project – others come down to considering evaluation too late in the policy development cycle. Considering evaluation earlier increases the options available. For example, randomised controlled trials, the most robust method, are only feasible if built into the policy’s design. For this reason ‘start early’ is the first step in our guide to better evaluation.
To help policymakers think about evaluation during the policy development process, we’ve recently published a resource on programme-level evaluation of local growth policies. This sets out four policy design features that increase the feasibility of programme-level evaluation. They include focusing the programme on a narrow range of interventions and outcomes, and thinking about how a comparison group could be established.
But evaluation won’t always be possible
Whilst we encourage policymakers to evaluate their policies, sometimes it isn’t feasible. A good example of this is the evaluation of the overall impact of devolution. Eleven city-regions across England have devolution deals, and the new government is committed to expanding and deepening devolution, with an English Devolution Bill due soon. For this reason, there is significant interest in how to evaluate the impact of devolution.
Unfortunately, in another recent publication we explain why it will be difficult to evaluate whether employment, productivity, wages or other economic outcomes differ between areas as a result of devolution. Whilst there are a few options, these involve less robust methods and may not provide convincing answers to the key policy questions.
What to do when programme-level evaluation is not possible
Both resources focus on programme-level evaluation – i.e. the evaluation of the policy or programme as a whole. Even when this is not feasible, it will often be possible to undertake some impact evaluation of aspects of the policy or programme. For example, mayoral combined authorities (MCAs) should be able to evaluate some programmes or projects delivered through the funding and powers they received from devolution deals. Similarly, government departments could evaluate a sub-set of projects undertaken as part of a broad programme. We encourage policymakers to evaluate in this way wherever possible.
Where impact evaluation isn’t feasible, we recommend collecting good quality monitoring data and feeding this back into programme delivery and future policy design. As ever, if you want more information on impact evaluation, check out the ‘how to evaluate’ resources on our website or get in touch.