A few weeks ago, I wrote a blog about the role of evaluation in answering questions about 'what works better?'. That is, in helping practitioners distinguish between alternative ways of implementing specific policies to achieve a given objective in the most cost-effective manner. In this follow-up, I thought I'd sketch out some examples of how this might work in practice.
Take, for example, business support aimed at helping firms to expand their employment. Our evidence review wasn't very encouraging on the success rates for such schemes: of 17 schemes that evaluated the impact of business advice on employment, fewer than half found a positive impact. We also found some evidence that more intensive support involving one-to-one advice ('managed brokerage') might work better than light touch advice. But of course, such intensive support will be more expensive. If we want to figure out which approach might be a more cost-effective way of helping increase firm employment, we need to think about a way to directly compare these two types of support. The cleanest way to do this would be to offer a basic level of light touch support – for example a well-designed website – as the default support provided to a firm wishing to expand employment. More intensive support could then be (randomly) allocated to some firms. Over a period of time – say one to two years – we track what happens to employment at these firms. If more intensive support is to be cost-effective, it needs to have a bigger impact on employment, and that bigger impact needs to be worth the higher cost.
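To make the cost-effectiveness logic concrete, here is a minimal sketch of the comparison we'd want the trial to let us make. All the figures are hypothetical – invented purely to show the arithmetic, not drawn from any evaluation:

```python
# Illustrative cost-effectiveness comparison between two support schemes.
# Every number here is hypothetical, chosen only to show the calculation.

def cost_per_job(cost_per_firm, jobs_created_per_firm):
    """Cost of each additional job created, per firm supported."""
    return cost_per_firm / jobs_created_per_firm

# Light touch support (e.g. a well-designed website): cheap per firm,
# with a small average employment effect.
light_touch = cost_per_job(cost_per_firm=200, jobs_created_per_firm=0.1)

# Intensive one-to-one support ('managed brokerage'): far more expensive,
# with a larger average effect.
intensive = cost_per_job(cost_per_firm=2_000, jobs_created_per_firm=0.5)

print(f"Light touch: £{light_touch:,.0f} per additional job")  # £2,000
print(f"Intensive:   £{intensive:,.0f} per additional job")    # £4,000
```

Under these made-up numbers, intensive support creates five times as many jobs per firm but still looks less cost-effective – exactly the possibility a head-to-head trial is designed to detect.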
We could imagine doing something similar with support for start-ups. Here, there's even less evidence on what works. Our access to finance review certainly makes for uncomfortable reading, with only one out of four evaluations showing a consistently positive effect of programmes on new start-ups. We didn't find any evaluations of business advice programmes that specifically looked at start-ups, so there's no help to be had there. Once again, we could imagine offering varying levels of support to entrepreneurs looking to start up new businesses. Some could be referred to light touch provision, some could be offered something slightly more intensive and yet others could be offered one-to-one support. Once again, we'd need to track these firms over time to figure out what kind of support works best.
All of this might seem unrealistic – but it's precisely how we've made progress in other areas of social and economic policy – as the What Works Network event held this week clearly demonstrated. In medicine, trials often consider the effectiveness of a new treatment relative to some existing treatment. The Education Endowment Foundation has been pioneering this approach in the UK for education. BIS, with its innovative growth vouchers research programme, is starting to consider ways of evaluating benefits that one particular 'treatment' provides over other means of support available for firms. Some of these experiments can be very large scale. One recent academic paper describes a trial from France involving a total of nearly 44,000 unemployed individuals allocated into three different groups. But trials can also be much smaller – which means that local policy experimentation can provide a great context in which to try to figure out what works better, particularly if a number of local areas are willing to collaborate in piloting different approaches. Improved access to administrative (and other sources of) data can also dramatically reduce the costs of undertaking such trials.
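The mechanics of a multi-arm allocation like the French trial are simpler than they might sound. Here is a minimal sketch of balanced random assignment to three arms; the arm names and firm identifiers are illustrative, not taken from any actual programme:

```python
import random

def allocate(unit_ids, arms, seed=42):
    """Randomly assign units (firms, individuals) to treatment arms,
    keeping the arms as evenly sized as possible."""
    rng = random.Random(seed)  # fixed seed so the allocation is auditable
    ids = list(unit_ids)
    rng.shuffle(ids)
    # Deal shuffled units round-robin across the arms for balance.
    return {unit: arms[i % len(arms)] for i, unit in enumerate(ids)}

# Hypothetical example: 12 firms split across three levels of support.
arms = ["light touch", "managed brokerage", "one-to-one"]
assignment = allocate(range(12), arms)
```

With 12 units and three arms, each arm receives exactly four units; tracking outcomes by arm over the following one to two years then gives the head-to-head comparison described above.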
We're already working with a number of LAs and LEPs to provide advice on how to implement these kinds of approaches in practice. If you're interested in our work in this area, as always, please get in touch.