Over the past couple of months we’ve been working hard to finish our evidence reviews on broad policy areas. We’ll be publishing the rest of our reviews before the end of the year.
As this cycle of work comes to a conclusion, we’re also looking ahead to the next steps in our work. In a previous blog post, I talked about the work we are doing to develop a toolkit focused on what we know about specific elements of policy design.
But we are also looking to expand our work on ‘demonstrators’. That is, on giving support to areas that are interested in undertaking high quality impact evaluation – either of overall programme effectiveness or of different elements of policy design. I discussed the kinds of things we are looking for in a November blog post.
These demonstrators raise an interesting question for Centres like ours. Specifically, to what extent is supporting individual evaluations a cost-effective way of improving the use of evaluation evidence in policy-making? I think there are two circumstances in which this kind of approach can be cost-effective, and both rely on the scalability of results.

The first kind of scalability comes when the findings themselves are scalable. If many local governments are offering business advice, then properly evaluating one scheme can generate lessons that improve the cost-effectiveness of many different programmes.

The second kind of scalability comes when the methods are scalable. Here, as an example, I’d point to our work on improving the links between transport evaluation and appraisal. If we can work with DfT and a few local authorities to develop methods and guidelines, then these new approaches can be applied by many different local authorities trying to make decisions on future schemes.