The What Works Centre for Local Economic Growth is entering its third phase of funded activity, and we’re making some changes about the place.
I think this is a really exciting next step. It means we’ll be able to offer more direct support to practitioners who are doing innovative things and want to evaluate them well. It means helping to plug the evidence gaps identified through our series of evidence reviews with targeted evaluations. And there’s a lot planned on new, more practical tools to help everyone evaluate better.
Not everything can or should be “gold standard”. Sometimes it’s not feasible, or proportionate. Sometimes the scale of a project is too small to allow meaningful quantitative findings. But in lots of circumstances there are simple things we can do to make everyday impact evaluation just a bit better. The key to this is defining a good counterfactual, usually by means of a comparison group or area. Done well, this can help to pull an evaluation above the threshold where we start to understand impact and causality, rather than just observing change. As people who have been on one of our day-long training workshops will be sick of hearing me say: it’s about asking yourself “am I comparing apples with apples? If not: where can I find some apples?”. Sometimes it’s as simple as checking whether your programme design is introducing selection bias that doesn’t need to be there and then just… stopping.
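To make the comparison-group idea concrete, here is a minimal sketch of one common way to use one: a difference-in-differences calculation. This is an illustration only, not the Centre’s method; all the figures are invented.

```python
# Minimal sketch of using a comparison area as a counterfactual
# (difference-in-differences). All numbers are invented for illustration.

def did_estimate(treated_before, treated_after,
                 comparison_before, comparison_after):
    """Change in the treated area minus change in the comparison area."""
    treated_change = treated_after - treated_before
    comparison_change = comparison_after - comparison_before
    return treated_change - comparison_change

# Hypothetical employment counts in two similar areas.
naive_change = 1150 - 1000          # treated area: +150 jobs observed
impact = did_estimate(treated_before=1000, treated_after=1150,
                      comparison_before=1000, comparison_after=1100)
# The comparison area grew by +100 anyway, so the estimated
# programme impact is +50, not the +150 a before/after view suggests.
print(naive_change, impact)
```

The point is simply that the comparison area stands in for “what would have happened anyway”: subtracting its change strips out the background trend that a naive before/after comparison mistakes for impact.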
But, when I used “we” up there, I meant it spiritually! After agreeing how we wanted to see the Centre evolve in the coming three years, it became clear that this meant some changes in the way we are structured and staffed. It felt like the right time for me as Deputy Director, and the Arup team as a whole, to step back from being a delivery partner in the Centre.
The Arup team, and I personally, have been involved in the Centre since its inception in 2013. Alongside our LSE and Centre for Cities colleagues, we brainstormed how we would work, and on what we would focus. We trialled, tweaked, and tried again with the method for our 12 systematic evidence reviews. We have collectively searched, sifted and reviewed over 10,000 pieces of evidence. Most recently we have had a hand in the Congestion Charging mini evidence review and the analysis of Dependent Development. And, for the last 18 months, we have been focused on the design and delivery of our “How to Evaluate (well)” training workshop.
We remain great friends and strong supporters of the Centre and its objectives, because good evaluation matters. It’s not an esoteric bolt-on for geeks. It matters that policymakers don’t just do things that feel warm and cosy, but ask themselves tough questions about what really changes when they intervene.
We collectively spend a lot of money and professional effort on appraisal and business cases, and rightly so, but appraisal is only as reliable as the evidence it draws on. A better evidence base would ultimately lead to better informed spending decisions.
Despite the Centre name, this isn’t just a local question. Sometimes the best evaluation evidence will come from national, programme level evaluations and there’s little to meaningfully add at local project level. Sometimes it needs funders to provide space and money for evaluation; and to demand higher standards.
Too often, I have met local policymakers who despair of onerous monitoring and evaluation requirements imposed by funders – requirements that have no chance of helping programme managers understand impact. They tell me that after all the funders’ requirements are met, there is sometimes little funding – or energy – remaining for causal impact analysis.
And then there are those who get fired up at the idea of carrying out full counterfactual evaluation only to come up against a brick wall of data availability. An important workstream in phase 3 will be working with the holders of the big administrative datasets to support policy evaluation. The will is there, but some focused resource is needed to make it happen.
Having said all of that: a lot has changed in the last 7 years. When we started, the evidence base in some areas was much, much thinner than most people would assume. When we published our transport evidence review (link) in 2015 – despite near-constant discussion of transport’s role in supporting productivity through agglomeration – we were able to find just 29 studies that met our evidence standards and looked at the impact of road or rail on local employment, GVA, wages or productivity. An absence of evidence isn’t the same as evidence that the benefits don’t occur. There are good theoretical reasons to think that transport can boost productivity in some circumstances, but precious little out there that demonstrates where it has in practice. As I write in 2020, I can think offhand of 6 or 7 good, robust evaluations of transport infrastructure that are planned or underway. That’s just one policy area.
So: big achievements, and a world that is much more receptive to the idea of causal impact analysis. As the Centre moves into its next phase, we at Arup wish our friends and colleagues the best of luck, and will be cheering from the sidelines from here on in.