
Evaluation for the common good

What do the air you breathe, the Thames Barrier, and the Armed Forces have in common? It is impossible to exclude anyone in society from benefitting from them. We breathe the same air, flood protection benefits many homes, and the Armed Forces defend the nation, not just specific households. Economists say that public goods such as these are non-excludable and non-rivalrous. While it may be less intuitive, knowledge falls into the same category. Once a discovery is made, it may be possible to keep it secret for some time, but sharing it with an unlimited number of people is costless. This is a powerful idea when thinking about evidence and evaluation.

Over the past one-and-a-half years, I have worked with the Evaluation Task Force in the Cabinet Office to understand barriers to better and more frequent evaluation of local growth policies, particularly by Mayoral Combined Authorities (MCAs). We have heard about the hard constraints and challenges MCAs face, including strained capacity and the time it takes to build the necessary capabilities and systems. However, we have also heard that there are broader issues around the appetite and incentives to evaluate. To exaggerate only slightly, everybody wants better evidence, but nobody wants to invest in it. A classic public goods problem.

Reframing the problem in this way is particularly useful for local government, as there are many more decision makers, all facing relatively similar decisions in a similar institutional and economic environment. If the Department for Work and Pensions is deciding whether to insist on more conditionality for job seekers, it may look for evidence from abroad, but no other organisation in the UK faces that decision. However, Cambridgeshire and Peterborough may look to Liverpool City Region for inspiration on adult skills policy, and West of England may look to Greater Manchester for evidence on franchising local buses. Systematising the generation and sharing of evidence would potentially multiply the benefits from devolution.

There are currently few dedicated funding streams for evaluation by MCAs, so evaluation funding eats into delivery budgets. Recognising the public good nature of evidence generated through evaluation calls for dedicated funding, similar to the arguments that underpin public investment in research and innovation. There are also few mechanisms for sharing existing evidence, leaving organisations like What Works Growth and local universities to take on the role of knowledge brokers.

So, what if we thought about evaluation not as part of monitoring and assurance, but as the process of generating evidence we can all benefit from? On the one hand, we often hear that evaluation is treated as a box-ticking exercise, required by funders to assure that money has been spent as intended. On the other hand, there are political concerns about the possibility that an evaluation finds no, or only small, effects. One of the rationales behind the devolution process is that local mayors can be innovative, listening to local people and trying new things. It would be rather odd to expect all of these new ideas to work out. There is value in learning about failure: it prevents the same mistake being made again. While it may be uncomfortable to see the failures of a policy spelt out on paper, it would be disingenuous to argue that local residents would not have felt that failure sooner or later anyway.

An alternative narrative would acknowledge early on that policy is experimental and that failure is a possibility. This may require running smaller trials of high-risk policies to ‘fail fast’, and scaling up only once positive outcomes have been established. It may also mean pooling resources across areas to trial new ideas in this way where a smaller scale is not feasible for a particular policy.

Ultimately, it needs to be acknowledged that local policy is really hard: problems are intractable and intertwined, and resources are scarce. With the current, thin evidence base, MCAs are flying blind. Improving the evidence base to produce more evidence for everyone should be a crucial component of the devolution process, setting local policy up for success.


Dr Carolin Ioramashvili is Assistant Professor in Innovation Policy at the Science Policy Research Unit, University of Sussex Business School, and a Visiting Fellow in the Department of Geography and Environment at the London School of Economics and Political Science. Her research tries to uncover why some regions are more prosperous than others and how policy can foster local growth and development. This research was funded by a British Academy Innovation Fellowship.
