How to evaluate

One of the greatest frustrations we have encountered in our reviews of the evidence on the impact of local growth policies has been the small number of robust evaluations from which to learn – particularly from the UK. As we suspected in our initial planning for the Centre, one of our most important tasks over the next two years will be to encourage more policymakers to undertake rigorous evaluation of their projects.

One reason people do not prioritise evaluating their projects is that they are unsure how to do it, and are concerned that it will prove too expensive and time-consuming, drawing effort away from programme delivery.

To help overcome this perception, we are promoting some simple steps that policymakers can take to dramatically improve the quality of their evaluations: our ‘How to Evaluate’ list.

Another (less frequently voiced) concern about evaluation is that it may show that a project did not work as anticipated. Policymakers or their political bosses may prefer to use soft methods of evaluation that are not only easier to undertake, but more likely to show the project in the most flattering light.

This is an understandable response – no one wants to be associated with a failed project. However, it matters to everyone in the policy community that programme results, successful or not, are properly evaluated and the findings made available so that everyone can learn from them.

Stephen Curry recently wrote a piece in the Guardian about his decision to publish the results of a scientific experiment of his that had ended in failure. He says there is often an aversion to publishing stories of failure, but that the desire to bury them must be overcome:

‘…negative results matter. Their value lies in mapping out blind alleys, warning other investigators not to waste their time or at least to tread carefully.’

This applies to economic development as much as it does to science, medicine or any other field. A policy that did not produce the intended results, or indeed failed outright, can still provide useful guidance to others in the field if the evaluation is done well. The opportunity to stop others making the same mistakes can even be the saving grace of a project that did not meet its original objectives.

More positively, when a policy does have the desired effects, good impact evaluation helps convince others that it really is working: something that will be very important as local authorities seek to earn autonomy from a central government that is, rightly, sceptical of unsubstantiated claims. Experimentation and evaluation can also improve policy effectiveness by sharpening policy design and deepening our understanding of what works.

We’ll be elaborating on each point on our ‘How to Evaluate’ list in a series of blogs and links on the website. If anyone has good examples from their own experience to illustrate any of the recommendations, please send them to us.