
Evaluation and the Green Book Review


In a recent Times article, Gareth Davies, head of the National Audit Office (NAO), drew on the findings of a new NAO report to argue for more and better evaluation of government spending.

Progress is being made. The 2020 Green Book review announced a greater emphasis on high-quality evaluation, and the Cabinet Office’s Evaluation Taskforce is championing better use of evaluation, for example during last year’s spending review process. But despite pockets of excellence across government, evaluation still lags behind appraisal in terms of coverage and consistency: the NAO report found that only 8 per cent of major projects had robust evaluation plans in place.

In one sense, this is to be expected. The fact that appraisals are needed before a project can begin is as good an incentive as you can get to ensure policy-makers deliver them. And the fact that business case approval rests on meeting a certain bar means a minimum quality standard can be maintained. In contrast, evaluation can’t be completed until the project is delivered. Decision-makers can insist on evaluation plans as a condition of funding but there’s no standard process to ensure quality, usefulness or timeliness. Add to that barriers to evaluation such as costs, design constraints, and the risk of finding out the policy doesn’t work, and the challenges are clear.

But if, for any individual project, the incentives are stronger for appraisal than for robust evaluation, that’s not true when it comes to delivering value for money across the whole policy agenda. In this wider context, appraisal only contributes to good policy-making if we understand how predicted costs and benefits have played out in practice. And that requires good evaluation. If only 8 per cent of projects are being robustly evaluated, building this broader picture of whether government intervention is actually delivering the desired changes is extremely difficult. We need a system which ensures that good evaluation happens, and refers back to appraisal, so that over time we improve our ability to select the most cost-effective interventions.

The creation of the Evaluation Taskforce and the changes announced in the Green Book review show that government is aware of the challenges, but the NAO finding reminds us that there’s lots of work still to do.

In this context, it was interesting to read a recent Twitter discussion about whether prioritising evaluation – specifically high-quality impact evaluation – might actually undermine good policy-making, rather than support it. One concern was that it can skew funding towards the types of interventions that are most easily evaluated, and therefore most likely to be well-evidenced, and that things that are difficult to evaluate – for example because they are place-based, or hard to replicate – will be systematically under-used.

The same point was made when the ‘What Works’ network was established in the early 2010s, and it is an important one. If funding were to be restricted only to those interventions which have been, or could be, evaluated using a ‘gold standard’ Randomised Controlled Trial, we’d be in trouble. Huge swathes of important projects and programmes would be affected.

This doesn’t seem to be a risk at present. To give just one example, in my own field of local economic development, billions of pounds have been allocated over the last few years to interventions which will be chosen on the basis of local priorities. For some places, evidence from previous evaluation might be a factor in their choices, but it doesn’t have to be. This is not to say that a ‘lack of evidence’ is never cited as a reason not to pursue a policy, but it doesn’t appear to be a red line.

But those of us who champion greater evaluation requirements should keep this risk in mind. A greater emphasis on high-quality evaluation should not come at the cost of skewing funding towards particular types of projects and programmes simply because they are easier to evaluate. 

The following principles might be helpful. 

First, evidence requirements should not prevent innovation. It should never be the case that only things which have already been evaluated can be funded. Instead, we need to ensure that when we do fund something new, it is properly evaluated, facilitating better-informed decisions next time round. 

Second, the evaluation requirements should match the policy. For a national employment support programme delivered to individuals, it is reasonable to demand that it is either based on, or the subject of, a high-quality evaluation, comparing people who receive it with similar people who don’t. For a local flood defence programme, on the other hand, this isn’t possible or appropriate, but other evaluation methods exist which are.

Third, good evaluation sometimes requires additional resources and skills. Project and programme design should always consider not just the budget needed for evaluation, but also the technical capacity. For example, if a programme is to be delivered by local partners with limited evaluation expertise, central government may need to build centrally supported evaluation into the design.   

With these principles in mind, we can move towards a much stronger approach to evaluation without undermining innovative, responsive and locally relevant policy-making.   

This blog was written for the Green Book Network. Danni Mason is a member of the Network Steering Group.