My last post dealt with the importance of defining success when thinking about how to evaluate. It might seem that answering that question also answers the question of ‘what to evaluate’. If the policy objective is employment, then we evaluate whether the policy has a positive impact on employment. If we are interested in cost-effectiveness, it is clearly helpful to undertake evaluations that address whether a policy works at all. It would be great if there were a large number of impact evaluations that clearly said that policy A works and policy B doesn’t. Unfortunately, as our reviews to date make clear (e.g. on employment training or business advice), this is rarely going to be the case – some employment training programmes work, others don’t (and similarly for business advice, access to finance, etc).
However, in practice, the challenge that we face in improving cost-effectiveness doesn’t just come down to conflicting findings. Call me cynical, but even if the evaluation evidence on, say, business advice were uniformly negative, I’d be amazed if policymakers suddenly stopped funding programmes to provide such advice. Partly this comes down to politics – cost-effectiveness isn’t the only consideration when it comes to policy decisions. More prosaically, individual policymakers are often given a budget to spend on a broad policy area with one or two objectives – for example, to commission an employment training programme to reduce unemployment among young people.
The question that such a practitioner really wants answered is not ‘what works?’ but ‘what works better?’ Given that we are going to provide employment training, what kind of training should we be providing? Short courses or long courses? On the job or off the job? Many considerations (resources, capacity constraints, etc) will have a bearing on how these questions get answered. In keeping with the Centre’s objectives, we would argue that cost-effectiveness should play a central role in answering questions about these policy design features. Though this is often not well understood, good impact evaluation is crucial to answering such questions.
For example, when NICE provides guidance it doesn’t try to answer the broad question of what makes us healthy. Instead, it tries to decide which treatments work best in addressing particular conditions. Similarly, the Education Endowment Foundation focuses on assessing the effectiveness of very specific interventions (including after-school programmes, arts participation, extended school time, and feedback) on improving one specific outcome: the attainment of disadvantaged pupils of primary and secondary school age.
Trialling two or more versions of a policy is a very effective way of comparing their effectiveness. Some of these experiments can be very large scale. One recent academic paper describes a trial from France involving a total of nearly 44,000 unemployed individuals allocated into three different groups. But trials can also be much smaller – which means that local policy experimentation can provide a great context in which to figure out what works better, particularly if a number of local areas are willing to collaborate in piloting different approaches.
We’ll be providing more detail on how to structure a policy’s implementation to provide comparison or control groups in the next blog.