How to Evaluate – Plagiarise

One of the major obstacles to improving evaluation, and to embedding it in the policy design process, is the widespread belief that evaluation is too complicated.

Throughout this series on how to evaluate, we’ve tried to show that thinking about evaluation need not be a very technical exercise. Asking simple questions about what and when to evaluate helps to define success. Thinking about how we’ll measure changes for those on the programme, and compare those changes to a control group, allows us to understand whether we will be able to attribute success to the programme (as opposed to all the other factors that will be at play). While thinking through these issues from first principles is strongly encouraged, we should also recognise the value of copying freely from the approaches adopted in existing studies. Indeed, in an ideal world, we’d be able to draw on evidence from multiple randomised controlled trials before we even considered rolling out expensive interventions more widely.
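
To make the attribution point concrete, here is a minimal sketch in Python of the comparison an evaluation is trying to make: the change for the programme group minus the change for a comparison group. The numbers are purely illustrative, not real programme data.

```python
# Minimal sketch of the attribution logic behind an impact evaluation.
# All numbers are purely illustrative, not real programme data.

# Average outcome (e.g. earnings) before and after the programme,
# for participants and for a comparison (control) group.
treated_before, treated_after = 100.0, 130.0
control_before, control_after = 100.0, 115.0

# Naive before/after comparison: mixes the programme's effect
# with everything else that changed over the same period.
naive_change = treated_after - treated_before  # 30.0

# Difference-in-differences: the control group's change proxies for
# "all the other factors", so subtracting it isolates the change we
# can plausibly attribute to the programme itself.
estimated_impact = (treated_after - treated_before) - \
                   (control_after - control_before)  # 15.0

print(f"Naive before/after change: {naive_change}")
print(f"Estimated programme impact: {estimated_impact}")
```

The naive estimate here would overstate the programme’s impact by a factor of two, which is exactly why the comparison group matters.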

Unfortunately, in practice, we are a long way from having that volume of high-quality impact evaluation evidence for any local economic growth policy. But our reviews do identify many evaluations that meet the Centre’s minimum standards – and these provide possible templates for developing evaluation strategies. For example, this RCT shows how we might evaluate a scheme that provides adult education vouchers with the aim of improving labour market outcomes. A second example describes an RCT that uses business advice to achieve a similar outcome.

Admittedly, things get trickier when we move away from RCTs towards more statistical approaches. One of the nicest things about RCTs as an evaluation approach is that they are relatively simple to understand, even for the non-specialist. Indeed, the EEF even has a simple guide for teachers to implement basic RCTs in their own schools and classrooms! But even when we move away from RCTs, there are lessons to be learned from existing evaluations – even if some of the statistical techniques are trickier.

We also provide a methodology guide that helps organisations thinking about commissioning an evaluation to assess where different approaches fall on the Maryland Scale – the scale we use to score studies for the purposes of our reviews.

In short – this is one area where plagiarism doesn’t necessarily give the right answer, but it certainly helps you get there.