In putting together our reviews of the evidence on local economic growth, our team has had to sort through thousands of research papers, government evaluations and think tank reports on our topics. Not all of this material is useful, however. Some evaluations are better than others, and we are committed to using only the most robust evidence available to produce our findings.
As we shift towards working more closely with policy-makers to embed evidence and evaluation in the delivery of local growth programmes, we thought it would be useful to explain how we select the best impact evaluations to inform policy. Our new Scoring Guide sheds light on how we have sorted through all the material that passes through our hands.
More importantly, it can also serve as a scoring handbook for anyone wanting to assess the robustness of a particular policy evaluation.
The Scoring Guide offers advice on how much weight to place on a particular piece of evidence, and can help organisations undertaking evaluations either to assess them after completion or to choose between methodologies beforehand. Although it is no substitute for technical training and expert advice, it should help those with some knowledge of evaluation techniques to better understand recent advances and the way we treat these in our systematic reviews.
It’s important to note that the ranking of individual studies is not an exact science and often involves a degree of judgement. Indeed, anyone who has attended an academic seminar will know how hotly such issues can be disputed! Nevertheless, on average our scoring will tend to produce rankings on which many evaluation experts would broadly agree. We hope it will prove valuable when practitioners need to judge how good a review or evaluation really is, or to decide how best to embed robust evaluation in new local growth programmes and pilots.