
Reissued Evidence Reviews – what changed, and why?


Today we’re publishing updated versions of our four earliest evidence reviews (covering Employment Training, Business Advice, Access to Finance, and Sports and Culture). Here’s what’s changed, and why.

We developed our reviews methodology during the autumn of 2013, testing and iterating it on our first two reviews. Over the next 21 months we published a further 12 reviews. The way we have classified and presented findings has evolved somewhat over this period, as we’ve looked for the most helpful way to present evidence and lessons across different subject areas.

Having recently published our final Evidence Review (at least for now), we have taken the opportunity to revisit the early Reviews and bring the methodology and presentation of our findings into line with later Reviews.

Fortunately, in practice, this mostly resulted in minor changes to our findings. In a number of cases these changes strengthened our initial findings, although they also led to a few substantive differences.

We’ll be posting a blog in the next few days summarising the very small impact this has had on our findings. For those of you interested in the methodology, the main changes we’ve made are:

  • We removed verdicts on whether a policy ‘works’ or ‘doesn’t work’ overall in favour of saying whether a policy ‘works’ for a given outcome – with a particular focus on outcomes most closely related to local economic growth objectives (e.g. employment, productivity, wages).
  • In terms of reporting, we tend to discuss findings for specific outcomes in separate sections, whereas they were sometimes aggregated in the early reports (for example, reporting on employment and wages in the same section).
  • Some of the earlier reports contained a table which distinguished whether the outcomes evaluated were part of the rationale for the policy. These tables have been removed – partly because so many of the studies were vague or even silent on what the objectives of the policy actually were, and partly because users suggested they found that distinction confusing rather than helpful.
  • The text explaining the Maryland Scientific Methods Scale levels changed slightly across reports to better reflect the classification as we were actually implementing it. This has now been standardised across reports.
  • We changed the way in which we classify papers as finding ‘positive’, ‘mixed’, or ‘negative’ impacts. Evaluations sometimes report different findings for the same outcome because they use different model specifications, and in some cases they only find significant impacts under some of those specifications. In the early reviews we would have classified a mix of positive/no effect or negative/no effect as ‘positive’ or ‘negative’ respectively; only if different specifications found impacts in different directions would we classify findings as ‘mixed’ for a given outcome. We eventually moved to calling all of these ‘mixed’, to better reflect the uncertainty of the findings, and we have now updated the earlier reports to be consistent with this approach. As a result, we have reclassified a small number of evaluations, so more evaluations are now classed as ‘mixed’, which in some cases has slightly diluted findings on impacts (but not substantively changed them). A rough illustration of the two rules follows this list.
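
As a rough illustration of the last change, the sketch below contrasts the old and new classification rules for a single outcome. It is purely illustrative – the function names, and the treatment of evaluations where no specification finds a significant effect, are assumptions made for the example rather than part of our published methodology.

```python
# Illustrative sketch only: a toy comparison of the old and new rules for
# classifying an evaluation's findings on ONE outcome, where 'findings' is the
# list of results across its model specifications: 'positive', 'negative', or
# 'null' (no significant effect).

def classify_old(findings):
    """Earlier reviews: a positive/null mix counted as 'positive' (likewise
    negative/null as 'negative'); only opposite directions were 'mixed'."""
    has_pos = 'positive' in findings
    has_neg = 'negative' in findings
    if has_pos and has_neg:
        return 'mixed'
    if has_pos:
        return 'positive'
    if has_neg:
        return 'negative'
    return 'no effect'  # assumption: all specifications insignificant

def classify_new(findings):
    """Later (and reissued) reviews: any disagreement across specifications,
    including significant/insignificant mixes, is 'mixed'."""
    unique = set(findings)
    if unique == {'positive'}:
        return 'positive'
    if unique == {'negative'}:
        return 'negative'
    if unique == {'null'}:
        return 'no effect'  # assumption, as above
    return 'mixed'

# Example: significant in some specifications, insignificant in others.
specs = ['positive', 'null', 'positive']
print(classify_old(specs))  # 'positive' under the old rule
print(classify_new(specs))  # 'mixed' under the revised rule
```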

To avoid confusion, we have removed original reports from this website and replaced them with the revised versions. We are happy to provide copies of the original published versions or a detailed account of changes to individual reports on request – please just get in touch.