Three weeks ago, we launched our latest report: Evidence-based policy in disadvantaged places. The report is a joint effort from seven different centres across the What Works Network and outlines six principles that should help resources go a bit further.
One of the recommendations in the report is for places to share their learning and to be open and transparent about their findings, irrespective of whether they were successful. In the spirit of this advice, we share what we learnt from our efforts to identify and profile places that we could work with on this project.
The process
Our first step was to define what we meant by ‘disadvantaged places’ and come up with a way of identifying them.
Supported by the Cabinet Office, we carried out a literature review to understand the various dimensions in which disadvantage can manifest in a place, and landed on: business dynamics; employment outcomes; physical and mental health; housing quality; income, skills and educational attainment; welfare need; and crime incidence.
We discussed the possibility of using the Index of Multiple Deprivation (IMD), given it is a fairly holistic measure of deprivation. The limitation was that the IMD, within its Income and Employment domains, focuses on labour market outcomes at the individual level. Given we were trying to measure disadvantage across places, we felt we needed to include indicators that capture the health of the local economy, for example the number of business start-ups per 10,000 population or the share of jobs at risk of automation by 2030. So we moved away from the IMD.
We scoured relevant databases to gather a shortlist of non-duplicative indicators for the 56 largest urban areas in the UK (we focused on larger areas because we worried that smaller places might not have the capacity to dedicate staff or resources to working with us on the project). Armed with the indicators, we set up a model that allowed us to adjust both how the dimensions were weighted against each other and how the different indicators were weighted within each dimension.
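For the curious, here is a minimal sketch of what such a two-level weighting model can look like in Python. The area names, indicators and weights below are purely illustrative, and we assume each indicator has already been normalised so that higher values mean greater disadvantage.

```python
import pandas as pd

# Illustrative data: rows are urban areas, columns are indicators
# normalised to 0-1, where higher means more disadvantaged.
# (Names and values are hypothetical, not our actual shortlist.)
indicators = pd.DataFrame(
    {
        "low_startup_rate": [0.2, 0.7, 0.5],
        "automation_risk":  [0.4, 0.8, 0.6],
        "poor_health":      [0.3, 0.9, 0.4],
        "housing_quality":  [0.5, 0.6, 0.2],
    },
    index=["Area A", "Area B", "Area C"],
)

# Two-level weights: indicators roll up into dimensions,
# and dimensions roll up into an overall score.
dimension_weights = {"business_dynamics": 0.5, "health_and_housing": 0.5}
indicator_weights = {
    "business_dynamics":  {"low_startup_rate": 0.5, "automation_risk": 0.5},
    "health_and_housing": {"poor_health": 0.6, "housing_quality": 0.4},
}

def composite_score(df, dim_weights, ind_weights):
    """Weighted sum over dimensions of weighted sums over indicators."""
    score = pd.Series(0.0, index=df.index)
    for dim, dim_w in dim_weights.items():
        dim_score = sum(df[ind] * w for ind, w in ind_weights[dim].items())
        score += dim_w * dim_score
    return score

scores = composite_score(indicators, dimension_weights, indicator_weights)
print(scores.rank(ascending=False))  # 1 = most disadvantaged
```

Re-ranking under a different set of weights is then just a matter of changing the two dictionaries and calling the function again, which is what made it easy to test many weighting schemes.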
The outcome
Underwhelmingly, but perhaps unsurprisingly, it turned out that the relative weighting of the dimensions and the indicators did not make much of a difference to the relative ranking of places. What movement did occur when weightings were changed was mostly of one or two places up or down the rankings.
This is because many of the underlying drivers and manifestations of disadvantage are linked to, and affect, each other. For instance, poor housing quality is known to lead to poor health outcomes; if that results in school absenteeism among children, it can also affect educational attainment.
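A quick way to convince yourself of this is to simulate it. The sketch below uses hypothetical data, not our actual indicators: it draws a set of indicators that share a common underlying "disadvantage" factor, re-scores the areas under a thousand random weightings, and checks how far the rankings can drift from an equal-weights baseline using Spearman's rank correlation.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_areas, n_indicators = 56, 12

# Correlated indicators: a shared latent "disadvantage" factor per area,
# plus indicator-specific noise. Purely simulated, not our real data.
latent = rng.random(n_areas)
X = latent[:, None] + 0.3 * rng.random((n_areas, n_indicators))

# Baseline ranking: equal weights across all indicators.
baseline = X.mean(axis=1)

# Re-score under 1,000 random weightings (each drawn to sum to 1)
# and track the worst-case agreement with the baseline ranking.
worst_rho = 1.0
for _ in range(1_000):
    weights = rng.dirichlet(np.ones(n_indicators))
    rho, _ = spearmanr(baseline, X @ weights)
    worst_rho = min(worst_rho, rho)

print(f"Lowest rank correlation across weightings: {worst_rho:.3f}")
# When indicators move together, this stays close to 1: the ranking
# barely changes however the weights are set.
```

Turn the noise term up so the indicators no longer move together and the correlation drops quickly; that is exactly the situation in which the choice of weights would start to matter.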
The lesson
This isn’t to say that how we measure disadvantage is unimportant. It just needs to be approached knowing that, at the level of an individual project, the details are unlikely to matter too much, especially when the remit is as broad as ours was. In hindsight, a simpler exercise would have been to create a composite index with the IMD as the base and the other indicators added in.
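In code terms, that simpler exercise might have looked something like this. Again the figures and column names are illustrative, we assume the extra indicators are oriented so that higher means more disadvantaged, and the weights are arbitrary assumptions rather than anything we tested.

```python
import pandas as pd

# Illustrative inputs: published IMD average scores plus the two
# local-economy indicators the IMD lacks (all values are made up).
df = pd.DataFrame(
    {
        "imd_score":        [32.1, 18.4, 41.0],
        "low_startup_rate": [0.2, 0.7, 0.5],
        "automation_risk":  [0.4, 0.8, 0.6],
    },
    index=["Area A", "Area B", "Area C"],
)

# Rescale each column to 0-1 so the components are comparable, then
# let the IMD carry most of the weight and top up with the extras.
scaled = (df - df.min()) / (df.max() - df.min())
composite = (
    0.70 * scaled["imd_score"]
    + 0.15 * scaled["low_startup_rate"]
    + 0.15 * scaled["automation_risk"]
)
print(composite.rank(ascending=False))  # 1 = most disadvantaged
```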
This methodological observation is just another expression of the interlinked complexity of the challenges faced in these places, which means that making good progress in one area has the potential to spark improvements in others. The challenge is knowing where, and how, to start. Is addressing skills the greatest priority? Or is it improving health? Do we need to co-ordinate both to have the greatest impact? What is the most cost-effective way to do this?
Some of these issues can be addressed using the evidence base from the various What Works Centres. Condensed summaries of our collective evidence on Mentoring and Reminders are also available.
The report, Evidence-based policy in disadvantaged places, also outlines various approaches that can be put in place immediately, across a range of policy areas. We hope these modest suggestions prove useful, along with this behind-the-scenes story of one unnecessarily laborious piece of analysis that went into producing them.