Recently, the Evaluation Task Force within the Cabinet Office launched the evaluation registry, which holds information on over 1,400 evaluations conducted across UK government. It can be searched by department, keyword, and evaluation methodology. In its announcement, the Task Force said that the registry
“supports the wider public sector reform objective of accountability and provides an invaluable tool for understanding ‘what works’, contributing to evidence based reform and the share of best practice in Government.”
Of course, this will only happen if people actually use it when developing interventions and policy. Below are some suggestions for how this could happen.
Understanding what works
Impact evaluation is a critical part of policymaking: it tells us which interventions are effective, and that learning can feed into the design of future interventions. Searching the evaluation registry by department and by evaluation method, I found a randomised controlled trial (RCT) by the Ministry of Housing, Communities & Local Government (MHCLG). This was an evaluation of the effectiveness of an 11-week English language course held in community centres. The evaluation used a waiting list to establish a control group from amongst the women who signed up. The RCT found an increase in reading and writing scores and in social integration for participants compared to the control group. It noted that the control group also improved in these areas, although by less than participants, potentially because they were on a waiting list. The RCT was also able to look at different characteristics that may affect the effectiveness of the programme; it highlighted that having children over five years old was positively correlated with an increase in reading and writing scores.
For policymakers thinking about how to help their residents move towards the labour market, evaluations of community-based programmes such as this one can provide useful insights into ‘what works’.
Understanding what doesn’t work
Impact evaluation also tells us what doesn’t work, which is an essential part of learning. It can prevent money from being spent on projects that aren’t effective. Sometimes it can highlight that some aspects of a programme aren’t effective while other elements show promise. The Department for Work and Pensions (DWP) has delivered a number of impact evaluations of employment support schemes. For example, DWP commissioned a randomised controlled trial as part of Group Work (Jobs II), a group-based course aimed at improving self-efficacy and self-esteem alongside job skills. This was based on a similar programme in the USA that had also been evaluated for effectiveness.
The impact evaluation of Group Work in the UK found no statistically significant effect on participants being in paid work six months and 12 months later, and a value-for-money evaluation suggested that the costs outweighed the benefits. This suggests that the programme, as it was run, was no more effective than the standard offer.
However, for programme participants who at baseline had lower levels of self-efficacy, higher levels of anxiety, or higher levels of depression, the picture was different: they were more likely to be in work, more likely to see improved mental health, and for this group the benefits outweighed the costs.
Policymakers across the country are concerned about economic inactivity, particularly where it is related to health conditions. Based on these results, they may design similar interventions with recruitment targeted more towards those struggling with anxiety and depression.
Understanding how things work
Process evaluation helps us understand how programmes work, answering questions about the mechanisms of an intervention. Many local authorities offer business support and may struggle to reach more businesses in rural areas. Process evaluations of similar schemes can help shed light on participants’ views, pinpoint elements of delivery that mattered, or highlight common barriers to getting projects up and running. Searching for ‘rural business’ brings up a process evaluation for the ‘Rural Enterprise Support’ project run in Scotland. This project provided 12 months of free business support to micro-firms within a rural area, focused on growing their businesses. The process evaluation included both a survey and interviews with participants, as well as data collected about the participating firms.
Information on sectors gives a sense of how targeted the programme was, and the interview feedback highlights which elements participants found most helpful. Several interviewees emphasised the value of support tailored both to their own business and to the issues that rural businesses face.
Policymakers looking at their business support programmes, particularly in mixed urban and rural areas, may use this evaluation to reflect on the sectors covered and on whether a tailored offer is possible for different geographies.
A beginning, not an end
The evaluation registry is a great resource and can be used alongside our evidence reviews covering impact evaluation across OECD countries. However, there is much more to do, and the registry needs to include more impact evaluations of local growth policies. If you are a policymaker considering evaluating the effectiveness of an intervention, do get in touch.