Yesterday we launched our toolkit on employment training. The toolkit contains a set of policy design guides aimed at helping people make more informed decisions when developing employment training initiatives.
Each policy design guide covers a specific aspect of programme delivery – the four launched yesterday cover careers counselling, financial incentives, pre-qualifications and reminders. They aim to help policy makers understand how much is known about the effectiveness of these different features. They also identify things to consider when thinking about implementing a particular approach.
Compared to our evidence reviews, these guides focus on specific features of a policy. In other words, they try to figure out what the evidence tells us about what works better when implementing a particular type of programme. That is, are there aspects of policy design that improve policy effectiveness?
Most of our evidence reviews were able to say something in this regard. For example, our access to finance review found some evidence that loan finance programmes were more likely to be successful than equity finance programmes; our review of Enterprise Zones suggested that local employment conditions could influence the extent to which employment effects were felt locally; and both our apprenticeships and employment training reviews pointed to the importance of employer involvement in improving policy success rates. But these findings only scratched the surface in terms of the questions that practitioners ask themselves when designing specific policy interventions. Our toolkit looks at a somewhat broader evidence base to see what more we can say about these questions.
These policy design guides can’t provide definitive evidence on how to design effective employment training. There’s simply not enough evidence out there to provide a set of rules about the way in which policy should be implemented. Evidence is a crucial input into the policy-making process, but it’s never going to provide a tick-box solution to policy development. That said, in all cases our toolkit guides provide useful evidence that should help underpin more effective policy development. They also highlight the need for effective monitoring and evaluation to further improve cost-effectiveness. For more on embedding evaluation into policy design, read our how to evaluate guide.
Over time we’ll be expanding the range of policy areas covered and, as evidence becomes available, adding to the list of design guides within specific policy areas. So do please let us know if they’re useful, and send us any suggestions on how we can improve them.