Here you can see answers to our most frequently asked questions.
About us
We help to make local growth policy more cost-effective by:
- Improving the use of evidence in policy;
- Enabling more and better impact evaluation; and
- Summarising what works using robust evidence.
In practice, this involves:
- Working with local and combined authorities, LEPs, devolved administrations and central government – including convening events and training for them – to help them make better use of evidence in designing and delivering policy.
- Supporting and carrying out impact evaluations of local growth interventions, to build the UK evidence base.
- Reviewing the evidence on local economic growth policies and presenting the findings in accessible ways.
We are funded by:
- The Economic and Social Research Council
- The Department for Business, Energy and Industrial Strategy
- The Ministry of Housing, Communities and Local Government
- The Department for Transport
We also work closely with the Cabinet Office, which coordinates the What Works Network of similar centres addressing different policy areas.
No. We are an independent organisation hosted by the London School of Economics and Centre for Cities.
Our contract is with the ESRC.
Maintaining the impartiality and independence of What Works Growth is essential to the credibility of the work we produce. Our host organisations, London School of Economics and Centre for Cities, value their status as non-partisan organisations.
Our work is designed to support anyone involved in making policy decisions related to economic growth, especially local and combined authorities. All our resources are available in our resource library. We present our resources and findings at training and events as well as through the latest section on our website. We can also offer bespoke support and advice in some cases.
Yes, we would love to hear from you.
Please contact us and we will put you in touch with the most relevant colleague.
About our resources
Our resources include systematic and rapid evidence reviews, toolkits, impact evaluation case studies and evidence briefings covering various local economic growth policy areas; guidance on impact evaluation and use of evidence; and blogs with relevant content for local economic growth.
All our resources are designed to help policymakers and organisations use evidence and evaluation to deliver local economic growth.
Our evidence reviews draw on quantitative evidence from impact evaluations. Impact evaluations assess whether a given intervention – such as a business support scheme – has made a difference to a given outcome – such as business survival.
In each review we shortlist the most relevant and methodologically robust evidence, using the Maryland Scientific Methods Scale to rank studies by robustness. You can find out more about how we score the evidence here, and about how to use the reviews more generally here.
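As a rough sketch of the idea – the level descriptions below are paraphrased and the shortlisting threshold shown is illustrative, not our formal scoring criteria:

```python
# Paraphrased summary of the Maryland Scientific Methods Scale (SMS),
# from weakest (1) to strongest (5) evidence of causality.
SMS_LEVELS = {
    1: "Cross-sectional correlation between the policy and the outcome",
    2: "Before-and-after comparison for the treated group only",
    3: "Before-and-after comparison against an untreated comparison group",
    4: "Quasi-experimental design that also accounts for selection into treatment",
    5: "Randomised controlled trial (treatment assigned at random)",
}

def is_shortlisted(sms_score: int, minimum: int = 3) -> bool:
    """Illustrative shortlisting rule: keep studies at or above a minimum score."""
    return sms_score >= minimum

print(is_shortlisted(4))  # True: a quasi-experimental study would be shortlisted
```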
For some of our resources, such as our evidence briefings, we draw on a wider range of evidence, including, for example, data trends and economic theory. Unlike impact evaluations, these forms of evidence do not tell us about whether an intervention has been effective in the past. However, they can be useful additional tools for making decisions about local growth policy.
This is why many of our resources, including our evidence briefings and blogs, draw on both impact evaluations and a wider range of evidence, like data trends and economic theory. When our resources draw on a wider range of evidence this is always stated transparently in the document.
Local context is always important. Our evidence reviews allow local decision makers to explore what has worked best across the board, and then apply this learning to their specific needs and economic conditions.
We use evidence from all OECD countries, not just the UK, to draw on the broadest and best evidence base. We also recognise the importance of context: our training, and many of our resources, are designed to help decision makers apply the evidence in their own context.
About impact evaluation
Evaluation can take different forms.
At What Works Growth our primary objective is to improve the cost-effectiveness of local growth policies. Fundamentally, cost-effectiveness in public policy is the relationship between inputs into policy (e.g. public spending) and the outcomes achieved from that policy.
The key questions we try to answer are whether a policy affects the outcomes it was designed to influence, how large those effects are, and how they compare with other policies trying to achieve the same thing.
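To make that comparison concrete, here is a minimal sketch of the kind of cost-per-outcome calculation this implies – the schemes and figures are entirely invented for illustration:

```python
# Illustrative only: invented spending and outcome figures for two
# hypothetical business support schemes.
schemes = {
    "Scheme A": {"spend": 2_000_000, "additional_jobs": 250},
    "Scheme B": {"spend": 1_500_000, "additional_jobs": 120},
}

for name, s in schemes.items():
    cost_per_job = s["spend"] / s["additional_jobs"]
    print(f"{name}: £{cost_per_job:,.0f} per additional job")

# Scheme A: £8,000 per additional job
# Scheme B: £12,500 per additional job
```

On these made-up numbers, Scheme A delivers the same outcome at lower cost – but the comparison is only meaningful if the 'additional jobs' figures come from robust impact evaluation rather than raw before-and-after counts.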
Impact evaluation helps us answer these questions because it focuses on measuring the effect (impact) of a specific intervention on specific quantifiable outcomes. This is known as ‘causal impact’ or ‘causality’ – because (when done well) these evaluations establish that the intervention caused the outcome.
Impact evaluation does this by asking “what would have happened if the policy didn’t happen?” or, in some cases, “what would have happened if we had tried a different policy instead?”. It answers these questions using counterfactuals – constructed alternative scenarios that mimic what would have happened in a world where the policy wasn’t implemented, or where a different one was.
Impact evaluation is crucial for cost-effective policymaking: if money is spent on things which don’t affect the intended outcomes, public funds are wasted and people’s lives aren’t improved by policy.
Ideally, to identify the effect of a policy, evaluators would have access to a parallel world where they could compare the same individuals, businesses, or place under two different scenarios – one where the policy happens and one where it does not. They could then see what would have happened in the absence of the policy and whether the policy made a difference.
The idea behind a counterfactual is to try to mimic that parallel world. To do this requires constructing or finding a comparison group which is as similar as possible to the group who receive the intervention. Differences in outcomes can then be compared between the ‘treatment group’ and the ‘comparison group’.
If the comparison group is almost identical – with no meaningful differences between the group receiving the policy and the group not receiving it – then the only difference between the groups is the policy itself. Any difference in outcomes can therefore be attributed to the effects of the policy.
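As a minimal sketch of that basic comparison – simulated data with a known built-in effect, not a real evaluation – the estimate is simply the difference in average outcomes between the two groups:

```python
import random

random.seed(0)

# Simulated outcomes, e.g. two-year survival rates for supported firms
# (treatment) and near-identical unsupported firms (comparison).
# A true effect of +0.10 is built in so the estimate can be checked.
baseline, true_effect = 0.60, 0.10
treatment = [baseline + true_effect + random.gauss(0, 0.05) for _ in range(500)]
comparison = [baseline + random.gauss(0, 0.05) for _ in range(500)]

estimated = sum(treatment) / len(treatment) - sum(comparison) / len(comparison)
print(f"Estimated effect: {estimated:.3f}")  # close to 0.10 when groups are comparable
```

The estimate is only credible because the simulated groups are identical apart from the policy; in real evaluations, constructing a comparison group that similar is the hard part.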
The main challenge of impact evaluation is in constructing a counterfactual that is credible – in other words, one that is genuinely similar. A big part of this is done by accounting for “observables” – characteristics of people, businesses or places which are known to the evaluator, which affect the likelihood of them being targeted by the policy, and which independently affect the outcomes being measured. More robust methods use statistical techniques to account for so-called “unobservables” – characteristics of people, businesses or places which cannot be known to the evaluator. Different impact evaluation methodologies have different approaches to dealing with this problem – you can read more about this here.
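One widely used method of this kind is difference-in-differences, which removes fixed (time-invariant) unobservable differences between groups by comparing changes over time rather than levels. A minimal sketch, again with simulated data and an invented true effect:

```python
import random
from statistics import mean

random.seed(1)

def simulate_group(n, baseline, trend, effect=0.0):
    """Outcomes before and after the policy for one group; `baseline`
    stands in for fixed (possibly unobservable) group characteristics."""
    before = [baseline + random.gauss(0, 1) for _ in range(n)]
    after = [baseline + trend + effect + random.gauss(0, 1) for _ in range(n)]
    return before, after

# Treated group starts higher (an unobserved difference) and gets the policy (+2);
# both groups share a common trend (+1) over time.
t_before, t_after = simulate_group(1000, baseline=5.0, trend=1.0, effect=2.0)
c_before, c_after = simulate_group(1000, baseline=3.0, trend=1.0)

naive = mean(t_after) - mean(c_after)  # contaminated by the baseline gap (~4)
did = (mean(t_after) - mean(t_before)) - (mean(c_after) - mean(c_before))
print(f"Naive difference: {naive:.2f}")  # overstates the effect
print(f"Diff-in-diff:     {did:.2f}")    # ~2, recovers the built-in effect
```

The method rests on a real assumption – that both groups would have followed the same trend without the policy – which is why evaluators check pre-policy trends before relying on it.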
Impact evaluation isn’t always the ‘best evidence’.
We think it is the best approach to finding robust answers for the types of questions (i.e. causal questions) we are interested in answering. Did the policy have an impact? How big was the impact? Which policies had stronger impacts on certain outcomes?
The answers to those types of questions – whether specific interventions had an impact in their specific settings – are one part of the wider evidence base. Impact evaluations usually need to be repeated in different contexts to build up a wider picture of the type of policy they are looking at and combined with other types of evidence for a fuller picture.
About evidence
Good evidence is always the best available evidence. There are several things to consider when assessing different types of evidence. For example, assessing the quality of a research study may involve different criteria (e.g. methodology) than those used to assess the quality of a report of stakeholder feedback (e.g. how respondents were selected) or a final project report (e.g. whether it includes lessons learnt).
A good starting point is to ensure the evidence is reliable, relevant and recent.
- Reliable – Good evidence comes from a reliable source, and it is collected or produced through a clear and well-established process.
- Relevant – Good evidence provides an answer or insights for the question being asked. Relevance also depends on whether the context and problem being addressed are similar, and whether the information (or results) is transferable.
- Recent – Good evidence is usually the latest available, provided it is also reliable and relevant. This reduces the chance that the context has changed since it was produced in ways that affect its relevance.
Keep in mind that not all evidence is good quality evidence, and good quality evidence is not always relevant for a specific question (or purpose).