
How to evaluate business advice: Regional business development programme in Sweden (statistical approach)


What was the programme and what did it aim to do?

This study evaluates the Regional Business Development Programme (RBDP) in Sweden. The RBDP provides support for SMEs operating in rural areas of Sweden – especially in the sparsely populated north. The policy aims to stimulate firm performance through a combination of grants and ‘consultancy cheques’, which firms spend on business advice and mentoring (the focus of the analysis). The programme is voluntary: firms prepare a business case and apply for funding, with the regional development agency making the decision. In 2009, the average award was about €7,000 (£5,700).

What’s the evaluation challenge?

There are several challenges in identifying the effects of a business support programme like this one. First, firms’ characteristics rather than the intervention may drive performance, and we need some way to control for these. Second, because the programme is voluntary, firms may ‘select’ into it. For instance, if the firms who least need support dominate applications, this will bias results upwards. Third, agency staff may not make objective decisions – reinforcing the self-selection problem. Finally, even if firms don’t receive RBDP funding they may still get other forms of business support, which may contaminate the evaluation results.
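
To see why self-selection is such a problem, consider a small simulation (our own sketch, not from the paper – all numbers are invented). Even when the advice has no effect at all, a naive comparison of applicants and non-applicants shows a healthy ‘impact’, simply because stronger firms are more likely to apply:

```python
# Illustrative sketch only: how self-selection biases a naive comparison
# upwards. Nothing here comes from the study; all values are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent firm quality drives both programme take-up and sales growth.
quality = rng.normal(0, 1, n)
applied = quality + rng.normal(0, 1, n) > 0.5   # stronger firms apply more often
true_effect = 0.0                               # assume the advice does nothing
growth = 0.10 * quality + true_effect * applied + rng.normal(0, 0.1, n)

# Naive estimate: mean growth of applicants minus non-applicants.
naive = growth[applied].mean() - growth[~applied].mean()
print(f"Naive treated-vs-untreated gap: {naive:.3f} (true effect is 0)")
```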

What did the evaluation do?

To deal with these issues, the study authors use a matching approach, using available data to construct a control group that looks similar to the treatment group. They also vary the matching process to control for contamination and self-selection – for example, to deal with self-selection they compare outcomes for treatment firms with a control group of unsuccessful RBDP applicants. They avoid a simple comparison of treatment and control groups after the firms receive advice, because differences in performance may be driven by unobservable firm characteristics. Instead, they look at the change in firms’ performance before and after treatment, and check whether the difference between treatment and control groups is meaningful. (Statisticians call this a ‘difference-in-differences’ approach.)
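
As a rough sketch of what this looks like in practice (our own illustration – the variable names, data layout and single matching variable are assumptions, not the authors’ code): each treated firm is paired with the most similar untreated firm, and the estimate is the average gap in their before-and-after changes.

```python
# Minimal matching + difference-in-differences sketch, in the spirit of the
# paper's approach. `df` is assumed to hold one row per firm with pre- and
# post-treatment outcomes; the paper matches on many more variables.
import numpy as np
import pandas as pd

def match_and_did(df: pd.DataFrame) -> float:
    """Nearest-neighbour match on pre-treatment sales, then DiD."""
    treated = df[df["treated"] == 1]
    controls = df[df["treated"] == 0]

    effects = []
    for _, firm in treated.iterrows():
        # 1. Match: find the control firm most similar on observables.
        distance = (controls["sales_pre"] - firm["sales_pre"]).abs()
        twin = controls.loc[distance.idxmin()]

        # 2. Difference-in-differences: compare each firm's *change* in
        #    performance, netting out shocks common to both firms.
        did = (firm["sales_post"] - firm["sales_pre"]) \
            - (twin["sales_post"] - twin["sales_pre"])
        effects.append(did)

    return float(np.mean(effects))

# Example usage with toy data (values invented for illustration):
df = pd.DataFrame({
    "treated":    [1, 1, 0, 0, 0],
    "sales_pre":  [100, 150, 105, 160, 300],
    "sales_post": [120, 170, 110, 165, 310],
})
print(match_and_did(df))  # each treated firm matched to its nearest control
```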

How good was the evaluation?

According to our scoring guide, matching combined with difference-in-differences receives a maximum of 3 (out of 5) on the Maryland Scientific Methods Scale (Maryland SMS). This is because it controls well for observable differences (e.g. sales) between supported and non-supported firms, but cannot control for unobservable differences (e.g. motivation). Since this paper uses a wide range of variables in its matching, and since the difference-in-differences is based on a clear treatment date, we score this study 3 on the SMS.
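
To make the logic behind the score concrete, the same difference-in-differences can be written as a regression on a firm-by-period panel (a toy illustration of ours, not taken from the paper). Any firm trait that is fixed over time – observed or not – is differenced away; traits that change over time are not, which is why the method tops out at 3.

```python
# DiD as a regression on toy panel data (not from the study). The interaction
# coefficient is the treatment effect; fixed firm traits (e.g. motivation)
# cancel out of the comparison, but time-varying unobservables do not.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.DataFrame({
    "firm":    [1, 1, 2, 2, 3, 3, 4, 4],
    "treated": [1, 1, 1, 1, 0, 0, 0, 0],
    "post":    [0, 1, 0, 1, 0, 1, 0, 1],
    "outcome": [10, 14, 12, 16, 11, 12, 13, 14],
})
fit = smf.ols("outcome ~ treated * post", data=panel).fit()
# Treated firms improve by 4 on average, controls by 1, so DiD = 4 - 1 = 3.
print(fit.params["treated:post"])
```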

What did the evaluation find?

In its more basic analysis, the study finds that consultancy cheques lead to higher value added (14.3%) and higher employment (12.6%). Allowing for contamination, the effects are smaller. However, the effects are statistically indistinguishable from zero once treatment firms are compared with unsuccessful applicants rather than with all other firms in the region.

What can we learn from this?

So does the programme have zero effect? The authors suggest that it’s the time firms spend thinking about business development that generates impacts. As this is part of the application process, this is consistent with no effect of treatment when we compare successful and unsuccessful applicants, but positive effects when we compare successful firms to other firms in the region. That is, the results suggest a kind of placebo effect, where the RBDP application itself is enough to improve performance rather than the support that follows. This implies that policymakers could look for other, less costly ways of delivering support to firms – simply by encouraging them to think about the kind of issues raised during the application process (rather than paying to provide structured advice).

We should be careful in using these results in a UK context. As noted, there are some unresolved issues in identifying true effects, and the findings may not translate from rural Sweden to rural Britain. The evaluation could be replicated in the UK, for example by comparing a control group with one group of firms receiving only application advice and a second group receiving full support. Ideally, these groups would be randomly selected.

References

Mansson, J. and Widerstedt, B. (2012). The Swedish Business Development Program: Evaluation and some methodological and practical notes. European Regional Science Association (ERSA) Congress, Bratislava, Slovakia.