What was the programme and what did it aim to do?
This study evaluates the UK ‘Creative Credits’ programme, a pilot which provided small and medium-sized enterprises (SMEs) with vouchers to spend on creative services such as website design or advertising. The programme was designed and implemented by NESTA together with Manchester City Council. It ran in the Manchester city-region in 2009–10, and gave 150 local firms £4,000 in vouchers to spend on collaboration with creative industries firms in the area, conditional on each firm spending an additional £1,000 of its own money. The basic idea behind the policy is that collaboration with creative industries fuels innovation in ‘non-creative’ firms, which in turn should improve business performance (here measured by sales).
What’s the evaluation challenge?
Assessing the causal effect of collaboration policies such as these is particularly difficult, since selection into the programme usually means that only certain types of firm will participate. Even if we can see that participating firms are more innovative and ‘perform’ better, it is not clear whether participation itself made any difference. For instance, the programme might select high-performing businesses that have already worked with graphic designers and know how to get the best out of them. And even if the programme itself does not select, it may be that only certain types of firm are attracted to it in the first place. For these reasons, a simple comparison of participant and non-participant firms would likely overstate the effects of creative collaboration.
What did the evaluation do?
To address this issue, the authors used a randomised controlled trial (RCT) to ensure that credits were randomly assigned, so that ‘treated’ firms were observably (e.g. on sales) and unobservably (e.g. on motivation) comparable to untreated firms. NESTA opened applications for the Creative Credits programme in two separate waves (September 2009 and October 2010). Of the 672 eligible firms, 150 were randomly awarded the credits and 301 were used as controls. After being chosen, treatment firms were asked to choose a creative partner with whom to spend their credits (94% did so). Four waves of surveys were conducted: before the collaborations began (the baseline survey); just after the collaborations ended; six months after they ended; and a year after they ended.
How good was the evaluation?
In principle, RCTs score 5 on the Scientific Maryland Scale. As discussed in our scoring guide, we have three main criteria for judging the quality of implementation. First, researchers need to prevent those in the control group also receiving treatment: in this case, vouchers are tied to recipient firms, so such ‘contamination’ is designed out of the experiment. Second, we check whether the treatment and control groups are similar on observable characteristics, i.e. whether randomisation was successful. The authors find that most observables (e.g. age, sector) are not significantly correlated with being treated, and so conclude that treatment is truly random. Third, we look at ‘attrition’: participants dropping out of the study over time, particularly in the control group. Attrition is an issue here: while 78 per cent of the treatment firms answered all four questionnaires, only 52 per cent of the control firms did so. But tests suggest few mean differences between stayers and leavers on observable characteristics, which gives some reassurance that there are few unobservable differences, too.
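The balance and attrition checks described here boil down to comparing groups of firms on observable characteristics. A minimal sketch of such a check, using simulated data (the firm ages below are invented for illustration and are not from the study):

```python
import random
import statistics

# Hypothetical illustration of a balance check: compare treated and
# control firms on one observable (here, simulated firm age in years).
random.seed(0)
treated = [random.gauss(10, 3) for _ in range(150)]   # 150 treated firms
control = [random.gauss(10, 3) for _ in range(301)]   # 301 control firms

def welch_t(a, b):
    """Welch's two-sample t statistic for a difference in means."""
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

t = welch_t(treated, control)
# |t| well below ~2 is consistent with no significant imbalance
# on this observable, as the authors report for age and sector.
print(f"t-statistic for firm age: {t:.2f}")
```

The same comparison run on stayers versus leavers, rather than treated versus control, is the attrition test described above.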
Whilst this study performed fairly well on our three main criteria above of contamination, randomisation and attrition, we also take into account other factors. In this case, we scored the study a 4 on the SMS due to data concerns. The data collected rely on self-reporting, and in some cases outcomes are recorded in bands. This leads to imprecise estimates of treatment effects: for example, a reported sales change of ‘1-9%’ may be close to zero or close to 10%, and ‘remained similar’ is also open to wide interpretation. More seriously, these bands create a risk of false positives for firms that took part in the programme – that is, reporting effects where none exist. It is worth noting that, despite these concerns, this is still a high-quality study relative to many other evaluations of similar programmes.
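To see why banded answers blur the estimates, note that each band only pins the true value down to an interval, so any average built from banded responses is itself only identified up to an interval. A hypothetical sketch (the band labels and widths below are assumptions for illustration, not the survey’s actual categories):

```python
# Each (invented) band maps to a lower and upper bound on sales growth (%).
BANDS = {
    "fell": (-10.0, 0.0),            # assumed width, illustration only
    "remained similar": (-1.0, 1.0),  # 'similar' itself is ambiguous
    "grew 1-9%": (1.0, 9.0),
    "grew 10%+": (10.0, 20.0),        # assumed cap, illustration only
}

def mean_bounds(responses):
    """Lower and upper bounds on mean sales growth from banded answers."""
    lows = [BANDS[r][0] for r in responses]
    highs = [BANDS[r][1] for r in responses]
    n = len(responses)
    return sum(lows) / n, sum(highs) / n

lo, hi = mean_bounds(["grew 1-9%", "remained similar", "grew 1-9%"])
# The mean is only known to lie somewhere in [lo, hi]: with bands this
# wide, a true effect near zero and a sizeable one are both consistent
# with the same answers.
print(f"mean growth lies between {lo:.1f}% and {hi:.1f}%")
```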
What did the evaluation find?
The study finds large and significant positive impacts of Creative Credits on reported product and process innovation six months post-treatment, with treated firms reporting 16% higher innovation rates. Knock-on effects on sales are much smaller and only marginally significant, with 1.1% higher reported sales growth in the treatment group. In both cases, effects do not persist at 12 months post-treatment.
What can we learn from this?
What does this mean for policymakers? The initial self-selection (firms must be willing to spend £1,000 of their own money) means that the study identifies the effects of additional expenditure for the kind of firm that would be willing and able to make such a contribution. Taken at face value, the results suggest that in the short term the creative collaboration policy has positive innovation effects, but these translate only weakly into sales growth, and there is no long-term impact for firms. Despite these somewhat negative results, the team behind the programme should be applauded for their willingness to subject their intervention to rigorous evaluation. Further experiments would put us in a much better position to assess the effectiveness of variations on the Creative Credits programme, and of other tools to help firms innovate.
Bakhshi, H., Edwards, J., Roper, S., Scully, J., Shaw, D., Morley, L., & Rathbone, N. (2013). Creative credits: a randomized controlled industrial policy experiment. London (UK): National Endowment for Science, Technology and the Arts (NESTA). [Study 440 from our Innovation review, available here]