
BBC BLOGS - Mark Easton's UK

Project Blueprint: 'Not sufficiently robust'

Mark Easton | 16:04 UK time, Thursday, 17 September 2009

The Home Office must have hoped no-one would notice. Quietly, without press release or even a statement, two weeks ago ministers published a long and eagerly-awaited evaluation report.

Why, one might ask, did they not want to trumpet the conclusion of a major research project which took six years of work and close to £6m of our money? After all, "Project Blueprint" had been hailed as the most important UK assessment of what works in trying to stop children taking drugs.

The answer is that the science had been so bungled that the research was almost useless. Here is the key finding:

The original design of the Blueprint evaluation was not sufficiently robust to allow an evaluation of impact and outcomes, and consequently the report cannot draw any conclusions on the efficacy of Blueprint in comparison to existing drug education programmes.

What?

Yes, a significant programme set up to assess whether a new way of preventing young people from using illegal drugs actually worked could do no such thing. It emerges that the researchers had failed to follow two of the most basic rules of such research:
• Make sure your sample is large enough
• Make sure you have a control group for comparison

The evaluation of the Blueprint approach was done in 23 schools in four areas of England with another six local schools acting as a control. But it quickly became clear that the methodology was flawed, as the researchers admit:

It was originally intended that the local school sample would act as a comparison group so that the efficacy of the Blueprint programme could be tested. However, analysis during the development of the evaluation concluded that to be able to detect differences between the two samples would require a sample of at least 50 schools. This was considered beyond the scope of the evaluation, both in terms of the resources it would require and what was appropriate for the evaluation of an untested approach.
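To see why even 29 schools can be far too few, it helps to sketch the arithmetic of statistical power for a cluster-randomised design, where whole schools rather than individual pupils are the unit of comparison. The short Python sketch below is purely illustrative: every input (the effect size, the intra-cluster correlation, the number of pupils per school) is an assumption of mine, not a figure taken from the Blueprint report.

    # Back-of-the-envelope power calculation for a cluster-randomised
    # school trial. All inputs are illustrative assumptions, not figures
    # from the Blueprint evaluation itself.
    from scipy.stats import norm

    alpha, power = 0.05, 0.80    # conventional significance level and power
    effect_size = 0.20           # assumed small standardised effect (Cohen's d)
    pupils_per_school = 100      # assumed average number of surveyed pupils
    icc = 0.05                   # assumed intra-cluster correlation

    # Standard two-sample formula: pupils needed per arm if pupils could
    # be randomised individually.
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    pupils_per_arm = 2 * (z / effect_size) ** 2

    # Pupils in the same school resemble one another, so the effective
    # sample shrinks by the "design effect" and whole schools become the
    # real unit of analysis.
    design_effect = 1 + (pupils_per_school - 1) * icc
    schools_per_arm = pupils_per_arm * design_effect / pupils_per_school

    print(f"pupils per arm (individual randomisation): {pupils_per_arm:.0f}")
    print(f"schools per arm (cluster design): {schools_per_arm:.1f}")
    print(f"total schools needed: {2 * schools_per_arm:.0f}")

With these entirely hypothetical inputs the requirement comes out at roughly 47 schools, in the region of the "at least 50" figure the researchers cite - and Blueprint's 23 intervention schools plus six local comparators fall well short of it.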

So, at some point during the years of research, Home Office ministers must have been told of the problem at the heart of the Blueprint project. It would appear they were asked for more money to make the findings robust, but refused. Rather than pulling the plug on the whole evaluation, however, the process was allowed to struggle on in the hope that some broader comparisons might still be valid. It was to prove a vain hope.

While it was still planned that the local school data would be presented alongside the Blueprint school data, to enable some comparisons to be drawn between the two samples, recent academic and statistical reviews concluded that to present the data in this way would be misleading, given that the sample sizes are not sufficient to detect real differences between the two groups. Instead, findings from the local school data are presented separately in the report to provide some context to this work but do not act as a comparison group.

The Home Office is putting a brave face on this evidential disaster. In a statement sent to me, a spokesman said:

"The Blueprint programme has helped to raise and improve our understanding about the delivery of drug education in schools. The data gathered from Blueprint schools has been extremely useful in improving our understanding about what children and young people want out of drug education lessons."

However, even this arguably Panglossian statement admits:

"Blueprint has clearly highlighted some of the key challenges to delivery of evidence-based drug education."

Well, yes. The challenge for the £6m evaluation was to demonstrate whether this new system of drug education - going beyond the classroom to involve parents, local media, trading standards (to try and stop shops selling glue and aerosols to children) and other agencies - worked better than traditional methods. However you dress it up, the evaluation failed to answer that fundamental question.

Home Office statisticians are anxious to distance themselves from the affair. One source made it clear to me that the evaluation was commissioned by the drugs "policy team" rather than by science and research.

There is also anger and frustration among those working in the drugs prevention field, who already feel that the Home Office cares more about raids and treatment than it does about stopping people taking drugs in the first place.

Andrew Brown, co-ordinator of the Drug Education Forum, described the evaluation report as "hugely disappointing". He told me that "there was a great deal of expectation that we would get something really useful out of it", but that instead, practitioners will have to rely on American research which may be of limited value in the UK.

Eric Carlin, who sat on the Advisory Group to the Blueprint project, has blogged about the affair. Do read the thread, which contains some conspiratorial theories.

To some, this failure fits into a wider problem with Home Office evaluations. You may recall the rows over the "Tackling Knives Action Programme" (TKAP) revealed by this blog earlier in the year.

On that occasion, as now, the absence of a robust control group was at the heart of the criticism.

And there is academic criticism suggesting Home Office ministers have form when it comes to cherry-picking bits of evaluations they like and ignoring the bits they don't.

For instance, the introduction of Drug Treatment and Testing Orders (DTTOs) in 1998 is examined in a 2007 academic report:

"Before the DTTO was rolled out across England and Wales, a study of three pilot areas was commissioned which concluded 'we could hardly portray the pilot programmes as unequivocally successful' (Turnbull et al., 2000: 87). The response in terms of policy was typical of the 'farming' mechanism. The negative findings were not publicised and the roll-out went ahead."

It is a similar story with another Home Office plan - the "Reducing Burglary Initiative". The same report argues that the episode "illustrates what might happen when responsibility for validating policy - that is, for establishing 'what works' - is placed in the hands of (social) science, but the evidence produced is not, apparently, congenial to the particular 'network of governance' that is responsible for the policy".

If there is an upside to this story, it is that the Blueprint evaluation has had to be honest and up-front about its limitations. Almost two years late and smuggled out though it may have been, the report suggests that statistical integrity is beginning to count for a bit more inside the Home Office.

