Policy v. Evidence
A couple of weeks back, I was rather pleased with myself for coming up with the phrase "policy-based evidence making" to describe New Labour's cavalier way with scientific advice. Suddenly, post-Nutt, the phrase is popping up all over the place, sometimes attributed to me (thanks, Obnoxio and Bella), sometimes credited to Matthew Taylor, who wrote a post of that name on October 27th. I was beginning to feel rather proprietorial about it. But, of course, I wasn't the first person to think up the expression. A quick Google reveals that it was cited in a Parliamentary Report more than three years ago, which recommended that ministers:
should certainly not seek selectively to pick pieces of evidence which support an already agreed policy, or even commission research in order to produce a justification for policy: so-called "policy-based evidence making". Where there is an absence of evidence, or even when the Government is knowingly contradicting the evidence—maybe for very good reason—this should be openly acknowledged.
The phrase is probably quite a bit older even than that. That it should have occurred to different people at different times is scarcely surprising, given this government's lamentable track record. The Nutt case follows hard on the heels of Nick Davies' exposure of the Home Office's manipulation (tantamount to invention) of sex-trafficking figures and not long after the same department was forced to backpedal on its plans to formalise its DNA retention policy, when the evidence it had relied on was disowned even by the researchers. The Home Office is currently the worst offender, but not the only one. Perhaps it's inevitable that a government obsessed with tabloid headlines yet determined to use "objective" science to bolster its claims to legitimacy and (more sinisterly) to forestall debate should end up fitting evidence to policy rather than the other way around. "We are following the best scientific advice" is a powerful argument to deploy against opponents - even when they aren't following it.
Of course, this political strategy depends on scientists providing the "correct" advice - or shutting up when the government prefers to decide its policy not on the evidence but on what the tabloids (or even the opinion polls) are saying. So the government is asking for trouble when, as with drugs classification, it institutes a system supposedly based on scientific evidence and then overrules the advice. The problem is not that the Home Office wants to classify cannabis at a high level; it's that it claims to do so "scientifically", on the basis of measurable harm, when in fact it has other factors in mind. The revolting scientists, for their part, are too narrow in their focus, even naive (or presumptuous). They remind me slightly of the judges on last year's Strictly Come Dancing, who were outraged when week after week the audience voted for John Sergeant. They were labouring under the misapprehension that Strictly was a dance contest. Professor Nutt, likewise, thought that the Home Office classification was based on science, for no better reason than that the Home Office said so.
In fact this was an error. The report I mentioned above, that of the Select Committee on Science and Technology into Scientific Advice, Risk and Evidence Based Policy Making (pdf), is quite plain on that question:
there is scope for improving clarity over the extent to which evidence holds sway over other factors in the determination of policy. Too often there has been a reluctance to even engage in this discussion. For example, we found in our case study on the classification system for illegal drugs that the need to “send messages” on drugs was an important consideration in decisions on classification, not just the evidence on the harm that these drugs caused. However, the importance of this factor was far from clear; we called on the Home Office to be more transparent about the various factors influencing its decisions. Indeed, we found the Government’s use of the Class of a particular drug to send a signal to be incompatible with the Government’s stated commitment to evidence based policy making, given the absence of any evidence on actual effectiveness on which to draw.
That was three years ago, remember.
The issue of drugs classification provides an unusually clear instance of the facts contradicting government intentions (it's clearest of all in the case of Ecstasy, absurdly made class A despite being demonstrably safer than aspirin). In many cases, especially where social policy rather than hard science is at issue, the government has more leeway when it comes to selecting evidence. Getting the "right" scientific advice may require nothing more than finding an expert known to have sympathetic views.
Nevertheless, the government's misuse of science goes well beyond the more high-profile cases. And it's far from new. The Select Committee report contains a couple of fairly damning paragraphs:
95. The commissioning of research from academics by Government departments is widespread so we were extremely concerned to hear allegations from certain academics that departments have been commissioning and publishing research selectively in order to ‘prop up’ policies. Professor Tim Hope, a criminologist from the University of Keele who has worked with the Home Office, told us: “it was with sadness and regret that I saw our work ill-used and our faith in government’s use of evidence traduced”. Of two case studies looking at burglary reduction commissioned by the Home Office, Professor Hope told us that the department decided to only write up one: “Presumably […] because the area-wide reduction was greater here than elsewhere”. Professor Hope also accused the Home Office of manipulating the data so as “to capitalise on chance, producing much more favourable findings overall”, despite the fact that “for individual projects, the [Home Office] method produces considerable distortion”. Furthermore, Professor Hope alleged that the Home Office had interfered with presentation of research findings by other researchers:
“At the British Society of Criminology conference in the University of Bangor in July 2003 there were a number of papers to be given by academics on the basis of contracted work that they were involved in, as I was, for the Home Office. A number of the researchers were advised not to present their papers at the last minute even though they had been advertised in the programme by Home Office officials”.
Other academics have voiced similar concerns. For example, Reece Walters of Stirling University has claimed of the Home Office’s treatment of research results: “It is clear the Home Office is interested only in rubber-stamping the political priorities of the Government of the day […] To participate in Home Office research is to endorse a biased agenda”.
96. These are serious accusations, amounting as they do to allegations of serious scientific/publication malpractice, and should be subject to vigorous examination. We are not in a position to make a judgment about the veracity of these claims... [but] We have heard enough on an informal basis about the selective publication of research to harbour concerns. Such allegations do nothing to encourage the research community to engage in government-sponsored research or to improve public confidence in the validity of such work. Because of the obvious problem of potential complainants being dependent on funding from those who commission research, the GCSA [Government's Chief Scientific Adviser] should not require a formal complaint from the alleged victim in order to instigate an investigation of substantive allegations of this sort of malpractice. We urge the GCSA to investigate proactively any allegations of malpractice in the commissioning, publication and use of research by departments and to ensure that opportunities to learn lessons are fully taken advantage of. We would expect the results of any such investigations to be made public.
This was also very revealing:
100. Norman Glass, Director of the National Centre for Social Research, warned that the “consequences of bias in evidence, which is what we social scientists are essentially hunting down day after day”, were sometimes considered by policy makers to be “a kind of geeky interest”. Mr Glass argued, however:
“If you are basing your evidence on unrepresentative, biased samples then you cannot believe a word. In fact, it is worse than knowing nothing. Knowing things that are not so is worse than knowing nothing at all.”
Professor Nancy Cartwright from the London School of Economics also emphasised the need to take into account different types of evidence: “the best decisions are made on the basis of the total evidence [...] taking into account how secure each result is and how heavily it weighs for the proposal and also taking into account the overall pattern of the evidence”. There is often a temptation to justify policy by selective use of available evidence. But this, and a failure to acknowledge the implications of the methodology used and the overall balance of evidence, risk serious damage to the credibility of evidence-based policy making.
The MPs also noted (para 102) that they had:
uncovered evidence in our ID cards case study which demonstrated that departments do not always commission trials or pilots at the appropriate stage in policy development and may use the outcomes for purposes other than those specified at the outset of the pilot. William Solesbury, Senior Visiting Research Fellow at the Centre for Evidence Based Policy and Practice, also told us: “there are probably too few [pilots] and they are used inappropriately”. He was of the view that there was a “mismatch between the research timetable and cycle and the political cycle” so that “once pilots are up and running ministers are very often keen to roll them out before results are ready”.
According to the report, "the Government’s current approach to policy making is not sufficiently responsive to changing evidence"; the MPs urged both Government and opposition to "move towards a situation where a decision to revise a policy in the light of new evidence is welcomed, rather than being perceived as a policy failure."
Good luck with that.
The report is long, detailed and devastating - and, as is usual in such cases, it was almost totally ignored by both government and media. The Committee made a number of recommendations about openness, about quality control, about the ways in which evidence should be commissioned and about independent oversight which might, if acted upon, have prevented L'affaire Nutt from blowing up in the way it did. But then pure "evidence-based policy" may be impossible (and not necessarily desirable) in a democratic society - as the Committee itself admitted:
Evidence based policy making has been a watchword of this Government and is widely seen as representing best practice. However, in reality policies are developed on the basis of a myriad of factors, only one of which constitutes evidence in its scientific sense. We have argued that the phrase ‘evidence based policy’ is misleading and that the Government should therefore desist from seeking to claim that all its policies are evidence based.
But then it would have to admit that its policies were based on prejudice, political calculation and what ministers think they can get away with rather than a pure and rational assessment of evidence. And that - for obvious political reasons - they can never do.