Why don’t we experiment?

I argued on Thursday that we should be far more rigorous about testing which policies work and which don’t, using the well-established technique of a randomised trial. I’ve received lots of comments, many very supportive but others raising a number of familiar concerns.

One sensible objection is that due to the (much misunderstood) Hawthorne Effect, the experiment will not actually reveal what we think it will reveal. (It is much harder to use placebos or double-blind techniques in social policy than when trying out a new drug, meaning that the trial will not be as helpful as we would really like.)

A second objection is that it’s not fair to deny some people the terrific intervention that you’re proposing, while others get nothing. (I am working on a piece about Jeff Sachs’s Millennium Villages project, and this is an argument I hear from them quite a lot. Edit: That was sloppy. It is an argument I have heard from John McArthur, the CEO of Millennium Promise, and on other occasions second hand. I might have given the inaccurate impression that this is the only explanation the Millennium Villages team give for the evaluation set-up there. There is much more to be said and I’ll write a proper column in due course.)

The two objections are based, I think, on a frequent misunderstanding: that any experiment must test the intervention against nothing at all. I am told – for instance by Ben Goldacre – that it is more common to test a new treatment against the best current treatment. (This is for both ethical and practical reasons – who cares whether the policy is better than nothing? We’d like to know whether it’s better than what we’re currently doing.)

So when I imagine a randomised trial for, say, a phonics-based method of teaching children to read versus a whole-language method, I don’t anticipate an army of phonics tutors descending on 100 randomly chosen schools, compared with nothing whatsoever. I anticipate the phonics guys being pitched against the whole-language guys. This, to my mind, largely overcomes both objections.
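For what it’s worth, here is a minimal sketch (in Python, purely for illustration – the school names and every other detail are hypothetical, and no real data are involved) of what random assignment to two active arms might look like. The point is simply that every school gets a serious reading programme; chance alone decides which one.

```python
import random

# Purely illustrative sketch of the two-arm trial design described above:
# every school receives an active reading programme, and chance alone
# decides which one. School names are hypothetical placeholders.

random.seed(2009)  # fixing the seed keeps the assignment auditable

schools = [f"School {i}" for i in range(1, 101)]  # 100 participating schools
random.shuffle(schools)

phonics_arm = schools[:50]          # taught with the phonics method
whole_language_arm = schools[50:]   # taught with the whole-language method

print(f"{len(phonics_arm)} schools in the phonics arm")
print(f"{len(whole_language_arm)} schools in the whole-language arm")

# At the end of the trial an independent evaluator would compare reading
# scores between the two arms, roughly:
#   effect = mean(phonics_scores) - mean(whole_language_scores)
# The scores themselves would come from pre-registered assessments, not
# from anything generated here.
```

The random shuffle is doing the real work: it ensures the two groups of schools are comparable on average, so any difference in reading scores can be attributed to the teaching method rather than to which schools happened to sign up for which programme.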

Meanwhile, Dan Ariely (of Predictably Irrational fame) has a thoughtful column in the Harvard Business Review asking why more businesses don’t experiment:

I think this irrational behavior stems from two sources. One is the nature of experiments themselves. As the people at the consumer goods firm pointed out, experiments require short-term losses for long-term gains. Companies (and people) are notoriously bad at making those trade-offs. Second, there’s the false sense of security that heeding experts provides. When we pay consultants, we get an answer from them and not a list of experiments to conduct. We tend to value answers over questions because answers allow us to take action, while questions mean that we need to keep thinking. Never mind that asking good questions and gathering evidence usually guides us to better answers.

Well said, Dan.

Let me reiterate my bottom line. Yes, experiments can be costly. Yes, sometimes they are simply impractical. And yes, occasionally they are unfair. But the truth is that the policy world is already full of experiments, with all of these disadvantages. Because they are conducted in a sloppy way – without proper randomisation, protocols, independent evaluation or serious attempts to identify a comparison group – they are experiments from which we learn very little. The choice isn’t between experiments and no experiments. It’s between useless experiments and useful ones.
