Last night, Angus Deaton gave the British Academy’s annual Keynes lecture on ‘Instruments of Development?’. I expected it to be enlightening; it turned out to be witty as well.
Some questions recur in economic research again and again, without ever seeming to get closer to a resolution. “Does aid work?” is one. “Do children learn better in small classes?” is another. Frustrated by years of trying to identify ever smaller effects in ever more complicated regressions, we have resorted to two clever techniques: instrumental variables (in macro) and randomized controlled trials (in micro). Angus Deaton suggested that these apparently different techniques are closely linked and similarly flawed.
Economics, like any social science, has a problem with experiments. You can’t work out the effect of aid on development by randomly selecting one country to receive aid and another not to: even if it were moral, it wouldn’t be practical, because there is so much else going on. Instrumental variables are a clever technique for getting around this (see ‘Freakonomics’): you find a factor that influences the cause you care about (latitude helps determine prosperity) but cannot itself be affected by the outcome (a country’s prosperity has no effect on its latitude), and which ideally affects the outcome through no other route. Deaton argued, in short, that instrumental variables are no panacea: candidate instruments are rarely truly exogenous, and in any case countries differ in ways we cannot control for. If economists set instrumental variables up as the gold standard, we doom ourselves to eternal methodological debates amongst ourselves and to ridicule from everyone else.
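For readers who want to see the mechanics, here is a minimal sketch of two-stage least squares on simulated data. Everything in it is invented for illustration (the variable names, the ‘true’ effect of 1.0, the strength of the instrument); the point is only to show how the instrument isolates the exogenous variation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Invented setup: an unobserved confounder u drives both 'aid' and 'growth',
# so naive OLS of growth on aid is biased. The instrument z shifts aid but
# has no direct path to growth.
u = rng.normal(size=n)
z = rng.normal(size=n)
aid = 0.8 * z + 0.5 * u + rng.normal(size=n)
growth = 1.0 * aid + 2.0 * u + rng.normal(size=n)   # true effect of aid = 1.0

def fit(y, x):
    """One-regressor OLS with intercept; returns (intercept, slope)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS: confounded by u, overstates the effect.
print("OLS slope:", fit(growth, aid)[1])

# Two-stage least squares: first predict aid from z, then regress growth
# on the predicted values, so only the z-driven variation in aid is used.
a, b = fit(aid, z)
aid_hat = a + b * z
print("2SLS slope:", fit(growth, aid_hat)[1])        # close to the true 1.0
```

The naive estimate is pulled upward by the confounder, while the two-stage estimate recovers something close to the true effect. Deaton’s complaint is that in real data we can never be sure the instrument is as clean as z is here.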
Randomized controlled trials are even more popular in the micro development world. Want to know by how much a vaccination programme improves public health? Easy: just pick the counties you vaccinate at random and compare the outcomes. Leaving aside the ethical difficulties (who deserves to come first?), the technique only tells us the mean treatment effect; it doesn’t tell us whether that effect was spread widely or confined to a few very special cases. Moreover, some randomizations are less random than they seem. Suppose you picked schoolchildren with surnames starting with A to take part in an experiment: would any improvement be due to the experiment, or to the fact that, seated alphabetically, they have always sat in the front row and got more attention from their teachers? Maybe, maybe not: we don’t know.
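Deaton’s point about mean effects is easy to demonstrate with a toy simulation (all numbers invented): two treatments with identical average effects, one helping everyone a little and one helping a few people a lot, look exactly the same to a randomized trial.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Two hypothetical treatments with the SAME mean effect of +1:
# A helps everyone a little; B helps 5% of people a lot and nobody else.
effect_a = np.full(n, 1.0)
lucky = rng.random(n) < 0.05
effect_b = np.where(lucky, 20.0, 0.0)

baseline = rng.normal(50, 10, size=n)   # outcome without treatment
treated = rng.random(n) < 0.5           # the randomization

for name, effect in [("A (broad)", effect_a), ("B (concentrated)", effect_b)]:
    outcome = baseline + np.where(treated, effect, 0.0)
    ate = outcome[treated].mean() - outcome[~treated].mean()
    print(f"treatment {name}: estimated mean effect = {ate:.2f}")
```

Both trials report a mean effect of roughly one unit; the distributional question that matters for policy is invisible in the headline number.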
Deaton poked fun at the ‘randomistas’ (Banerjee, Duflo, Kremer and others) but was sympathetic to their quest for identification, as long as it rests on a theoretical foundation. He also argued that we should not use randomization to test very obvious propositions (“Do parachutes help keep people who fall out of planes alive?”) or those that pose grave ethical problems (“Do HIV-positive people receiving anti-retroviral drugs live longer than those who don’t?”). Rather as with evidence-based medicine, the statistical evidence is only as good as its interpretation by the doctor, or the economist, who applies it to the patient’s condition. Randomized controlled trials, on this view, should take their place in the economist’s toolkit as one useful tool among many, rather than as the knockout argument.
I agreed with all of his points as far as economists are concerned. My worry is what the non-economists (and that’s most of us) are supposed to do. Are we really supposed to wade through umpteen regression models and meta-analysis papers? Are we supposed to get excited about some tiny coefficient that is significant at the 95% confidence level? I fear that policymakers and donors, who might understand the finding of a randomized evaluation, will switch off as soon as regressions rear their heads. Surely it’s better for decision-makers to have some scientific evidence than none at all. Let the economists work out the 95% answer; meanwhile the rest of us will make do with 80%.
10 October 2008