Development gurus forced to eat humble pie

Red faces at the Earth Institute at Columbia University in New York and at The Lancet, both of which have been forced to acknowledge that the main claim about child mortality in Africa made in a paper the journal published last month was based on faulty arithmetic and poorly chosen comparators.

The Lancet paper reported that child mortality in villages in sub-Saharan Africa had fallen three times faster than the national average as a result of interventions made by the Millennium Villages project, run by the institute at Columbia. The project is the brainchild of economist Jeffrey Sachs and has been generously funded, recently announcing another $72 million for its second stage. Sachs claimed in a Guardian blog: “Of course, the MVP is based on rigorous measurement, detailed comparison of the villages with other sites, and peer-reviewed science.”

Anybody making claims like that has to be able to justify them. And MVP has been a controversial project since its inception, with many development experts critical of its approach and alert to any slips. The Lancet paper gave them an opportunity to pounce.

First off the blocks was Gabriel Demombynes, senior economist at the Nairobi office of the World Bank. A few days after the Lancet article appeared, he posted a blog on the World Bank site pointing out two errors. First, the annual mortality reductions had been wrongly calculated: the authors had mistakenly used a baseline of three years rather than four, inflating an annual rate of decline of 5.9 per cent to 7.8 per cent. Second, they had compared this with national rates of decline recorded before the project started, rather than over the same period as it had been running. Child mortality in Africa has been falling fast in recent years – faster, in fact, than was reported for the MVP villages (6.4 per cent vs 5.9 per cent per year).
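The arithmetic is worth making concrete. If mortality falls by a fixed fraction each year, the annual rate implied by a given overall fall depends on how many years you spread it over. Here is a minimal sketch in Python; the overall fall of roughly 21.6 per cent is back-calculated from the two reported rates, since the underlying mortality figures aren’t quoted here, so treat it as illustrative only.

```python
# Illustrative only: the ~21.6% overall fall is inferred from the two
# reported annual rates, not taken directly from the paper.
overall_survival = 1 - 0.216   # fraction of baseline child mortality remaining

for years in (3, 4):
    annual_decline = 1 - overall_survival ** (1 / years)
    print(f"{years}-year baseline: {annual_decline:.1%} per year")

# 3-year baseline: 7.8% per year  (the inflated figure in the paper)
# 4-year baseline: 5.9% per year  (the corrected figure)
```

The same overall fall, divided over one year fewer, yields a figure nearly two percentage points higher – which is exactly the gap between the paper’s claim and the corrected rate.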

Paul Pronyk, lead author of the paper, accepted that Demombynes was right. He withdrew the claim as “unwarranted and misleading” in a letter to The Lancet published on May 26, at the same time as a letter from Demombynes and others laying out their criticisms. These include, in addition to the two summarised above, the observation that the 7.8 per cent decline is quoted without a 95 per cent confidence interval. They calculate this from data in the paper to have been about 1.4–15.3 per cent, a range that includes the cited background reduction of 2.8 per cent. So the claim, even as originally made, was not statistically significant – though the paper didn’t say so.
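The significance point follows mechanically: if the 95 per cent confidence interval for the villages’ decline contains the background national rate, the data are consistent with no MVP effect at all. A toy check using the figures quoted above:

```python
# Toy check using the figures from Demombynes and colleagues' letter.
ci_low, ci_high = 0.014, 0.153   # 95% CI for the MVP annual decline
background = 0.028               # cited national annual rate of decline

# The background rate sits inside the interval, so the apparent MVP
# advantage is not statistically significant at the 5% level.
print(ci_low <= background <= ci_high)   # True
```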

So much for rigorous measurement and peer-reviewed science! As an anonymous comment responding to Demombynes’ blog put it: “Oops, 15 co-authors, all PhDs, and they failed at simple math and selecting a comparison group?”

Pronyk, who was director of monitoring and evaluation at the Columbia institute, has since resigned his post, Retraction Watch reports. No comment yet from Jeffrey Sachs, who was also an author of the paper.

Anyone can make mistakes, although these were egregious ones for a project that boasts about its authoritative evidence-gathering. It is all the more surprising that such a paper was written or published given the background rumble of criticism of the MVP, well summarised last October by Madeleine Bunting on The Guardian’s Poverty Matters blog. The same month, Demombynes and Michael Clemens wrote an article on the same blog making detailed criticisms of earlier MVP publications. So the Columbia team can hardly have been unaware that it was being closely watched.

The UK has a substantial financial commitment to the MVP programme, last year committing £11.5 million to a project in northern Ghana. The business case for this project says that monitoring and evaluation will be provided through the New York-based Millennium Promise Alliance’s partnership with Columbia University’s Earth Institute, which in the circumstances doesn’t sound encouraging. The business case cites many of the claims previously made by the Earth Institute and criticised by Demombynes and Clemens, while admitting that there is a lack of published evaluation of impact. But it also promises “a robust independent evaluation that will provide insights into the value for money of the model”, which will be separately funded. In light of the current question marks over MVP claims, that seems prudent, even though it will cost another £3.75 million.

The business case for the evaluation dismisses the possibility of relying on the MVP’s own monitoring and evaluation: DFID concludes that it will fail to deliver because it is not independent. It also dismisses Demombynes and Clemens’s suggestion of randomisation, which would be too expensive and would pose awkward political questions on the ground. How do you explain to poor people in rural Ghana that you are spending millions not helping them because they are in the control group rather than the intervention group? So it opts for a halfway house: an evaluation carried out by an independent team that it admits is “not the most rigorous evaluation that could be applied to MVs” but one that should provide robust evidence to identify any ‘MV effect’.

It may still prove hard to explain why £3.75 million is being spent to evaluate a project which itself costs only £11.5 million. But given the manifest problems with the MVP’s own evaluations, DFID had little choice.