Conflict over war deaths

For conflict in action, there are few livelier battlegrounds than the row over how many people die in wars.

The numbers have been falling steadily, according to experts from the International Peace Research Institute, Oslo (PRIO). They say that the annual battle toll around the world declined by more than 90 per cent between 1946 and 2002.
 
Not at all, retort three US analysts from the University of Washington in Seattle and Harvard Medical School. Ziad Obermeyer, Christopher Murray and Emmanuela Gakidou reported in the BMJ in 2008 that PRIO had got it all wrong. According to them, nearly three times as many died – an average of 378,000 a year between 1985 and 1994, rather than PRIO’s figure of 137,000.
 
Now the defenders of PRIO have turned on the BMJ. They believe it should not have published the US study in the form it did, and that having done so it should have agreed to publish a rebuttal of similar length. “This is not some trivial academic disagreement,” said Andrew Mack of the Human Security Report Project, based at Simon Fraser University in Canada.
 
“Accurate statistics on the health impacts of war are critically important not just for researchers but also for humanitarian organisations whose assistance programmes save millions of lives around the world.”
 
Both estimates are big numbers, but the discrepancy between them is huge. So is the disagreement over direction: OMG (to abbreviate the US team) say that war deaths are going up, while PRIO says they are going down. Not only is the problem nearly three times larger, OMG claim, but it is getting worse, which has implications for policy.
 
The rebuttal PRIO’s supporters wanted BMJ to publish has now appeared in the Journal of Conflict Resolution, by a team comprising Professor Michael Spagat of Royal Holloway College, University of London, Andrew Mack and Tara Cooper from Simon Fraser University, and Joakim Kreutz from Uppsala University, Sweden. 
 
The two teams use different methods, and are measuring different things. PRIO measures battle-deaths, counting them from a variety of sources: peer-reviewed studies, epidemiologists and demographers, military historians, published accounts of particular conflicts, and Keesing’s Record of World Events. From these it makes a judgement of the likely number of battle-deaths in each war.
 
OMG used a different method. It based its conclusions on surveys carried out in 2002-03 by the World Health Organisation in which respondents were asked, among many other things, about their immediate family. A proportion responded that they had had siblings who had died in war, and from these numbers OMG calculated figures for war deaths in 13 countries over the past 50 years.
 
This method, they say, has advantages. Surveys carried out in peacetime avoid the exaggeration and double-counting that can occur in the midst of war. Siblings ought to know how their brothers - and, in lower numbers, their sisters - died. Compared to this, OMG are disparaging about what they describe as PRIO’s reliance on “media sources”. 
 
But here’s where things begin to get conflicted. OMG’s data is, in practice, rather limited: just 13 out of the 45 countries WHO surveyed had more than five sibling deaths recorded in a ten-year period, so it is these 13 they focus on: Bangladesh, Bosnia, Burma, Ethiopia, Georgia, Guatemala, Laos, Namibia, Philippines, Republic of Congo, Sri Lanka, Vietnam, and Zimbabwe.
 
In these 13 countries, they estimate, there were a total of 5.393 million war deaths, compared to PRIO’s estimate of 2.784 million battle-deaths in the same 13. But this is not a factor of three, as they claim in the BMJ paper, but of 1.9. And the 95 per cent confidence intervals on the OMG estimates are so wide (2.997 to 8.697 million) that they overlap PRIO’s confidence intervals (2.695 to 3.026 million) so one cannot actually be sure there is any difference at all.
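The arithmetic can be checked directly. A quick sketch in Python, using only the totals and 95 per cent confidence intervals quoted above:

```python
# Ratio of the two totals for the 13 countries (millions of deaths),
# using the figures given in the text.
omg_total = 5.393   # OMG estimate of war deaths
prio_total = 2.784  # PRIO estimate of battle-deaths
ratio = omg_total / prio_total
print(round(ratio, 2))  # 1.94 - not a factor of three

# 95 per cent confidence intervals quoted in the text (millions)
omg_ci = (2.997, 8.697)
prio_ci = (2.695, 3.026)

# Two intervals overlap when each one's lower bound sits below the
# other's upper bound; if so, the difference may not be significant.
overlap = omg_ci[0] <= prio_ci[1] and prio_ci[0] <= omg_ci[1]
print(overlap)  # True
```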
 
The ratio of 3 is in fact an unweighted average of the ratio between OMG and PRIO estimates for the 13 countries. So Georgia, where the ratio is 12 but which accounted for only 0.6 per cent of all the deaths in the 13 countries, gets as much weight as Vietnam (by far the bloodiest conflict in this 50-year period) where the ratio is 1.8 but which accounted for 71 per cent of the deaths.
 
Taking out Georgia, an extreme outlier, would bring the ratio of the means to 2.2. If the median rather than the mean had been taken, the ratio would have been 2.1, even including Georgia. So it looks as if OMG were “talking up” the difference to make it appear as large as possible.
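The weighting effect is easy to demonstrate. In the sketch below, only Georgia’s ratio (12), Vietnam’s (1.8) and their shares of the deaths (0.6 per cent and 71 per cent) come from the text; the other eleven countries are hypothetical stand-ins set to a ratio of 2, purely to illustrate how one small outlier dominates an unweighted mean:

```python
from statistics import median

# Georgia, Vietnam, then eleven hypothetical countries with ratio 2.0
ratios = [12.0, 1.8] + [2.0] * 11
# Shares of total deaths: Georgia 0.6%, Vietnam 71%, rest split evenly
weights = [0.006, 0.71] + [0.284 / 11] * 11

unweighted = sum(ratios) / len(ratios)          # every country counts equally
weighted = sum(r * w for r, w in zip(ratios, weights))  # weighted by deaths

print(round(unweighted, 1))   # 2.8 - the outlier pulls the mean up
print(round(weighted, 2))     # 1.92 - dominated by Vietnam
print(median(ratios))         # 2.0 - the median shrugs off Georgia
```

The unweighted mean treats tiny Georgia and enormous Vietnam as equals, which is exactly the objection the Spagat team raise.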
 
The PRIO figures include only deaths in wars in which a government is one of the warring parties. They exclude intercommunal strife, battles between warlords, and conflicts between rebel groups, all of which the OMG methodology might be expected to capture. Nor does PRIO count the killing of non-combatants by either governments or non-state groups, on the grounds that slaughtering defenceless people does not constitute armed conflict. It can be argued that including these deaths gives a more accurate picture of the costs of war, but a valid comparison between the two sets can be made only if they are added to the PRIO figures.
 
Professor Spagat’s team say that you can’t generalise from 13 countries, chosen simply because they happened to have sufficient data, to the 71 countries that actually had wars in the 50-year period covered by OMG. They also challenge the method by which OMG extrapolate their data from the 13 countries to the entire world.
 
They do this by replotting the OMG figures for the 13 countries against the PRIO figures to produce the graph below. Essentially this is a straight line drawn through three points, since ten of the 13 form a “splotch” near the origin. The slope of the line is 1.81 and the intercept, say OMG, is 27,380. Visually, the Spagat paper remarks, the intercept might as well be zero.
 
[Graph: OMG war-death estimates for the 13 countries plotted against PRIO battle-death estimates]
 
OMG then calculate war deaths for the world using the formula:
 
     War deaths = PRIO battle deaths x 1.81 + 27,380
 
This manipulation makes no sense, say Professor Spagat and colleagues. When applied to every conflict, it would imply that even the smallest produced at least 20,000 deaths. In fact, the average conflict in 2007 killed fewer than 500.
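The objection is easy to see by plugging small conflicts into the formula. A sketch, applying the quoted slope and intercept (the individual battle-death counts below are illustrative, not from the text):

```python
# OMG's extrapolation formula as quoted above:
# war deaths = PRIO battle-deaths x 1.81 + 27,380
def omg_war_deaths(prio_battle_deaths):
    return prio_battle_deaths * 1.81 + 27380

# Even a conflict with zero recorded battle-deaths is assigned
# 27,380 war deaths by the formula; a small one fares little better.
for battle_deaths in (0, 100, 500):
    print(battle_deaths, round(omg_war_deaths(battle_deaths)))
```

The intercept acts as a floor: no conflict, however minor, can come out below 27,380 deaths, which sits awkwardly with an average 2007 conflict killing fewer than 500.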
 
And it also implies a fixed relationship between the PRIO data and the survey data used by OMG: if the rule holds, the PRIO figure must in every case be multiplied up to get the right answer. Yet in five of the 13 countries OMG studied, the PRIO figures are actually higher than their own. So the assumption is falsified by their own data.
 
Surveys have their place in calculating war casualties, say the Spagat team. But they have severe limitations. In the rare cases where more than one survey has been carried out over the same period, radically different answers have been found – Iraq provides a recent example.
 
They conclude: "The extraordinary divergence in these estimates, and the lack of any consensus as to their cause, plus the problems we have identified ... suggest the utility of survey-based approaches to measuring war deaths, while clear in principle, still confronts major challenges in practice."
 
Andrew Mack put it more strongly. "There appears to be no way of effectively rebutting BMJ articles that contain unwarranted - and damaging - critiques of the work of other scholars. This makes the journal effectively unaccountable by shielding it from serious criticism."