Rushing from Syria to Sudan

I’ve opened several recent posts with something like “Here’s a nice article but it comes up short in a particular way that I’d like to discuss.”  Please take such formulations at face value and don’t imagine that I’m stretching to find something polite to say.  I’ll tell you directly when I think I see a terrible article.

This article is terrible.

All I can think of to soften the blow is that I like Rob Muggah and am sure he’s trying to be helpful.  But he should have done his homework before tossing this one out there.

The very first sentence explodes instantly at the click of a mouse.

At least 240,000 Syrians have died violently since the civil war flared up four years ago, according to the Violations Documentation Center in Syria.

Click on the link and you see 122,683 deaths, not “at least 240,000.”  (This number will rise over time, so those who click after my post goes up should see a higher number.)

Go to the Syrian Observatory for Human Rights if you want to figure out how Muggah got confused.  Here I just note in passing that the Violations Documentation Center is far more transparent in its work than the Syrian Observatory for Human Rights is.  This fits a common pattern: higher numbers tend to be associated with weaker documentation.

Predictably, this “click and collapse” opening is not a one-off:

In the case of DRC, the International Rescue Committee (IRC) commissioned a series of mortality surveys between 2000 and 2007…. The researchers initially claimed that 5.4 million Congolese died as a result of the armed conflict from 1998 to 2006. Under pressure from critics, the estimate was later revised downward to around 2.8 million deaths, with roughly 10% attributed to violence.

I was, initially, shocked to learn that the IRC had conceded so much ground to its critics rather than remaining in deep denial as I thought it had.  I experienced a warm feeling of the possibilities for rational discussion mixed with self-reproach for having, somehow, missed this important event.

But click on “revised downward” and you see that the IRC’s bow to its critics is an artifact of a surprising reading error; Muggah mistakes a quote from the Human Security Report’s critique of the IRC work for a quote from the IRC itself.  (This kind of feels like a schoolboy fantasy… I call you a pooh pooh head and this means you concede that you are, indeed, a pooh pooh head.)

And this is only half of the error.

The Human Security Report’s position boils down to two main points.

  1. The IRC strung together a series of surveys to underpin its 5.4 million estimate.  The first two waves covered only small slivers of the Democratic Republic of Congo (DRC), but the IRC extrapolated from them, inappropriately, as if they were nationally representative.  Just as we should not treat political opinion in Massachusetts as representative of Texas, we should not treat death rates in the places where the IRC provides humanitarian relief as representative of death rates for the whole of the DRC.  The Human Security Report pointed out that if you throw out these two waves of nonrepresentative surveys, but still accept the rest of the IRC’s work, then the overall estimate falls from 5.4 million deaths to 2.8 million deaths.  (This is the sense in which the estimate was “revised downward” to 2.8 million: by the IRC’s critics, not by the IRC itself.)
  2. The Human Security Report goes on to argue that the baseline death rate the IRC used to calculate excess deaths was too low.  (Go here if you’ve forgotten about excess deaths and baselines.)  If you take a more realistic baseline then the IRC estimate falls from 2.8 million to 900,000.

The excess death estimates for the final three surveys, the only ones to cover the entire country, were not affected by the methodological errors evident in the first two surveys. Here, the major problem, as mentioned above, lay with the inappropriately low baseline mortality rate. The impact of changing this rate to a more appropriate one was dramatic. The estimated excess death toll dropped from 2.8 million to less than 900,000. This is still a huge toll, but it is less than one-third of the IRC’s original estimate for the period. [Human Security Report]
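The arithmetic behind this baseline sensitivity is simple enough to sketch.  The numbers below are invented for illustration; they are not the actual IRC or Human Security Report figures, and the population and period are assumptions:

```python
# Illustrative sensitivity of an excess-death estimate to the baseline
# mortality rate.  All figures are made up for illustration only.

def excess_deaths(observed_cdr, baseline_cdr, population, months):
    """Excess deaths = (observed rate - baseline rate) x person-months.
    Rates are crude death rates per 1,000 people per month."""
    return (observed_cdr - baseline_cdr) / 1000 * population * months

population = 60_000_000   # assumed population at risk
months = 40               # assumed length of the war period
observed = 2.2            # assumed observed deaths per 1,000 per month

# A low baseline produces a large excess-death estimate...
print(round(excess_deaths(observed, 1.5, population, months)))  # 1680000
# ...while a modestly higher baseline cuts the estimate by more than two-thirds.
print(round(excess_deaths(observed, 2.0, population, months)))  # 480000
```

The point is that the excess-death estimate is the small difference between two much larger rates, so a modest change in the baseline swings the final number dramatically.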

It seems to me that this and other distortions come from extreme impatience rather than from Muggah having an ax to grind.  It is as if he conducted his research while driving a car full of screaming children and talking on the telephone.

Here is a particularly irksome paragraph for me (I work closely with Iraq Body Count and am on the Board for Every Casualty):

…Advocacy groups such as Every Casualty and the Iraq Body Count (IBC) restrict their databases to incidents where there is evidence of a corpse and the circumstances of the individual’s death. Each victim must have a name. And because they limit their counts to precisely documented cases of violent deaths, they are often criticised for under-counting the scale of lethal violence.

First of all, Every Casualty doesn’t have a database, let alone one that imposes the rules Muggah invents for it.  It does strive to bring about a world in which nobody dies in a war without his/her death being recorded and recognized.  Ideally, recognition includes names, but no one I’m aware of thinks that war death databases should exclude unnamed victims.  IBC would love to have a name for every victim in its database but, unfortunately, most remain unnamed to this day.  In short, Muggah’s requirement that “each victim must have a name” is a mirage.

Muggah’s “often criticised” link in the above quote is, perhaps, a borderline case but I would also classify it as a click and collapse.  It is true that databases that document deaths on a case-by-case basis will tend to undercount because it is very difficult to capture every single case (although it appears that the Kosovo Memory Book pulled off this feat after an extraordinary effort made over many years).  Yet Muggah spends a good chunk of his article discussing competing figures on the number of deaths in Iraq, and it turns out that the article behind this link (by Beth Osborne Daponte) covers the same Iraq numbers in great depth and concludes:

While each approach has its drawbacks and advantages, this author puts the most credence in the work that the Iraq Body Count has done for a lower bound estimate of the mortality impact of the war on civilians. The data base created by IBC seems exceptional in its transparency and timeliness. Creating such a data base carefully is an incredibly time-consuming exercise. The transparency of IBC’s work allows one to see whether incidents of mortality have been included. The constant updating of the data base allows one to have current figures.

Muggah’s short summary of this article is that it criticizes IBC and similar efforts for undercounting.

A few weeks back I introduced readers of the blog to the three fundamental ways to account for war deaths: documentation, counting and estimation. Quoting from an earlier post, you can:

1. Document war deaths one by one, listing key information such as names of victims, dates of deaths, how people were killed etc.

2.  Count the number of war deaths.  Doing this is an obvious outgrowth of documentation but also has status independent of documentation.  You can, for example, count deaths that are documented to varying standards, e.g., counting unidentified bodies alongside identified ones.

3.  Estimate the number of war deaths using a statistical procedure.

Muggah refers rather indiscriminately to “counts” throughout his article, even appearing to extend the concept to estimates of excess deaths.  This juxtaposition of counting and excess deaths suggests an illuminating thought experiment.

Imagine trying to count excess deaths in a war.  Assume you are lucky enough to have a list of everyone who has died since the outbreak of the war.  You would have to consider every person on the list and make a call on whether or not that person would have survived if the war had never happened in the first place.

What do you do, for example, with people who die of strokes?  There could be a few cases of people dying on the operating table after power outages that can be clearly traced back to insurgent activity.  You would have a good case for classifying these as excess deaths, albeit at the cost of engaging in piss-poor monocausal social science, since war-induced power outages can only be one among multiple causes of stroke deaths.

Suppose now that the rate of stroke deaths increased dramatically after the outbreak of war but that it’s only possible to identify a war-related cause for a handful of stroke deaths. The excess death concept suggests that, nevertheless, you should view the increase in the death rate as caused by the war.  (This attribution of causality is debatable but that’s not my concern here.)  My point is simply that you can estimate the number of stroke deaths above and beyond a baseline pre-war rate but you can’t count them one by one because any individual case can’t be pinned monocausally on the war.
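To make the contrast concrete, here is a tiny sketch of the stroke thought experiment.  All the numbers are invented: a pre-war baseline of 100 stroke deaths per month, three months of wartime counts, and 8 individually documented war-related cases.

```python
# Sketch of the stroke-death thought experiment, with invented numbers.

baseline_per_month = 100          # assumed pre-war stroke deaths per month
observed = [130, 145, 150]        # assumed wartime monthly stroke deaths

# Estimable: stroke deaths above what the pre-war baseline would predict.
excess = sum(observed) - baseline_per_month * len(observed)
print(excess)  # 125 estimated excess stroke deaths

# Countable: only the handful traceable to a war cause, e.g. deaths on
# the operating table during war-induced power outages.
documented = 8
print(excess - documented)  # 117 excess deaths that nobody could count one by one
```

The 125 can be estimated against the baseline, but only the 8 can ever be counted case by case; the remaining 117 exist only at the level of rates, not of identifiable individuals.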

Thus, excess deaths are tied inextricably to estimation, whereas violent deaths can be documented, counted or estimated.  In fact, it is no accident that we can approach violent deaths from more angles than we can approach excess deaths.  The excess deaths concept is rooted in a counterfactual scenario – how many people would have died if there had never been a war? – and is, therefore, more abstract than the violent deaths concept.

Muggah is right that there is a lot of disagreement about measuring war deaths.  Yet there is a hopeful sign in the way that several dozen casualty recording organizations are now at the final stages of agreeing on a shared set of standards after a series of fruitful meetings organized by Every Casualty.  I hope to be able to post something soon on this positive development.


Citation Distortion: Part I

Apologies for the radio silence.  I went on holiday (for just one week) and then, somehow, have been desperately playing catch up ever since.

This 2009 paper by Steven Greenberg entitled “How citation distortions create unfounded authority….” strikes me as remarkably useful and important.

Suppose I write a peer-reviewed journal article that includes a claim that “Campbell’s soup prevents breast cancer.”  I immediately drop a footnote citing seven journal articles.

Many readers will probably believe that the cited articles support the breast-cancer claim.   Sadly, nothing could be further from the truth.  In fact, it can easily turn out that none of the cited papers offer any support for my claim and some may even provide contrary evidence.

Faking people out with a blizzard of footnotes is a surprisingly effective strategy.  First of all, you can intimidate many readers with your apparent erudition.  Plus, people will ask themselves whether they want to invest precious time tracking down a load of footnotes.  They may figure that one or two of the citations could turn out to be flawed, but is it really possible that all seven are not as advertised?

Yes, it is possible.

Have a look at pages 36-38 of this paper of mine.  This is just a sliver of a critique of the infamous Burnham et al. (2006) paper that dramatically overestimated the number of violent deaths in the Iraq conflict.  I will definitely return to this paper in future posts but for now I just want to note that there could hardly be a better example of a claim supposedly backed up by a lot of sources that don’t actually check out.

Greenberg’s study covers 242 papers with 675 citations on something incomprehensible (to me) having to do with proteins and Alzheimer’s disease.  Luckily, the only thing that matters for us is that there is a big literature pitting a side A (my term) against a side B.

Please click on this nice summary picture of the Greenberg analysis:


The citations of side-A partisans have a pronounced tendency to back up only side A.  Greenberg calls this “citation bias”.

OK, I know that some of you will never recover from the shock of discovering that people prefer to cite evidence that backs up their own beliefs rather than evidence that calls their beliefs into question.  I apologize for doing this to you.


More interesting is what Greenberg calls “citation diversion”.  This means taking a paper that supports side B but citing it as supporting side A.  Greenberg shows that three citation diversions mushroom into a whopping 7,848 false-claim chains.
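To see how a handful of diversions can fan out into thousands of chains, consider a toy citation graph (purely illustrative; this is not Greenberg’s actual network).  Suppose each paper repeating the diverted claim is itself cited by three later papers, for three generations:

```python
# Toy model of how one diverted citation propagates through a literature.
# cites[paper] lists the later papers that cite it.  Assumed structure:
# a ternary tree, three generations deep, rooted at the diverted citation.

def build_tree(depth, branching):
    cites = {}
    counter = [0]
    def grow(node, d):
        if d == 0:
            cites[node] = []          # leaf: nobody cites it yet
            return
        kids = []
        for _ in range(branching):
            counter[0] += 1
            kids.append(f"paper{counter[0]}")
        cites[node] = kids
        for k in kids:
            grow(k, d - 1)
    grow("diverted", depth)
    return cites

def count_chains(cites, node):
    """Number of citation chains (paths of length >= 1) starting at node."""
    return sum(1 + count_chains(cites, c) for c in cites[node])

cites = build_tree(depth=3, branching=3)
print(count_chains(cites, "diverted"))  # 39 chains from a single diversion
```

With just three generations of citations, one diversion already heads 39 distinct claim chains; a deeper or denser literature multiplies this very quickly, which is how three diversions can end up behind thousands of chains.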

Greenberg also introduces a third category, “invention”, to cover cases such as when a cited paper says nothing about the claim it supposedly backs or when a citation elevates a mere hypothesis in the cited paper into a fact.

Once a diversion, invention or even an honest mistake is introduced into the literature it is readily perpetuated in subsequent publications.  Researchers may not bother to trace a claim back to its original sources, especially if these lazy souls have a stake in the claim being true.

In follow-up posts I’ll give you some nice examples from the conflict literature.

P.S. – While I was writing the present post this article appeared totally backing me up.  Take it from me.  You don’t need to bother reading it.

Hi There

Why am I doing this??????

I thought for a long time that blogging wasn’t for me.  But I see false and misleading claims all around me.  Some are published in obscure places and strike me as dead on arrival.  Yet some of these get pulled into the world of the undead and get serious play.


Media publications resist making corrections, and academic journals are worse: it is difficult to get debunking papers published in them at all.  So there needs to be another outlet, and I’m hoping that blogging can work.

So, yes, to be perfectly honest, the main thing that pushed me to blog in the first place is the need to fight against bad ideas…

However, I plan to do more with this blog than just combat myths.  There is an awful lot of great research coming out in the conflict field these days that I want to highlight.  This modern abundance of good ideas is one of the reasons it is such a joy to teach my “Economics of Warfare” course.

In fact, I will start with some of the positive stuff.

Let’s see what happens.