Chilcot on Civilian Casualties: Part 4

In October of 2004 The Lancet published a paper by Roberts et al. that estimated the number of excess deaths for the first year and a half of the Iraq war using data from a new survey they had just conducted.  (Readers wanting a refresher course on the concept of excess deaths  can go here.)

One of the best parts of the civilian casualties chapter of the Chilcot report is the front-row seat it provides for the (rather panicked) discussion that Roberts et al. provoked within the UK government.  The real gold here takes the form of links to three separate reviews of the paper provided by government experts: Sir Roy Anderson (first review), Creon Butler (second review) and Bill Kirkup, CBE (third review).

In the next several posts I will evaluate the evaluators.  I start by relying largely on information that was available when they made their reports, but I will increasingly take advantage of hindsight.

For orientation I quote the “Interpretation” part of the Summary of Roberts et al.:

Making conservative assumptions, we think that about 100,000 excess deaths, or more have happened since the 2003 invasion of Iraq.  Violence accounted for most of the excess deaths and airstrikes from coalition forces accounted for most violent deaths.  We have shown that collection of public-health information is possible even during periods of extreme violence.  Our results need further verification and should lead to changes to reduce non-combatant deaths from air strikes.

The UK government reaction focused exclusively, so far as I can tell, on the question of how to respond to the PR disaster ensuing from:

  1. The headline figure of 100,000 deaths, which was much bigger than any estimate that had been seriously put forward before.
  2. The claim that the Coalition was directly responsible for most of the violence.  (Of course, one could argue that the Coalition was ultimately responsible for all violence since it initiated the war in the first place, but nobody in the government took such a position.)

Today I focus on two important points that none of the three experts noticed.

First, the field work for the survey could not have been conducted as claimed in the paper.  The authors write that two teams conducted all the interviews between September 8 and September 20, i.e., in just 13 days.  There were 33 clusters, each containing 30 households.  This means that each team had to average nearly 40 interviews per day, often spread across more than a single sampling point (cluster).  These interviews had to be on top of travelling all over the country, on poor roads and through security checkpoints, to reach the 33 clusters in the first place.
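A quick sanity check on the pace these numbers imply (the 10-hour working day is my assumption for illustration, not a figure from the paper):

```python
# Implied fieldwork pace for the Roberts et al. survey.
clusters = 33
households_per_cluster = 30
teams = 2
days = 13                        # September 8 to September 20

total_interviews = clusters * households_per_cluster       # 990
per_team_per_day = total_interviews / (teams * days)
print(round(per_team_per_day, 1))                          # ~38.1 interviews

# Even assuming a 10-hour working day with zero travel time
# (my assumption), that leaves under 16 minutes per interview.
print(round(10 * 60 / per_team_per_day))                   # ~16 minutes
```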

To get a feel for the logistical challenge that faced the field teams consider this picture of the sample from a later, and much larger, survey – the Iraq Living Conditions Survey:

ILCS Sample

I know the resolution isn’t spectacular on the picture but I still hope that you can make out the blue dots.  There are around 2,200 of them, one for each cluster of interviews in this survey.

Now imagine choosing 33 of these dots at random and trying to reach all of them with two teams in 13 days.  Further imagine conducting 30 highly sensitive interviews (about deaths of family members) each time you make it to one of the blue points.  If a grieving parent asks you to stay for tea, do you tell them to just answer your questions because you need to move on instantly?

The best-case scenario is that the field teams cut corners with the cluster selection to render the logistics possible and then raced through the interviews at break-neck speed (no more than 10 minutes per interview).  In other words, the hope is that the teams succeeded in taking bad measurements of a non-random sample (which the authors then treat as random).  But, as Andrew Gelman reminds us, accurate measurement is hugely important.

The worst-case scenario is that the field teams simplified their logistical challenges by making up their data.  Recall that data fabrication is widespread in surveys done in poor countries.  Note, also, that the results of the study were meant to be released before the November 2 election in the US and the field work was completed only on September 20, so slowing down the field work to improve quality was not an option.

Second, no expert picked up on the enormous gap between the information on death certificates reported in the Roberts et al. paper and the mortality information the Iraqi Ministry of Health (MoH) was releasing at the time.  A crude back-of-the-envelope calculation, sketched in code just after the list, reveals the immense size of this inconsistency:

  1. The population of Iraq was, very roughly, 24 million and the number of people in the sample is reported as 7,868.  So each in-sample death translates into about 3,000 estimated deaths (24,000,000/7,868 ≈ 3,050).  Thus, the 73 in-sample violent deaths become an estimate of well over 200,000 violent deaths.
  2. Iraq’s MoH reported 3,858 violent deaths between April 5, 2004 and October 5, 2004, in other words a bit fewer than 4,000 deaths backed by MoH death certificates.  The MoH has no statistics prior to April 5, 2004 because its systems were in disarray before then (p. 191 of the Chilcot chapter).
  3. Points 1 and 2 together imply that death certificates for violent deaths should have been present only about 2% of the time (4,000/200,000).
  4. Yet Roberts et al. report that their field teams tried to confirm 78 of their recorded deaths by asking respondents to produce death certificates and that 63 of these attempts (81%) were successful.
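For concreteness, here is the entire back-of-the-envelope calculation in a few lines of Python, using only the figures from the list above:

```python
# Back-of-the-envelope check of the death certificate gap.
population = 24_000_000          # rough population of Iraq
sample_size = 7_868              # people covered by the survey
violent_deaths_in_sample = 73

scale_factor = population / sample_size            # ~3,050
estimated_violent_deaths = violent_deaths_in_sample * scale_factor
print(round(estimated_violent_deaths))             # ~223,000: "well over 200,000"

moh_violent_deaths = 3_858       # MoH certificates, April 5 - October 5, 2004
implied_certificate_rate = moh_violent_deaths / estimated_violent_deaths
print(f"{implied_certificate_rate:.0%}")           # ~2%

reported_confirmation_rate = 63 / 78               # Roberts et al. field teams
print(f"{reported_confirmation_rate:.0%}")         # 81%
```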

The paper makes clear that the selection of the 78 cases wasn’t random and it could be that death certificate coverage is better for non-violent deaths than it is for violent deaths.

Still…

There is a big, yawning, large, humongous, massive gap between 2% and 81%, and something has to give.


Here are the only resolution possibilities I can think of:

  1. The MoH issued vastly more (i.e., 50 times more) death certificates for violent deaths than it has admitted to issuing.  This seems far-fetched in the extreme.
  2. The field teams for Roberts et al. fabricated their death certificate confirmation figures.  This seems likely, especially since the paper reports:

Interviewers were initially reluctant to ask to see death certificates because this might have implied they did not believe the respondents, perhaps triggering violence.  Thus, a compromise was reached for which interviewers would attempt to confirm at least two deaths per cluster.

Compromises that pressure interviewers to risk their lives are not promising and can easily lead to data fabrication.

  3. The survey picked up too many violent deaths.  I think this is true and we will return to this possibility in a follow-up post, but I don’t think that this can be the main explanation for the death certificate gap.

OK, that’s enough for today.

In the next post I’ll turn to what the expert reports actually said rather than what they didn’t say.

War Death Estimates that are Lighter than Air

I’m in the middle of reexamining the data collected by the University Collaborative Iraq Mortality Study (UCIMS).  (This is joint work with my former student Stijn Van Weezel.)

The number of excess deaths estimated by the UCIMS is 405,000, 461,000, 500,000, more than 500,000… well, that’s the point… it’s not clear exactly what the UCIMS estimate is, but it has a natural tendency to rise.

The abstract of the paper states:

From March 1, 2003, to June 30, 2011, the crude death rate in Iraq was 4.55 per 1,000 person-years (95% uncertainty interval 3.74–5.27), more than 0.5 times higher than the death rate during the 26-mo period preceding the war, resulting in approximately 405,000 (95% uncertainty interval 48,000–751,000) excess deaths attributable to the conflict.

OK, this seems crystal clear: the central estimate is 405,000.  (It’s rather absurd to carry the numbers out to the nearest thousand despite an uncertainty interval 700,000 deaths wide, but at least we know that the estimate centres around 400,000.)  The estimate of 405,000 is confirmed three times in the paper, not that confirmation should be necessary since the abstract must surely contain the right number.

But wait, there’s more:

Our household survey produced death rates that, when multiplied by the population count for each year, produced an estimate of 405,000 total deaths. Our migration adjustment would add an additional 55,805 deaths to that total. Our total excess death estimate for the wartime period, then, is 461,000, just under half a million people.

To support their upward adjustment the UCIMS authors say that there are 2 million refugees outside the country, that these divide into 374,532 (not 374,531?) households and that 14.9% of Iraqi refugee households suffered at least one death.  The 14.9% figure comes from a reference that seems to be unavailable, but let’s just accept it.  These numbers would, indeed, imply around 56,000 deaths but not 56,000 excess deaths.
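The refugee arithmetic is easy to verify; here is a minimal sketch using the paper’s own inputs:

```python
# Checking the UCIMS refugee adjustment from its stated inputs.
refugees = 2_000_000
households = 374_532             # the paper's oddly precise household count
share_with_a_death = 0.149       # 14.9% of refugee households report >= 1 death

print(round(refugees / households, 2))            # ~5.34 people per household
print(round(households * share_with_a_death))     # 55,805: "around 56,000 deaths"
```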

Readers of this blog will recall that excess deaths are deaths above and beyond some baseline level.  The excess-deaths concept is meant to capture deaths that would not have happened if war had been avoided.  The UCIMS estimated the baseline to be 2.89 per 1,000 per year (maybe 2.89857 would have been a better estimate?).  This is an extremely low baseline and, of course, if we raise it then the excess death estimate of 405,000 will fall, but leave this point aside.
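To make the baseline’s leverage concrete before setting the point aside, here is a rough reconstruction of the headline figure; the flat 30 million average wartime population is my assumption for illustration, not a number from the paper:

```python
# Excess deaths = (wartime rate - baseline rate) x person-years.
wartime_rate = 4.55 / 1000    # deaths per person-year, from the abstract
baseline_rate = 2.89 / 1000   # the UCIMS baseline
population = 30_000_000       # assumed average wartime population (illustrative)
years = 8.33                  # March 2003 to June 2011

excess = (wartime_rate - baseline_rate) * population * years
print(round(excess))          # ~415,000, close to the paper's 405,000

# Nudging the baseline up to, say, 3.2 per 1,000 erases roughly
# 80,000 excess deaths:
print(round((wartime_rate - 3.2 / 1000) * population * years))  # ~337,000
```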

Here I just note that even if all 56,000 estimated deaths from the refugee households are counted as wartime deaths, spreading them over the roughly ten-year study window implies a death rate for these households of around 2.8 per 1,000 per year, slightly below the baseline used by the UCIMS.  So even if we lard on far more deaths than the 14.9% figure suggests, it would still be quite a challenge to squeeze a positive number of excess deaths out of this situation.  It seems that refugees have, on average, done better than the people left behind in Iraq.  So integrating refugees into an excess-death calculation should lower the estimate, not raise it as claimed by the UCIMS authors.


Still, no one associated with the article seems prepared to stop even at 461,000.  Indeed, the very next sentence after the one quoted above switches from “just under half a million people” to “about half a million excess deaths”:

Our total excess death estimate for the wartime period, then, is 461,000, just under half a million people.

Discussion

We estimate about half a million excess deaths occurred in Iraq following the US-led invasion and occupation (March 2003–2011).

Surely the PLoS editors will take the punch bowl away from the inflation party:

….their final estimate is that approximately half a million people died in Iraq as a result of the war and subsequent occupation from March 2003 to June 2011.

I guess not.

The next step is for lead author Amy Hagopian to use the media to pretty much convert the half a million number into a lower bound:

“We think it is roughly around half a million people dead. And that is likely a low estimate,” says Hagopian.

Finally, something important has been lost in the shuffle as we have traced the trajectory of the UCIMS estimate from 405,000 up to 500,000+.

The uncertainty interval has disappeared.

We started out with an uncertainty interval of 48,000 to 751,000 and we ended up with 500,000 as “likely a low estimate.”  Somehow, the back-of-the-envelope calculation on Iraqi refugees airbrushed all downside uncertainty off the books.  The last remaining question seems to be: by how much should we pad the half-a-million figure?

 

World War II Deaths Visualization

Everyone should watch this film by Neil Halloran, which manages to convey at least some sense of the scale and proportions of human losses in World War II.  Hopefully, others will follow in the footsteps of this pioneering work.

My main nitpick is that many viewers may walk away thinking that the true World War II figures are known with more precision than they really are.

That said, the narration clearly states that the Soviet number is disputed and could be very much higher than the one with which Halloran shocks his viewers.  He also notes that the number of Roma deaths is disputed and that there is an extremely wide range of estimates of civilians killed by the advancing Red Army during the collapse of the Nazi regime.

Moreover, it seems very unlikely that further information could overturn basic comparative facts cited in the film, such as that Poland suffered the highest percentage of its people killed while the Soviet Union suffered the highest absolute number.

So maybe I’m being a little harsh about how well Halloran conveys the uncertainty in his subject.  Still, I do worry that future projects will give in to the temptation to sweep too much uncertainty under the rug.

Nevertheless, this project is an extraordinary achievement.

Columbia Journalism Review lowers journalistic standards while lecturing on said standards: Part I – Suppressing Uncertainty

Isn’t Columbia University supposed to be good at journalism?  And isn’t Columbia Journalism Review considered to be a decent publication?

It sure doesn’t look that way based on this remarkably poor article by Pierre Bienaimé which turns out to be right up my alley.

The article is an embarrassment of riches for the blog since it spreads numerous falsehoods about the Iraq conflict while simultaneously bidding for a record on the number of journalism traps that can be packed into a single little article.

So this is a teaching moment.  But I’ll have to spread the material out over several posts.

Where to start?

I have a huge amount of respect for Beth Daponte, and she has quite a nice quote in the article:

When asked about journalistic responsibility, Daponte explains that the press must “take each study and really look at what it does say and what it doesn’t say. This is where journalists really do need to have at least a rudimentary understanding of statistics and confidence intervals, and what sampling really means.”

Bienaimé immediately botches the explanation of what a confidence interval is, surely leading some readers to conclude that Daponte herself does not know what she’s doing.

A confidence interval is the degree of certainty researchers attain that their estimates—in this case, of the death toll—fall within a certain range.

That was CJR, not Beth Daponte, explaining confidence intervals.

Given that he does not know what a confidence interval is, it is not surprising that Bienaimé reports only central estimates with no quantification whatsoever of the uncertainty surrounding these estimates.  This is a cardinal sin when reporting on statistical estimates.  (For the record: a 95% confidence interval is the output of a procedure that, across repeated samples, brackets the true value about 95% of the time; it is not a “degree of certainty” that researchers “attain.”)
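For what it’s worth, the coverage property is easy to demonstrate by simulation; this sketch is mine, not anything from the CJR piece or the mortality studies:

```python
# A 95% confidence interval is a procedure that, across repeated
# samples, brackets the true value about 95% of the time.
import random
import statistics

random.seed(1)
true_mean = 100.0
trials, n, covered = 10_000, 50, 0

for _ in range(trials):
    sample = [random.gauss(true_mean, 15.0) for _ in range(n)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    lower, upper = m - 1.96 * se, m + 1.96 * se   # normal-approximation 95% CI
    covered += lower <= true_mean <= upper

print(covered / trials)   # ~0.95: the procedure's long-run coverage
```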

Bienaimé compounds the error by reporting central estimates to a preposterous number of significant digits, creating a spurious sense of diamond-sharp precision.

The 2006 study estimated that the war had caused roughly 654,965 excess deaths….

A study published in PLOS Medicine in 2013, applying the same surveying methods used in the Lancet studies on a broader scale, estimated 461,000 excess deaths throughout the war from 2003 to 2011.

Maybe the 2006 study was a little off, with the true number being 654,963?

In fairness, Bienaimé just reproduces the quantitatively ignorant presentation of the original paper, but at least the original gives a confidence interval.  And perhaps one can make a case for carrying non-zero digits to the 1,000s place as the second study does.

Totally suppressing all uncertainty is unconscionable.

For your convenience, the 95% uncertainty interval for:

1.  the 2006 study is 392,979 to 942,636 (with no information provided on the digits in the tenths place, although I suspect the top should really go up to 942,636.199);

2.  the 2013 study is 48,000 to 751,000.