My Free Online Course is Ready and About to Launch!

Hello everyone.

I haven’t posted for a while, mainly because I’ve been completely swamped creating my free online course, which launches on Monday.

The course covers exactly the sort of material I write about on the blog, so if you’re following the blog you should seriously consider signing up.  It’s called:

“Accounting for Death in War: Separating Fact from Fiction”


A Debate about Excess War Deaths: Part II

My rejoinder (with Stijn van Weezel) to Hagopian et al. is out. Hooray!  See this earlier post for background.

Please have a read.  It’s short and sweet.  Here’s the abstract:

Spagat and van Weezel have re-analysed the data of the University Collaborative Iraq Mortality Study (UCIMS) and found fatal weaknesses in the headline-grabbing estimate of 500,000 excess deaths presented, in 2013, by Hagopian et al. The authors of that 2013 paper now defend their estimate and this is our rejoinder to their reply which, it is contended here, avoids the central points, addresses only secondary issues and makes ad hominem attacks. We use our narrow space constraint to refute some of the reply’s secondary points and indicate a few areas of agreement.

And here’s the first paragraph:

Hagopian et al. (2018), the reply paper to Spagat and van Weezel (2017) which is, in turn, our critique of Hagopian et al. (2013), does not address either of our two central points. These are as follows (Spagat and van Weezel, 2017). First, any appropriate 95% uncertainty interval (UI) for non-violent excess deaths is at least 500,000 deaths wide and starts many tens of thousands of deaths below zero. Second, we find no local spill over effects running from violence levels to elevated non-violent death rates.1 Both these results refute the ‘conservative’ estimate of several hundred thousand non-violent excess deaths given in Hagopian et al. (2013). The fact that Hagopian et al. (2018) ignore these two points suggests that the authors of that paper are unable to respond.

In other words, Hagopian et al. can’t address our main points, so they search for errors they might be able to catch us out on.

And they do actually find an error.  However, as we argue in the rejoinder, it doesn’t lead anywhere.

Specifically, we assumed, wrongly, that they drew a stratified sample.  (In fact, we hadn’t digested a separate paper they wrote explaining their sampling scheme in detail.)  What this means in practice is that the number of clusters per governorate in their sample is out of line with population proportions.  For example, governorate A might have twice the population of governorate B but four times the number of clusters.  But this imbalance is a random outcome rather than something the design rules out; our mistake was assuming the design ruled it out.

We actually spilled a fair amount of ink in our critique discussing the importance of incorporating a stratification adjustment into the estimation.  However, we also did all our estimates both with and without such an adjustment.  And it turns out that even without a stratification adjustment it’s still true that:

any appropriate 95% uncertainty interval (UI) for non-violent excess deaths is at least 500,000 deaths wide and starts many tens of thousands of deaths below zero.

Making the adjustment widens the UIs further, but this doesn’t change the substance of the point.

Moreover, we still think it’s a good idea to do ex post stratification.  The Hagopian et al. sample is small and the realized numbers of clusters per governorate are pretty far out of whack with population proportions.  These imbalances would get ironed out in a large sample, but that didn’t happen in the actual small sample.  We think it’s best to adjust for this imbalance.
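The mechanics of the adjustment are simple to sketch.  Here is a minimal toy example, with made-up numbers (two governorates, hypothetical cluster counts and per-cluster death tallies, not the UCIMS data), showing how post-stratification pulls each governorate’s weight back to its population share:

```python
# Toy post-stratification example: two hypothetical governorates with
# invented cluster counts and death tallies (not the UCIMS data).

pop_share = {"A": 2 / 3, "B": 1 / 3}        # population proportions
clusters = {"A": 8, "B": 2}                 # realized clusters per governorate
deaths_per_cluster = {"A": 3.0, "B": 6.0}   # mean deaths observed per cluster

total_clusters = sum(clusters.values())

# Unadjusted estimate: every cluster counts equally, so governorate A's
# (randomly) inflated cluster count drags the average toward A.
unadjusted = sum(clusters[g] * deaths_per_cluster[g] for g in clusters) / total_clusters

# Post-stratified estimate: weight each governorate by its population
# share instead of its realized share of clusters.
adjusted = sum(pop_share[g] * deaths_per_cluster[g] for g in pop_share)

print(round(unadjusted, 1))  # 3.6
print(round(adjusted, 1))    # 4.0
```

In these toy numbers governorate A holds two thirds of the population but 80% of the clusters, so the unadjusted figure leans too heavily on A; the adjustment restores each governorate’s population weight.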

For me, the highlight of the Hagopian et al. response is the section on death certificates, which shows a strong desire by this team to have their cake and eat it too.  When households reported deaths, the interviewers then asked to see death certificates.  Hagopian et al. report that interviewees were usually able to show these certificates.  However, some interviewees said that they didn’t have death certificates and some said that they had them but were unable to produce one when prompted.  Nevertheless, Hagopian et al. just go ahead and assume that every single reported death is 100% certain to have happened, regardless of death certificate status.  So they want to use death certificate checks in general terms to demonstrate the high quality of their data, but when the outcome of a particular death certificate check casts a shadow on a particular datum they ignore this outcome.

Consider the following analogy.  I run a bar.  I ask everyone ordering an alcoholic drink if they have an ID showing they are 21 years old.  If they say they do have one then I ask to see it.  Most people just show an ID.  But some people say they have an ID, although they are unable to produce one when prompted.  Other people say they don’t have an ID.  I serve alcoholic drinks to all three types of people.  The police then investigate me to determine whether or not I’m selling to underage drinkers.  I tell them that I am certain that I never ever do this.  The reason I’m so certain is that I always ask my customers for IDs and most of the people I serve drinks to actually show me one.

Somehow I don’t think the police would be convinced by this logic.

OK, those are the highlights – time now to read the whole thing!

A Debate about Excess War Deaths: Part I

I just got page proofs for a new paper on excess deaths in Iraq that I’ve written with Stijn van Weezel. This new paper is actually a rejoinder to a reply to an earlier paper I wrote with Stijn which was, in turn, a critique of a still earlier paper.  In short, Stijn and I have been in an ongoing discussion about excess deaths in Iraq.

So now is a good time to bring my blog readers into the loop on all this new stuff.  Moreover, we are pressed for space in our soon-to-be-published rejoinder so we promise to extend the material onto my blog.  This post is the beginning of the promised extension.

Today I’ll set the table by describing the following sequence of publications.

1.  The starting point is this paper by Hagopian et al., which concludes:

Beyond expected rates, most mortality increases in Iraq can be attributed to direct violence, but about a third are attributable to indirect causes (such as from failures of health, sanitation, transportation, communication, and other systems). Approximately a half million deaths in Iraq could be attributable to the war.

I blogged on this estimate a while back.  Back then my point was simply to show how Hagopian et al. start with a data-based central estimate surrounded by massive uncertainty and then seize on one excuse after another to inflate their central estimate and airbrush the uncertainty away.  They wind up with a much higher central estimate than their data can sustain, which they then treat as a conservative lower bound.  (The above quote was just a way station along this inflationary journey, delivered in an academic journal that imposed some, but not sufficient, restraint.)

2.  Stijn and I publish a critique of the Hagopian et al. paper.

We focus mostly on the weakness of the case for a large number of non-violent excess deaths in the Iraq war, although we do touch on the inflationary dynamics mentioned above.

Before turning to the main highlights of our critique paper let’s quickly review the concept of excess deaths as it pertains to the Hagopian et al. Iraq estimates.  Their main claim boils down to saying that the during-war death rate in Iraq is higher than the pre-war death rate there.  They then assume that this increase is caused by the war.

There are a few problems with this train of thought.

a. The causal claim commits a known logical error, the post hoc ergo propter hoc (“after this, therefore because of this”) fallacy.  An example would be arguing that “my alarm clock going off causes the sun to rise.”

That said, the notion that the outbreak of war caused the observed changes in death rates is sufficiently plausible that we shouldn’t dismiss it just because the timing alone doesn’t prove it.

b.  The only reason for invoking the excess-deaths concept in the first place is the idea that war violence might lead indirectly to non-violent deaths that wouldn’t have occurred without the war.  To address this possibility we should ask whether the during-war non-violent death rate is higher than the pre-war non-violent death rate.  Hagopian et al. confound this comparison of like with like by tossing during-war violent deaths into the mix.  Thus, they compare during-war violent plus non-violent deaths with pre-war non-violent deaths.
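A small numerical sketch makes the confound vivid.  The rates below are invented for illustration (per 1,000 person-years), not the UCIMS figures:

```python
# Invented illustrative rates (per 1,000 person-years), not the UCIMS numbers.

pre_war_nonviolent = 5.0
during_war_nonviolent = 5.3
during_war_violent = 2.0

# The comparison that actually answers the indirect-deaths question:
nonviolent_excess = during_war_nonviolent - pre_war_nonviolent

# The confounded comparison: during-war violent deaths are tossed into the
# mix, so the "excess" largely restates the direct violent death rate.
confounded_excess = (during_war_nonviolent + during_war_violent) - pre_war_nonviolent

print(round(nonviolent_excess, 1))  # 0.3
print(round(confounded_excess, 1))  # 2.3
```

With these toy numbers the confounded comparison yields an “excess” of 2.3 per 1,000, nearly all of which is just the direct violent death rate restated; the like-with-like comparison shows only 0.3.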

Stijn and I perform appropriate comparisons of non-violent death rates.  You can look at the numbers yourself by popping open the paper.  But the general picture is easy enough to understand without looking.  Our central estimates (under various scenarios) for non-violent deaths are always positive but the uncertainty intervals surrounding these estimates are extremely wide and dip far below zero.  Thus, evidence that there are very many, if any, non-violent excess deaths is extremely weak despite the grandiose claims of Hagopian et al.

In our determination to uncover any possible evidence of excess non-violent deaths we also perform a “differences-in-differences” analysis.  The idea here is that if violence leads indirectly to non-violent deaths then we’d expect non-violent death rates to jump up more in relatively violent zones than they do in relatively peaceful zones.  In other words, if violence leads indirectly to non-violent deaths in Iraq then there should be a positive spatial correlation between violence and increases in non-violent death rates.  We find no such thing.
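The calculation behind this check can be sketched as follows, again with invented zone-level numbers rather than the actual Iraq data:

```python
# A toy sketch of the differences-in-differences check, with invented
# zone-level numbers (violent death rate, pre-war and during-war
# non-violent death rates, all per 1,000 per year), not the Iraq data.

zones = [
    # (violence, nonviolent_pre, nonviolent_during)
    (0.5, 5.0, 5.1),
    (1.5, 5.2, 5.0),
    (3.0, 4.9, 5.2),
    (6.0, 5.1, 5.0),
]

violence = [v for v, _, _ in zones]
jump = [during - pre for _, pre, during in zones]  # change in non-violent rate

# Pearson correlation between violence and the jump in non-violent rates;
# a clearly positive value would be evidence of indirect "spillover" deaths.
n = len(zones)
mean_v = sum(violence) / n
mean_j = sum(jump) / n
cov = sum((v - mean_v) * (j - mean_j) for v, j in zip(violence, jump))
var_v = sum((v - mean_v) ** 2 for v in violence)
var_j = sum((j - mean_j) ** 2 for j in jump)
r = cov / (var_v * var_j) ** 0.5

print(round(r, 2))  # in this toy data the correlation is slightly negative
```

In the made-up data above the more violent zones show no bigger jumps in non-violent mortality, which is the pattern we in fact found in the real data.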

There is more in the paper and I would be delighted to respond to questions about it.  But, for now, I’ll move on.

3.  Next, Hagopian et al. respond.

I assume that, soon enough, you’ll be able to see their response together with our rejoinder side by side in the journal so I won’t go into detail here.  Still, I want to note two things.

First, the Hagopian et al. reply does not address our main point about the separation of violent deaths from non-violent deaths which is described in section 2 above.

Second, Hagopian et al. spill considerable ink on ad hominem attacks.  The main one takes the form of saying that I have worked with Iraq Body Count (IBC) and the IBC dataset is bad, and that therefore nobody should trust anything I say.  Stijn and I don’t actually mention IBC in our critique paper, so IBC data quality is entirely irrelevant to our argument.  Indeed, Hagopian et al. don’t even try to link IBC data quality with any of our substantive arguments.  Yet, I fear that much of the mud they sling at IBC will stick, so I’ll try to clean some of it off in the follow-up blog posts.

4.  Finally, there is our rejoinder.

Again, I don’t want to attempt too much prior to publication.  However, as already mentioned above, I will do a few further blog posts on material that we couldn’t cover within the space we had.  These will be mainly, possibly exclusively, about the IBC database which Hagopian et al. attack very unreasonably in their reply.

OK, I’ve set the table.  More later.


Secret Data Sunday – Iraq Family Health Survey

The WHO-sponsored Iraq Family Health Survey (IFHS) led to a nice publication in the New England Journal of Medicine that came complete with an editorial puff piece extolling its virtues.  According to the NEJM website this publication has generated 60 citations and counting.  If you cast a net wider than just medical publications then the citation count must run well into the hundreds.

But the IFHS virtues don’t stop there.  The NEJM paper, and the accompanying report, are well written and supply plenty of good methodological information about the survey.  The authors are pretty up front about the limitations of their work, notably that they had to skip interviews in some areas due to security concerns.  Moreover, the IFHS is an important survey not least because its estimate of 150,000 violent deaths discredited the Burnham et al. estimate of 600,000 violent deaths for almost exactly the same time period.  (The Burnham et al. survey hid its methodology and was afflicted by serious ethical and data integrity problems.)

I have cited the IFHS multiple times in my own work and generally believe in it.  At the same time, the IFHS people did several questionable things with their analysis that I would like to correct, or at least investigate, by reanalyzing the IFHS data.

But here’s the rub.  The WHO has not released the IFHS dataset.

I and other people have requested it many times.  The field work was conducted way back in 2006.  So what is the WHO waiting on?

I’ll leave a description of my unrealized reanalysis to a future post. This is because my plans just don’t matter for the issue at hand; the IFHS data should be in the public domain whether or not I have a good plan for analyzing them.  (See this post on how the International Rescue Committee hides its DRC data in which I make the same point.)

There is an interesting link between the IFHS and the Iraq Child and Maternal Mortality Survey, another important dataset that is also unavailable.  The main point of contact for both surveys is Mohamed Ali of the WHO.  Regarding the IFHS, Mohamed seemed to tell me in an email that only the Iraqi government is empowered to release the dataset.  If so, this suggests a new (at least for me) and disturbing problem:

Apparently, the WHO uses public money to sponsor surveys but then sells out the general public by ceding their data distribution rights to local governments, in this case to Iraq.  

This practice of allowing governments that benefit from UN-sponsored research to withhold data from the public that pays for the research is unacceptable.  It’s great that the WHO sponsors survey research in needy countries, but open data should be a precondition for this service.


How Many People were Killed in the Libyan Conflict – Some field work that raises more questions than it answers

Hana Salama asked me for an opinion on this article. I had missed it but it is, potentially, interesting to me so I am happy to oblige her.

I’ve now absorbed it but find myself even more puzzled than I was after reading that Syria survey I blogged on a few weeks back.  Again, it looks like some people did some useful field work, but the write-up is so bad that it’s hard to know exactly what they did.  In fact, the Libya work is more opaque than the Syria work, to the point where I wonder what, if anything, was actually done.

For orientation here is the core of the abstract:

Methods

A systematic cross-sectional field survey and non-structured search was carried out over fourteen provinces in six Libyan regions, representing the primary sites of the armed conflict between February 2011 and February 2012. Thirty-five percent of the total area of Libya and 62.4% of the Libyan population were involved in the study. The mortality and injury rates were determined and the number of displaced people was calculated during the conflict period.

Results

A total of 21,490 (0.5%) persons were killed, 19,700 (0.47%) injured and 435,000 (10.33%) displaced. The overall mortality rate was found to be 5.1 per 1000 per year (95% CI 4.1–7.4) and injury rate was found to be 4.7 per 1000 per year (95% CI 3.9–7.2) but varied by both region and time, reaching peak rates by July–August 2011.

I’m not sure, but I think the researchers (hereafter Daw et al.) tried to count war deaths (plus injuries and displacement numbers) rather than trying to statistically estimate these numbers.  (See this paper on the distinction.)

Actually, I read the whole paper thinking that Daw et al. drew a random sample and did statistical estimation but then I changed my mind.  I got my initial impression at the beginning because they say

This epidemiological community-based study was guided by previously published studies and guidelines.

They then cite the (horrible) Roberts et al. (2004) Iraq survey as providing a framework for their research (see this and follow the links).   Since Roberts et al. was a sample survey I figured that Daw et al. was also a sample survey.  They then go on to say that

Face to face interviews were carried out with at least one member of each affected family….

This also seemed to point in the direction of a sample survey conducted on a bunch of randomly selected households.  (With this method you pick a bunch of households at random, find out how many people lived and died in each one and then extrapolate a national death rate from the in-sample death data.)
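To make the contrast with a counting exercise concrete, here is a toy sketch of the extrapolation step in such a sample survey.  All the numbers are invented:

```python
# Toy sketch of the extrapolation step in a household sample survey
# (all numbers invented): estimate a death rate from the sampled
# households, then scale it up to the whole population.

sampled_people = 12_000        # people living in the randomly chosen households
sampled_deaths = 60            # deaths those households reported over one year
national_population = 6_000_000

death_rate = sampled_deaths / sampled_people       # deaths per person-year
estimated_deaths = death_rate * national_population

print(round(death_rate * 1000, 1))  # 5.0 deaths per 1,000 per year
print(round(estimated_deaths))      # 30000 estimated deaths nationwide
```

The point is that a sample survey only needs a few thousand interviews because the heavy lifting is done by the extrapolation, whereas a counting exercise has to reach every affected family.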

But then I realized that the above quote continues with

…listed in the registry of the Ministry of Housing and Planning

Hmmmm….so they interviewed all affected families listed in the registry of some Ministry.  This registry cannot have been a registry of every family living in the areas covered by the survey because there are far more families there than could have been interviewed on this project.  (The areas covered contain around 4.2 million people according to Table 1 of the paper and  surely Daw et al. did not conduct hundreds of thousands of interviews.)
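For what it’s worth, the abstract’s figures are at least internally consistent with a simple count divided by a population of roughly this size.  A quick check, using the 4.2 million from Table 1 and the one-year window:

```python
# Quick consistency check on the abstract's figures for a one-year window
# (February 2011 to February 2012); 4.2 million is Table 1's covered population.

killed = 21_490
population = 4_200_000

implied_base = killed / 0.005                  # the abstract calls 21,490 "0.5%"
rate_per_1000 = killed / population * 1000     # deaths per 1,000 per year

print(round(implied_base))      # about 4.3 million, close to Table 1's figure
print(round(rate_per_1000, 1))  # 5.1, matching the reported mortality rate
```

So the reported percentages and rates look like counts taken over the covered population, which fits the reading that this was a counting exercise rather than a statistical estimate.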

So I’m guessing that the interviews were just of people from families on an official list of victims; killed, injured or displaced.  This guess places a lot of emphasis on one interpretation of the words “listed” and “affected” but it does make some sense.

To be clear, even interviewing one representative from every affected family would have been a gargantuan task since Daw et al. identify around 40,000 casualties (killings plus injuries) and more than 400,000 displaced people.  So we would still be talking about tens of thousands of interviews.

To be honest, now I’m wondering if all these interviews really happened.  That’s an awful lot of interviews and they would have been conducted in the middle of a war.

So now I’m back to thinking that maybe it was a sample survey of a few thousand households.  But if so then the write up has the large flaw that there is no description whatsoever of how its sample was drawn (if, indeed, there was a sample).

Something is definitely wrong here.  I shouldn’t have to get out a Ouija board to divine the authors’ methodology.

The Syria survey discussed a few weeks ago seems to be in a different category.  For that one I have a lot of questions about what they did combined with doubts about whether their methods make sense.  But this Libya write-up seems weird to the point where I wonder whether they were actually out in the field at all.

Maybe an email to Dr. Daw will clear things up in a positive way.  With the Syria paper emailing the lead author got me nowhere but maybe here it will work.  I’m afraid that the best case scenario is that Daw et al. did some useful field work that was obscured by a poor write up and that there is a better paper waiting to get written.