Mismeasuring Deaths in Iraq: Addendum on Confidence Interval Calculations


In my last post I used a combination of bootstrapping and educated guesswork to find confidence intervals for violent deaths in Iraq based on the data from the Roberts et al. survey.  (The need for guesswork arose because the authors have not been forthcoming with their data.)

Right after this went up a reader contacted me and asked whether the bottom of one of these confidence intervals can go below 0.

The short answer is “no” with the bootstrap method.  This technique can only take us down to 0 and no further.

Explanation

With bootstrapping we randomly select from a list of 33 clusters.  Of course, none of these clusters experienced a negative number of violent deaths. So 0 is the smallest possible count we can get for violent deaths in any simulation sample.  (In truth, the possibility of pulling 33 0’s is more theoretical than real.  This didn’t happen in any of my 1,000 draws of 33.)
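
For readers who like to see the gears turn, here is a minimal sketch of the procedure in Python.  It assumes the allocation of violent deaths across clusters that I use below (18 0’s, 7 1’s, 7 2’s and 1 52) and the rough rule that each in-sample death corresponds to about 3,000 estimated deaths; my actual calculation may differ in its details.

```python
# A minimal sketch of the bootstrap, assuming violent deaths are
# allocated across the 33 clusters as 18 0's, 7 1's, 7 2's and 1 52.
import random

clusters = [0] * 18 + [1] * 7 + [2] * 7 + [52]

totals = []
for _ in range(1000):
    # Resample 33 clusters with replacement and record the total deaths.
    totals.append(sum(random.choices(clusters, k=33)))

totals.sort()
lo, hi = totals[25], totals[974]  # approximate 2.5th and 97.5th percentiles

# Rough scaling: each in-sample death stands for about 3,000 estimated deaths.
print("95% bootstrap interval:", lo * 3000, "to", hi * 3000)

# Note the floor: the most peaceful conceivable resample is 33 zeros,
# which gives a total of 0 -- the interval can never dip below that.
```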

Nevertheless, it turns out that if we employ the most common methods for calculating confidence intervals (not bootstrapping) then the bottom of the interval does dip below 0 when the dubious Fallujah cluster is included.

Here’s a step-by-step walk-through of the traditional method applied to the Roberts et al. data; a short code sketch after the list reproduces the steps.  (I will assume that violent deaths are allocated across the 33 clusters as 18 0’s, 7 1’s, 7 2’s and 1 52.)

  1. Compute the mean number of violent deaths per cluster.  This is 2.2.  An indication that something is screwy here is the fact that the mean is bigger than the number of violent deaths in 32 out of the 33 clusters.  At the same time the mean is way below the number of violent deaths in the Fallujah cluster (52).  Note that without the Fallujah cluster the mean becomes 0.7, i.e., eliminating Fallujah cuts the mean by more than a factor of 3.
  2. Compute the sample standard deviation which is a measure of how strongly the number of violent deaths varies by cluster.  This is 9.0.  Note that if we eliminate the Fallujah cluster then the sample standard deviation plummets by more than a factor of 10, all the way down to 0.8.  This is just a quantitative expression of the obvious fact that the data are highly variable with Fallujah in there.  Note further that the big outlier observation affects the standard deviation more than it affects the mean.
  3. Adjust for sample size.  We do this by dividing the sample standard deviation by the square root of the sample size.  This gives us 1.6.  Here the idea is that you can tame the variation in the data by taking a large sample.  The larger the sample size the more you tame the data.  However, as we shall see, the Fallujah cluster makes it impossible to really tame the data with a sample of only 33 clusters.
  4. Unfortunately, the last step is mysterious unless you’ve put a fair amount of effort into studying statistics.  (This, alone, is a great reason to prefer bootstrapping which is very intuitive.)  Our 95% confidence interval for the mean number of violent deaths per cluster is, approximately, the average plus or minus 2 times 1.6, i.e., -1.0 to 5.4.  There’s the negative lower bound!
  5. We can translate from violent deaths per cluster to estimated violent deaths by multiplying by 33 and again by 3,000.  We end up with -100,000 to 530,000.  (I’ve been rounding at each step.  If, instead, I don’t round until the very end I get -90,000 to 530,000….this doesn’t really matter.)  Note that without Fallujah we get a confidence interval of 30,000 to 90,000 which is about what we got with bootstrapping.
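
Here is the whole traditional calculation compressed into a few lines of Python, again under my guessed allocation of deaths across clusters:

```python
# The traditional method, step by step, assuming my guessed allocation.
from statistics import mean, stdev

clusters = [0] * 18 + [1] * 7 + [2] * 7 + [52]

m = mean(clusters)               # step 1: about 2.2 deaths per cluster
s = stdev(clusters)              # step 2: sample standard deviation, about 9.0
se = s / len(clusters) ** 0.5    # step 3: adjust for sample size, about 1.6

lo, hi = m - 2 * se, m + 2 * se  # step 4: roughly -1.0 to 5.4

# step 5: scale up to estimated violent deaths
print(round(lo * 33 * 3000), "to", round(hi * 33 * 3000))  # about -90,000 to 530,000
```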

Have we learned anything here other than that I respond to reader questions?

I don’t think we’ve learned much, if anything, about violent deaths in Iraq.  We already knew that the Roberts et al. data, especially the Fallujah observation, is questionable and maybe the above calculation reinforces this view a little bit.

But, mostly, we learn something about the standard method for calculating confidence intervals; when the data are wild this method can give incredible answers.  Of course, a negative number of violent deaths is not credible.

There is an intuitive reason why the standard method fails with the Roberts et al. data; it forces a symmetric estimate onto highly asymmetric data.  Remember we get 2.2 plus or minus 3.2 average violent deaths per cluster.  The plus or minus means that the confidence interval is symmetric.  The Fallujah observation forces a wide confidence interval which has to go just as wide on the down side as it is on the up side.  In some sense the method is saying that if it’s possible to find a cluster with 52 violent deaths then it also must be possible to find a cluster with around -52 violent deaths.  But, of course, no area of Iraq experienced -52 violent deaths.  So you wind up with garbage.

Part of the story is also the small sample size.  With twice as many clusters, but the same sort of data, the lower limit would only go down to about 0.

It’s tempting to just say “garbage in, garbage out” and, up to a point, this is accurate.   But the bigger problem is that the usual method for calculating confidence intervals is not appropriate in this case.

Mismeasuring War Deaths in Iraq: The Partial Striptease

I now continue the discussion of the Roberts et al. paper that I started in my series on the Chilcot Report.  This is a tangent from Chilcot so I’ll hold this post and its follow-ups outside of that series.

Les Roberts never released a proper data set for his survey.  Worse, the authors are sketchy on important details in the paper, leaving us to guess on some key issues.  For example, in his report on Roberts et al. to the UK government Bill Kirkup wrote:

The authors provide a reasonable amount of detail on their figures in most of the paper.  They do, however, become noticeably reticent when it comes to the breakdown of deaths into violent and non-violent, and the breakdown of violent deaths into those attributed to the coalition and those due to terrorism or criminal acts, particularly taking into account the ‘Fallujah problem’…

Roberts et al. claim that “air strikes from coalition forces accounted for most violent deaths” but Kirkup points out that without the dubious Fallujah cluster it’s possible that the coalition accounted for less than half of the survey’s violent deaths.

Kirkup’s suspicion turns out to be correct.

However, you need to look at this email from Les Roberts to a blog to settle the issue.  It turns out that coalition air strikes account for only 6 of the 21 violent deaths outside Fallujah, with 4 further deaths attributed to the coalition using other weapons.

My primary point here is about data openness rather than about coalition air strikes.  Roberts et al. should just show their data rather than dribbling it out in dribs and drabs into the blogosphere.

Roberts gives another little top up here.  (I give that link only to document my source.  I recommend against ploughing through this Gish Gallop by Les Roberts.)  Buried deep inside a lot of nonsense Roberts writes:

The Lancet estimate [i.e. Roberts et al.], for example, assumes that no violent deaths have occurred in Anbar Province; that it is fair to subtract out the pre-invasion violence rate; and that the 5 deaths in our data induced by a US military vehicles are not “violent deaths.”

Hmmm…..5 deaths caused by US military vehicles.

Recall that each death in the sample yields around 3,000 estimated deaths. This translates into 15,000 estimated deaths caused by US military vehicles – nearly 30 per day for a year and a half.  There have, unfortunately, been a number of Iraqis killed by US military vehicles.  Iraq Body Count (IBC) has 110 such deaths in its database during the period covered by the Roberts et al. survey.  I’m sure that IBC hasn’t captured all deaths in vehicle accidents but, nevertheless, the 15,000 figure looks pretty crazy.

Again I come back to my main point – please just give us a proper dataset rather than a partial striptease.  Meanwhile, I can’t help thinking Roberts et al. are holding back on the data because it contains more embarrassments that we don’t yet know about.

PS – After providing the above quote I feel obligated to debunk it further.

  1. Roberts writes that his estimate omits deaths in Anbar Province (which contains Fallujah).  But many claims in his paper are only true if you include Anbar (Fallujah).  Indeed, this very blog post opened with one such claim.  We see that Fallujah is in for the purpose of saying that most violent deaths were caused by coalition airstrikes but Fallujah is out when it’s time to talk about how conservative the estimate is because it omits Fallujah.  Call this the “Fallujah Shell Game”.  (See the comments of Josh Dougherty here.)
  2. Roberts suggests that he bent over backwards to be fair by omitting pre-invasion violent deaths from his estimate.  But, first of all, there was only one such death so it hardly makes a difference whether this one is in or out.  Second, it’s hard to understand what the case would be for blaming a pre-invasion death on the invasion.


Comments Down Below!

Hello everybody.

This is just a quick note to say that there were some interesting comments that appeared on my last two posts on Chilcot (here and here).  I’ve just replied to both.

While I’m at it I have a question for Bill Kirkup (who made one of the comments).  Can he give us a little briefing on how death certificates have been handled in post-invasion Iraq?

Actually, I have a number of specific questions on this subject. I’d be happy to switch to email to clear these up and then post a summary if that works best (m.spagat@rhul.ac.uk).

Chilcot on Civilian Casualties: Part 5

This post continues my coverage of the three reports (one, two, three) written by UK government experts on the Roberts et al. 2004 article claiming that the 2003 invasion of Iraq caused a very large number of deaths.  According to the abstract of the paper:

We estimate that 98,000 more deaths than expected (8,000-194,000) happened after the invasion outside of Falluja and far more if the outlier Falluja cluster is included…Violent deaths were widespread, reported in 15 of 33 clusters, and were mainly attributed to coalition forces.  Most individuals reportedly killed by coalition forces were women and children.

Here’s some useful background.

Iraq Body Count (IBC) had already documented the violent deaths of nearly 20,000 civilians by the time the Roberts et al. paper was released.  So it was already clear that the war had caused a very large number of civilian deaths. The civilians chapter of the Chilcot Report does not suggest to me that this fact triggered deep concern within the UK government.  But the Roberts et al. paper produced a shock which I attribute mainly to its headline-grabbing figure of 100,000.

The 100,000 estimate is not directly comparable to IBC’s 20,000 count because 100,000 refers to excess deaths, i.e., violent plus non-violent deaths of civilians plus combatants beyond a baseline level, whereas IBC records only violent deaths of civilians.  There is also a phenomenally wide confidence interval of 8,000 to 194,000 surrounding the 100,000  estimate which severely complicates any comparison with another source.

Despite all these ambiguities media coverage tended to present the Roberts et al. results as reliably demonstrating, in the prestigious scientific journal The Lancet, that the war had caused 100,000 violent deaths of civilians.  This Guardian article is typical of much misleading media coverage.  There is no mention of a confidence interval, and the excess-death estimate is portrayed as a violent-death estimate which is then presented as civilians-only when, in fact, the estimate mixes combatants with civilians.  Such media attention further upped the ante on the 100,000 figure, making it still harder to ignore.

Roberts et al. conducted a “cluster survey”.   Specifically, they selected 33 locational points in Iraq and interviewed a bunch of close neighbours at each place.  Households located so close to one another are likely to have similar violence experiences.  So it’s probably more useful to view the sample as 33 data points, one for each cluster, rather than as roughly 1,000 data points, one for each household.

This is a tiny sample.

To get a handle on the sample-size problem consider some pertinent simulations I ran a few years back on some Iraq violence data. These show just how easy it is to overestimate violent deaths by factors of 2, 3 or more when you only have around 30 clusters. By the same token, surveys of this size can easily fail to detect a single violent death even when these surveys are conducted within very violent environments.


The problem with using a mere 33 clusters to measure war violence is intuitive.  Interviewers can easily stumble onto a few unusually violent hot spots and overestimate the average level of violence by a wide margin.   On the other hand, researchers can just as easily draw a qualitatively different kind of sample consisting of 33 peaceful islands.
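
A toy simulation makes this concrete.  To be clear, this is not one of the simulations linked above; it just assumes a hypothetical country where violence is concentrated in a few hot spots and then repeatedly estimates the national total from samples of 33 clusters:

```python
# Toy illustration: patchy violence plus small cluster samples
# equals wildly unstable estimates.  All numbers are hypothetical.
import random

random.seed(1)

# A country of 1,000 locations: 3% are hot spots with 50 violent
# deaths each; the rest have 0 or 1.
country = [50 if random.random() < 0.03 else random.choice([0, 0, 1])
           for _ in range(1000)]
true_total = sum(country)

estimates = []
for _ in range(10_000):
    sample = random.sample(country, 33)          # draw 33 clusters
    estimates.append(sum(sample) / 33 * 1000)    # scale up to the country

estimates.sort()
print("true total:", true_total)
print("middle 95% of estimates:", estimates[250], "to", estimates[9_749])
# Samples that hit an extra hot spot or two overshoot by a factor of
# 2 or more; samples of 33 peaceful islands barely register any violence.
```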

The Roberts et al. survey seems to have landed on a super turbo-charged version of this small-sample issue.  They found a total of 21 violent deaths in 32 of their clusters, i.e., less than one death per cluster.  Yet they reported no fewer than 52 violent deaths in their 33rd cluster, in Fallujah.

Such a sample yields estimates that are all over the place depending on your assumptions.  One standard calculation method (bootstrapping) leads to a central estimate of 210,000 violent deaths with a 95% confidence interval of around 40,000 to 600,000.  However, if you remove the Fallujah cluster the central estimate plummets to 60,000 with a 95% confidence interval of 40,000 to 80,000.  (I’ll give details on these calculations in a follow-up post.)

In short, there is no reliable way to create a stable estimate out of the Roberts et al. data. We would like to have an estimate that is robust to whether or not we include the extreme Fallujah outlier.  Alas, the usual methods are highly sensitive to whether the wild Fallujah observation is in or out.


Given this background I’m at a loss to explain how Sir Roy Anderson can describe the Roberts et al. methodology as “robust”.  In fact, he invokes the r-word in two successive sentences.  Yet extreme sensitivity to outliers is one of the main characteristics that earns estimates the label “non-robust”.

Sir Roy notices that the sample is small but goes nowhere from this starting point.  He seems unaware that war violence tends to cluster heavily at some locations.  Indeed, he did not even read the Roberts et al. paper carefully enough to discern that their sample displays this pattern in spades.

Sir Roy swings and misses in another, more subtle, way.  He points, rightly, to a key measurement problem with the Roberts et al. methodology – how do we know that households reporting deaths really did suffer these reported deaths?  He notes that Roberts et al. try to defuse this issue by checking death certificates.  However, they only check for death certificates in a small non-random sample of their reported deaths and 20% of these checks were failures.  So there is plenty of room to question the veracity of many of the reported deaths in the survey.

This is a good catch for Sir Roy but he doesn’t then ascend to the next level.  Suppose that out of the households that did not experience a violent death a mere 1% are recorded as having one anyway.  Since there must be more than 900 such households, this error rate would generate around 9 falsely reported deaths.  These false reports would then translate into about 27,000 estimated violent deaths.  Thus, a small rate of “false positives” can inflate the number of estimated deaths quite substantially, creating another non-robustness issue for the Roberts et al. methodology.

Someone might respond that we don’t have to worry about “false positives” because there will also be “false negatives”, i.e., households that experienced real deaths that somehow don’t get recorded.  However, this view is wrong because the situation is fundamentally asymmetric.  If roughly 50 households experienced violent deaths and 1% of these failed to report these deaths then we’d expect to miss only 0 or 1 real deaths this way.  So a 1% false negative rate will deflate an estimate by much less than a 1% false positive rate will inflate the same estimate.
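
The arithmetic behind this asymmetry fits in a few lines.  The household counts below are my round-number assumptions, consistent with the figures above, not numbers taken from the paper:

```python
# False positives vs false negatives at the same 1% error rate.
# Household counts are rough assumptions, not figures from the paper.
no_death_households = 950    # households with no violent death
death_households = 50        # households with a violent death
error_rate = 0.01
scale = 3000                 # estimated deaths per in-sample death

inflation = no_death_households * error_rate * scale   # ~28,500 phantom deaths
deflation = death_households * error_rate * scale      # ~1,500 missed deaths

print(inflation, deflation)
# The same error rate inflates the estimate roughly 20 times more than
# it deflates it, simply because almost all households have no death to report.
```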

(Alert readers will have noticed that I just described the base rate fallacy.  See the last slides of this presentation for more details.)

To summarize, Sir Roy wasted the small amount of effort he invested in his report.

Creon Butler at least had a serious go at evaluating the Roberts et al. paper, managing to notice some important new points that eluded Sir Roy.  I list the better ones here.  First on the positive side of the ledger is that Butler at least mentions the crucial Fallujah cluster.  Second, he correctly questions whether the sample is genuinely random.  Butler notes, in particular, that:

  1. The Fallujah field team did not follow the survey’s official randomization methodology when they selected that cluster.
  2. Six of Iraq’s 18 governorates were excluded from the sample, although Butler thinks this was OK since they were randomly excluded.

Third, Butler draws attention to the preposterously wide confidence interval in the estimate for excess deaths – 8,000 to 194,000.  Fourth, Butler realizes, rightly, that the Roberts et al. figures for violent deaths suggest that hospitals should have received vastly more injured people than the figures of the Iraqi Ministry of Health (MoH) suggest they actually received.

Despite these strengths the Butler report is still weak.  As noted in post number 4 all three expert reports, including Butler’s, missed some central problems with the Roberts et al. paper.  Beyond that, Butler is strangely tolerant of the weaknesses he finds. Here are a few examples:

  1. He knows that the Fallujah field team violated the sampling protocols and then recorded a tremendous outlier observation that was then excluded from the main estimate published in the paper.  But it never seems to occur to him that such a serious data quality issue in one cluster could signal a deeper data quality problem affecting other clusters.
  2. He notices, but immediately shies away from, a weird aspect of the sampling scheme.  Twelve governorates are divided into two pairs with one governorate from each pair selected randomly for sampling and the other one excluded from the sample.  At a stretch we can view this as an acceptable way to claim national coverage for the survey while actually excluding 6 governorates from the sample.  But to do this legitimately you need to build this source of random variation into your confidence interval.  Roberts et al. don’t do this.  So even the gargantuan confidence interval of 8,000 to 194,000 is actually too narrow.
  3. Butler does a bad job of quantifying his point about injuries.  For example, he should have mentioned that the MoH recorded 15,517 injuries during the last 6 months covered by the survey.  Roberts et al. have something like 56 violent deaths during this period which translates into around 170,000 estimated violent deaths.  Assuming a rule of thumb of 3 injuries per death one could predict 500,000 injuries, a number which exceeds the MoH figure by a factor of more than 30 (the sketch after this list checks the arithmetic).  Note, moreover, that people with serious injuries should almost always put in an appearance at a hospital.  So there is really something to explain here.  Yet Butler pretty much lets this discrepancy pass.
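
Here is the injuries arithmetic from point 3 spelled out (the figures are the ones cited above; the 3-injuries-per-death ratio is, again, just a rule of thumb):

```python
# Butler's injuries point, quantified with the figures cited above.
moh_injuries = 15_517      # MoH injury count, last 6 months of the survey period
sample_deaths = 56         # approximate in-sample violent deaths, same period
scale = 3000               # estimated deaths per in-sample death

est_deaths = sample_deaths * scale        # about 170,000 estimated violent deaths
predicted_injuries = est_deaths * 3       # rule of thumb: 3 injuries per death

print(predicted_injuries / moh_injuries)  # about 32 -- a factor of more than 30
```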

To summarize, in an era of grade inflation Creon Butler gets a gentleman’s pass.

Bill Kirkup of the Department of Health wrote the most perceptive UK government analysis although his paper is marred by one big error.  Here are some of his strong points:

  1.  He spots the absurdity of the confidence interval and grasps the magnitude of the problem – “A confidence interval this large makes the meaning of the estimate difficult to interpret. This point has been largely ignored in media reporting.”
  2. He is aware of what he calls the “patchy distribution of violence” in war and he realizes that this feature means that the survey’s 33 sampling points are precious few.  He connects the reported results for Fallujah with this patchiness issue.  (You might say this is obvious but the other two experts missed it.)
  3. He identifies an annoying tendency for Roberts et al. to make detailed claims about types of deaths, e.g., the percent of all deaths accounted for by coalition air strikes, without providing numerical tables sufficiently detailed to flesh out these claims.  It appears that some such claims rely on data from the dubious Fallujah outlier, which is something we would like to know whenever it is the case.  But it is often hard to be sure of such dependence on Fallujah without more information.  This is information that the authors could easily supply but chose not to.  Such “reticence”, as Kirkup puts it, does not inspire confidence.
  4. He realizes that the arrival of a survey team into a neighbourhood will draw the attention of local dangerous individuals.  These violent thugs will pressure people to answer the survey questions in ways that further their agendas.  Such local dynamics decrease the reliability of the data and place the survey’s interviewees at risk.  (I blogged recently on this issue in a similar context.)

Kirkup’s big error is that, somehow, he estimates that only 23,000 of the 98,000 estimated excess deaths (outside Fallujah) were violent deaths.  But a very easy back-of-the-envelope calculation shows that such a low number can’t possibly be right.  (21 violent deaths in the sample, around 8,000 people in the sample in a country of around 24 million – 24,000,000 x 21/8,000 gives around 60,000 violent deaths).  This mistake messes up Kirkup’s report substantially.

Nevertheless, I still think that Kirkup delivered the best report because he alone grasps the fundamental low quality of the Roberts et al. paper.

Where does this leave us?

First, I’d like to soften my criticism of the government evaluators a little bit.  In this post and in the last one I’ve tried to impose a ground rule of evaluating the evaluators based only on information that was available to them when they did their reports.  (I’ll drop this straitjacket in the next post in this series.)  But it is hard to maintain this discipline and I’m sure that I’ve allowed myself to benefit in certain ways from some hindsight knowledge.  In addition, I’m sure these guys were under pressure to produce lots of stuff really fast so they couldn’t make every project into their best work.

That said, the casualties of war are a vital issue.  So the UK government should have allowed its analysts the space, and perhaps the outside consultants, they needed to give their work on civilian casualties its due.  (Of course, this applies even more strongly to the US government, which has avoided a Chilcot-type enquiry in the first place.)

Finally, I’d like to give a sense of what I think a good report would have looked like.  Here’s a provisional list of key points:

  1.  We already knew that thousands of people were dying because of the Iraq war.
  2.  We should track these deaths closely and, more importantly, use the tracking data to figure out ways to save lives.  (I can’t find anything in the Chilcot Report to suggest that anyone in the government was thinking about this.)
  3. The Roberts et al. paper doesn’t change this picture qualitatively but it does suggest that people could be dying at far greater rates in the war than anyone has previously suggested.
  4. However, the Roberts et al. methodology is extremely weak and unreliable (see the technical appendix to this report) so we shouldn’t count on it except possibly on points that can be corroborated from other sources.
  5. Nevertheless, we should request the detailed data from this project and also from  Iraq Body Count and see whether we can learn something helpful from them.
  6. We should issue a public statement saying that we are not convinced by the Roberts et al. study at this moment but we have requested the data and are looking into it.  Meanwhile, we are very concerned about civilian casualties in Iraq and are working hard to reduce them.
  7. Point 6 should be reality, not just a public relations position.

Chilcot on Civilian Casualties: Part 4

In October of 2004 The Lancet published a paper by Roberts et al. that estimated the number of excess deaths for the first year and a half of the Iraq war using data from a new survey they had just conducted.  (Readers wanting a refresher course on the concept of excess deaths  can go here.)

One of the best parts of the civilian casualties chapter of the Chilcot report is the front-row seat it provides for the (rather panicked) discussion that Roberts et al. provoked within the UK government.  Here the real gold takes the form of links to three separate reviews of the paper provided by government experts.  The experts are Sir Roy Anderson of the first report, Creon Butler of the second report and Bill Kirkup, CBE of the third report.

In the next several posts I will evaluate the evaluators.  I start by incorporating, largely, only information that was available when they made their reports.  But I will, increasingly, take advantage of hindsight.

For orientation I quote the “Interpretation” part of the Summary of Roberts et al.:

Making conservative assumptions, we think that about 100,000 excess deaths, or more have happened since the 2003 invasion of Iraq.  Violence accounted for most of the excess deaths and airstrikes from coalition forces accounted for most violent deaths.  We have shown that collection of public-health information is possible even during periods of extreme violence.  Our results need further verification and should lead to changes to reduce non-combatant deaths from air strikes.

The UK government reaction focused exclusively, so far as I can tell, on the question of how to respond to the PR disaster ensuing from:

  1.  The headline figure of 100,000 deaths which was much bigger than any that had been seriously put forward before.
  2. The claim that the Coalition was directly responsible for most of the violence.  (Of course, one could argue that the Coalition was ultimately responsible for all violence since it initiated the war in the first place but nobody in the government took such a position.)

Today I finish with two important points that none of the three experts noticed.

First, the field work for the survey could not have been conducted as claimed in the paper.  The authors write that two teams conducted all the interviews between September 8 and September 20, i.e., in just 13 days.  There were 33 clusters, each containing 30 households.  This means that each team had to average nearly 40 interviews per day, often spread across more than a single sampling point (cluster).  These interviews had to be on top of travelling all over the country, on poor roads with security checkpoints, to reach the 33 clusters in the first place.
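
The arithmetic behind that claim is simple enough to spell out, using the paper’s own figures:

```python
# Field-work logistics implied by the paper's own figures.
clusters = 33
households_per_cluster = 30
teams = 2
days = 13

interviews = clusters * households_per_cluster   # 990 interviews in total
per_team_per_day = interviews / (teams * days)   # about 38
print(per_team_per_day)  # each team must average nearly 40 interviews a day
```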

To get a feel for the logistical challenge that faced the field teams consider this picture of the sample from a later, and much larger, survey – the Iraq Living Conditions Survey:

[Image: map of the Iraq Living Conditions Survey sample]

I know the resolution isn’t spectacular on the picture but I still hope that you can make out the blue dots.  There are around 2,200 of them, one for each cluster of interviews in this survey.

Now imagine choosing 33 of these dots at random and trying to reach all of them with two teams in 13 days.  Further imagine conducting 30 highly sensitive interviews (about deaths of family members) each time you make it to one of the blue points.  If a grieving parent asks you to stay for tea, do you tell them to just answer your questions because you need to move on instantly?

The best-case scenario is that the field teams cut corners with the cluster selection to render the logistics possible and then raced through the interviews at break-neck speed (no more than 10 minutes per interview).  In other words, the hope is that the teams succeeded in taking bad measurements of a non-random sample (which the authors then treat as random).  But, as Andrew Gelman reminds us, accurate measurement is hugely important.

The worst-case scenario is that field teams simplified their logistical challenges by making up their data.  Recall that data fabrication is widespread in surveys done in poor countries.  Note, also, that the results of the study were meant to be released before the November 2 election in the US and the field work was completed only on September 20; so slowing down the field work to improve quality was not an option.

Second, no expert picked up on the enormous gap between the information on death certificates reported in the Roberts et al. paper and the mortality information the Iraqi Ministry of Health (MoH) was releasing at the time.  A crude back-of-the-envelope calculation (reproduced in the code sketch after this list) reveals the immense size of this inconsistency:

  1.  The population of Iraq was, very roughly, 24 million and the number of people in the sample is reported as 7,868.  So each in-sample death translates into about 3,000 estimated deaths (24,000,000/7,868).  Thus, the 73 in-sample violent deaths become an estimate of well over 200,000 violent deaths.
  2. Iraq’s MoH reported 3,858 violent deaths between April 5, 2004 and October 5, 2004, in other words a bit fewer than 4,000 deaths backed by MoH death certificates.  The MoH has no statistics prior to April 5, 2004 because their systems were in disarray before then (p. 191 of the Chilcot chapter)
  3. Points 1 and 2 together imply that death certificates for violent deaths should have been present only about 2% of the time (4,000/200,000).
  4. Yet Roberts et al. report that their field teams tried to confirm 78 of their recorded deaths by asking respondents to produce death certificates and that 63 of these attempts (81%) were successful.
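
For concreteness, here is the whole back-of-the-envelope calculation in one place:

```python
# The death certificate gap, using the figures in the list above.
population = 24_000_000
sample_size = 7_868
scale = population / sample_size     # about 3,000 estimated deaths per in-sample death

est_violent_deaths = 73 * scale      # well over 200,000

moh_certified = 3_858                # MoH violent deaths, April-October 2004
implied_coverage = moh_certified / est_violent_deaths
print(implied_coverage)              # about 0.02, i.e., 2%

survey_confirmation_rate = 63 / 78   # 0.81, i.e., 81%
print(survey_confirmation_rate)
```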

The paper makes clear that the selection of the 78 cases wasn’t random and it could be that death certificate coverage is better for non-violent deaths than it is for violent deaths.

Still……

There is a big, yawning, large, humongous, massive gap between 2% and 81% and something has to give.


Here are the only resolution possibilities I can think of:

  1. The MoH issued vastly more (i.e., 50 times more) death certificates for violent deaths than it has admitted to issuing.  This seems far-fetched in the extreme.
  2. The field teams for Roberts et al. fabricated their death certificate confirmation figures.  This seems likely, especially since the paper reports:

Interviewers were initially reluctant to ask to see death certificates because this might have implied they did not believe the respondents, perhaps triggering violence.  Thus, a compromise was reached for which interviewers would attempt to confirm at least two deaths per cluster.

Compromises that pressure interviewers to risk their lives are not promising and can easily lead to data fabrication.

  3. The survey picked up too many violent deaths.  I think this is true and we will return to this possibility in a follow-up post but I don’t think that this can be the main explanation for the death certificate gap.

OK, that’s enough for today.

In the next post I’ll discuss more what the expert reports actually said rather than what they didn’t say.


Dispute Resolution by Mutual Maiming

I’m puzzled by the following sequence of events.  (This story has a very clear summary.)

  1. The UN issues a report entitled “Children and Armed Conflict”.  The report highlights quite a few groups for committing grave abuses against children.  The “Saudi Arabia-led Coalition”  in the war in Yemen is on this UN blacklist.  The report fingers the Coalition for killing and maiming children and for attacking hospitals and schools.  (So far I’m not puzzled.)
  2. UN Secretary General Ban Ki-moon then announces that he is caving in to pressure and will remove Saudi Arabia from the UN blacklist:

“The report describes horrors no child should have to face,” Ban said at a press conference. “At the same time, I also had to consider the very real prospect that millions of other children would suffer grievously if, as was suggested to me, countries would defund many U.N. programs. Children already at risk in Palestine, South Sudan, Syria, Yemen, and so many other places would fall further into despair.”  (The quote is from the same summary story mentioned above.)

Ban stops just short of directly naming his blackmailer but it’s obviously Saudi Arabia.

Of course, this story is sad and pathetic.  It would be nice to live in a world in which the UN can at least speak the truth and exert moral suasion upon belligerent parties to clean up their acts even if the UN cannot force good behaviour.  Unfortunately, we do not really live in this world.

But here’s the puzzle.  Why do the Saudis think they have accomplished something with their bullying censorship?

Saudi Arabia was named in an obscure report that is read by only a handful of specialists.  Suddenly the report is famous.   What’s the take-home message for people outside the Saudi inner circle?  Is it that the UN screwed up by naming the Saudi-led Coalition but that this mistake has now been corrected and the Saudis are finally getting the respect they deserve?  I don’t think so.

It’s as if a rape victim names her rapist but then recants, saying that he threatened to kill her unless she did so – the rapist then breathes a sigh of relief now that his good name has been cleared.

The only way I can make sense of the Saudi behaviour is to think of it as just a single move in a long game.  This time Saudi Arabia elevates a black-hole report to a major news item spiced up by Saudi blackmail.

But next time the UN will think twice before embarrassing the Saudis.

Maybe it makes sense that way.  But if so then we should always assume that the Saudis are behaving much worse than the self-censoring UN says they are.

PS (Two hours after posting) – Looking at this again I realize that my title is a little weird.  This is because I started with the title but then the ideas drifted while I wrote and by the end the connection between the post and the title became obscure.

For the record, the idea is that the dispute resolution harmed both parties.   Saudi Arabia comes off as a bully and blackmailer in addition to the original charge of abusing children.  The UN demonstrates that it can’t be trusted to speak the truth.  So, at least in the short run, both sides are damaged by the dispute.