Secret Data Sunday – AAPOR Investigates the Trump-Clinton Polling Miss Using Data You Can’t See

The long-awaited report from the American Association for Public Opinion Research (AAPOR) on the performance of polling in the Trump-Clinton race is out.  You will see that this material is less of a stretch for the blog than it might seem at first glance, and I plan a second post on it.

Today I just want to highlight the hidden data issue which rears its head very early in the report:

The committee is composed of scholars of public opinion and survey methodology as well as election polling practitioners. While a number of members were active pollsters during the election, a good share of the academic members were not. This mix was designed to staff the committee both with professionals having access to large volumes of poll data they knew inside and out, and with independent scholars bringing perspectives free from apparent conflicts of interest. The report addresses the following questions:

So on the one hand we have pollsters “having access to large volumes of poll data” and on the other hand we have “independent scholars” who….errr….don’t normally have access to large volumes of polling data because the pollsters normally hide it from them.   (I’m not sure what the apparent conflict of interest of the pollsters is but I guess it’s that they might be inclined to cover up errors they may have made in their election forecasts.)

You might well ask why all these datasets aren’t in the public domain.


Sadly, there is no good answer to that question.

But the reason all these important data remain hidden is pretty obvious.  Pollsters don’t want independent analysts to embarrass them by finding flaws in their data or their analysis.

This is a bad reason.

There is a strong public interest in having the data available.  The data would help all of us, not just the AAPOR committee, understand what went wrong with polling in the Trump-Clinton race.  The data would also help us learn why Trump won, which is clearly an important question.


But we don’t have the data.

I understand that there are valid commercial reasons for holding polling data privately while you sell some stories about it.  But a month should be more than sufficient for this purpose.

It is unacceptable to say that sharing requires resources that you don’t have because sharing data just doesn’t require a lot of resources.  Yes, I know that I’ve whinged a bit on the blog about sharing all that State Department data and I’m doing it in tranches.  Still, this effort is costing me only about 15-30 minutes per dataset.  It’s really not a big deal.

I suppose somebody might say that these datasets are collected privately and so it’s OK to permanently keep them private.  But election polls drive public discussions and probably affect election outcomes.  There is a really strong public interest in disclosure.

There is further material in the report on data openness:

Since provision of microdata is not required by the AAPOR Transparency Initiative, we are particularly grateful to ABC News, CNN, Michigan State University, Monmouth University, and University of Southern California/Los Angeles Times for joining in the scientific spirit of this investigation and providing microdata. We also thank the employers of committee members (Pew Research Center, Marquette University, SurveyMonkey, The Washington Post, and YouGov) for demonstrating this same commitment.

I’ve written before about how AAPOR demands transparency on everything except the main thing you would think of when it comes to survey transparency – showing your data.

I’ll return to this AAPOR problem in a future Secret Data Sunday.  But for now I just want to say that the Committee’s appeal to a “scientific spirit” falls flat.  Nobody outside the committee can audit the AAPOR report, and it will be unnecessarily difficult to further develop lines of inquiry initiated by the report for one simple reason: nobody outside the committee has access to all of the data the committee analyzed.  This is not science.

OK, that’s all I want to say today.  I’ll return to the main points of the report in a future post.


I’ve Done Something or Other and Say that 470,000 People Were Killed in Syria – Would You Like to Interview Me?

Let’s go back to February of 2016 when the New York Times ran this headline:

Death Toll from War in Syria now 470,000, Group Finds

The headline is more conservative than a caption in the same article, which reads:

At least [my emphasis] 470,000 Syrians have died as a result of the war, according to the Syrian Center for Policy Research.

This switch between the headline and the caption is consistent with a common pattern of converting an estimate that might be either too high or too low into a bare minimum.

Other respected outlets such as PBS and Time jumped onto the 470,000 bandwagon, with the Guardian claiming primacy in this story via an early exclusive that quotes the report’s author:

“We use very rigorous research methods and we are sure of this figure,” Rabie Nasser, the report’s author, told the Guardian. “Indirect deaths will be greater in the future, though most NGOs [non-governmental organisations] and the UN ignore them.

“We think that the UN documentation and informal estimation underestimated the casualties due to lack of access to information during the crisis,” he said.

Oddly, none of the news articles say anything about what this rigorous methodology is.  The Guardian refers to “counting”, which I would normally interpret as meaning that the Syrian Center for Policy Research (SCPR) has a list of 470,000 people killed, but it is not at all clear that they really have such a list.

This report was the source for all the media attention.  The figure of 470,000 appears just once in the report, in a throwaway line in the conclusion:

 The armed conflict badly harmed human development in Syria where the fatalities in 2015 reached about 470,000 deaths, the life expectancy at birth estimated at 55.4 years, and the school age non-attendance rate projected at 45.2 per cent; consequently, the HDI of Syria is estimated to have lost 29.8 per cent of its HDI value in 2015 compared to 2010.

The only bit of the report that so much as hints at where the 470,000 number came from is this:

The report used results and methodology from a forthcoming SCPR report on the human development in Syria that is based on a comprehensive survey conducted in the mid of 2014 and covered all regions in Syria. The survey divided Syria into 698 studied regions and questionnaire three key informants, with specific criteria that guarantee inclusiveness and transparency, from each region. Moreover, the survey applied a strict system of monitoring and reviewing to ensure the correctness of responses. About 300 researchers, experts, and programmers participated in this survey.

This is nothing.

The hunger for scraps of information on the number of people killed in Syria is, apparently, so great that it is feasible to launch a bunch of news headlines just by saying you’ve looked into this question and come up with a number that is larger than what was previously thought.  (I strongly suspect that having a bigger number which you use to dump on any smaller numbers is a key part of getting noticed.)

That said, the above quote does promise a new report with more details and eventually a new report was released – but the details in the new report on methodology are still woefully inadequate.  They divide Syria up, interview three key informants in each area and then, somehow, calculate the number of dead people based on these interviews.  I have no idea what this calculation looks like.  There is a bit of description on how SCPR picked their key informants but, beyond that, the new report provides virtually no information relevant for evaluating the 470,000 figure.  The SCPR doesn’t even provide a copy of their questionnaire and I can hardly even guess at what it looks like.

One thing is clear though – they did not use the standard sample survey method for estimating the number of violent deaths.  Under this approach you pick a bunch of households at random, do interviews on the number of people who have lived and died in each one and extrapolate a national death rate based on death rates observed in your sample households.  If the SCPR had done something like this then at least I would’ve had a sense of where the 470,000 number came from, although I’d still want to know details.
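To make the contrast concrete, here is a minimal Python sketch of that standard extrapolation.  Every number in it is made up for illustration (the household counts, the recall period, and the assumed national population of roughly 22 million); it is emphatically not SCPR’s method, which remains undocumented.

```python
# Minimal sketch of the standard household-survey extrapolation described above.
# All numbers are made up for illustration; this is NOT SCPR's method.

sampled_households = [
    # (people in household, violent deaths reported over the recall period)
    (6, 0), (5, 0), (7, 1), (4, 0), (8, 0), (6, 0), (5, 0), (9, 0),
]

people_sampled = sum(people for people, _ in sampled_households)
deaths_sampled = sum(deaths for _, deaths in sampled_households)

death_rate = deaths_sampled / people_sampled      # violent deaths per person over the recall period
national_population = 22_000_000                  # assumed rough pre-war population of Syria
estimated_deaths = death_rate * national_population

print(f"sample death rate: {death_rate:.4f}")
print(f"extrapolated violent deaths: {estimated_deaths:,.0f}")
```

The point is simply that, with this approach, anyone can see exactly how a sample turns into a national figure and can argue about each step.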

I emailed Rabie Nasser asking for details but didn’t hear back.  Who knows.  Maybe my message went into his spam folder.  There are other people associated with this work and I’ll try to contact them and will report back if I hear something interesting.

I want to be clear.  I’m not saying that this work is useless for estimating the number of people killed in the Syrian war.  In fact, I suspect that the SCPR generated some really useful information on this question and on other issues as well.  But until they explain what they actually did I would just disregard the work, particularly the 470,000 figure.  I’m not saying that I think this number is too high or that it is too low.  I just think that it is floating in thin air without any methodological moorings to enable us to understand it.

Journalists should lay off press releases taking the form of “I did some unspecified research and here are my conclusions.”


Mismeasuring Deaths in Iraq: Addendum on Confidence Interval Calculations


In my last post I used a combination of bootstrapping and educated guesswork to find  confidence intervals for violent deaths in Iraq based on the data from the Roberts et al. survey.  (The need for guesswork arose because the authors have not been forthcoming with their data.)

Right after this went up a reader contacted me and asked whether the bottom of one of these confidence intervals can go below 0.

The short answer is “no” with the bootstrap method.  This technique can only take us down to 0 and no further.

Explanation

With bootstrapping we randomly resample, with replacement, from the list of 33 clusters.  Of course, none of these clusters experienced a negative number of violent deaths, so 0 is the smallest possible count we can get for violent deaths in any simulation sample.  (In truth, the possibility of pulling 33 0’s is more theoretical than real.  This didn’t happen in any of my 1,000 draws of 33.)
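For anyone who wants to see the mechanics, here is a minimal sketch of the percentile bootstrap in Python.  It uses the same assumed allocation of deaths across clusters as the walk-through below, and the same scaling (33 clusters, roughly 3,000 estimated deaths per sampled death).  The output illustrates the point: no resample can produce a negative estimate.

```python
import random

random.seed(1)

# Assumed allocation of violent deaths across the 33 clusters (same guess
# as in the walk-through below): 18 zeros, 7 ones, 7 twos and one 52 (Fallujah).
clusters = [0] * 18 + [1] * 7 + [2] * 7 + [52]

def bootstrap_estimates(data, draws=1000):
    """Resample 33 clusters with replacement and convert each resampled
    mean into an estimated national total of violent deaths."""
    estimates = []
    for _ in range(draws):
        resample = [random.choice(data) for _ in data]
        mean_per_cluster = sum(resample) / len(resample)
        estimates.append(mean_per_cluster * 33 * 3000)  # 33 clusters, ~3,000 deaths per sampled death
    return sorted(estimates)

est = bootstrap_estimates(clusters)
print("minimum bootstrap estimate:", est[0])  # can reach 0, never below it
print("95% interval:", est[int(0.025 * len(est))], "-", est[int(0.975 * len(est))])
```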

Nevertheless, it turns out that if we employ the most common methods for calculating confidence intervals (not bootstrapping) then the bottom of the interval does dip below 0 when the dubious Fallujah cluster is included.

Here’s a step by step walk-through of the traditional method applied to the Roberts et al. data.  (I will assume that violent deaths are allocated across the 33 clusters as 18 0’s, 7 1’s, 7 2’s and 1 52.)

  1. Compute the mean number of violent deaths per cluster.  This is 2.2.  An indication that something is screwy here is the fact that the mean is bigger than the number of violent deaths in 32 out of the 33 clusters.  At the same time the mean is way below the number of violent deaths in the Fallujah cluster (52).  Note that without the Fallujah cluster the mean becomes 0.7, i.e., eliminating Fallujah cuts the mean by more than a factor of 3.
  2. Compute the sample standard deviation which is a measure of how strongly the number of violent deaths varies by cluster.  This is 9.0.  Note that if we eliminate the Fallujah cluster then the sample standard deviation plummets by more than a factor of 10, all the way down to 0.8.  This is just a quantitative expression of the obvious fact that the data are highly variable with Fallujah in there.  Note further that the big outlier observation affects the standard deviation more than it affects the mean.
  3. Adjust for sample size.  We do this by dividing the sample standard deviation by the square root of the sample size.  This gives us 1.6.  Here the idea is that you can tame the variation in the data by taking a large sample.  The larger the sample size the more you tame the data.  However, as we shall see, the Fallujah cluster makes it impossible to really tame the data with a sample of only 33 clusters.
  4. Unfortunately, the last step is mysterious unless you’ve put a fair amount of effort into studying statistics.  (This, alone, is a great reason to prefer bootstrapping which is very intuitive.)  Our 95% confidence interval for the mean number of violent deaths per cluster is, approximately, the average plus or minus 2 times 1.6, i.e., -1.0 to 5.4.  There’s the negative lower bound!
  5. We can translate from violent deaths per cluster to estimated violent deaths by multiplying by 33 and again by 3,000.  We end up with -100,000 to 530,000.  (I’ve been rounding at each step.  If, instead, I don’t round until the very end I get -90,000 to 530,000….this doesn’t really matter.)  Note that without Fallujah we get a confidence interval of 30,000 to 90,000, which is about what we got with bootstrapping.  (The short code sketch after this list runs through these steps.)
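Here is the same five-step calculation as a short Python sketch, under the allocation of deaths across clusters assumed above.  It reproduces the negative lower bound.

```python
import statistics

# Same assumed allocation as above: 18 zeros, 7 ones, 7 twos and one 52 (Fallujah).
clusters = [0] * 18 + [1] * 7 + [2] * 7 + [52]

mean = statistics.mean(clusters)             # step 1: ~2.2 violent deaths per cluster
sd = statistics.stdev(clusters)              # step 2: sample standard deviation, ~9.0
se = sd / len(clusters) ** 0.5               # step 3: ~1.6 after adjusting for sample size

lower, upper = mean - 2 * se, mean + 2 * se  # step 4: roughly -1.0 to 5.4 per cluster
scale = 33 * 3000                            # step 5: 33 clusters, ~3,000 deaths per sampled death

print(f"deaths per cluster: {mean:.1f} +/- {2 * se:.1f}")
print(f"estimated violent deaths: {lower * scale:,.0f} to {upper * scale:,.0f}")
```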

Have we learned anything here other than that I respond to reader questions?

I don’t think we’ve learned much, if anything, about violent deaths in Iraq.  We already knew that the Roberts et al. data, especially the Fallujah observation, is questionable and maybe the above calculation reinforces this view a little bit.

But, mostly, we learn something about the standard method for calculating confidence intervals: when the data are wild this method can give incredible answers.  Of course, a negative number of violent deaths is not credible.

There is an intuitive reason why the standard method fails with the Roberts et al. data; it forces a symmetric estimate onto highly asymmetric data.  Remember we get 2.2 plus or minus 3.2 average violent deaths per cluster.  The plus or minus means that the confidence interval is symmetric.  The Fallujah observation forces a wide confidence interval which has to go just as wide on the down side as it is on the up side.  In some sense the method is saying that if it’s possible to find a cluster with 52 violent deaths then it also must be possible to find a cluster with around -52 violent deaths.  But, of course, no area of Iraq  experienced -52 violent deaths.  So you wind up with garbage.

Part of the story is also the small sample size.  With twice as many clusters, but the same sort of data, the standard error shrinks from 1.6 to about 1.1 (9.0 divided by the square root of 66), so the lower limit would only go down to about 0.

It’s tempting to just say “garbage in, garbage out” and, up to a point, this is accurate.   But the bigger problem is that the usual method for calculating confidence intervals is not appropriate in this case.

Mismeasuring War Deaths in Iraq: The Partial Striptease

I now continue the discussion of the Roberts et al. paper that I started in my series on the Chilcot Report.  This is a tangent from Chilcot, so I’ll hold this post and its follow-ups outside of that series.

Les Roberts never released a proper data set for his survey.  Worse, the authors are sketchy on important details in the paper, leaving us to guess on some key issues.  For example, in his report on Roberts et al. to the UK government Bill Kirkup wrote:

The authors provide a reasonable amount of detail on their figures in most of the paper.  They do, however, become noticeably reticent when it comes to the breakdown of deaths into violent and non-violent, and the breakdown of violent deaths into those attributed to the coalition and those due to terrorism or criminal acts, particularly taking into account the ‘Fallujah problem’…

Roberts et al. claim that “air strikes from coalition forces accounted for most violent deaths” but Kirkup points out that without the dubious Fallujah cluster it’s possible that the coalition accounted for less than half of the survey’s violent deaths.

Kirkup’s suspicion turns out to be correct.

However, you need to look at this email from Les Roberts to a blog to settle the issue.  It turns out that, outside Fallujah, coalition air strikes account for 6 of the survey’s 21 violent deaths, with 4 further deaths attributed to the coalition using other weapons.

My primary point here is about data openness rather than about coalition air strikes.  Roberts et al. should just show their data rather than dribbling it out in bits and bobs into the blogosphere.

Roberts gives another little top up here.  (I give that link only to document my source.  I recommend against ploughing through this Gish Gallop by Les Roberts.)  Buried deep inside a lot of nonsense Roberts writes:

The Lancet estimate [i.e. Roberts et al.], for example, assumes that no violent deaths have occurred in Anbar Province; that it is fair to subtract out the pre-invasion violence rate; and that the 5 deaths in our data induced by a US military vehicles are not “violent deaths.”

Hmmm…..5 deaths caused by US military vehicles.

Recall that each death in the sample yields around 3,000 estimated deaths. This translates into 15,000 estimated deaths caused by US military vehicles – nearly 30 per day for a year and a half.  There have, unfortunately, been a number of Iraqis killed by US military vehicles.  Iraq Body Count (IBC) has 110 such deaths in its database during the period covered by the Roberts et al. survey.  I’m sure that IBC hasn’t captured all deaths in vehicle accidents but, nevertheless, the 15,000 figure looks pretty crazy.
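Here is a quick arithmetic check of that claim.  The sample-size and population figures are the rough ones I use elsewhere on the blog (a sample of about 8,000 people in a country of about 24 million, surveyed over roughly a year and a half).

```python
# Quick check of the back-of-the-envelope numbers above. The ~3,000 multiplier
# follows from a sample of roughly 8,000 people in a country of roughly 24 million.
deaths_per_sampled_death = 24_000_000 / 8_000     # ~3,000
vehicle_deaths_in_sample = 5

estimated = vehicle_deaths_in_sample * deaths_per_sampled_death   # ~15,000
days = 1.5 * 365                                                  # ~18-month survey period

print(f"estimated vehicle deaths: {estimated:,.0f} (~{estimated / days:.0f} per day)")
```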

Again I come back to my main point – please just give us a proper dataset rather than a partial striptease.  Meanwhile, I can’t help thinking Roberts et al. are holding back on the data because it contains more embarrassments that we don’t yet know about.

PS – After providing the above quote I feel obligated to debunk it further.

  1. Roberts writes that his estimate omits deaths in Anbar Province (which contains Fallujah).  But many claims in his paper are only true if you include Anbar (Fallujah).  Indeed, this very blog post opened with one such claim.  We see that Fallujah is in for the purpose of saying that most violent deaths were caused by coalition airstrikes but Fallujah is out when it’s time to talk about how conservative the estimate is because it omits Fallujah.  Call this the “Fallujah Shell Game”.  (See the comments of Josh Dougherty here.)
  2. Roberts suggests that he bent over backwards to be fair by omitting pre-invasion violent deaths from his estimate.  But, first of all, there was only one such death so it hardly makes a difference whether this one is in or out.  Second, it’s hard to understand what the case would be for blaming a pre-invasion death on the invasion.


Comments Down Below!

Hello everybody.

This is just a quick note to say that there were some interesting comments that appeared on my last two posts on Chilcot (here and here).  I’ve just replied to both.

While I’m at it I have a question for Bill Kirkup (who made one of the comments).  Can he give us a little briefing on how death certificates have been handled in post-invasion Iraq?

Actually, I have a number of specific questions on this subject. I’d be happy to switch to email to clear these up and then post a summary if that works best (m.spagat@rhul.ac.uk).

Chilcot on Civilian Casualties: Part 5

This post continues my coverage of the three reports (one, two, three) written by UK government experts on the Roberts et al. 2004 article claiming that the 2003 invasion of Iraq caused a very large number of deaths.  According to the abstract of the paper:

We estimate that 98,000 more deaths than expected (8,000-194,000) happened after the invasion outside of Falluja and far more if the outlier Falluja cluster is included…Violent deaths were widespread, reported in 15 of 33 clusters, and were mainly attributed to coalition forces.  Most individuals reportedly killed by coalition forces were women and children.

Here’s some useful background.

Iraq Body Count (IBC) had already documented the violent deaths of nearly 20,000 civilians by the time the Roberts et al. paper was released.  So it was already clear that the war had caused a very large number of civilian deaths. The civilians chapter of the Chilcot Report does not suggest to me that this fact triggered deep concern within the UK government.  But the Roberts et al. paper produced a shock which I attribute mainly to its headline-grabbing figure of 100,000.

The 100,000 estimate is not directly comparable to IBC’s 20,000 count because 100,000 refers to excess deaths, i.e., violent plus non-violent deaths of civilians plus combatants beyond a baseline level, whereas IBC records only violent deaths of civilians.  There is also a phenomenally wide confidence interval of 8,000 to 194,000 surrounding the 100,000  estimate which severely complicates any comparison with another source.

Despite all these ambiguities, media coverage tended to present the Roberts et al. results as reliably demonstrating, in the prestigious scientific journal The Lancet, that the war had caused 100,000 violent deaths of civilians.  This Guardian article is typical of much misleading media coverage.  There is no mention of a confidence interval, and the excess-death estimate is portrayed as a violent-death estimate, which is then presented as civilians-only when, in fact, the estimate mixes combatants with civilians.  Such media attention further upped the ante on the 100,000 figure, making it still harder to ignore.

Roberts et al. conducted a “cluster survey”.   Specifically, they selected 33 locational points in Iraq and interviewed a bunch of close neighbours at each place.  Households located so close to one another are likely to have similar violence experiences.  So it’s probably more useful to view the sample as 33 data points, one for each cluster, rather than as roughly 1,000 data points, one for each household.

This is a tiny sample.

To get a handle on the sample-size problem consider some pertinent simulations I ran a few years back on some Iraq violence data. These show just how easy it is to overestimate violent deaths by factors of 2, 3 or more when you only have around 30 clusters. By the same token, surveys of this size can easily fail to detect a single violent death even when these surveys are conducted within very violent environments.
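I can’t reproduce those simulations here, but the toy Python sketch below, with an entirely made-up distribution of violence across 3,000 hypothetical clusters, shows the same qualitative behaviour: a 30-cluster survey often overshoots the true death toll by a factor of 2 or more and also quite often finds no violent deaths at all.

```python
import random

random.seed(2)

# Toy illustration (not the original simulation): a synthetic country of 3,000
# clusters in which violent deaths are concentrated in a few hot spots.
# The distribution below is assumed purely for illustration.
country = [0] * 2900 + [1] * 70 + [100] * 30
true_total = sum(country)  # 3,070 violent deaths

def survey_estimate(n_clusters=30):
    """Sample n_clusters without replacement and scale the total up to the country."""
    sample = random.sample(country, n_clusters)
    return sum(sample) * len(country) / n_clusters

estimates = sorted(survey_estimate() for _ in range(2000))
print("true total:", true_total)
print("median estimate:", estimates[len(estimates) // 2])
print("share of surveys overestimating by a factor of 2 or more:",
      sum(e >= 2 * true_total for e in estimates) / len(estimates))
print("share of surveys finding no violent deaths at all:",
      sum(e == 0 for e in estimates) / len(estimates))
```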


The problem with using a mere 33 clusters to measure war violence is intuitive.  Interviewers can easily stumble onto a few unusually violent hot spots and overestimate the average level of violence by a wide margin.   On the other hand, researchers can just as easily draw a qualitatively different kind of sample consisting of 33 peaceful islands.

The Roberts et al. survey seems to have landed on a super-turbocharged version of this small-sample issue.  They found a total of 21 violent deaths in 32 of their clusters, i.e., fewer than one death per cluster.  Yet they reported no fewer than 52 violent deaths in their 33rd cluster, in Fallujah.

Such a sample yields estimates that are all over the place depending on your assumptions.  One standard calculation method (bootstrapping) leads to a central estimate of 210,000 violent deaths with a 95% confidence interval of around 40,000 to 600,000.  However, if you remove the Fallujah cluster the central estimate plummets to 60,000 with a 95% confidence interval of 40,000 to 80,000.  (I’ll give details on these calculations in a follow-up post.)

In short, there is no reliable way to create a stable estimate out of the Roberts et al. data. We would like to have an estimate that is robust to whether or not we include the extreme Fallujah outlier.  Alas, the usual methods are highly sensitive to whether the wild Fallujah observation is in or out.


Given this background I’m at a loss to explain how Sir Roy Anderson can describe the Roberts et al. methodology as “robust”.  In fact, he invokes the r-word in two successive sentences.  Yet extreme sensitivity to outliers is one of the main characteristics that earns estimates the label “non-robust”.

Sir Roy notices that the sample is small but goes nowhere from this starting point.  He seems unaware that war violence tends to cluster heavily at some locations.  Indeed, he  did not even read the Roberts et al. paper carefully enough to discern that their sample displays this pattern in spades.

Sir Roy swings and misses in another, more subtle, way.  He points, rightly, to a key measurement problem with the Roberts et al. methodology: how do we know that households reporting deaths really did suffer these reported deaths?  He notes that Roberts et al. try to defuse this issue by checking death certificates.  However, they only checked for death certificates in a small non-random sample of their reported deaths, and 20% of these checks were failures.  So there is plenty of room to question the veracity of many of the reported deaths in the survey.

This is a good catch for Sir Roy but he doesn’t then ascend to the next level.  Suppose that out of the households that did not experience a violent death a mere 1% are recorded as having one anyway.  Since there must be more than 900 such households, this error rate would generate around 9 falsely reported deaths.  These false reports would then translate into about 27,000 estimated violent deaths.  Thus, a small rate of “false positives” can inflate the number of estimated deaths quite substantially, creating another non-robustness issue for the Roberts et al. methodology.

Someone might respond that we don’t have to worry about “false positives” because there will also be “false negatives”, i.e., households that experienced real deaths that somehow don’t get recorded.  However, this view is wrong because the situation is fundamentally asymmetric.  If roughly 50 households experienced violent deaths and 1% of these failed to report these deaths then we’d expect to miss only 0 or 1 real deaths this way.  So a 1% false negative rate will deflate an estimate by much less than a 1% false positive rate will inflate the same estimate.
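The asymmetry comes through in a couple of lines of arithmetic, using the rough figures above (at least 900 households without a violent death, about 50 with one, and roughly 3,000 estimated deaths per sampled death):

```python
# Back-of-the-envelope version of the asymmetry argument above.
deaths_per_sampled_death = 3000

households_without_violent_death = 900
false_positive_rate = 0.01
false_positives = households_without_violent_death * false_positive_rate   # ~9 phantom deaths
inflation = false_positives * deaths_per_sampled_death                     # ~27,000 extra estimated deaths

households_with_violent_death = 50
false_negative_rate = 0.01
false_negatives = households_with_violent_death * false_negative_rate      # ~0.5 missed deaths
deflation = false_negatives * deaths_per_sampled_death                     # ~1,500 missing estimated deaths

print(f"inflation from 1% false positives: ~{inflation:,.0f} deaths")
print(f"deflation from 1% false negatives: ~{deflation:,.0f} deaths")
```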

(Alert readers will have noticed that I just described the base rate fallacy.  See the last slides of this presentation for more details.)

To summarize, Sir Roy wasted the small amount of effort he invested in his report.

Creon Butler at least had a serious go at evaluating the Roberts et al. paper, managing to notice some important new points that eluded Sir Roy.  I list the better ones here.  First on the positive side of the ledger, Butler at least mentions the crucial Fallujah cluster.  Second, he correctly questions whether the sample is genuinely random.  Butler notes, in particular, that:

  1. The Fallujah field team did not follow the survey’s official randomization methodology when they  selected that cluster.
  2. Six of Iraq’s 18 governorates were excluded from the sample, although Butler thinks this was OK since they were randomly excluded.

Third, Butler draws attention to the preposterously wide confidence interval in the estimate for excess deaths – 8,000 to 194,000.  Fourth, Butler realizes, rightly, that the Roberts et al. figures for violent deaths suggest that hospitals should have received vastly more injured people than the figures of the Iraqi Ministry of Health (MoH) suggest they actually received.

Despite these strengths the Butler report is still weak.  As noted in post number 4 all three expert reports, including Butler’s, missed some central problems with the Roberts et al. paper.  Beyond that, Butler is strangely tolerant of the weaknesses he finds. Here are a few examples:

  1. He knows that the Fallujah field team violated the sampling protocols and then recorded a tremendous outlier observation that was then excluded from the main estimate published in the paper.  But it never seems to occur to him that such a serious data quality issue in one cluster could signal a deeper data quality problem affecting other clusters.
  2. He notices, but immediately shies away from, a weird aspect of the sampling scheme.  Twelve governorates are divided into six pairs, with one governorate from each pair selected randomly for sampling and the other one excluded from the sample.  At a stretch we can view this as an acceptable way to claim national coverage for the survey while actually excluding 6 governorates from the sample.  But to do this legitimately you need to build this source of random variation into your confidence interval.  Roberts et al. don’t do this.  So even the gargantuan confidence interval of 8,000 to 194,000 is actually too narrow.
  3. Butler does a bad job of quantifying his point about injuries.  For example, he should have mentioned that the MoH recorded 15,517 injuries during the last 6 months covered by the survey.  Roberts et al. have something like 56 violent deaths during this period, which translates into around 170,000 estimated violent deaths.  Assuming a rule of thumb of 3 injuries per death one could predict roughly 500,000 injuries, a number which exceeds the MoH figure by a factor of more than 30 (see the quick check after this list).  Note, moreover, that people with serious injuries should almost always put in an appearance at a hospital.  So there is really something to explain here.  Yet Butler pretty much lets this discrepancy pass.
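Here is the quick check promised in point 3, using the figures just quoted:

```python
# Quick check of the injuries discrepancy in point 3.
deaths_per_sampled_death = 3000
survey_deaths_last_6_months = 56
estimated_violent_deaths = survey_deaths_last_6_months * deaths_per_sampled_death  # ~170,000

injuries_per_death = 3                                                              # rough rule of thumb
predicted_injuries = estimated_violent_deaths * injuries_per_death                  # ~500,000

moh_recorded_injuries = 15_517
print(f"predicted injuries: ~{predicted_injuries:,.0f}")
print(f"ratio to MoH figure: ~{predicted_injuries / moh_recorded_injuries:.0f}x")
```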

To summarize, in an era of grade inflation Creon Butler gets a gentleman’s pass.

Bill Kirkup of the Department of Health wrote the most perceptive UK government analysis although his paper is marred by one big error.  Here are some of his strong points:

  1.  He spots the absurdity of the confidence interval and grasps the magnitude of the problem – “A confidence interval this large makes the meaning of the estimate difficult to interpret. This point has been largely ignored in media reporting.”
  2. He is aware of what he calls the “patchy distribution of violence” in war and he realizes that this feature makes the survey’s 33 sampling points precious few.  He connects the reported results for Fallujah with this patchiness issue.  (You might say this is obvious but the other two experts missed it.)
  3. He identifies an annoying tendency for Roberts et al. to make detailed claims about types of deaths, e.g., the percent of all deaths accounted for by coalition air strikes, without providing numerical tables detailed enough to flesh out these claims.  It appears that some such claims rely on data from the dubious Fallujah outlier, and we would like to know whenever this is the case.  But it is often hard to be sure of such dependence on Fallujah without more information.  This is information that the authors could easily have supplied but chose not to.  Such “reticence”, as Kirkup puts it, does not inspire confidence.
  4. He realizes that the arrival of a survey team into a neighbourhood will draw the attention of local dangerous individuals.  These violent thugs will pressure people to answer the survey questions in ways that further their agendas.  Such local dynamics decrease the reliability of the data and place the survey’s interviewees at risk.  (I blogged recently on this issue in a similar context.)

Kirkup’s big error is that, somehow, he estimates that only 23,000 of the 98,000 estimated excess deaths (outside Fallujah) were violent deaths.  But a very easy back-of-the-envelope calculation shows that such a low number can’t possibly be right.  (21 violent deaths in the sample, around 8,000 people in the sample in a country of around 24 million – 24,000,000 x 21/8,000 gives around 60,000 violent deaths).  This mistake messes up Kirkup’s report substantially.

Nevertheless, I still think that Kirkup delivered the best report because he alone grasps the fundamental low quality of the Roberts et al. paper.

Where does this leave us?

First, I’d like to soften my criticism of the government evaluators a little bit.  In this post and in the last one I’ve tried to impose a ground rule of evaluating the evaluators based only on information that was available to them when they did their reports.  (I’ll drop this straitjacket in the next post in this series.)  But it is hard to maintain this discipline and I’m sure that I’ve allowed myself to benefit in certain ways from some hindsight knowledge.  In addition, I’m sure these guys were under pressure to produce lots of stuff really fast so they couldn’t make every project into their best work.

That said, the casualties of war are a vital issue.  So the UK government should have allowed its analysts the space, and perhaps the outside consultants, they needed to give their work on civilian casualties its due.  (Of course, this applies even more strongly to the US government, which has avoided a Chilcot-type enquiry in the first place.)

Finally, I’d like to give a sense of what I think a good report would have looked like.  Here’s a provisional list of key points:

  1.  We already knew that thousands of people were dying because of the Iraq war.
  2.  We should track these deaths closely and, more importantly, use the tracking data to figure out ways to save lives.  (I can’t find anything in the Chilcot Report to suggest that anyone in the government was thinking about this.)
  3. The Roberts et al. paper doesn’t change this picture qualitatively but it does suggest that people could be dying at far greater rates in the war than anyone had previously suggested.
  4. However, the Roberts et al. methodology is extremely weak and unreliable (see the technical appendix to this report) so we shouldn’t count on it except possibly on points that can be corroborated from other sources.
  5. Nevertheless, we should request the detailed data from this project and also from  Iraq Body Count and see whether we can learn something helpful from them.
  6. We should issue a public statement saying that we are not convinced by the Roberts et al. study at this moment but we have requested the data and are looking into it.  Meanwhile, we are very concerned about civilian casualties in Iraq and are working hard to reduce them.
  7. Point 6 should be reality, not just a public relations position.