The Perils and Pitfalls of Matching War Deaths Across Lists: Part 2

This is my second post with Josh Dougherty of Iraq Body Count (IBC).  We asserted in the first one that Carpenter, Fuller and Roberts (CFR) did a terrible job of matching violent events in Iraq, 2004-2009, between the IBC dataset and the SIGACTs dataset of the US military and its coalition partners. In particular, CFR found only 1 solid match in their Karbala sample whereas 2/3 of the records and 95% of the deaths actually match.  We now present case-by-case details to explain how CFR’s matching went so badly wrong.

Here is the Karbala sample of data with 50 SIGACT records together with our codings.  Columns A-S reproduce the content of the SIGACT records themselves.  The column headings are mostly self-explanatory but we add some clarifications throughout this post.  We use Column T, which numbers the records from 1 to 50, to reference the records we discuss in this and the following post.  Columns V-Y contain our own matching conclusions (SIGACTs versus IBC).  Column AB shows our interpretation of what CFR’s reported algorithmic criteria should imply for the matching.

Both our matching and CFR’s compare the SIGACTs dataset to the IBC dataset as it existed prior to the publication of the SIGACTs in 2010. IBC has integrated a lot of the SIGACTs material into its own dataset since that time.  Thus, most of the Karbala cases we characterize in the spreadsheet as “not in IBC” (Column Y) are actually in IBC now (Column Z).  However, these newer entries are based, in whole or in part, on the SIGACTs themselves. Of course, it is interesting to compare pre-2010 IBC coverage to another parallel or separately compiled dataset; this is the point of CFR’s exercise and of ours here as well.

Readers can check the codings for themselves and are welcome to raise coding issues in the comments section of the blog.  You can look up IBC records at the database page here.  To view a specific record, simply replace “page1” at the end of the URL with the incident code of the record you wish to view, such as: for record k7338.  The whole SIGACTs dataset is here.

Recall from part 1 of this series that CFR’s stated matching algorithm applies the following 3 criteria:

  1. Date +/- one calendar day
  2. Event size +/- 30%
  3. Weapon type

As noted in the first post, we cannot be precise about the algorithm’s matching implications because of some ambiguities that arise when applying the reported criteria, particularly in criteria 2 and 3.  It appears, however, that a reasonable application of the above three criteria matches 11 out of the 50 SIGACTs Karbala records.  We are, therefore, unable to explain why CFR report that they could only match 1 record on all 3 of their criteria. Recall that we do not know CFR’s actual case-by-case conclusions because they refuse to show their work.

Each criterion causes some matching ambiguities, but we focus here on criterion 2 because it causes the most relevant problems for the Karbala sample.  The main problem is that CFR do not specify whether SIGACTs or IBC records should be the basis from which to calculate percent deviations.  Consider, for example, record 4 for which SIGACTs lists 7 civilian deaths.  IBC event k685 matches record 4 on criteria 1 and 3 (reasonably construed) but lists 10 civilian deaths.  If 7 is the base for percentage calculation then 10 deviates from it by 43% which is, of course, greater than 30% and a mismatch.  But if 10 is the calculation base then 7 deviates by exactly 30% and we have a match.

Further ambiguity stems from CFR’s failure to specify whether their 30% rule is applied in just one direction or if matching within 30% in both directions is required.  Records 30 and 36, in addition to record 4 (just discussed above), either match or do not match depending on how this ambiguity is resolved. These ambiguous cases are classified as “Borderline” in Column AB in the spreadsheet.

A third problem with criterion 2 is that IBC often uses ranges rather than single numbers and CFR do not specify how to handle ranges or even acknowledge their existence. When there is a range, does the +/- 30% rule apply to IBC’s minimum, maximum, both, or to an average of these two numbers?  We don’t know.  We have to add SIGACT records 5, 15, 34 and 42 to the list of unresolved cases when we combine this range ambiguity with the base-number ambiguity.
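To make the base-number ambiguity concrete, here is a small Python sketch (ours, not CFR’s, whose actual implementation is unpublished) that evaluates criterion 2 under the different possible readings, using record 4 (SIGACTs: 7 deaths; IBC k685: 10 deaths) as the example:

```python
def within_30pct(sigact_deaths, ibc_deaths, base="sigact"):
    """Apply CFR's +/-30% size rule under a chosen percentage base.

    base="sigact": deviation measured relative to the SIGACT figure.
    base="ibc":    deviation measured relative to the IBC figure.
    base="either": match if the rule passes in at least one direction.
    base="both":   match only if the rule passes in both directions.
    """
    dev_from_sigact = abs(ibc_deaths - sigact_deaths) / sigact_deaths
    dev_from_ibc = abs(ibc_deaths - sigact_deaths) / ibc_deaths
    if base == "sigact":
        return dev_from_sigact <= 0.30
    if base == "ibc":
        return dev_from_ibc <= 0.30
    if base == "either":
        return dev_from_sigact <= 0.30 or dev_from_ibc <= 0.30
    return dev_from_sigact <= 0.30 and dev_from_ibc <= 0.30

# Record 4: SIGACTs lists 7 civilian deaths, IBC event k685 lists 10.
print(within_30pct(7, 10, base="sigact"))  # False: 3/7 is about 43%
print(within_30pct(7, 10, base="ibc"))     # True:  3/10 is exactly 30%
```

A full implementation would also need a policy for IBC’s min-max ranges (apply the rule to the minimum, the maximum, any value in the range, or the midpoint), which multiplies the possible readings still further.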

Criterion 3 is, potentially, the most ambiguous of all because, strictly speaking, neither SIGACTs nor IBC have a “weapon type” field.  The two projects code types in different ways and with different wordings.  Nevertheless, both datasets have some event types, such as “IED Explosion” or “gunfire,” that can be viewed as weapon types.  Sensible event- or weapon-type matches can be made subjectively from careful readings of each record, but mechanical weapons matching based just on coded fields will not work.  For example, SIGACTs has a “Criminal Event – Murder” category (in Columns N-O of our spreadsheet) whereas IBC has no such category. However, many IBC event types, such as “gunfire”, “executed”, “beheaded”, “stabbed” and “tortured, executed”, among many others, can be consistent with “Criminal Event – Murder”. Thus, rule 3 seems to consist of subjective human judgments about all these varying descriptions, even though CFR claim that “the algorithm-based matching involved little to no human judgment.”  Any attempt to replicate CFR’s judgments on this rule would be pure guesswork since it is hard to imagine an algorithm to implement rule 3, and it does not seem like there was one.  Therefore, we just ignore rule 3 and proceed as if CFR made appropriate judgment calls on weapons types in all cases, even though this assumption may give too much credit.

Setting the above ambiguities aside, we now move on to the substantive matching errors that arise when attempting to match these two datasets algorithmically. We distinguish 8 error types that affect CFR’s application of their algorithm to the two datasets.  The rest of this post covers the first 4 error types; the next post will cover the remaining 4.

The first 4 error types concern basic errors in documentation or coding within the SIGACTs dataset itself.  We give a broad treatment of these SIGACT errors, in part to prepare the ground for future posts.  The SIGACT errors usually translate into matching errors under the CFR algorithm, but do not always do so, and we identify cases for which the algorithm should reach correct conclusions despite SIGACT errors.  Modifications of matching criteria tend to change the ways in which SIGACT errors translate into matching errors. Thus, matching procedures that we will consider later in this series sometimes fall prey to a different range of errors than those that affect CFR’s matching.  So it is useful to provide full information on the underlying SIGACT errors at the outset.


  1. Location errors – Errors in recorded locations affect at least 9 records (25, 31, 33, 37, 39, 40, 45, 48 and 50) and probably 2 more (32 and 38), affecting at least 17 deaths, i.e., at least 18% of the records and 3% of the deaths in the sample.

Many SIGACT records contain incorrectly coded locations.  Usually, but not always, these errors take the form of a wrong letter or number in a code for the Military Grid Reference System (MGRS, Column B). For example, in Record 33 both the Title (Column E) and MGRS coordinates identify Karbala as the location for a “criminal murder” that killed 6 civilians. However, the Reporting Unit (Column S) for this record is “KIRKUK NJOC” which suggests that this event occurred in Kirkuk, not Karbala: two entirely different places that are far apart.  Moreover, the Summary (Column D) of the event, a full text description, describes an attack on electricity workers from the Mullah Abdullah power plant in the “AL HWAIJAH AREA” which is southwest of the city of Kirkuk in the province of Tameem (also sometimes called Kirkuk province). IBC record k5908 is of an attack “near Hawija” that matches the characteristics of the SIGACTs event, including describing the victims as workers for the same power plant.  Taken together, all these factors confirm that Record 33 is a Kirkuk event that was mis-coded as a Karbala event.

The location error appears to stem from a flaw in the MGRS coordinates, “38SMB119152”, which, if true, would place the event in Karbala.  It seems that the letter “B” should have been an “E”.  This single small change shifts the event 250 miles north to an area near Hawija, southwest of Kirkuk, where the Summary suggests it should be.  It appears, further, that the Title of “IVO KARBALA,” i.e., in the vicinity of Karbala, was likely based on the MGRS coordinate error.  The Title coder might not have read the Summary or may not have been familiar with the locations of Karbala, Hwaijah or Kirkuk and, therefore, not realized that these were discrepant.

The crucial point for list matching is that this basic location error renders the record un-matchable under CFR’s algorithm and, indeed, under any reasonable algorithmic method that does not allow for the possibility of documentation errors.  A careful reading of the detailed record by a knowledgeable human can ferret out the error. A sophisticated text-reading system should also be able to spot it, but only if the system is programmed to question the information in the coded fields.

Record 33’s location error can be detected from the fact that the reporting unit was based in Kirkuk.  But, importantly, the “Reporting Unit” field (Column S) is also omitted from the Guardian version of the dataset used by CFR, along with the detailed event descriptions (Column D). Indeed, records 31, 37, 39 and 40 also list reporting units that appear inconsistent with a Karbala event – a clear red flag to an attentive researcher.  But this flag would be invisible to anyone, such as the CFR team, without access to the Reporting Unit field.
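A screen of this kind is easy to automate when the Reporting Unit field is available. The sketch below is our own illustration, with a hand-made city list and field values paraphrased from the discussion above; it simply flags records whose reporting unit names a known city that the Title does not mention.

```python
# Hypothetical illustration: flag SIGACT records whose Reporting Unit
# (Column S) suggests a different city than the Title (Column E).
# The city list is our own and far from exhaustive.
KNOWN_CITIES = ["KARBALA", "KIRKUK", "BASRAH", "MOSUL", "BAGHDAD"]

def location_red_flag(title, reporting_unit):
    """Return True when the reporting unit mentions a known city
    that does not appear in the record's title."""
    title_u = title.upper()
    unit_u = reporting_unit.upper()
    for city in KNOWN_CITIES:
        if city in unit_u and city not in title_u:
            return True
    return False

# A record titled as a Karbala event but filed by a Kirkuk-based unit
# (the pattern of record 33) raises the flag:
print(location_red_flag("MURDER IVO KARBALA", "KIRKUK NJOC"))  # True
```

A flag of this kind is only a prompt for human review, not proof of error: some units do report events outside their home city, which is exactly why the Summary field matters for confirmation.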

Records 48 and 50 also have location errors, but these mistakes take a different, almost opposite, form. These are not events outside Karbala that were erroneously shifted into Karbala.  Rather, they are true Karbala events, as we can see from their summaries, but with MGRS coordinates that incorrectly place them outside Karbala.  This kind of MGRS error should not affect CFR’s Karbala matching because they assume that location matching is already satisfied through their sample selection method of filtering for “Karbala” in the Title field. Subsequently, CFR only try to match on date, size and type, not location.  Thus, the particular coordinates issue in these two records should not ruin CFR’s matching.  Nevertheless, such MGRS coordinate errors would undermine CFR’s main (nationwide) matching exercise because that exercise uses locations, based on MGRS coordinates, as a matching criterion.

  2. Duplication errors, which affect 3 records (35, 43, 46) and 205 deaths, i.e., 6% of records and 35% of deaths are duplicated.

The CFR paper never mentions the topic of duplication or de-duplication even though this is a focus of many list matching/merging efforts.  It seems fairly clear that CFR did not attempt to de-duplicate the SIGACTs samples they used in their paper.  Yet, the three duplicates in this Karbala sample account for no fewer than 205 out of 558 deaths.  In fact, correcting for duplicates leaves just 353 unique deaths in the sample, not 558.

Records 42 and 43 report the same event from two different sources in slightly differing versions. These match IBC record k7338.  The duplicate records report 46 and 48 deaths respectively. However, if one ignores the possibility of duplication then k7338 can match only one of the two SIGACTs records.  De-duplication failure here creates a large phantom event that cannot be matched against IBC since the actual match has already been used up.

Records 34 and 35 are also duplicates of a large event, but this time with the added twist that deaths and injuries are interchanged in record 35.  Thus, 36 deaths and 158 injuries from an IED explosion in record 34 become 158 deaths and 34 injuries in a supposedly different IED explosion on the same day in record 35.  This improbable death-to-injury ratio for an explosion should have been enough to raise a red flag for CFR, even though they cut themselves off from the Summary column (D) that confirms the interchange.  This time the phantom event creates 158 un-matchable deaths.

Records 46 and 47 are also duplicates of a single matching event, although they account for only a single death.

These duplication problems in the small Karbala sample should be sufficient to establish that duplication is going to be a significant problem to overcome in any attempt to match the SIGACTs with another list. It’s also difficult to see how duplicates could be reliably identified without exploring the details in the Summary field (Column D) omitted from CFR’s version of the dataset.  Failing to eliminate duplicates across the SIGACTs winds up leaving an array of fictional events mixed in with real events and will naturally lead to many spurious conclusions about coverage failures in a comparison list.
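There is no sign that CFR attempted anything of the sort, but a first pass at flagging duplicates can be sketched mechanically. The code below is our own illustration, with invented placeholder dates and field names; it flags same-day record pairs whose casualty figures are either near-identical or mirror images with deaths and injuries swapped (the record 34/35 pattern), leaving the Summary field to confirm each candidate.

```python
def near(a, b, tol=0.1):
    """True if a and b differ by at most tol as a fraction of the larger."""
    return abs(a - b) <= tol * max(a, b, 1)

def duplicate_candidate(rec_a, rec_b):
    """Flag same-day record pairs that look like duplicates: casualty
    figures that are near-identical, or that mirror each other with
    deaths and injuries interchanged."""
    if rec_a["date"] != rec_b["date"]:
        return False
    same = near(rec_a["killed"], rec_b["killed"]) and \
           near(rec_a["wounded"], rec_b["wounded"])
    swapped = near(rec_a["killed"], rec_b["wounded"]) and \
              near(rec_a["wounded"], rec_b["killed"])
    return same or swapped

# The pattern of records 34 and 35 (dates are placeholders, not quoted
# from the data): 36 killed / 158 wounded reappearing as 158 killed /
# 34 wounded on the same day.
r34 = {"date": "D1", "killed": 36, "wounded": 158}
r35 = {"date": "D1", "killed": 158, "wounded": 34}
print(duplicate_candidate(r34, r35))  # True
```

Again, this is a candidate generator, not a decision rule; confirming or rejecting a flagged pair still requires reading the detailed descriptions.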

  3. Classification errors, taking the form here of a reversal of death and injury numbers for 1 record (35) with, supposedly, 158 deaths, i.e., affecting 2% of records and 28% of deaths.

Another form of error found in some SIGACTs is that casualty numbers or types are shifted into incorrect fields, i.e., one type of casualty is misclassified as another.  Record 35, already mentioned above in the section on duplication, interchanges deaths and injuries and is the only such classification error in the Karbala sample.  However, similar problems, including the misclassification of victim categories (Friendly, Host Nation, Civilian, Enemy), occur in other parts of the SIGACTs data.

Record 35 also shows how some records contain multiple error types simultaneously.

  4. Doubling the number of deaths, an error that affects 2 records (11 and 12) for a total of 8 deaths, i.e., 4% of the records and 0.9% of the deaths in the sample.

All casualty fields for record 12 are exactly double what is written in both the Summary field (Column D) and the Title field (Column E). Thus, the correct “CIV KIA” number of 3 is doubled to 6.  IBC record k1878 has this event with 3 deaths and the same date and type.  Thus, without the doubling error this record would match under the CFR algorithm.  With the error, record 12 violates the +/-30% rule, regardless of whether we use 3 or 6 as the base, and becomes a mismatch.

Note that CFR could potentially have caught this error by reading the Title, which they did have at their disposal.  Comparing the Title against the coded casualty fields shows, at a minimum, that one of them is wrong and that matching must proceed with caution.  The detailed event description (Summary) confirms that the figure of 3 civilian deaths in the Title is correct and, but for the error, this record should have matched under CFR’s criteria.

Record 11 makes the same error, converting 1 death into 2.  This error also contradicts both the Title and Summary.
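A mechanical screen for this error type is possible wherever the Title states a casualty count. The following sketch is ours; the “N CIV KIA” title pattern and the example strings are simplified stand-ins, not verbatim quotes from records 11 and 12.

```python
import re

def doubling_flag(civ_kia_field, title):
    """Flag records whose coded civilian-KIA figure is exactly double
    a count stated in the title.  The 'N CIV KIA' title format here is
    a hypothetical, SIGACT-like simplification."""
    m = re.search(r"(\d+)\s*CIV\s*KIA", title.upper())
    if not m:
        return False
    title_count = int(m.group(1))
    return title_count > 0 and civ_kia_field == 2 * title_count

# Record 12's pattern: a title reporting 3 civilians killed, but a
# coded field saying 6.
print(doubling_flag(6, "MURDER IVO KARBALA: 3 CIV KIA"))  # True
print(doubling_flag(3, "MURDER IVO KARBALA: 3 CIV KIA"))  # False
```

A flagged record only tells the researcher that the two fields disagree; deciding which figure is correct still requires the Summary.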


Now is a good time to take stock.

It should be abundantly clear that errors in any dataset that is part of a matching project are potentially lethal to the project.  Any casualty dataset of even moderate complexity will probably contain some errors. Conflict casualty datasets tend to be collected and compiled under much less than ideal circumstances, so an error mitigation strategy must be a central feature of matching work.  Of course, one can get lucky and wind up working with very high quality data.  CFR were not lucky.  The SIGACTs data is highly valuable, but it is also rather messy, containing many errors that matter greatly for case matching.

CFR and others who have relied on their findings seem not to have considered the possibility of data errors in the SIGACTs.  Rather, it appears that CFR just assumed that they had two pristine datasets with the unique weakness of incompleteness.  This misguided assumption leads them to misinterpret the many data errors as revealing incomplete coverage.  These misinterpretations are not minor.  In effect, CFR padded their discoveries of true coverage problems with a host of other issues that are unrelated to coverage, substantially exaggerating the coverage issue in the process.

And yet we have only told half the story so far.  The next post will cover an additional 4 error types.


The Perils and Pitfalls of Matching War Deaths Across Lists: Part 1

I argued in an earlier post that matching deaths across lists is a nontrivial exercise that involves a lot of judgement and that, therefore, needs to be done transparently.  Here is the promised follow up post which I do jointly with Josh Dougherty of Iraq Body Count.  In fact, we’ll make this into another multi-part series as there are many different sources and issues to explore. This is a large subject of growing importance to the conflict field, so we may also eventually convert some of this material into a journal article. Throughout this series we’ll draw heavily on Josh’s extensive experience matching violent deaths across sources for the Iraq conflict.

Today we’ll set the table with some preliminaries and offer basic findings, with more detailed exploration of the data to follow in future posts.

First, list matching for Iraq has involved a combination of event matching and victim matching.  Events are usually considered to be discrete violent incidents, such as suicide bombings, air attacks or targeted assassinations, and are typically defined by their location, date, size, type and other features.

The event matching aspect of the Iraq work means that it won’t always be directly relevant to pure victim-based matching efforts such as those underpinning the statistical work of Peru’s Truth and Reconciliation Commission (TRC), or the various efforts involving casualty lists covering the war in Syria.  We’ll talk more about pure victim-based matching in a future post.  However, matching events is ultimately still about matching deaths/victims, so the issues that arise are very similar and most of what we write here will be relevant to victim-based matching.

Second, we analyse a matching exercise from this paper by Carpenter, Fuller and Roberts (CFR) that attempts to match events from the Iraq war across two sources.  This CFR paper has been cited in some major journal articles.  In fact, Megan Price and Patrick Ball (the latter being the main author of the statistical report of the Peruvian TRC) relied heavily on CFR’s matching in some of their own papers. Yet CFR’s matching turns out to be very bad.

Third, we won’t address here the main matching exercise of 2,500 records carried out (again badly) in the CFR paper.  We cover, rather, a robustness check matching smaller samples that CFR present towards the end of their paper, and which should be more easily digestible for readers.  A proper analysis of CFR’s main matching exercise is beyond the scope of this series, but we can say here that the kinds of problems affecting the robustness check generally carry over into the main matching exercise. Note, however, that CFR’s main matching is done by hand with human researchers, whereas the robustness check that we cover below is described as “computer-driven” and “non-subjective”. Still, both the human and computer versions rely on essentially the same algorithmic matching, with very similar pre-determined parameters. The major difference is that in one case an algorithm is applied by hand, with more room for human judgment, while in the other it is apparently applied more strictly with the help of a machine. Indeed, CFR report that the two approaches “resulted in the same conclusions,” so they suggest that their robustness check has succeeded and that we should feel more confident in their findings.

In this exercise, CFR match samples from two sources covering events that occurred in Karbala, Iraq, between 2004 and 2009.  The sources are Iraq Body Count (IBC) and the Iraq War Logs published by WikiLeaks in 2010, also known as the official SIGACTs database of the US military.  Here are the methods of IBC.

Unfortunately, we know of no formal statement of a data collection methodology for SIGACTs.  However, we do know that it is compiled by the US Department of Defense from the field reports of US and Coalition soldiers, Iraqi security forces and other Iraqi sources.  We can also learn about SIGACTs by inspecting the entries.  This one, for example, describes a “search and attack” operation in which Coalition Forces killed seven “Enemy” fighters in the Diyala governorate.  The entry displays SIGACTs’ standard data-entry fields, which include the date, time, GPS coordinates, event type, reporting unit and numbers killed and wounded. The casualty numbers are further divided into “Enemy”, “Friendly”, “Civilian” and “Host Nation” categories. Each record begins with a short headline and also contains a longer text description of the events. These descriptions tend to be rather jargon-filled but can be read fluently after some practice.

We will show in the next post of this series that careful reading of the detailed text descriptions is essential for matching SIGACTs-recorded deaths against other sources correctly. The CFR work runs aground already at this data inspection stage because they worked only with a summary version of the data, published by The Guardian, which omits the detailed text descriptions. Note also that the above-cited Price and Ball paper, which closely follows the CFR lead, shares CFR’s cavalier approach to the SIGACTs data, writing incorrectly of its methodology:

SIGACTSs based on daily “Significant Activity Reports” which include “…known attacks on Coalition forces, Iraqi Security Forces, the civilian population, and infrastructure. It does not include criminal activity, nor does it include attacks initiated by Coalition or Iraqi Security Forces”

This is not true of the full SIGACTs database released in 2010; the claim instead comes third-hand from a description of some statistics on “Enemy-initiated attacks” that appeared in a 2008 US DoD report. Those data were derived from only selected portions of the SIGACTs database, and their description does not apply to the full dataset. A cursory glance at the full SIGACTs dataset would have quickly revealed that it includes criminal activity and attacks initiated by Coalition or Iraqi Security Forces.

Further background on the SIGACTs (Iraq War Logs) data can be found here and here.

CFR derive their Karbala sample, plus a separate Irbil one to which we will return later, by:

filtering the entire WL data set in the event description for the appearance of the words ‘‘Irbil’’ and ‘‘Karbala.’’

You should interpret “the entire WL data set” to mean the entire Civilian category, with at least 1 death, of the Guardian version of the SIGACTs dataset, i.e., the version that omits the detailed text descriptions of each record.  In this context, the above phrase “event description” can only refer to the headline of each record, as there is nothing else in the Guardian version of the dataset that could both approximate an “event description” and contain the word “Karbala”.

The above filtering yields a sample of 50 records containing 558 deaths.  However, strangely, CFR report only 39 records in their results table.  It would seem that CFR had an additional, unreported, filtering stage that eliminated 11 records.  Or perhaps CFR simply made a mistake.  There is no way to know at present how or why this happened because CFR do not list their 39 Karbala records or their matching interpretations for each in their paper, and have ignored or refused past data requests.  Consequently, we will simply follow CFR’s reported sampling methodology, as it appears in their paper, and proceed with matching the 50 records it produces.
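Reproducing this sampling step mechanically is simple. The sketch below is ours; the column names (“Title”, “Civilian_KIA”) are assumptions standing in for whatever the Guardian spreadsheet actually calls these fields, and the rows are toy examples.

```python
def karbala_sample(rows):
    """Select records whose title mentions Karbala and which list at
    least one civilian death -- our reading of CFR's reported filter.
    The column names here are assumed, not documented."""
    return [r for r in rows
            if "KARBALA" in r["Title"].upper()
            and int(r["Civilian_KIA"]) >= 1]

# Toy rows with invented titles in a SIGACT-like style:
rows = [
    {"Title": "MURDER IVO KARBALA", "Civilian_KIA": "3"},
    {"Title": "IED EXPLOSION IVO BAGHDAD", "Civilian_KIA": "5"},
    {"Title": "ATTACK IVO KARBALA", "Civilian_KIA": "0"},
]
print(len(karbala_sample(rows)))  # 1
```

Applied to the Guardian version of the Civilian-category data, this reading of the filter is what yields the 50 records we work with.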

CFR’s reported matching algorithm applied to this sample contains three matching requirements:

  1. Event dates must be within one calendar day of each other.
  2. The number killed cannot be more than + or – 30% apart.
  3. Weapon types must match.
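Taken at face value, the three requirements can be coded up in a few lines. The sketch below is our reconstruction, not CFR’s actual procedure (which is unpublished); in particular, the choices of percentage base, range handling and weapon-type comparison are our own assumptions, and the record values shown are illustrative placeholders.

```python
from datetime import date

def cfr_style_match(sig, ibc):
    """Sketch of CFR's reported three-criteria algorithm.  Ambiguity
    resolutions (percentage base = the SIGACT figure; IBC min-max
    ranges reduced to their midpoint; weapon types compared as exact
    strings) are our own choices, since CFR do not specify them."""
    # 1. Event dates must be within one calendar day of each other.
    if abs((sig["date"] - ibc["date"]).days) > 1:
        return False
    # 2. Death tolls must be within +/-30%.
    ibc_deaths = (ibc["min_deaths"] + ibc["max_deaths"]) / 2
    if abs(ibc_deaths - sig["killed"]) / sig["killed"] > 0.30:
        return False
    # 3. Weapon types must match.
    return sig["weapon"] == ibc["weapon"]

# Illustrative placeholder records: same date, 3 deaths on both sides,
# same type -> a match under this reading.
sig = {"date": date(2005, 1, 1), "killed": 3, "weapon": "gunfire"}
ibc = {"date": date(2005, 1, 1), "min_deaths": 3, "max_deaths": 3,
       "weapon": "gunfire"}
print(cfr_style_match(sig, ibc))  # True
```

Every hard-coded choice in this sketch is one of the ambiguities we discuss in the next post, and swapping any of them changes which records match.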

CFR report one main finding on Karbala alone (again, we will return to Irbil later):

the majority of events in WL [SIGACTs] are not in IBC and vice-versa.

Indeed, CFR’s results table claims that only 1 of their 39 SIGACT records matches IBC on all three of their criteria. [Note that the first version of this post said that there were 2 matches rather than the correct number, which is 1.] They report only event, not death, statistics, but there is an obvious implication that IBC missed a high percentage of the deaths in the Karbala sample.

The problem is that their results are very wrong. When we compare each of the records in detail, the majority of records and the vast majority of deaths in the Karbala sample match with IBC. Specifically, 95% of deaths (533 out of 558) and 66% of records (33 out of 50) match with the IBC database.

However, when we apply CFR’s matching algorithm to those same records, only 24% of deaths (132 of 558) and 22% of records (11 of 50) match on all three criteria. We should note here that applying CFR’s algorithm is not as simple or straightforward as it might seem. Their three requirements all raise some ambiguities that need to be resolved by subjective judgement in practice, and the outcomes of these choices can move the final numbers around a bit.  We will discuss these issues in our next post, but any resolution of these ambiguities will still leave an enormous distance between CFR results and the truth.

It should be stressed that the CFR approach apparently seemed reasonable and reliable to the authors, journal referees and editors, and to other researchers, like Price and Ball, who build on CFR’s work. Yet their approach ultimately gets the data all wrong, and for reasons that become pretty clear when one examines the data in detail. Indeed, we find that CFR’s conclusions reflect defects in their methodology far more than they reflect holes in IBC’s coverage of conflict deaths in Karbala.

With this in mind, let’s circle back to the Peru debate which inspired the present series on matching. In the Peru discussion Daniel Manrique-Vallier and Patrick Ball (MVB) argue that some of Silvio Rendon’s point estimates for numbers of people killed in the Peruvian war are “impossible” because these point estimates are below numbers obtained by merging and deduplicating deaths appearing on multiple lists. But the results we report here should shock anyone who previously thought that counts emerging from such list mergers can simply be taken at face value and treated uncritically as absolute minima. MVB’s matching is unlikely to be anywhere near as bad as CFR’s, but we still need to see the matching details before we can begin to talk seriously about minima.

Our next post will share the Karbala sample along with our case-by-case matching interpretations and dig into the details of how and why the CFR approach got things so wrong.