Threats, Real and Imagined (mostly Imagined)

Two interesting and related articles crossed my screen almost simultaneously – this one by Nils Petter Gleditsch and Ida Rudolfsen and this one by John Mueller.

Gleditsch and Rudolfsen crunch the numbers and reach a conclusion that is simple, interesting and novel.  I love it!

Civil wars in Muslim countries have not increased dramatically in absolute terms, but they make up a larger share of all civil wars.

Have a look:

[Figure: civil wars in Muslim countries and in the rest of the world over time]

The number of civil wars in Muslim countries has increased in recent years but the present level is not historically unprecedented.  Simultaneously, there has been quite a drop in the number of civil wars outside the Muslim world.

Imagine you’re a long-time listener to, say, the BBC World Service.  Suppose that the BBC holds roughly constant over time the percentage of airtime they devote to war stories.  I know that’s a pretty bold assumption but something along these lines could be true.

If so, then the amount of airtime devoted to civil wars in Muslim countries would have risen over the last decade out of proportion to the real increase in civil wars in these countries. Thus, long-time listeners could easily develop an exaggerated sense that collapse of the Muslim world is a central feature of the decade.
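To make the arithmetic concrete, here is a toy calculation with invented numbers (these are not Gleditsch and Rudolfsen’s figures): even if the count of civil wars in Muslim countries barely moves, a steep drop elsewhere inflates their share of all civil wars, and hence their share of war coverage under the constant-airtime assumption.

```python
# Invented numbers for illustration only -- not Gleditsch and Rudolfsen's data.
wars = {
    2000: {"muslim": 10, "other": 20},
    2012: {"muslim": 12, "other": 8},  # nearly flat vs. a steep drop elsewhere
}

for year, counts in wars.items():
    share = counts["muslim"] / (counts["muslim"] + counts["other"])
    print(f"{year}: Muslim-country share of civil wars = {share:.0%}")
# 2000: 33%; 2012: 60%.  If war airtime tracks this share, coverage of
# Muslim-country wars nearly doubles while the wars themselves barely increase.
```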

News consumers would then be ripe targets for fear-mongering over the threat of Islamic terrorists.

[Image: a cartoon man cowers under the covers in bed at night.]

Mueller does a pretty good job of explaining why ISIS, which is absolutely vicious in its area of operation, does not look like a serious threat outside this area.  For example, ISIS seems to regard foreign volunteers as readily expendable and even likes to post videos of foreigners burning their passports, rendering surviving foreigners unlikely candidates to return home and wreak havoc.

Barack Obama recently made a pertinent comment, bemoaning America’s tolerance for widespread gun violence mixed with a disproportionate fear of terrorism:

If you look at the number of Americans killed since 9/11 by terrorism, it’s less than 100. If you look at the number that have been killed by gun violence, it’s in the tens of thousands

(But while searching for this quote I stumbled onto this insanity.   Sheeeshhh)

I’m sure some American readers out there smugly assume they’ve cornered the market on overreaction to terrorism.  Think again.  University lecturers in the UK now have legal duties to combat extremism.

At this moment it is unclear how universities will comply with this legislation.  Many hope it will be sufficient just to force everyone to do an online training course.

So I close with a solemn promise to my readers.  If it’s online training then I will share with all of you the content of the training in a future post.

Is Everyone a Civilian or is it just Everyone who’s not a Military-Aged Male?

I’m already organizing my “Economics of Warfare” course for 2015-16 so I had another look at this incisive 13 minutes on drones from John Oliver.

The bit around minute 5:00 triggered a few bad memories that I would like to ….errr….share with my loyal readers. It shows Scott Shane of the New York Times decrying a CIA practice of presumptively classifying as militants all able-bodied males the CIA kills in drone strikes.

That’s one way to make sure you hardly ever kill civilians.

In a similar vein, Michael Ballaban descends perilously close to the CIA’s moral universe in this article which manages to be simultaneously offensive and useful.  Ballaban hopes to get a handle on the number of civilians killed by the Israelis in Operation Protective Edge.  Fine, but all he ends up doing is counting up the number of military-aged males killed.

To be fair, Ballaban does issue an appropriate caveat (albeit undercut in his next breath):

In addition, one of the few things I can definitively say about the conclusions we draw is that being both male, and of military age, does not a fighter make. Just as well, Hamas fighters often are completely non-uniformed, battling in civilian clothing. And many of the fighters may not even be strict members of the Hamas hierarchy.

I agree with everything in the quote.  However, I regard the first point as terminal for the whole exercise.  The rest of the quote is largely irrelevant to the question of whether or not it’s OK to treat all dead military-aged males as combatants.

Ballaban then paints his exaggerated count as a conservative undercount:

Again, it is imperfect. Not all fighters in Gaza are males, nor are they all of military fighting age….

There is probably some truth here as well but I doubt these considerations come very close to offsetting the initial misclassification.

To be clear, gender and age breakdowns on people killed are welcome and useful. But these breakdowns do not help us figure out who is a combatant and who is a civilian.

In contrast to the above tendency to see combatants almost everywhere, the literature on the epidemiology of war often seems to pretend that everyone is a civilian.

Take this paper (Roberts et al.) and this paper (Burnham et al.).  (These are both bad papers, especially the second one, but for now I’m only interested in how they handle the concept of “civilian”.)  Feel free to pop open the links and search for the term “civilian”.

Both papers are based on sample surveys aimed at estimating the number of people killed in the Iraq war.  Neither survey attempts to separate civilians from combatants so the estimates are of deaths of civilians plus combatants.

There is nothing wrong with mixing together civilians with combatants in an estimate. But it is a cardinal sin to interpret such an estimate as a civilians-only one.

It would be fair to say that the Roberts et al. article pretty much commits the sin although the researchers allow themselves some degree of plausible deniability.  For example, the conclusion states:

This survey shows that with modest funds, 4 weeks and seven Iraqi team members willing to risk their lives a useful measure of civilian deaths could be obtained.

I suppose that, if pushed, the Roberts et al. team could say that, although they did not obtain a useful measure of civilian deaths themselves, they have demonstrated that it is possible to get one (for a team similarly able to bypass university IRBs?).  Yet the much more reasonable interpretation is that they do claim to have estimated civilian deaths only.

The press office of the Johns Hopkins Bloomberg School of Public Health is totally direct about this.  For them it’s civilians, civilians, civilians, civilians all the way.

The Burnham et al. paper talks a fair amount about civilians but does not say that their estimate applies only to civilians.  Indeed, a publicity article in the Johns Hopkins magazine complains that:

Misleading headlines appeared in The Wall Street Journal, The Los Angeles Times, and The Times of London. The latter also reported that the deaths were civilian, though the Lancet article [i.e., Burnham et al.] makes clear the surveyors did not attempt to ascertain if the dead had been civilians or combatants.

OK, careful researchers, sloppy journalists (at least on the civilians question).

But what about this article by Burkle and Garfield?  (It’s free but you have to register on the Lancet site to get to it.)  Entitled “Civilian mortality after the 2003 invasion of Iraq” (my emphasis), it strongly pushes the Burnham et al. and Roberts et al. surveys but never says that they mix together civilians with combatants.

Burkle and Garfield also maintain some level of plausible deniability; for example, in their central table they use the word “civilian” only when describing studies that really do measure civilian deaths.  But the concluding paragraph gives the game away:

Arguably, although passive surveillance [they mean IBC] has great immediate usefulness in war, active surveillance [they mean Roberts et al. and Burnham et al.] must prevail if we are to have more complete information. In truth, because of the politicisation and perceived weaknesses of the methods of the Iraq studies [again, Roberts et al. and Burnham et al.], all the studies of civilian death have been discounted or dismissed, yet if half a million civilians have perished, that information should be known.

This looks like a clear claim that the two surveys distinguish civilians from combatants.  In fact, I’d wager that almost all readers of “Civilian mortality after the 2003 invasion of Iraq” walked away thinking that the article was all about civilian deaths.

I can certainly understand the impulse to claim that you are measuring civilian deaths even if this is….well….false.  It feels good.  It builds self-esteem.  People might like you more if they think you’re helping civilians.

But we’re talking about historical truth here so it is no more acceptable to pretend that everyone is a civilian than it is to pretend that all military-aged males are combatants.

Addendum: An alert reader sends in this editorial by Lancet editor Richard Horton very directly and repeatedly making the false claim that the Roberts et al. estimate was for civilians only.

Addendum 1a: Even though I do a lot of proofreading of these posts, it seems that every time I put something up another alert reader finds a few typos.  I fix these quietly.  When I make substantive errors (which I will definitely do in the future) I will flag the corrections very clearly.

Gaza, Civilians and War Crimes

David Traven has an interesting post over at Monkey Cage about the Report of the Commission of Inquiry into the 2014 Gaza Conflict and ongoing debates about whether each side committed war crimes during that conflict.

Many of these arguments revolve around the issue of intentionality.  Did the Israelis intentionally target Palestinian civilians?  Did the Palestinians intentionally target Israeli civilians?  These are valid and important questions, especially since it is always a war crime to intentionally target civilians.

However, a strong focus on intentionality can be counter-productive since intentions are difficult to prove.  As Traven points out:

…international law can enable “plausible deniability” — intentionally aiming at civilians, but claiming that their deaths are accidental. It can be hard to determine whether military decision-makers intend to kill civilians. They can always misrepresent their motives — or, to be blunt, they can lie.

Moreover, an unsuccessful attempt to prove bad intentions may convey to many people a false sense that everything is fine….as in….”Regrettably, we killed some civilians but we didn’t mean it.  Next question please.”

 Interestingly, the press release for the Commission of Inquiry uses the word “intention” only once:

The indiscriminate firing of thousands of rockets and mortars at Israel appeared to have the intention of spreading terror among civilians there.

The Commission report itself also uses the word “intention” only once, in a passage of theoretical clarification:

Israel should provide specific information on the effective contribution of a given house or inhabitant to military action and the clear advantage to be gained by the attack. Should a strike directly and intentionally target a house in the absence of a specific military objective, this would amount to a violation of the principle of distinction.[1] It may also constitute a direct attack against civilian objects or civilians, a war crime under international criminal law.[2]

In fact, the Commission mainly tries to focus discussion here on the question of whether civilian damages from Israeli military actions are out of proportion to the military advantages the Israelis gained from these actions (which would also be a war crime, if true).

Shifting attention to proportionality certainly does not resolve all problems since, as Traven points out, there can never be a fully objective test of whether or not particular military actions are proportional to associated civilian damages.  But such a shift does at least eliminate defences taking the form of “I didn’t mean it.”

A few years back Madelyn Hicks and I introduced what we called the “dirty war index” which is relevant to the present discussion.  The dirty war index specifically sidesteps the question of intent.  Rather, we focus on the predictable consequences of using particular weapons or tactics, as well as on the patterns of harming civilians that particular armed groups have established over time.

If, for example, using explosives in densely populated areas is known to cause substantial harm to civilians then this fact must be integrated into evaluations of military decisions, regardless of whether or not the users of these weapons intend to harm civilians.

A quantitative approach such as the dirty war index cannot settle any one proportionality question but it can help organize our thinking.  For example, if data indicate that mortars tend generally to be more indiscriminate than gunfire then the use of mortars should be inherently more suspect than the use of gunfire.
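As a minimal sketch (with invented counts, not data from our paper), a dirty war index is just the percentage of “dirty” outcomes, say civilian deaths, among all outcomes produced by a given weapon or group:

```python
def dirty_war_index(dirty_cases: int, total_cases: int) -> float:
    """Dirty war index: 100 * dirty cases / total applicable cases."""
    return 100 * dirty_cases / total_cases

# Invented counts: civilian deaths out of all deaths caused, by weapon type.
print(dirty_war_index(dirty_cases=80, total_cases=100))  # mortars: DWI = 80.0
print(dirty_war_index(dirty_cases=30, total_cases=100))  # gunfire: DWI = 30.0
# A consistently higher DWI for mortars would make their use in populated
# areas inherently more suspect, regardless of anyone's stated intentions.
```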

Perhaps the UN can adopt such an approach in future work.

Kosovo Memory Book

Kosovo Memory Book is one of the most extraordinary projects ever undertaken in the history of casualty recording.  This joint work of the Humanitarian Law Center and the Kosovo Humanitarian Law Center seeks to document every single person killed during the war in Kosovo, 1998-2000.

I had the privilege of evaluating the Kosovo Memory Book database together with people at HRDAG, an NGO that works extensively in this field.  Here you can see the results of our evaluations together with materials from presentations we made in February 2015.

Using entirely different approaches we both concluded that the database is of exceptionally high quality and appears to contain a virtually exhaustive list of all the people killed in the war. I use the word “virtually” because we will never be able to exclude the possibility that some further victims will be discovered.  But we are confident that there cannot be many such cases.

My report spells out a fundamental distinction between documenting war deaths and estimating the number of war deaths.  Estimation uses some kind of statistical procedure to try to determine the number of deaths. One such technique is to draw a random sample of households, measure the percent of people killed within the sample and extrapolate this in-sample death rate to the whole population suffering from a war.  Most of the deaths covered in such an estimate will not be documented within the sample.
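Here is a minimal sketch of that extrapolation step, with invented numbers:

```python
# Invented numbers for illustration of survey-based estimation.
sample_people = 8_000     # people in the randomly sampled households
sample_deaths = 40        # war deaths reported within the sample
population = 4_000_000    # population of the war-affected area

in_sample_rate = sample_deaths / sample_people  # 0.005
estimated_total = in_sample_rate * population   # 20,000 estimated war deaths
print(f"{estimated_total:,.0f}")
# Only the 40 in-sample deaths are actually documented; the remaining
# ~19,960 deaths in the estimate are never individually recorded.
```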

The above example is just a simple version of one estimation technique.  We will discuss war-death estimation in more depth in future posts.  For now, please just reflect a bit about how different estimation is from case-by-case documentation of deaths.

Also, recall this post which focused on the difference between documentation and counting of war deaths.

Combining these posts and summarizing, you can:

1. Document war deaths one by one, listing key information such as names of victims, dates of deaths, how people were killed etc.

2.  Count the number of war deaths.  Doing this is an obvious outgrowth of documentation but also has status independent of documentation.  You can, for example, count deaths that may be documented to varying standards, e.g., counting unidentified bodies alongside identified ones.

3.  Estimate the number of war deaths using a statistical procedure.

I am not saying that one approach is the best one.  Indeed, my report on the Kosovo Memory Book database shows that in this case there is a nice interplay between estimation and documentation.

The documentation is still ongoing after many years as the two Humanitarian Law Centers seek to provide still more detail about every victim.  The estimates were available soon after the end of the war but provide less information and are of less memorial value than the Kosovo Memory Book work.

Yet the estimates and the documentation reinforce one another nicely in the case of Kosovo.

This is a simultaneous triumph both for memory and for war-death research.

Columbia Journalism Review does at least have an error correction mechanism

In this post and this post I criticized Columbia Journalism Review (CJR) for writing about Iraq war-death numbers without investigating the methodologies behind the production of the numbers and for suppressing the uncertainty that surrounds the numbers.

Yet I need to credit CJR for one thing – they do investigate errors and make some corrections.  My personal experience with both academic journals and journalistic publications suggests that such willingness to correct errors is not typical at all.

The first error is to cite this paper as claiming something that it doesn’t actually say.  Indeed, the CJR claim doesn’t survive first contact with the paper’s abstract.

Some readers may be shocked to learn that even peer-reviewed journal articles often make false claims about other published literature.  In future blog posts I will show you plenty of such false citations.

CJR cites a paper that compares two datasets and estimates their overlap.  One is Iraq Body Count, which is already familiar to readers of this blog.  The second is the “sigacts” database for Iraq, the official data of the US military that Wikileaks brought into the public domain.

So there are two datasets recording deaths in the Iraq war.  The question is – to what extent are they recording the same deaths?  CJR cites the journal article as showing that IBC captures less than 1/4 of the sigacts deaths.  The article actually estimates that IBC captures 46% of the sigacts deaths.

…it is estimated that 2035 (46.3%) of the 4394 deaths reported in the Wikileaks War Logs had been previously reported in IBC.

(In fact, the cited article is terrible and the true percentage is much higher than 46% but I will come back to this issue in a future post.)

Now it gets more interesting.  Here is a peer-reviewed article that falls straight into the same falsehood as CJR does:

the emergence of the Wikileaks “Iraq War Logs” reports in October 2010 [69] prompted the Iraq Body Count team to add to its count, but a recent comparison of recorded incidents between the two databases revealed that the Iraq Body Count captured fewer than one in four of the Iraq War Logs deaths[70].

(Citation 70 is to the same article that CJR cites.)

Thus, 46% is represented as “fewer than one in four” in a peer-reviewed journal article.  Any reader who bothers to follow the citation can spot the falsehood instantly.

Welcome to the real world.

At least we can trust doctors…maybe….a little bit?  The recent Physicians for Social Responsibility report on war deaths avers:

Generally, however, the students found that only every sixth individual death in the Logs had a match in the IBC database

Even worse – 46% is portrayed as 1/6.  (The “Logs” refers to sigacts and, yes, the matching was done by students in a course project.)
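The arithmetic is easy to check against both claims:

```python
matched, total = 2035, 4394            # figures quoted from the comparison paper
print(f"{matched / total:.1%}")        # 46.3%
print(f"{1 / 4:.1%}", f"{1 / 6:.1%}")  # 25.0% and 16.7%, the two misreadings
# 46.3% is nearly double "fewer than one in four" and almost triple "one in six".
```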

I have some sympathy for CJR here.  They are operating inside a hall of mirrors in which multiple, seemingly respectable, sources are dead wrong.  And CJR did correct the error when it was brought to their attention.

In the end there is a happy message here – we are not doomed to drab lives of pinging back other people’s waste products if we follow one simple rule.  Do not cite a source without first reading that source.

The other CJR correction is, perhaps, less interesting.  As we have already discussed here, the CJR article shrinks the IBC methodology to a shadow of its real self, portraying IBC as only tracking a few dozen newspapers and TV broadcasts and only recording deaths of named individuals.  CJR now admits that this was wrong although its correction still doesn’t recognize that sigacts integration is well under way.  So this is not the greatest correction in the world but it’s better than leaving the original alone.

Should we give CJR an award for standing up and admitting to their fallibility unlike many others who never admit error?

Not really.  Doing this would be like commending a husband for going ten years without beating his wife.  Correcting error and not beating your partner are things you’re supposed to do.  Still, it is fair to criticize those who don’t rise to even this low standard.

One-year anniversary of “Operation Protective Edge”

A year has passed since Israel launched “Operation Protective Edge” in the Gaza Strip.  This very short video is an effective presentation of the human and economic losses from this war, including some shocking footage of physical devastation.

The video also gives numbers of people killed in various categories, sourced to the UNHCR.  These figures are unlikely to need substantial modification in the future.  See this document and this document from Every Casualty (for which I’m a board member) to understand where the UNHCR numbers come from.

Ultimately, the most authoritative account of the human losses of Operation Protective Edge will, in my opinion, come from B’tselem which is still working to document every single death, one by one.

B’tselem does a remarkably good job of providing open-source statistics on the Palestinian-Israeli conflict, including exceptional detail on each person killed with few missing pieces of information.  This individualized attention to every person killed memorializes the victims much like the Srebrenica Memorial does.

Columbia Journalism Review lowers journalistic standards while lecturing on said standards: Part II – Suppressing Methodology

In part I of this series I discussed how a Columbia Journalism Review (CJR) article by Pierre Bienaimé on the number of war deaths in Iraq treats extremely noisy estimates as if they are very precise numbers.

In this post I’ll look at the article’s treatment of methodology.

Recognize that the entire article is about numbers.  And when we consume numbers it is vital that we don’t just let them slide over us like the water in a morning shower.  Rather, we need to think carefully about how our numbers are manufactured, i.e., think hard about the methodologies of number production.

The CJR article offers virtually nothing on the methodologies behind the numbers it discusses and most of the threadbare material it does provide is plain wrong.

What does the CJR article say about the Iraq Body Count (IBC) methodology?  IBC and collaborators (among which I count myself the proudest one) have published a string of articles that include detailed descriptions of IBC methodology so a lazy reporter can’t go terribly wrong just quoting from one of these papers.  For instance, Bienaimé could have just dumped this one into his article:

The IBC database was prospectively developed by the authors HD and JAS when an invasion of Iraq appeared imminent in 2003, with the aim of systematically recording and monitoring deaths of individual Iraqi civilians from armed violence [13],[14]. Data sources are mainly professional media reports, including international and Iraqi press in translation. IBC uses key-word searches to scan Internet-published, English-language press and media reports of armed violence in Iraq directly resulting in civilian death. This process uses search engines and subscription-based press and media collation services (e.g., LexisNexis). Reports are scanned from over 200 separate press and media outlets meeting IBC’s criteria: (1) public Web-access; (2) site updated daily; (3) all stories separately archived on the site, with a unique URL; and (4) English as a primary or translated language. Sources include dozens of Arabic-language news media that release violent incident reports in English (e.g., Voices of Iraq and Al Jazeera English), and report translation services such as the BBC Monitoring Unit. The three most frequently used sources are Reuters, Associated Press, and Agence France Presse. These and other international media in Iraq increasingly employ Iraqis trained in-house as correspondents. Media-sourced data are cross-checked with, and supplemented by, data from hospitals, morgues, nongovernmental organizations, and official figures [13].

The cost of such drag-and-drop journalism in this case would be that the above description predates the release by Wikileaks of a vast trove of US military data that IBC has been gradually integrating into its database.  Still, it would have been vastly superior to the false little snippets that CJR provides:

The organization aggregates death reports from several dozen news sources—“in other words, people who were named in a newspaper or television broadcast,” says John Tirman, a political theorist at MIT.

……and one more tidbit that could, at a stretch, be viewed as methodological information:

So-called “passively collected figures” come from tabulating external reports or other existing material—like news articles, local television reports, and statistics from morgues—as opposed to original, newly acquired data. Where civilian deaths in Iraq are concerned, passive collection is the method used by a source frequently cited in mainstream press accounts: the aforementioned Iraq Body Count

The first quote is wrong in its entirety.  It low-balls the number of sources IBC uses and massively constricts their scope.  News wires are much more important sources for IBC than are newspapers or television broadcasts.  Incident reports from US soldiers also overshadow newspapers.  In addition, there are freedom of information requests, morgue data, hospital data, etc.

More importantly, IBC imposes no requirement that only named victims can be recorded in the database.  Whenever names are available these are recorded but, unfortunately, names are mostly not available.  So such a naming requirement would substantially reduce the number of deaths covered by IBC.  But the claim of a naming requirement is simply false.

Saying that the data is “passively collected” is just an empty insult.  It follows a cheap debating tactic of affixing an unattractive word to something you don’t like and pretending that you are just using a technical term.  This would be akin to a prosecuting attorney referring to a defendant as “the assailant” rather than as “the defendant” during court proceedings.  Obviously, such games are not allowed in court and they should not be taken seriously here.

For the record, quite a few people have died collecting the information contained in the IBC data, including reporters and soldiers (entering through the Wikileaks data release).  These victims and many survivors pursued the data very actively indeed.  Ultimately someone from IBC sits in front of a screen and enters the information into the database but if data entry converts data collection into a “passive” undertaking then all survey data also must be classified as “passive”.

OK so much for the CJR (mis)treatment of the IBC methodology.

The other numbers in the article are from sample survey estimates.  What information does Bienaimé provide on the methodologies of these surveys?

For the Lancet studies, epidemiologists at Johns Hopkins used cluster sampling to get a random selection of neighborhoods across the country. They then went knocking on doors and asking questions to determine death rates.

There is no information on how the sampling is done, what questions are asked after knocking on doors, how the estimation and uncertainty intervals are done, etc…..  Saying that the methodology is a “cluster survey” is like saying that the programme for a West End theatre tonight will be “a play”.  Critics panning the play can then be dismissed on the grounds that plays are known to be top-quality entertainment.
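To give a sense of what’s missing, here is a rough sketch (with invented cluster-level rates, not the Lancet teams’ actual procedure or data) of the kind of computation a cluster-sample estimate and its uncertainty interval involve:

```python
import statistics

# Invented illustration: war-period death rates (per 1,000 person-years)
# measured in each of ten randomly selected clusters of households.
cluster_rates = [4.2, 13.1, 6.0, 5.5, 21.4, 3.8, 7.7, 9.9, 5.1, 12.3]

mean_rate = statistics.mean(cluster_rates)
se = statistics.stdev(cluster_rates) / len(cluster_rates) ** 0.5
print(f"estimated rate: {mean_rate:.1f} per 1,000 person-years")
print(f"rough 95% interval: {mean_rate - 1.96 * se:.1f} to {mean_rate + 1.96 * se:.1f}")
# A real write-up must also say how the clusters were drawn, what questions
# were asked at the door, and how person-years of exposure were measured.
```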

Then there is a bizarre passage that attempts to explain the idea behind what are known as “excess deaths” (to which we will return in a later post):

Several studies have sought to estimate the hidden impact of the war by comparing death rates measured before and after the conflict. The idea is that whatever difference exists between observed deaths and those representing an increased death rate since the war began can safely be attributed to war’s effects, and labeled as “excess deaths.”

….hmmmm…..so the surveys compare observed deaths with deaths “representing an increased death rate since the war began”….what?????  So the interesting thing that’s measured is the unobserved deaths?  Does CJR have an English-speaking editor on the payroll?
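For the record, the standard excess-deaths idea is simple; here is a sketch with invented numbers: estimate a baseline death rate from the pre-war period, estimate a death rate for the war period, and scale the difference up to the whole population over the duration of the war.

```python
# Invented numbers illustrating the standard excess-deaths calculation.
pre_war_rate = 5.0 / 1000  # deaths per person per year before the war (baseline)
war_rate = 7.5 / 1000      # deaths per person per year during the war
population = 25_000_000
war_years = 3

excess_deaths = (war_rate - pre_war_rate) * population * war_years
print(f"{excess_deaths:,.0f}")  # 187,500
# Excess deaths lump violent deaths together with indirect ones (disease,
# collapsed health care), which is one reason the concept needs careful handling.
```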

That’s it on methodology.

To summarize, all information on methodology in the CJR article is either wrong, vacuous, or incoherent.