Mismeasuring Deaths in Iraq: Addendum on Confidence Interval Calculations


In my last post I used a combination of bootstrapping and educated guesswork to find confidence intervals for violent deaths in Iraq based on the data from the Roberts et al. survey.  (The need for guesswork arose because the authors have not been forthcoming with their data.)

Right after this went up a reader contacted me and asked whether the bottom of one of these confidence intervals can go below 0.

The short answer is “no” with the bootstrap method.  This technique can only take us down to 0 and no further.

Explanation

With bootstrapping we randomly select from a list of 33 clusters.  Of course, none of these clusters experienced a negative number of violent deaths. So 0 is the smallest possible count we can get for violent deaths in any simulation sample.  (In truth, the possibility of pulling 33 0’s is more theoretical than real.  This didn’t happen in any of my 1,000 draws of 33.)
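To make this concrete, here is a minimal sketch in Python.  It assumes the cluster-level allocation of violent deaths I use below (18 clusters with 0, 7 with 1, 7 with 2 and one with 52); the resampled totals can in principle reach 0 but can never go below it.

    import random

    random.seed(1)  # any seed; results vary only slightly

    # Assumed allocation of violent deaths across the 33 clusters
    clusters = [0] * 18 + [1] * 7 + [2] * 7 + [52]

    # Total violent deaths in each of 1,000 bootstrap resamples of 33 clusters
    totals = [sum(random.choices(clusters, k=33)) for _ in range(1000)]

    print(min(totals))  # never negative; 0 only if a resample is all zeros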

Nevertheless, it turns out that if we employ the most common methods for calculating confidence intervals (not bootstrapping) then the bottom of the interval does dip below 0 when the dubious Fallujah cluster is included.

Here’s a step-by-step walk-through of the traditional method applied to the Roberts et al. data; a short code sketch follows the steps.  (I will assume that violent deaths are allocated across the 33 clusters as 18 0’s, 7 1’s, 7 2’s and 1 52.)

  1. Compute the mean number of violent deaths per cluster.  This is 2.2.  An indication that something is screwy here is the fact that the mean is bigger than the number of violent deaths in 32 out of the 33 clusters.  At the same time the mean is way below the number of violent deaths in the Fallujah cluster (52).  Note that without the Fallujah cluster the mean becomes 0.7, i.e., eliminating Fallujah cuts the mean by more than a factor of 3.
  2. Compute the sample standard deviation which is a measure of how strongly the number of violent deaths varies by cluster.  This is 9.0.  Note that if we eliminate the Fallujah cluster then the sample standard deviation plummets by more than a factor of 10, all the way down to 0.8.  This is just a quantitative expression of the obvious fact that the data are highly variable with Fallujah in there.  Note further that the big outlier observation affects the standard deviation more than it affects the mean.
  3. Adjust for sample size.  We do this by dividing the sample standard deviation by the square root of the sample size.  This gives us 1.6.  Here the idea is that you can tame the variation in the data by taking a large sample.  The larger the sample size the more you tame the data.  However, as we shall see, the Fallujah cluster makes it impossible to really tame the data with a sample of only 33 clusters.
  4. Unfortunately, the last step is mysterious unless you’ve put a fair amount of effort into studying statistics.  (This, alone, is a great reason to prefer bootstrapping which is very intuitive.)  Our 95% confidence interval for the mean number of violent deaths per cluster is, approximately, the average plus or minus 2 times 1.6, i.e., -1.0 to 5.4.  There’s the negative lower bound!
  5. We can translate from violent deaths per cluster to estimated violent deaths by multiplying by 33 and again by 3,000.  We end up with -100,000 to 530,000.  (I’ve been rounding at each step.  If, instead, I don’t round until the very end I get -90,000 to 530,000… this doesn’t really matter.)  Note that without Fallujah we get a confidence interval of 30,000 to 90,000 which is about what we got with bootstrapping.
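For anyone who wants to check the arithmetic, here is a minimal Python sketch of the five steps; under the allocation assumed above it reproduces the rounded figures.

    import math

    # Assumed allocation: 18 clusters with 0 violent deaths, 7 with 1,
    # 7 with 2 and the Fallujah cluster with 52
    clusters = [0] * 18 + [1] * 7 + [2] * 7 + [52]
    n = len(clusters)  # 33

    mean = sum(clusters) / n                                          # step 1: ~2.2
    sd = math.sqrt(sum((x - mean) ** 2 for x in clusters) / (n - 1))  # step 2: ~9.0
    se = sd / math.sqrt(n)                                            # step 3: ~1.6

    lo, hi = mean - 2 * se, mean + 2 * se                             # step 4: ~-1.0 to ~5.4

    # step 5: translate to estimated violent deaths nationwide
    print(round(lo * n * 3000), round(hi * n * 3000))                 # ~-90,000 to ~530,000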

Have we learned anything here other than that I respond to reader questions?

I don’t think we’ve learned much, if anything, about violent deaths in Iraq.  We already knew that the Roberts et al. data, especially the Fallujah observation, are questionable and maybe the above calculation reinforces this view a little bit.

But, mostly, we learn something about the standard method for calculating confidence intervals; when the data are wild this method can give incredible answers.  Of course, a negative number of violent deaths is not credible.

There is an intuitive reason why the standard method fails with the Roberts et al. data; it forces a symmetric estimate onto highly asymmetric data.  Remember, we get 2.2 plus or minus 3.2 average violent deaths per cluster.  The plus or minus means that the confidence interval is symmetric.  The Fallujah observation forces a wide confidence interval which has to go just as wide on the down side as it is on the up side.  In some sense the method is saying that if it’s possible to find a cluster with 52 violent deaths then it also must be possible to find a cluster with around -52 violent deaths.  But, of course, no area of Iraq experienced -52 violent deaths.  So you wind up with garbage.

Part of the story is also the small sample size.  With twice as many clusters, but the same sort of data, the standard error would drop to about 1.1 (9.0 divided by the square root of 66), so the lower limit would only go down to about 2.2 minus 2 times 1.1, i.e., about 0.

It’s tempting to just say “garbage in, garbage out” and, up to a point, this is accurate.   But the bigger problem is that the usual method for calculating confidence intervals is not appropriate in this case.


Mismeasuring War Deaths in Iraq: Confidence Interval Calculations

We return again to the Roberts et al. paper.

In part 5 of my postings on the Chilcot Report I promised to discuss the calculations of confidence intervals underlying these claims:

One standard calculation method (bootstrapping) leads to a central estimate of 210,000 violent deaths with a 95% confidence interval of around 40,000 to 600,000.  However, if you remove the Fallujah cluster the central estimate plummets to 60,000 with a 95% confidence interval of 40,000 to 80,000.  (I’ll give details on these calculations in a follow-up post.)

I have to start with some caveats.

Caveat 1: No household data – failure to account for this makes confidence intervals too narrow

As we know, the authors of the paper have not released a proper dataset.  To do this right I would need to have violent deaths by household but the authors are holding this information back.  Thus, I have to operate at the cluster level.  This shortcut suppresses household-level variation which, in turn, constricts the widths of the confidence intervals I calculate.  It’s possible to get a handle on the sizes of these effects but I won’t go there in this blog post.

Caveat 2: Violent deaths are not broken down by cluster – confidence intervals depend on how I resolve this ambiguity

Roberts et al. don’t provide us with all the information we need to proceed optimally at the cluster level either since they don’t tell us the number of violent deaths in each of their 33 clusters.  All they say in the paper (unless I’ve missed something) is that the Fallujah cluster had 52 violent deaths and the other 32 clusters combined had 21 violent deaths, spread across 14 of those clusters (so the other 18 had none).  I believe this is the best you can do, although maybe a clever reader can mine the partial striptease to extract a few more scraps of information on how the 21 non-Fallujah violent deaths are allocated across clusters.

This ambiguity leaves many possibilities.  Maybe 13 clusters had one violent death and one cluster had the remaining eight.  Or maybe ten clusters had one death, three clusters had two deaths and the last cluster had five.  Etc.

To keep things simple I’ll consider just four scenarios.  The first is that there are 18 clusters with 0 deaths, 7 clusters with 1 death, 7 clusters with 2 deaths and the Fallujah cluster with 52 deaths.  The second is that there are 18 clusters with 0 deaths, 13 clusters with 1 death, 1 cluster with 8 deaths and the Fallujah cluster with 52 deaths.  The third and fourth scenarios are the same as the first and second except that the latter toss out the Fallujah cluster.

Caveat 3: There is a first stage to the sampling procedure that tosses out 6 governorates – failure to account for this makes the confidence intervals too narrow.  (I already alluded to this issue in this post.)

I quote from the paper:

During September, 2004, many roads were not under the control of the Government of Iraq or coalition forces. Local police checkpoints were perceived by team members as target identification screens for rebel groups.  To lessen risk to investigators, we sought to minimise travel distance and the number of Governorates to visit, while still sampling from all regions of the country. We did this by clumping pairs of Governorates. Pairs were adjacent Governorates that the Iraqi study team members believed to have had similar levels of violence and economic status during the preceding 3 years.

Roberts et al. randomly selected one governorate from each pair, visited only the selected governorates and ignored the non-selected ones.  So, for example, Karbala and Najaf were a pair.  In the event Karbala was selected and the field teams never visited Najaf.  In this way Dehuk, Arbil, Tamin, Najaf, Qadisiyah and Basrah were all eliminated.

This is not OK.

The problem is that the random selection of 6 governorates out of 12 introduces variation into the measurement system that should be, but isn’t, built into the confidence intervals calculated by Roberts et al.  This problem makes all the confidence intervals in the paper too narrow.

It’s worth understanding this point well so I offer an example.

Suppose I want to know the average height of students in a primary school consisting of grades 1 through 8.  I get my estimate by taking a random sample of 30 students and averaging their heights.  If I repeat the exercise by taking another sample of 30 I’ll get a different estimate of average student height.  Any confidence interval for this sampling procedure will be based on modelling how these 30-student averages vary across different random samples.

Now suppose that I decide to save effort by streamlining my sampling procedure.  Rather than taking a simple random sample of 30 students from the whole school I first choose a grade at random and then randomly select 30 students from this grade.  This is an attractive procedure because now I don’t have to traipse around the whole school measuring only one or two students from each class.  Now I may be able to draw my sample from just one or two classrooms. This procedure is even unbiased, i.e., I get the right answer on average.

But the streamlined procedure produces much more variation than the original one does.  If, at the first stage, I happen to select the 8th grade then my estimate for the school’s average height will be much higher than the actual average height.  If, on the other hand, I select the 1st grade then my estimate will be much lower than the actual average. These two outcomes balance each other out (the unbiasedness property).  But the variation in the estimates across repeated samples will be much higher under the streamlined procedure than it will be under the original one.  A proper confidence interval for the streamlined procedure will need to be wider than a proper confidence interval for the original procedure will be.
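A quick simulation makes the variance inflation vivid.  The numbers here are made up for illustration (8 grades of 50 students each, with average height rising by grade), but the pattern holds for any school: both procedures are centred on the true average, while the streamlined one swings far more widely across repeated samples.

    import random

    random.seed(2)

    # Hypothetical school: 8 grades of 50 students; mean height rises from
    # 120 cm in grade 1 to 162 cm in grade 8, with a 6 cm spread within a grade
    school = [[random.gauss(120 + 6 * g, 6) for _ in range(50)] for g in range(8)]
    everyone = [h for grade in school for h in grade]

    def original():  # simple random sample of 30 from the whole school
        return sum(random.sample(everyone, 30)) / 30

    def streamlined():  # pick one grade at random, then 30 students from it
        return sum(random.sample(random.choice(school), 30)) / 30

    def spread(estimates):  # standard deviation across repeated samples
        m = sum(estimates) / len(estimates)
        return (sum((e - m) ** 2 for e in estimates) / len(estimates)) ** 0.5

    print(spread([original() for _ in range(2000)]))     # small: a few cm
    print(spread([streamlined() for _ in range(2000)]))  # several times larger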

Analogously, the confidence intervals of Roberts et al. need to account for their first-stage randomization over governorates.  Since they don’t do this all their confidence intervals are too narrow.

Unfortunately, this problem is thornier than it may appear to be at first glance.  The only way to correct the error is to incorporate information about what would have happened in the excluded governorates if they had actually been selected.  But since these governorates were not selected the survey itself supplies no useful information to fill this gap.  We could potentially address this issue by importing information from outside the system but I won’t do this today.  So I, like Roberts et al., will just ignore this problem which means that my confidence intervals, like theirs, will be too narrow.

OK, enough with the caveats.  I just need to make one more observation and we’re ready to roll.

Buried deep within the paper there is an assumption that the 33 clusters are “exchangeable”.  This technical term is actually crucial.  In essence, it means that each cluster can potentially represent any area of Iraq.  So if, for example, there was a cluster in Missan with 2 violent deaths then, on resampling, we could just as easily find a cluster in Diala just like it, in particular one with 2 violent deaths.  Of course, this exchangeability assumption implies that there is nothing special about the fact that the cluster with 52 violent deaths turned out to be in Fallujah.  Exchangeability implies that if we sample again we might well find a cluster with 52 deaths in Baghdad or Sulaymaniya.  Exchangeability seems pretty far-fetched when we think in terms of the Fallujah cluster but, if we leave this aside, the assumption is strong but, perhaps, not out of line with many assumptions researchers tend to make in statistical work.

We can now implement an easy computer procedure to calculate confidence intervals (a short code sketch follows the steps):

  1. Select 1,000 samples, each one containing 33 clusters.  These samples of 33 clusters are chosen at random (with replacement) from the list of 33 clusters given above (18 0’s, 7 1’s, 7 2’s and 1 52).  Thus, an individual sample can turn out to have 33 0’s or 33 52’s although both of these outcomes are very unlikely (particularly the second one).
  2. Estimate the number of violent deaths for each of the 1,000 samples.  As I noted in a previous post we can do this in a rough and ready way by multiplying the total number of deaths in the sample by 3,000.
  3. Order these 1,000 estimates from smallest to largest.
  4. The lower bound of the 95% confidence interval is the estimate in position 25 on the list.  The upper bound is the estimate in position 975.  The central estimate is the estimate at position 500.
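Here is a minimal Python sketch of this procedure; the seed and draw count are arbitrary, so the rounded numbers wobble a little from run to run.

    import random

    random.seed(3)

    def bootstrap_ci(clusters, draws=1000):
        """Percentile bootstrap for estimated violent deaths (steps 1-4)."""
        n = len(clusters)
        estimates = sorted(                            # step 3: order them
            sum(random.choices(clusters, k=n)) * 3000  # step 2: scale up
            for _ in range(draws)                      # step 1: 1,000 resamples
        )
        # step 4: positions 25, 500 and 975 of the ordered list
        return estimates[24], estimates[499], estimates[974]

    # With Fallujah: 18 zeros, 7 ones, 7 twos and one 52
    with_fallujah = [0] * 18 + [1] * 7 + [2] * 7 + [52]
    print(bootstrap_ci(with_fallujah))       # roughly 40,000 / 220,000 / 550,000

    # Without the Fallujah cluster
    print(bootstrap_ci(with_fallujah[:-1]))  # roughly 40,000 / 60,000 / 90,000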

Following these procedures I get a confidence interval of 40,000 to 550,000 with a central estimate of 220,000.  (I’ve rounded all numbers to the nearest 10,000 as it seems ridiculous to have more precision than that.)  Notice that these numbers are slightly different from the ones at the top of the post because I took 1,000 samples this time and only 100 last time.  So these numbers supersede the earlier ones.

We can do the same thing without the Fallujah cluster.  Now we take samples of 32 from a list with 18 0’s, 7 1’s and 7 2’s.  This time I get a central estimate of 60,000 violent deaths with a 95% confidence interval of 40,000 to 90,000.

Next I briefly address caveat 2 above by reallocating the 21 violent deaths that are spread over 14 clusters in an indeterminate way.  Suppose now that we have 13 clusters with one violent death and 1 cluster with 8 violent deaths.  Now the estimate that includes the Fallujah cluster becomes 210,000 with a confidence interval of 30,000 to 550,000.  Without Fallujah I get an estimate of 60,000 with a range of 30,000 to 120,000.
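With the sketch above, this reallocation is just a change of input list:

    # Alternative allocation: 18 zeros, 13 ones, one 8, plus Fallujah's 52
    alternative = [0] * 18 + [1] * 13 + [8] + [52]
    print(bootstrap_ci(alternative))       # roughly 30,000 / 210,000 / 550,000
    print(bootstrap_ci(alternative[:-1]))  # roughly 30,000 / 60,000 / 120,000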

Caveats 1 and 3 mean that these intervals should be stretched further by an unknown amount.

Here are some general conclusions.

  1. The central estimate for the number of violent deaths depends hugely on whether Fallujah is in or out.  This is no surprise.
  2. The bottoms of the confidence intervals do not depend very much on whether Fallujah is in or out.  This may be surprising at first glance but not upon reflection.  The sampling simulations that include Fallujah have just a 1/33 chance of picking Fallujah on each draw.  Many of these simulations will not choose Fallujah in any of their 33 tries.  These will be the low-end estimates.  So at the low end it is almost as if Fallujah never happened.  These sampling outcomes correspond with reality.  In three subsequent surveys of Iraq nothing like the Fallujah cluster ever appeared again.  It really seems to have been an anomaly.
  3. The high-end estimates are massively higher when Fallujah is included than they are when it isn’t.  Again, this makes sense since some of the simulations will pick the Fallujah cluster two or three times.


Targeting Terrorists and Near Certainty

The American Civil Liberties Union had a successful FOIA request that yielded a document explaining how the Obama Administration approves actions to kill suspected terrorists.  I learned about the release from this good article.

What follows are just my personal angles, not a comprehensive treatment of the document.

First, I don’t know why these policies haven’t been in the public domain from the get-go.  And given that they were kept secret in the first place I don’t know why the Administration fought this in court rather than just coughing up the document when the FOIA request arrived.  There is some blacked-out material in the document but nothing in the actual release needs to be secret.

Second, this document is commonly described as the “drone playbook” and this is probably a reasonable way to think of it but, so far as I can tell, the policies apply generally to anti-terrorist actions, not only to drone strikes.

Third, I’m really struck by the constant use of the term “near certainty” which appears eight times in eighteen pages.  For example, two prerequisites for green-lighting an attack are near certainty that the target is there and “near certainty that non-combatants will not be injured or killed.”

Despite the legalistic nature of the document I don’t see a definition of “near certainty.”  To me it would imply that air attacks should rarely fail to hit their intended targets and civilian casualties should also be rare – maybe in one strike out of a hundred there could be civilian casualties or the target could turn out to be somewhere else.

To be honest, I’m not sure I’d even describe ninety-nine in a hundred as “near certainty”.  Before crossing the street I expect near certainty that I won’t get struck by a car.  If I had only 99% certainty of crossing safely then I’d get hurt within a matter of weeks (at, say, four crossings a day, a 1% chance of disaster per crossing implies an accident about every 25 days).

I am skeptical that there are many, if any, air strikes that are conducted under conditions of near certainty.  I can seriously entertain the possibility that US planners of air attacks are surprised in cases when the target is not there or when civilians are harmed.  But are these planners dumbfounded every time a drone strike goes awry as would be implied by the “near certainty” standard?  I doubt it.


Indeed, Chris Woods of Airwars just wrote this in the New York Times:

Official White House data on counterterrorism actions in Pakistan, Yemen and elsewhere show civilians dying on average once every seven airstrikes.

I believe that Chris’ claim comes from here.  This official document says there have been 473 air strikes against terrorists and between 64 and 116 civilian deaths in these strikes.  This averages out at one death per 7 airstrikes (473/64 is about 7.4) or one death per 4 airstrikes (473/116 is about 4.1), depending on whether you take the minimum or the maximum number of officially acknowledged civilian deaths (and Chris says there have been many unacknowledged civilian deaths as well).  Plus there must have been many civilian injuries.

This really doesn’t sound to me like near certainty that there will be no civilian casualties in antiterrorist air strikes.

Fourth, the document giving the official numbers on antiterrorist air strikes seems to have another inconsistency with the document spelling out the policies governing these strikes.  The policy document brings to mind a room filled with lawyers and policy experts deliberating on the individuals “nominated” to be targeted.  (Are they really terrorists?  Is it possible to capture them rather than killing them?  Is waiting a viable option?)  But the numbers document says that 2,372 to 2,581 combatants were killed in the 473 strikes.  Did the experts really know who all these people were and deliberate carefully on each one of them?  I doubt it.

I could only speculate on who is fooling whom here so I won’t do it.  But I don’t feel like the Obama Administration has really been following its stated guidelines on actions against terrorist targets.

Does the Overreaction to Terrorism Justify the Overreaction to Terrorism?

Vox has yet another great article, this one giving good insights into how the Obama Administration thinks about foreign policy.

I won’t rehash it here.  Instead, I just want to zero in on one point that I found particularly interesting.

Zack Beauchamp interviews Susan Rice, President Obama’s National Security Adviser.  He notes, correctly, that the US is spending many tens of billions of dollars per year fighting terrorism despite the fact that hardly any Americans get killed by terrorists.  (This recent article lays out the facts about spending and the threat very clearly.)  Beauchamp continues:

I asked Rice how she could justify this given that roughly as many Americans are killed per year by their own furniture as they are by terrorists. (This is true; it falls on them by accident.)

Rice responds:

“You are correct that the threat to Americans from terrorism is less than the threat to die in car accidents, to die of the flu, or any number of things we could list.”

Hmmm….I find myself unable to pass on the total ineptness of Rice’s reply to Beauchamp’s nice comparison.  It is as if you remark that the US Women’s Basketball team is totally dominant in the world, blowing away the Olympic-level competition by 35 points on average.  I reply “Yes you’re right.  In fact, if they played the St. Mary’s seventh grade team they would win by a lot.”

More than 30,000 people die per year in car accidents and the US is averaging around seven terrorist deaths per year.

Anyway, Rice rambles on about a few things that don’t really address the question and then lands, interestingly, on this:

“The threat, I would argue, has got to be measured not only in the number of lives but in the risk that it poses to our economy, our social cohesion, our international presence, and our leadership,” she says. “It’s more than a question of how many lives are taken.”

Beauchamp then writes:

What Rice didn’t say is that these consequences are all the result of irrational overreaction to terrorism — which is the conclusion implied by her own analysis.

If, objectively speaking, terrorism doesn’t kill very many Americans, attacks really shouldn’t have a major effect on the US economy or people’s attitudes toward their fellow Muslim citizens. And yet people panic out of proportion to the body count, prompting market losses, expensive security policies, and a surge in Islamophobia.

This is interesting.  The idea is that maybe massive spending to prevent terrorism is justified because even small terrorist incidents cause people to go bonkers and engage in all sorts of destructive behaviours.  It is best to pay through the nose to prevent this self-immolation.

I should hasten to add that Rice doesn’t actually say this and Beauchamp doesn’t endorse this position either.  He just says that Rice’s statement seems to imply it.  I’m not endorsing this idea either although I don’t think it can be summarily dismissed.

My Economics of Warfare course outline contains a paper by Bruno Frey that makes a related point.  Frey argues that the overreaction to terrorism strengthens the incentives for terrorists to make attacks in the first place.

Think about it.  If I respond to insults from my enemies by shooting myself in the foot then I will probably be fielding a lot of insults from my enemies.

The next stage in this chain is to say that as long as I expect to keep behaving in this self-destructive way then I had better do what I can to prevent the insults from coming my way in the first place.

Of course, the analogy breaks down because Rice isn’t saying that the Obama Administration plans to overreact to terrorist incidents.  Rather, she says that the broader society does this – which is true.  As long as overreaction continues to be the rule then it’s at least plausible that the overreaction to terrorism justifies the overreaction to terrorism.


Mismeasuring War Deaths in Iraq: The Partial Striptease

I now continue the discussion of the Roberts et al. paper that I started in my series on the Chilcot Report.  This is a tangent from Chilcot so I’ll hold this post and its follow-ups outside of that series.

Les Roberts never released a proper data set for his survey.  Worse, the authors are sketchy on important details in the paper, leaving us to guess on some key issues.  For example, in his report on Roberts et al. to the UK government, Bill Kirkup wrote:

The authors provide a reasonable amount of detail on their figures in most of the paper.  They do, however, become noticeably reticent when it comes to the breakdown of deaths into violent and non-violent, and the breakdown of violent deaths into those attributed to the coalition and those due to terrorism or criminal acts, particularly taking into account the ‘Fallujah problem’…

Roberts et al. claim that “air strikes from coalition forces accounted for most violent deaths” but Kirkup points out that without the dubious Fallujah cluster it’s possible that the coalition accounted for less than half of the survey’s violent deaths.

Kirkup’s suspicion turns out to be correct.

However, you need to look at this email from Les Roberts to a blog to settle the issue.  It turns out that coalition air strikes account for 6 of the 21 violent deaths outside Fallujah, with 4 further deaths attributed to the coalition using other weapons.  That is 10 of 21 in all, i.e., less than half.

My primary point here is about data openness rather than about coalition air strikes.  Roberts et al. should just show their data rather than dribbling it out in dribs and drabs into the blogosphere.

Roberts gives another little top-up here.  (I give that link only to document my source.  I recommend against ploughing through this Gish Gallop by Les Roberts.)  Buried deep inside a lot of nonsense Roberts writes:

The Lancet estimate [i.e. Roberts et al.], for example, assumes that no violent deaths have occurred in Anbar Province; that it is fair to subtract out the pre-invasion violence rate; and that the 5 deaths in our data induced by a US military vehicles are not “violent deaths.”

Hmmm…..5 deaths caused by US military vehicles.

Recall that each death in the sample yields around 3,000 estimated deaths. This translates into 15,000 estimated deaths caused by US military vehicles – nearly 30 per day for a year and a half.  There have, unfortunately, been a number of Iraqis killed by US military vehicles.  Iraq Body Count (IBC) has 110 such deaths in its database during the period covered by the Roberts et al. survey.  I’m sure that IBC hasn’t captured all deaths in vehicle accidents but, nevertheless, the 15,000 figure looks pretty crazy.

Again I come back to my main point – please just give us a proper dataset rather than a partial striptease.  Meanwhile, I can’t help thinking Roberts et al. are holding back on the data because it contains more embarrassments that we don’t yet know about.

PS – After providing the above quote I feel obligated to debunk it further.

  1. Roberts writes that his estimate omits deaths in Anbar Province (which contains Fallujah).  But many claims in his paper are only true if you include Anbar (Fallujah).  Indeed, this very blog post opened with one such claim.  We see that Fallujah is in for the purpose of saying that most violent deaths were caused by coalition airstrikes but Fallujah is out when it’s time to talk about how conservative the estimate is because it omits Fallujah.  Call this the “Fallujah Shell Game”.  (See the comments of Josh Dougherty here.)
  2. Roberts suggests that he bent over backwards to be fair by omitting pre-invasion violent deaths from his estimate.  But, first of all, there was only one such death so it hardly makes a difference whether this one is in or out.  Second, it’s hard to understand what the case would be for blaming a pre-invasion death on the invasion.


New B’Tselem Report on Operation Protective Edge…and a Critic who Fires Blanks at B’Tselem

B’Tselem is one of the finest casualty recording organisations in the world so the recent publication of its report on Operation Protective Edge (July 8 – August 26, 2014) is an important moment for the field.  The report is simultaneously very good and very brief so I urge everyone to have a look.

There is a well-organised interactive page that lists each person killed (Palestinians and Israelis) by name, age and gender.  This page also provides the date, location and circumstance of each death.

A special feature of the report is that victims are classified according to whether or not they participated in hostilities (with this category sometimes left empty).  To make these calls B’Tselem looks for evidence that a victim either belonged to a combat organisation or was fighting when he/she was killed.  (See the methodology page for details).

B’Tselem clearly puts considerable effort into making and explaining their useful “participation in hostilities” classifications.  It is, therefore, frustrating to see Ben-Dror Yemini dismiss all this hard work and declare that of the 1,394 people killed while not participating in hostilities (according to B’Tselem) “the vast majority of those killed are fighters.”

How does Yemini back up his strong claim?

To see just how farfetched the NGO’s claims are, one need only look at the very data it provides, including the gender and age of each fatality. Let’s leave for a moment the group of 808 fatalities that even B’Tselem graciously admits were terrorists. We’re left with 1,394. If they were indeed all innocents, killed as a result of indiscriminate or random fire, the age distribution would be identical, or at the very least close, to the age distribution in the Gaza Strip.

But lo and behold, it turns out that the real statistics are quite different. Among those defined as innocents between the ages of 18-32, 275 are men and 127 are women. Among all fatalities aged 18-59, 1,296 are men and 247 are women. Five times(!) more men than women. Such high numbers of fighting-aged men, compared to such small numbers of women from the same age group do not point toward randomness. Such a discrepancy could not have occurred if indiscriminate fire towards population centers had actually taken place.

Oh dear…..we’ve been here before.  I’m a bit embarrassed to even take this seriously but such misconceptions appear to be common so they can’t be overlooked.

From this 9/11 page we learn that:

The victims were overwhelmingly male (about 75 percent), young (many under 40, most under 50),…

Aha – on 9/11 Al-Qaeda mainly attacked fighters!  There can be no benign reason why the Twin Towers were so packed full of young males.

Indeed, in this paper we found that about 80% of the people killed by suicide bombs in Iraq were adult males (at least out of the ones for which we could find victim demographics).  It appears that Iraqi open-air markets are also packed full of legitimate targets.

OK, it’s obvious why Twin-Tower demographics didn’t match those of America as a whole but what about open-air markets in Iraq?   The answer is almost surely that women and children are generally kept away from such places since they are potential targets for suicide bombers and other attacks.

Let me be crystal clear so as to avoid misinterpretations.  I do not think that the Twin Towers were filled with fighters.  I do not think that open-air markets in Iraq are filled with fighters.  And I do not think that most adult males in Gaza are fighters.  Moreover, when B’Tselem investigates and finds that a particular victim did not participate in hostilities I will not overturn this judgement just because that person was an adult male.

I’m hoping that people will pay attention to this post and stop making such wrongheaded claims about adult males as a whole…at least until I reach my 60th birthday.