Childmortality.org Recognizes Reality in the DRC

Loyal readers will recall my bold prediction that childmortality.org would bow to reality in 2015 and explicitly recognize that child mortality rates have steadily decreased in the Democratic Republic of Congo (DRC) during the war (which began in 1998).  (Well, actually an insider told me this would happen so perhaps this prediction wasn’t so bold.)

The 2014 report pencilled in a flat area (dark blue curve) that, I suggested, might have been a friendly gesture towards the International Rescue Committee (IRC), the NGO that went out on a limb to claim epic levels of war-caused excess deaths in the DRC, with children presented as the primary victims.

[Figure: the 2014 childmortality.org graph for the DRC, with the flat dark blue segment]

The flat area has, indeed, disappeared in the 2015 report as predicted:

[Figure: the 2015 childmortality.org graph for the DRC]

Child mortality rates in the DRC have long been high for the DRC’s region and remain so.  But now the data-driven dark blue line indicates that, if anything, the decline in child mortality rates accelerated during the war.

Don’t Mention the Methodology

Two recent Andrew Gelman posts connect together for me.

The first is on the Open Science Collaboration paper that attempts to replicate 100 important psychology experiments and succeeds for less than half of them.

The second is on “Being polite vs. saying what you think.”

What is the connection between the two posts?

Well, the Open Science team replicates the originals in around 39 of the 100 cases (depending on how you score).  However, a little-known fact is that in 11 of these 39 “successes” the researchers behind the original experiments refused to disclose their methodology.  Open Science couldn’t attempt to replicate these experiments so they scored them as successful replications.

The Open Science scoring methodology makes sense because:

  1.  Naming a scientist for refusing to disclose his/her methodology constitutes an ad hominem attack on that scientist, in effect saying: “person X is a bad scientist because he keeps his methodology secret.”  Battles within the field of psychology are bruising enough already without getting so personal and Open Science wants to focus on methodology, not personalities.
  2. Open Science has produced no evidence that would invalidate the 11 studies since it didn’t try to replicate them.  Therefore, we should consider these 11 experiments to be valid.

Really?  No, not really.  I made up the “fact” about experiments being scored as replicated even though their authors refused to disclose their methodologies.

Hopefully, we all agree that the above (fake) procedures would be preposterous.  If your primary concern is methodology then you can’t hand out free passes to scientists who keep theirs secret.

That said, there is a certain perverse logic at work here.  If methodological openness is central to the scientific process then there will never be a nice way to say that someone is hiding their methodology.  But if you’re too polite to criticize methodological silence then you’ve gone well beyond what Gelman already viewed as problematic politeness.

OK, let’s return now to the notorious Burnham et al. survey that estimated quite a lot more violent deaths in Iraq than several other credible sources of information did.  I don’t want to survey all the evidence in one little blog post.  Here I just want to treat the issue of methodological openness.

The original Burnham et al. paper was sparse on details about how the survey was conducted.  The authors later deepened the uncertainty over methodology by issuing contradictory statements about how they did their sampling.

The American Association for Public Opinion Research (AAPOR) supports minimum standards for methodological disclosure in survey research.  The required information covers only such basic things as the exact wording of the questions asked and the details of how the sample is drawn.

The original Burnham et al. publication fell short of the AAPOR standards so AAPOR investigated.  Specifically, an AAPOR committee asked Gilbert Burnham multiple times to disclose the basics of his survey methodology.  Burnham refused so AAPOR issued a formal censure.  At the time, AAPOR president Richard Kulka said:

“When researchers draw important conclusions and make public statements and arguments based on survey research data, then subsequently refuse to answer even basic questions about how their research was conducted, this violates the fundamental standards of science, seriously undermines open public debate on critical issues, and undermines the credibility of all survey and public opinion research.”

It’s fair to say that not providing your methodology removes your work from the scientific universe.  However, many researchers and journalists seem to believe that one shouldn’t be so crass as to mention this fact.

I got a sense of the way the wind was blowing when I was working on this paper and got comments back from an interested party:

Overall, I think it is good.  However, if you will permit me, I would recommend that you make it less personal and remove any references to Burnham getting censored, etc…. I don’t think it is relevant to the chapter and I believe it detracts from the more important points that you make.

I replied:

The AAPOR censure of Burnham is pretty central to the whole line of argument.  It is one thing to say that a scientific study has been subject to a lot of unanswered criticism as is the case here.  But the AAPOR censure effectively removes the Burnham et al. study from the scientific universe.  If you do a survey and refuse to reveal basics like your sampling methodology and your questionnaire then you are not doing science anymore.

My correspondent replied:

I still believe that the points you make can be made just as forcefully but less personally.  Staying away from naming people but rather dealing with the key issues in an objective and constructive manner.  I don’t wish to get into the details, but the censures by the AAPOR and Hopkins, in my mind, don’t invalidate the surveys themselves; they rather deal with actions by an individual(s).  But many of the points you et al. are making re: past mortality surveys are important and need further constructive discussion.  I believe that some people on both sides have become too personal in their rebuttals – and this should be avoided.

Perhaps my joke about the Open Science Collaboration doesn’t seem like a joke any more.  The message seems to be that one should focus on methodology but it is out of bounds to say that someone won’t disclose his methodology.

By now quite a number of articles have been written that treat the Burnham et al. survey as a serious contribution to science and are too polite to mention Burnham’s refusal to disclose his methodology.  For examples, see this paper, this paper (which cites approvingly both the AAPOR standards and the Burnham et al. paper but does not connect the dots between the two), and this article (but you probably can’t access it unless you have a university library).  There are many such journalistic articles but I won’t try to list them here.

Note that non-disclosure is not some unfortunate but unfixable accident.  A methodology can be disclosed at any time.  It’s not like you’re up there on a quiz show and the $50,000 question comes up:

How did you conduct your survey?

Inexplicably you hem and haw and reply:

I’m not telling you.

You lose the $50,000 and are kicking yourself for years.  Why didn’t you just answer the question rather than panicking?

Ladies and Gentlemen.  We are living through a very encouraging groundswell of support for replicability and methodological openness in science.  (Another interesting example is the “worm wars” controversy, during which the researchers on the original paper have been 100% forthcoming about their methodology and data.)  If it was considered poor form to nag scientists about their methodologies back in 2009, such unscientific politeness is now a quaint thing of the past.

It’s time to wake up and talk straight.

A Must Read: Steven Pinker Marks to Market

Steven Pinker’s masterwork, The Better Angels of Our Nature, found very long-term declines (going back to prehistory) in a broad array of violence indicators.  The book is mercifully long although it doesn’t reach the lofty standards of the John Cage Organ Project.  Those of us craving further fixes can briefly binge on the annual updates of the graphs.  This year’s installment is here and here.

Doing updates like these is a really healthy practice.  Marking to market should be almost a legal requirement for people who make predictions.  Better Angels is more about explaining the past than predicting the future but much of its allure comes from the clear sense it projects that declining violence is likely to carry forward into the foreseeable future.

The latest update suggests that, indeed, the decline has continued except that battle deaths have recently turned upwards.

I have just a few comments.

First, please don’t label the battle deaths graph as “war deaths”.  The term “war deaths” usually includes things such as the killing of unarmed civilians and non-violent excess deaths that are not covered by the battle deaths concept (see here and here).

Second, what about refugees?  I know this comment seems almost optimized to confirm Pinker’s observation about the undeserved dominance of the “availability heuristic”:

But headlines are a poor guide to history. People’s sense of danger is warped by the availability of memorable examples – which is why we are more afraid of getting eaten by a shark than falling down the stairs, though the latter is likelier to kill us.

Fair enough, and I’m not an expert on migration flows.  But it does seem that something serious is afoot that we need to think through.

Finally, one massive event could make things look a lot different.  (This article of mine can provide a gateway into this discussion.)

Looking forward to next year.

Rushing from Syria to Sudan

I’ve opened several recent posts with something like “Here’s a nice article but it comes up short in a particular way that I’d like to discuss.”  Please take such formulations at face value and don’t imagine that I’m stretching to find something polite to say.  I’ll tell you directly when I think I see a terrible article.

This article is terrible.

All I can think of to soften the blow is that I like Rob Muggah and am sure he’s trying to be helpful.  But he should have done his homework before tossing this one out there.

The very first sentence explodes instantly at the click of a mouse.

At least 240,000 Syrians have died violently since the civil war flared up four years ago, according to the Violations Documentation Center in Syria.

Click on the link and you see 122,683 deaths, not “at least 240,000.”  (This number will rise over time so those who click after my post goes up should see a higher number.)

Go to the Syrian Observatory for Human Rights if you want to figure out how Muggah got confused.  Here I just note in passing that the Violations Documentation Center is far more transparent in its work than is the Syrian Observatory for Human Rights.  This fits a common pattern: higher numbers tend to go together with weaker documentation.

Predictably, this “click and collapse” opening is not a one-off:

In the case of DRC, the International Rescue Committee (IRC) commissioned a series of mortality surveys between 2000 and 2007…. The researchers initially claimed that 5.4 million Congolese died as a result of the armed conflict from 1998 to 2006. Under pressure from critics, the estimate was later revised downward to around 2.8 million deaths, with roughly 10% attributed to violence.

I was, initially, shocked to learn that the IRC had conceded so much ground to its critics rather than remaining in deep denial as I thought it had.  I experienced a warm feeling of the possibilities for rational discussion mixed with self-reproach for having, somehow, missed this important event.

But click on “revised downward” and you see that the IRC’s bow to its critics is an artifact of a surprising reading error; Muggah mistakes a quote from the Human Security Report’s critique of the IRC work as a quote from the IRC itself.  (This kind of feels like a schoolboy fantasy… I call you a pooh pooh head and this means you concede that you are, indeed, a pooh pooh head.)

And this is only half of the error.

The Human Security Report’s position boils down to two main points.

  1. The IRC strung together a series of surveys to underpin its 5.4 million estimate.  The first two waves only covered small slivers of the Democratic Republic of Congo (DRC) but the IRC estimated, inappropriately, as if these places were nationally representative.  However, just as we should not treat political opinion in Massachusetts as representative for Texas we should not treat death rates where the IRC provides humanitarian relief as representative of death rates for the whole of the DRC.  The Human Security Report pointed out that if you throw out these two waves of nonrepresentative surveys, but still accept the rest of the IRC’s work, then the overall estimate falls from 5.4 million deaths to 2.8 million deaths.  (This is the sense in which the IRC voluntarily “revised downward” to 2.8 million.)
  2. The Human Security Report goes on to argue that the baseline death rate the IRC used to calculate excess deaths was too low.  (Go here if you’ve forgotten about excess deaths and baselines.)  If you take a more realistic baseline then the IRC estimate falls from 2.8 million to 900,000.

The excess death estimates for the final three surveys, the only ones to cover the entire country, were not affected by the methodological errors evident in the first two surveys. Here, the major problem, as mentioned above, lay with the inappropriately low baseline mortality rate. The impact of changing this rate to a more appropriate one was dramatic. The estimated excess death toll dropped from 2.8 million to less than 900,000. This is still a huge toll, but it is less than one-third of the IRC’s original estimate for the period. [Human Security Report]
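To see just how much the baseline drives the answer, here is a minimal sketch in Python.  The numbers are purely illustrative stand-ins (the real IRC calculation aggregates many surveys covering different periods and regions):

```python
# Excess deaths = (observed mortality rate - baseline rate) * population * time.
# All inputs below are illustrative, not the IRC's actual figures.

def excess_deaths(observed_cmr, baseline_cmr, population, months):
    """CMRs are crude mortality rates in deaths per 1,000 people per month."""
    return (observed_cmr - baseline_cmr) * (population / 1000) * months

population = 40_000_000   # rough DRC-scale population (assumption)
months = 40               # length of the period covered (assumption)
observed_cmr = 2.0        # hypothetical observed wartime rate

# The same observed mortality yields very different "excess" totals
# depending on what you assume the no-war death rate would have been.
for baseline_cmr in (1.5, 1.7, 1.9):
    print(baseline_cmr, int(excess_deaths(observed_cmr, baseline_cmr, population, months)))
# 1.5 -> 800,000    1.7 -> 480,000    1.9 -> 160,000
```

A seemingly small, defensible-sounding change in the baseline swings the headline number by hundreds of thousands of deaths, which is exactly the lever the Human Security Report pulled.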

It seems to me that this and other distortions come from extreme impatience rather than from Muggah having an ax to grind.  It is as if he conducted his research while driving a car full of screaming children and talking on the telephone.

Here is a particularly irksome paragraph for me (I work closely with Iraq Body Count and am on the Board of Every Casualty):

…Advocacy groups such as Every Casualty and the Iraq Body Count (IBC) restrict their databases to incidents where there is evidence of a corpse and the circumstances of the individual’s death. Each victim must have a name. And because they limit their counts to precisely documented cases of violent deaths, they are often criticised for under-counting the scale of lethal violence.

First of all, Every Casualty doesn’t have a database, let alone one that imposes the rules Muggah invented.  It does strive to bring about a world in which nobody dies in a war without his/her death being recorded and recognized.  Ideally, recognition includes names but no one I’m aware of thinks that war death databases should exclude unnamed victims.  IBC would love to have a name for every victim in its database but, unfortunately, most remain unnamed to this day.  In short, Muggah’s requirement that “each victim must have a name” is a mirage.

Muggah’s “often criticised” link in the above quote is, perhaps, a borderline case but I would also classify it as a click and collapse.  It is true that databases that document deaths on a case by case basis will tend to undercount because it is very difficult to capture every single case (although it appears that Kosovo Memory Book pulled off this feat after an extraordinary effort made over many years).  Yet, Muggah spends a good chunk of his article discussing competing figures on the number of deaths in Iraq and it turns out that the article behind this link (by Beth Osborne Daponte) covers the same Iraq numbers in great depth and concludes:

While each approach has its drawbacks and advantages, this author puts the most credence in the work that the Iraq Body Count has done for a lower bound estimate of the mortality impact of the war on civilians. The data base created by IBC seems exceptional in its transparency and timeliness. Creating such a data base carefully is an incredibly time-consuming exercise. The transparency of IBC’s work allows one to see whether incidents of mortality have been included. The constant updating of the data base allows one to have current figures.

Muggah’s short summary of this article is that it criticizes IBC and similar efforts for undercounting.

A few weeks back I introduced readers of the blog to the three fundamental ways to account for war deaths: documentation, counting and estimation. Quoting from an earlier post, you can:

1. Document war deaths one by one, listing key information such as names of victims, dates of deaths, how people were killed etc.

2.  Count the number of war deaths.  Doing this is an obvious outgrowth of documentation but also has status independent of documentation.  You can, for example, count deaths that may be documented to varying standards, e.g., counting unidentified bodies alongside identified ones.

3.  Estimate the number of war deaths using a statistical procedure.
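To make the distinctions concrete, here is a rough Python sketch of the three approaches.  The records, field names and numbers are all hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

# 1. Documentation: record each death individually, in whatever detail
#    the evidence supports.  Names are ideal but often unavailable.
@dataclass
class DeathRecord:
    date: str
    location: str
    cause: str
    name: Optional[str] = None  # unnamed victims still belong in the database

records = [
    DeathRecord("2006-07-01", "Baghdad", "airstrike", name="A. Hypothetical"),
    DeathRecord("2006-07-02", "Basra", "shooting"),  # victim never identified
]

# 2. Counting: an outgrowth of documentation, but deaths documented to
#    varying standards (identified or not) can be counted side by side.
count = len(records)

# 3. Estimation: a statistical procedure, e.g. scaling a sampled death
#    rate up to a whole population -- no individual records required.
sample_deaths, sample_size, population = 12, 5_000, 27_000_000
estimate = sample_deaths / sample_size * population  # 64,800
```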

Muggah refers rather indiscriminately to “counts” throughout his article, even appearing to extend the concept to estimates of excess deaths.  This juxtaposition of counting and excess deaths suggests an illuminating thought experiment.

Imagine trying to count excess deaths in a war.  Assume you are lucky enough to have a list of everyone who has died since the outbreak of the war.  You would have to consider every person on the list and make a call on whether or not that person would have survived if the war had never happened in the first place.

What do you do, for example, with people who die of strokes?  There could be a few cases of people dying on the operating table after power outages that can be clearly traced back to insurgent activity.  You would have a good case for classifying these as excess deaths, albeit at the cost of engaging in piss-poor monocausal social science since war-induced power outages can only be one among multiple causes of stroke deaths.

Suppose now that the rate of stroke deaths increased dramatically after the outbreak of war but that it’s only possible to identify a war-related cause for a handful of stroke deaths. The excess death concept suggests that, nevertheless, you should view the increase in the death rate as caused by the war.  (This attribution of causality is debatable but that’s not my concern here.)  My point is simply that you can estimate the number of stroke deaths above and beyond a baseline pre-war rate but you can’t count them one by one because any individual case can’t be pinned monocausally on the war.
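Put in code, the asymmetry is stark: the aggregate estimate is a one-line calculation, while the per-record attribution function simply cannot be written.  All figures here are hypothetical:

```python
# Estimation works at the aggregate level: compare wartime and pre-war
# stroke death rates.  All numbers are hypothetical.
prewar_rate = 50        # stroke deaths per 100,000 people per year (baseline)
wartime_rate = 80       # observed wartime rate
population = 5_000_000

excess_stroke_deaths = (wartime_rate - prewar_rate) * population / 100_000
# -> 1,500 excess stroke deaths per year, attributable to the war in aggregate

# Counting fails at the individual level: aside from a handful of clear-cut
# power-outage cases, no record carries a monocausal war/no-war label.
def died_because_of_war(record) -> bool:
    raise NotImplementedError("no monocausal attribution exists for most cases")
```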

Thus, excess deaths are tied inextricably to estimation whereas violent deaths can be documented, counted or estimated.  In fact, it is no accident that we can approach violent deaths from more angles than we can approach excess deaths.  The excess deaths concept is rooted in a counterfactual scenario – how many people would have died if there had never been a war? – and is, therefore, more abstract than the violent deaths concept.

Muggah is right that there is a lot of disagreement about measuring war deaths.  Yet there is a hopeful sign in the way that several dozen casualty recording organizations are now in the final stages of agreeing on a shared set of standards after a series of fruitful meetings organized by Every Casualty.  I hope to be able to post something soon on this positive development.

Darfur: Building Numbers out of Sand

Two days ago we discussed the claim of 300,000 excess deaths caused by the war in Darfur.  I suggested that a possible source for the 300,000 figure is this study.

But there is another possibility.

A quick Google search for “Darfur 300,000” leads us to a UN official pulling this number out of his… errr… armpit:

John Holmes, the undersecretary general for humanitarian affairs, told a security council meeting yesterday that the previous number of 200,000 dead in fighting between rebel groups, some backed by the Khartoum government, was last tallied in 2006.

“That figure must be much higher now, perhaps half as much again,” Holmes said to the council. Answering questions from reporters, he later qualified the estimated number, by admitting the death toll of 300,000 “is not a very scientifically based figure” because there have been no new mortality studies in Darfur, but “it’s a reasonable extrapolation”.

OK, but Holmes was embellishing on a firm figure of 200,000, right?  Not really.  He’s actually building on top of a sand castle sculpted by his UN colleague Jan Egeland:

“It has been at least 10,000 … on average, of preventable deaths since the emergency became a big emergency, which was towards the end of 2003,” Egeland said….

“If you say for the last 18 months, 10,000 a month, that’s 180,000,” Egeland said.

“It could be just as well more than 200,000 but I think 10,000 a month … is a reasonable figure.”

Egeland’s 10,000 per month comes from a WHO estimate that covers only seven months.  Egeland just goes ahead and applies this number to 18 months, rounds up to 200,000 and then speculates that the real number could be more than that.
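The whole chain fits in a few lines.  Here is a reconstruction built only from the figures quoted above, with the hidden assumptions flagged in the comments:

```python
# Reconstruction of the Egeland/Holmes chain from the quotes above.
# The arithmetic is trivial; the problems are all in the assumptions.

who_rate_per_month = 10_000   # WHO figure: a *maximum*, for IDP camps only,
                              # covering only about seven months
months = 18                   # period Egeland applied the rate to anyway

egeland_total = who_rate_per_month * months  # 180,000
egeland_headline = 200_000                   # rounded up, "could be just as well more"

# Holmes then adds "half as much again" on top of the rounded headline:
holmes_headline = int(egeland_headline * 1.5)  # 300,000
```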

It is bad practice, to say the least, to apply the results of surveys taken at one time and place to other times and places not surveyed. What Egeland does is sort of like saying that Massachusetts leans Democrat in my survey so it’s safe to say that Texas must also lean Democrat, possibly more so than Massachusetts.  I would do better to either gather data on Texas or just shut up about it.

The WHO study that offered the 10,000 figure is a bit of a mixed bag.  On the one hand, 10,000 per month is put forward as a maximum, not a minimum as Egeland claims.  On the other hand, the estimate applies only to camps for internally displaced people, so the number for the whole country could… well, it could be all sorts of things, either bigger or smaller than 10,000 per month.

My point here is not really about Darfur.  We actually have a decent starting point for understanding the Darfur death toll, if we just ignore all the UN arm flapping.

My real point is twofold:

  1. This episode shows how readily people conjure up war death figures out of thin air.
  2. Each step along the way tends to obscure the earlier steps.  This means that it can take considerable effort to trace numbers back to their sources and realize just how dubious they are.

Unfortunately, the moral of the story is that you need to be suspicious of these sorts of numbers and avoid taking anything very seriously without at least tracing it back to its source.