I’ve now absorbed it but find myself even more puzzled than I was after reading that Syria survey I blogged on a few weeks back. Again, it looks like some people did some useful field work, but the write-up is so bad that it’s hard to know exactly what they did. In fact, the Libya work is more opaque than the Syria work, to the point where I wonder what, if anything, was actually done.
For orientation here is the core of the abstract:
A systematic cross-sectional field survey and non-structured search was carried out over fourteen provinces in six Libyan regions, representing the primary sites of the armed conflict between February 2011 and February 2012. Thirty-five percent of the total area of Libya and 62.4% of the Libyan population were involved in the study. The mortality and injury rates were determined and the number of displaced people was calculated during the conflict period.
A total of 21,490 (0.5%) persons were killed, 19,700 (0.47%) injured and 435,000 (10.33%) displaced. The overall mortality rate was found to be 5.1 per 1000 per year (95% CI 4.1–7.4) and injury rate was found to be 4.7 per 1000 per year (95% CI 3.9–7.2) but varied by both region and time, reaching peak rates by July–August 2011.
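As a quick back-of-envelope check (my own recomputation, not anything from the paper), the abstract’s percentages and rates do roughly cohere with the ~4.2 million covered population reported in the paper’s Table 1:

```python
# Sanity check on the abstract's arithmetic, assuming the ~4.2 million
# covered population reported in the paper's Table 1.
pop = 4_200_000
killed, injured, displaced = 21_490, 19_700, 435_000

print(f"killed:    {killed / pop:.2%}")     # abstract says 0.5%
print(f"injured:   {injured / pop:.2%}")    # abstract says 0.47%
print(f"displaced: {displaced / pop:.2%}")  # abstract says 10.33%

# Rates per 1000 per year, over the one-year conflict window
print(f"mortality rate: {killed / pop * 1000:.1f} per 1000/yr")   # abstract: 5.1
print(f"injury rate:    {injured / pop * 1000:.1f} per 1000/yr")  # abstract: 4.7
```

The mortality and injury figures line up; the displacement share comes out a touch above 10.33%, suggesting their denominator was slightly different from 4.2 million.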
I’m not sure, but I think the researchers (hereafter Daw et al.) tried to count war deaths (plus injuries and displacement numbers) rather than trying to statistically estimate these numbers. (See this paper on the distinction.)
Actually, I read the whole paper thinking that Daw et al. drew a random sample and did statistical estimation, but then I changed my mind. I got my initial impression because, at the beginning, they say
This epidemiological community-based study was guided by previously published studies and guidelines.
They then cite the (horrible) Roberts et al. (2004) Iraq survey as providing a framework for their research (see this and follow the links). Since Roberts et al. was a sample survey I figured that Daw et al. was also a sample survey. They then go on to say that
Face to face interviews were carried out with at least one member of each affected family….
This also seemed to point in the direction of a sample survey conducted on a bunch of randomly selected households. (With this method you pick a bunch of households at random, find out how many people lived and died in each one and then extrapolate a national death rate from the in-sample death data.)
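That extrapolation logic can be sketched in a few lines. The numbers below are entirely made up for illustration; nothing here comes from Daw et al.:

```python
import random

# Toy sketch of sample-survey estimation: sample households at random,
# count deaths in the sample, then scale the in-sample death rate up to
# the whole population. All quantities are hypothetical.
random.seed(0)

population_households = 800_000   # hypothetical national household count
sample_size = 1_000

# Pretend each sampled household reports (members, deaths) for the period.
sample = [(random.randint(3, 9), 1 if random.random() < 0.02 else 0)
          for _ in range(sample_size)]

members = sum(m for m, _ in sample)
deaths = sum(d for _, d in sample)
death_rate = deaths / members                 # crude in-sample rate

avg_household = members / sample_size
est_population = population_households * avg_household
est_deaths = death_rate * est_population      # extrapolated national total

print(f"in-sample rate: {death_rate:.4f} deaths per person")
print(f"estimated national deaths: {est_deaths:,.0f}")
```

A real survey would of course use cluster sampling and attach confidence intervals, which is exactly why a missing description of the sampling design matters so much.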
But then I realized that the above quote continues with
…listed in the registry of the Ministry of Housing and Planning
Hmmmm….so they interviewed all affected families listed in the registry of some Ministry. This registry cannot have been a registry of every family living in the areas covered by the survey because there are far more families there than could have been interviewed on this project. (The areas covered contain around 4.2 million people according to Table 1 of the paper and surely Daw et al. did not conduct hundreds of thousands of interviews.)
So I’m guessing that the interviews were just of people from families on an official list of victims; killed, injured or displaced. This guess places a lot of emphasis on one interpretation of the words “listed” and “affected” but it does make some sense.
To be clear, even interviewing one representative from every affected family would have been a gargantuan task since Daw et al. identify around 40,000 casualties (killings plus injuries) and more than 400,000 displaced people. So we would still be talking about tens of thousands of interviews.
To be honest, now I’m wondering if all these interviews really happened. That’s an awful lot of interviews and they would have been conducted in the middle of a war.
So now I’m back to thinking that maybe it was a sample survey of a few thousand households. But if so, then the write-up has a major flaw: there is no description whatsoever of how the sample was drawn (if, indeed, there was a sample).
Something is definitely wrong here. I shouldn’t have to get out a Ouija board to divine the authors’ methodology.
The Syria survey discussed a few weeks ago seems to be in a different category. For that one I have a lot of questions about what they did combined with doubts about whether their methods make sense. But this Libya write-up seems weird to the point where I wonder whether they were actually out in the field at all.
Maybe an email to Dr. Daw will clear things up in a positive way. With the Syria paper, emailing the lead author got me nowhere, but maybe here it will work. I’m afraid that the best-case scenario is that Daw et al. did some useful field work that was obscured by a poor write-up, and that there is a better paper waiting to be written.