Unfortunately, most of the articles are behind a paywall but, thankfully, the overview by Steve Koczela and Fritz Scheuren is open access. It’s a beautiful piece – short, sweet, wise and accurate. Please read it.
Here are my comments.
Way back in 1945 the legendary Leo Crespi stressed the importance of what he called “the cheater problem.” Although he did this in the flagship survey research journal, Public Opinion Quarterly, the topic has never become mainstream in the profession. Many survey researchers seem to view the topic of fabrication as not really appropriate for polite company, akin to discussing the sexual history of a bride at her wedding. Of course, this semi-taboo is convenient for cheaters. Maria Konnikova has a great new book about confidence artists. Much in the book is relevant to the subject of fabrication in survey research, but one point really stands out for me: a key reason why the same cons and the same con artists move seamlessly from mark to mark is that each victim is too embarrassed to publicize his or her victimization.
Discussions of fabrication that have occurred over the years have almost always focused on what is known as curbstoning, i.e., a single interviewer making up data. (The term comes from an image of a guy sitting on a street curb filling out his forms.) But this is just one type of cheating, and one of the great contributions of Koczela and Scheuren’s journal edition, and of the impressive series of prior conferences, is that they have substantially expanded the scope of the survey fabrication field. Now we discuss fabrication by supervisors, principal investigators and the leaders of survey companies. We now know that hundreds of public opinion surveys, especially surveys conducted in poor countries, are contaminated by widespread duplication and near duplication of single observations. (This journal issue publishes the key paper on duplication.)
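The near-duplication check alluded to here can be sketched very simply. The idea, associated with Kuriakose and Robbins, is to compute for each respondent the highest share of answers it has in common with any other respondent, and flag records whose best match is suspiciously high. The function below is an illustrative sketch only; the toy data and the 0.85 cutoff are my assumptions, not the thresholds or implementation from the published paper.

```python
# Sketch of a "maximum percent match" check for near-duplicate survey
# records. The cutoff (0.85) and the toy data are illustrative
# assumptions, not values taken from the Kuriakose-Robbins paper.

def max_percent_match(records):
    """For each record, return the highest share of answers it
    shares with any other record in the dataset."""
    results = []
    for i, rec in enumerate(records):
        best = 0.0
        for j, other in enumerate(records):
            if i == j:
                continue
            # Count positions where the two respondents gave the
            # same answer, then normalize by the number of questions.
            matches = sum(a == b for a, b in zip(rec, other))
            best = max(best, matches / len(rec))
        results.append(best)
    return results

# Toy data: five respondents, eight questions each.
data = [
    [1, 2, 3, 1, 4, 2, 5, 1],
    [1, 2, 3, 1, 4, 2, 5, 2],  # near duplicate of the first (7/8 match)
    [3, 1, 4, 2, 5, 3, 1, 4],
    [2, 3, 1, 5, 2, 4, 3, 2],
    [5, 4, 2, 3, 1, 5, 2, 3],
]

scores = max_percent_match(data)
flagged = [i for i, s in enumerate(scores) if s >= 0.85]
print(flagged)  # → [0, 1]
```

In practice the distribution of these maximum-match scores across a whole survey is what matters: a heavy right tail, of the kind Robbins and Kuriakose report for many non-OECD polls, is the troubling signature.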
Let me quote a bit from the to-do list of Koczela and Scheuren.
It does not only happen to small research organizations with fewer resources, as was previously believed. Recent instances involve the biggest and most prominent names in the survey research business, academia and the US Government.
This is certainly true, but I would add that reticence about naming names is crippling. Yes, it’s helpful to know that there are many dubious surveys out there, but guidance on which ones they are would be far more valuable.
An acknowledgement by the research community that data fabrication is a common threat, particularly in remote and dangerous survey environments, would allow the community to be cooperative and proactive in preventing, identifying and mitigating the effects of fabrication.
This comment about remote and dangerous survey environments fits perfectly with my critiques of Iraq surveys including this one.
Given the perceived stakes, these discussions often result in legal threats or even legal action of various types.
…the problem of fabrication is fundamentally one of co-evolution. The more detection and prevention methods evolve, the more fabricators may evolve to stay ahead. And to the extent we discover and confirm fabrication, we will never know whether we found it all, or caught only the weakest of the pack. With these truths in mind, more work is needed in developing and testing statistical methods of fabrication detection. This is made more difficult by the lack of training datasets, a problem prolonged by a general unwillingness to openly discuss data fabrication.
Again, I couldn’t agree more.
Technical countermeasures during fielding are less useful in harder to survey areas, which also happen to be the areas where the incentive to fabricate data is the highest. Many of the recent advances in field quality control processes focus on areas where technical measures such as computer audio recording, GPS, and other mechanisms can be used [6,13].
In remote and dangerous areas, where temptation to fabricate is the highest, technical countermeasures are often sparse. And perversely, these are often the most closely watched international polls, since they often represent the hotspots of American interest and activity. Robbins and Kuriakose show a heavy skew in the presence of duplicate cases in non-OECD countries, potentially a troubling indicator. These polls conducted in remote areas often have direct bearing on policy for the US and other countries. To get a sense of the impact of the polls, a brief review of the recently released Iraq Inquiry, the so-called Chilcot report, contains dozens of documents that refer, in most cases uncritically, to the impact and importance of polls.
To be honest, Koczela and Scheuren do such a great job with their short essay that I’m struggling to add value here. What they write above is hugely pertinent to all the work I’ve done on surveys in Iraq.
By the way, a response I sometimes get to my critiques of the notorious Burnham et al. survey of deaths in the Iraq war (see, for example, here, here and here) is that it is unreasonable to expect perfection from a survey operating in such a difficult environment. Fair enough. But then you have to concede that we cannot expect high-quality results from such a survey either. If I were to walk in off the street and take Harvard’s PhD qualifying exam in physics (I’m assuming they have such a thing…) it would be unreasonable to expect me to do well. I just haven’t prepared for such an exam. Fine, but that doesn’t somehow make me an authority on physics. It just gives me a perfect excuse for not being such an authority.
Finally, Koczela and Scheuren provide a mass of resources that researchers can use to bring themselves to the frontier of the survey fabrication field. Anyone interested in this subject needs to take a look at these resources.