Langer Research Associates Responds: Part IV

This continues the stream of posts beginning here and continuing through here, here, here and here.

Today I had wanted to write about duplicates in the D3/KA Iraq surveys, but I’ve hit a little snag in the analysis, so I will postpone that subject to the near future.

Instead, today I’ll cover empty categories, that is, answer choices that are offered to respondents but that nobody among some broad class of respondents actually picks.

We stressed these empty categories in our original paper, finding a number of questions for which all respondents interviewed by our flagged supervisors failed to give certain answers that at least some respondents of other supervisors did give.
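For concreteness, here is a minimal sketch of what such a count might look like, assuming interview data sits in a pandas DataFrame. The “supervisor_group” column and the questions mapping are hypothetical placeholders for illustration, not the code behind the original paper.

```python
# Minimal sketch of an empty-category count, under the assumptions above.
# "supervisor_group" and the questions mapping are hypothetical names.
import pandas as pd

def count_empty_categories(df: pd.DataFrame, questions: dict) -> pd.Series:
    """For each supervisor group, count answer choices that were offered
    but that no respondent in the group actually picked.

    `questions` maps each question column to the full list of offered choices.
    """
    counts = {}
    for group, sub in df.groupby("supervisor_group"):
        n_empty = 0
        for question, offered in questions.items():
            chosen = set(sub[question].dropna().unique())
            n_empty += sum(1 for choice in offered if choice not in chosen)
        counts[group] = n_empty
    return pd.Series(counts, name="empty_categories")
```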

Yesterday’s discussion of duplicates is actually relevant for understanding why having so many empty categories is suspicious. People’s opinions are not cloned from one another. Moreover, there is randomness in how people respond to questions and in how these responses are recorded. So we would expect a lot of natural variation in real answers to real survey questions. We would not expect all responses to converge on just a few categories.

Quick note – today I will merge two things that we held separate in the original paper. Back then we had one section on substantive responses, such as how much people approved of Prime Minister Maliki or whether or not people owned shortwave radios. Then we had another section on the responses “don’t know” and “refused to answer”. Here I simplify things by treating empty categories of both types as interchangeable.

The Exhaustive Review made a good point about empty categories. We always split our sample in two: the interviews conducted by the flagged supervisors (we called them “focal supervisors” in the paper) and the interviews conducted by all the other supervisors. This split means that there were always two to three times as many interviews in the unflagged category as there were in the flagged category. So maybe the excess of empty categories for the flagged supervisors arises simply because they conducted fewer interviews. In particular, the Exhaustive Review points out that when you group interviews by single supervisors, rather than by groups of supervisors as we did, many supervisors have empty categories, not just the flagged ones.

This definitely merits further investigation, which I’m still doing. However, I can report that a clear pattern has already emerged. Once you adjust for the numbers of interviews, the flagged supervisors tend to produce roughly two to four times as many empty categories as the other supervisors do.
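A sketch of the size adjustment, continuing the hypothetical helper above: divide each group’s empty-category count by its number of interviews so that groups of different sizes can be compared.

```python
# Rough size adjustment: empty categories per interview, per supervisor group.
# Reuses the hypothetical count_empty_categories() sketch above.
def empty_categories_per_interview(df, questions):
    empties = count_empty_categories(df, questions)
    n_interviews = df.groupby("supervisor_group").size()
    return (empties / n_interviews).sort_values(ascending=False)
```

This per-interview rate is only a rough adjustment, since more interviews mechanically reduce empty-category counts in a way that need not be proportional, but it is enough to surface the kind of gap described here.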

For example, in the January 2006 PIPA survey our flagged supervisors have a total of 240 empty categories in 332 interviews. Two non-overlapping combinations of other supervisors, with 320 and 322 interviews, had 110 and 122 empty categories, respectively. I did find a single supervisor who had 316 empty categories… but on only 66 interviews.
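At roughly matched interview counts, that works out to about 240/332 ≈ 0.72 empty categories per interview for the flagged supervisors, against 110/320 ≈ 0.34 and 122/322 ≈ 0.38 for the two comparison groups: roughly double the rate. The 316-empty supervisor is not comparable on this basis, since with only 66 interviews a high count is exactly what the small-sample effect predicts.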

The results are similar for other surveys. More interviews do tend to reduce the count of empty categories, but the flagged supervisors consistently rack up two to four times their share of empties relative to their interview counts.

So the Exhaustive Review has made a useful point that helps to improve the analysis of these surveys.  I just wish they had made the point openly back in 2011.  And this extension of the original approach does not weaken the evidence for fabricated data in the surveys.
