Thursday, April 20, 2017

How Totally Fraudulent Are The Polling And Survey Industry and The Science That Pretends It's Reliable

My friend RMJ has a really good post up about the study I wrote about yesterday, and much more, with a link to a Vox article that is the best treatment of it I've seen in the media.  The Vox piece goes into one of the major absurdities of the methodology, one I should have gone into myself if I'd felt up to it.

I ran Gervais and Najle’s conclusion by Greg Smith, who directs Pew’s polling efforts on religion. He’s not yet ready to buy it.

“I would be very reluctant to conclude that phone surveys like ours are underestimating the share of the public who are atheists to that kind of magnitude,” he says.

For one, Smith says, Pew has asked questions on religion both on the phone and online and didn’t see much of a difference. You’d expect if people were unwilling to say that they’re atheists over the phone to a stranger, they’d be slightly more likely to input it into a computer. (Though Pew’s online questioning still has participants directly answer the question, instead of asking people to merely list the numbers of items they agree with. Even online, people might be uneasy answering the question.)

Also, Smith points out a weird quirk in Gervais’s data.

In one of the trials, instead of adding the “I don’t believe in God” measure to the list, the survey added a nonsense phrase: “I do not believe that 2 + 2 is less than 13.” And 34 percent of their participants agreed. Bizarre indeed. The researchers’ explanation? “It may reflect any combination of genuine innumeracy [lack of math skills], incomprehension of an oddly phrased item, participant inattentiveness or jesting, sampling error, or a genuine flaw in the ... technique,” Gervais and Najle write in the paper.

But they still think their measure is valid. When they limited the sample to people who were self-professed atheists (as measured in a separate question), 100 percent said they didn’t believe in God, which is correct. “It is unlikely that a genuinely invalid method would track self-reported atheism this precisely,” they write.

Still, more research is needed. “In time, we'll hopefully be able to refine our methods and find other indirect measurement techniques,” Gervais says. (Overall, kudos to Gervais and Najle for being forthright about their curious finding. In the past, psychologists have had incentives to avoid printing this type of contradictory finding in their papers.)

I don't know why the author would be giving them "kudos" for publishing an obviously bogus study using an obviously bogus methodology.  Really:

“I do not believe that 2 + 2 is less than 13.” And 34 percent of their participants agreed. Bizarre indeed. The researchers’ explanation? “It may reflect any combination of genuine innumeracy [lack of math skills], incomprehension of an oddly phrased item, participant inattentiveness or jesting, sampling error, or a genuine flaw in the ... technique,” Gervais and Najle write in the paper.

Notice that their speculations as to why 34% of those included answered as believing that 2 + 2 is equal to or greater than 13 include the absurd notion that some adult, other than one who would be classified as profoundly mentally disabled (and who is certainly not sitting there answering their other questions), really believes that is the case.  That, in itself, is so absurd as to impeach the validity of any of their conclusions.  The study is bogus, the methodology is crap, and the sampling, far from unproblematic if statistical validity is considered desirable, might be the least bad thing about it.
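For anyone who wants to see what is actually being computed, here is a minimal sketch of the unmatched count technique (the "list experiment") that the study relies on, with made-up numbers; the function name and the data are my own inventions for illustration.  The entire estimate is nothing but a difference between two group averages, so anything that nudges either average corrupts it.

```python
# A minimal sketch of the unmatched count technique, assuming the
# standard design: a control group sees a list of innocuous items, a
# treatment group sees the same list plus the sensitive item ("I don't
# believe in God"), and everyone reports only HOW MANY items they agree
# with. The prevalence estimate is the difference in mean counts.
# All numbers below are made up for illustration.

def item_count_estimate(control_counts, treatment_counts):
    """Estimated prevalence of the sensitive item: difference in mean counts."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment_counts) - mean(control_counts)

control = [2, 1, 3, 2, 2, 1, 3, 2]    # agree with about 2 innocuous items
treatment = [2, 3, 2, 2, 2, 3, 3, 2]  # same list plus the sensitive item
print(item_count_estimate(control, treatment))  # 0.375, read as ~38%
```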

There would be no reason to suspect that you could draw any reliable conclusions from a study that produced such a result.  If some people answered "yes" to that question out of ignorance or "inattentiveness" or who knows why, there must also have been people who answered "no" to it out of something other than a knowledge of first-grade arithmetic.  If they couldn't be trusted to give a straight answer to that question, either yes or no, then there is no reason to believe they could on any of the other questions in the studies.
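A toy simulation makes that point concrete.  Everything in it, the true rate, the careless-respondent rate, and the assumption that a careless respondent just reports a random count, is my own supposition, not anything from the study; but on those suppositions, a third of respondents answering at random more than doubles the estimate:

```python
import random

random.seed(1)

def simulate(n, true_rate, careless_rate, baseline_items=4):
    """Difference-in-means estimate when some respondents answer at random."""
    control, treatment = [], []
    for group in (control, treatment):
        sensitive = group is treatment
        for _ in range(n):
            if random.random() < careless_rate:
                # Careless respondent: reports a random count over the
                # items shown (one extra item in the treatment group).
                k = baseline_items + (1 if sensitive else 0)
                group.append(random.randint(0, k))
            else:
                # Careful respondent: agrees with each innocuous item
                # half the time, plus the sensitive item if it applies.
                count = sum(random.random() < 0.5 for _ in range(baseline_items))
                if sensitive and random.random() < true_rate:
                    count += 1
                group.append(count)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment) - mean(control)

# A true rate of 10% with 34% of respondents answering carelessly:
print(simulate(100_000, true_rate=0.10, careless_rate=0.34))  # ~0.24
```

The noise doesn't cancel out; it lands asymmetrically on the group that was shown the extra item, which is precisely the quantity the method treats as signal.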

But this study is going to be published in some peer-reviewed professional journal, even as it is already being taken seriously in the unreviewed and credulous media.  The whole thing is sold on the phony assertion that it is science and so reveals something you can rely on, when the entire thing is anything but reliable, even though they perform a pantomime of what real science does.

The entire thing is a professionally interested con job that will be supported by people who certainly know it is bogus, through a kind of mutual professional protection.  I doubt that there is much in the social sciences these days, certainly not what gets a lot of buzz in the media, that could pass any real test of rigorous review.  You could ask the same of Pew or any of the others who do this kind of surveying.

You wonder how they could figure out how many of those answering any of their questions did so out of "inattentiveness" and, so, what the rate of such invalid responses might be.  I'd love to read their methodological cat's cradle of reasoning as to why anyone should believe they could figure that out.
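For what it's worth, here is the kind of back-of-the-envelope arithmetic any such claim would have to lean on.  The only published number in it is the 34%; the rest (that nobody genuinely agrees the foil is true, that careless respondents guess at random between agreeing and not) is assumption piled on assumption, which is exactly the problem:

```python
# If nobody genuinely agrees that 2 + 2 is 13 or more, and careless
# respondents split 50/50 between agreeing and not, then:
#     observed_agree = careless_rate * 0.5
# Both premises are unverifiable assumptions, not findings.
observed_agree = 0.34
p_agree_if_careless = 0.5  # assumption: pure random guessing
implied_careless_rate = observed_agree / p_agree_if_careless
print(implied_careless_rate)  # 0.68, i.e. 68% careless on that item
```

Change the guessing assumption and the implied rate changes with it; there is no way to check any of it from inside the data.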

The excellent post by RMJ goes into far more, and far more important, aspects of this problem and its wider implications.
