Sociological improvisations
the sociology of science (natural & social), and other sociological matters
03/09/16
Using online panels for election polls
Filed under: Polling & Survey Research
Posted by: Dominic Lusinchi @ 8:32 am

Introduction

Although the events I relate in this post took place more than a year ago, the topic of the controversy (the use of opt-in online panels for election polling purposes) is still very much current, especially at this time of electoral contests, when we are likely to see both successes and blunders (recall the recent 2015 UK parliamentary elections).

The Setting

    Background

On July 25, 2014, the New York Times (NYT), and its polling partner CBS News (CBS), made an announcement that “rocked the polling world” (Washington Post, 07/31/14).  The news organizations reported that they had retained YouGov to conduct their polls for the upcoming November midterm elections.  The remarkable part was that this polling house bases its polls on an Internet panel, that is, on folks who volunteer to take a survey from time to time.  This represented a departure from NYT/CBS’s traditional approach: in the past they relied on polls that used telephones and random-digit-dialing (RDD) to reach respondents.  RDD is held up as the “gold standard” when it comes to polling and survey research by telephone because it conforms to the statistical theory of probability (or random) sampling.  In contrast, Internet panels do not fit this theory because panel members are not selected at random; they select themselves into the panel.
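
To make concrete why RDD fits probability-sampling theory: because numbers are generated at random from the frame of possible telephone numbers, every household with a phone, listed or unlisted, has a known chance of being dialed.  The toy sketch below (in Python) illustrates the idea only; the prefixes are invented, and this is not any polling house’s actual frame construction.

```python
import random

# Toy illustration of random-digit-dialing (RDD): the area-code/exchange
# prefixes below are invented for the example.  Appending random last-four
# digits gives listed and unlisted numbers alike a known chance of selection,
# which is what ties the method to probability-sampling theory.
PREFIXES = ["212-555", "415-555", "312-555"]

def rdd_sample(k, seed=0):
    rng = random.Random(seed)
    return [f"{rng.choice(PREFIXES)}-{rng.randint(0, 9999):04d}" for _ in range(k)]

print(rdd_sample(5))  # five randomly generated numbers to dial
```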

The NYT indicated that their polls would be based on “an online panel of more than 100,000 respondents nationwide” (NYT, 07/27/14).  It attributed its choice to work with YouGov to the fact that “declining response rates may be complicating the ability of telephone polls to capitalize on the advantages of random sampling” (id.).  In the same article, it acknowledged both the limitations of working with an online panel (“only the 81 percent of Americans (…) use the Internet”), and YouGov’s less than perfect estimates in the 2012 election (it “underestimated President Obama’s share of the Hispanic vote in 2012”).  However, YouGov’s results, it affirmed, “are broadly consistent with previous data on the campaign” (id.).  It also cited the serious problem that has plagued telephone sampling: “Only 9 percent of sampled households responded to traditional telephone polls in 2012, down from 21 percent in 2006 and 36 percent in 1997, according to the Pew Research Center” (id.).

In a piece dated 10/05/2014, the NYT stated, perhaps to placate those who criticized their use of the YouGov panel, “The YouGov online surveys are being used to supplement, not replace, the Times’s traditional telephone polls.”  It went on to explain that the NYT/CBS “political and social surveys are conducted using random digit dialing probability sampling,” and that the “YouGov data is used for The Upshot election forecasting model in key congressional races and Senate battleground states.”

    The Reaction

The event described above was considered “a very big deal in the survey world” by the Pew Research Center’s director of survey research, Scott Keeter 1.  Days after the NYT/CBS revelation, the American Association for Public Opinion Research (AAPOR) issued a statement (08/01), signed by its then-president, Michael Link, expressing its “concerns” regarding the use of “opt-in Internet” surveys 2.  AAPOR is a professional organization that brings together polling and survey research practitioners who work in the private sector, in government, and in academia.  (In the interest of full disclosure the reader should know that I am a member of this organization.)  As such, one of its responsibilities is to police what is done in the polling industry.  AAPOR chastised NYT/CBS: first, for using an Internet panel to report on an electoral contest, because this method of selecting a sample has “little grounding in theory”; second, for a lack of “transparency” regarding how the news organizations arrived at the results they published.  As for this last point, the statement read: “While little information about the methodology accompanied the story, a high level overview of the methodology was posted subsequently on the polling vendor’s [i.e. YouGov] website.  Unfortunately, due perhaps in part to the novelty of the approach used, many of the details required to honestly assess the methodology remain undisclosed.”

AAPOR rebuked the NYT for abandoning its high standards in matters of polling, and only telling its readers that “the old standards were undergoing review”.  It also insisted that “standards need to be in place at all times.”  In addition, it criticized the Times for publishing a story (NYT 05/20/2014) that reported on a study whose respondents were recruited by means of ads on Facebook.  It warned that “using information from polls which are not conducted with scientific rigor in effect sets a new–lower–standard for the types of information that other news outlets may now seek to report.”

While acknowledging that the “world of polling and opinion research is indeed in the midst of significant change”, insofar as data collection is concerned, AAPOR warned that “the use of any new methods [should] be conducted within a strong framework of transparency, full disclosure and explicit standards.”

    Reactions to the Reaction

Many individuals had their say about AAPOR’s statement.  I will concentrate on two of the more notable (and accessible) ones – in my view.  Predictably, there were two types of reactions to AAPOR’s announcement, for and against; the ones presented here are of the negative variety.

One response (08/05) came from a long-time member of the organization, Reg Baker, on his personal blog, The Survey Geek 3.  He has been part of AAPOR’s leadership, having served, among other positions, as a member of its executive council.  The title of his post says it all: “AAPOR gets it wrong.”  What did it get wrong?

He writes: “We have well over a decade of experience showing that with appropriate adjustments these polls are just as reliable as those relying on probability sampling, which also require adjustment.”  He adds: “There is a substantial literature stretching back to the 2000 elections showing that with the proper adjustments polls using online panels can be every bit as accurate as those using standard RDD samples.”  Presumably Baker’s remark was in response to AAPOR stating: “we are witnessing some of the potential dangers of rushing to embrace new approaches without an adequate understanding of the limits of these nascent methodologies.”  So what Baker is saying is that AAPOR is wrong on two counts: online polling is not new, and we do have “an adequate understanding of [its] limits.”

AAPOR is also wrong, Baker believes, when it says that YouGov did not provide sufficient details regarding its methodology.  On the contrary, Baker asserts: “The details of YouGov’s methodology have been widely shared, including at AAPOR conferences and in peer-reviewed journals.”

He says he agrees (partially) with AAPOR on one point.  The NYT, he opines, did “an exceptionally poor job of describing [the decision to use online panels] and disclosing the details of the methodologies they are now willing to accept and the specific information they will routinely publish about them.  Shame on them.”  But he faults AAPOR for not providing practitioners with “a full set of standards for reporting on results from online research,” despite the fact that this methodology has been around for nearly two decades and is widely used by researchers around the world.  One should note that Baker was chair of a 2010 AAPOR task force on opt-in online panels.  One might ask: would that not have been a good opportunity to devise “a full set of standards for reporting on results from online research”?  But AAPOR’s Executive Council made it very clear that it was not in the task force’s mandate to do so.  Nevertheless, the task force did give one recommendation regarding the reporting of survey results based on the opt-in methodology: that surveys based on opt-in or other self-selected samples should not report a “margin of error” as this is not appropriate for non-probability samples.
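
To see why the task force made that recommendation, it helps to recall where the familiar margin of error comes from.  The sketch below gives the standard formula for a proportion under simple random sampling (the sample size and 95 percent confidence level are chosen purely for illustration); its derivation presupposes that every member of the population had a known, non-zero chance of selection, which is precisely the assumption an opt-in panel cannot make.

```latex
% Margin of error for a proportion \hat{p} from a simple random sample of size n,
% at the 95% confidence level (z = 1.96).  Illustrative numbers only.
\mathrm{MoE} = z \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}
             \approx 1.96 \sqrt{\frac{0.5 \times 0.5}{1000}} \approx 3.1\%
```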

A more strident reaction, at least in its second formulation, came from a Columbia University professor of political science and statistics.  At first the good professor, Andrew Gelman is his name, in a blog called “The Monkey Cage” (?), a regular feature in the Washington Post, provided a response, with his colleague David Rothschild of Microsoft, in the best tradition of polite academic dialog 4.  The authors’ post, “Modern polling needs innovation, not traditionalism”, was a model of moderation and reasonableness.  In it, they gave AAPOR an emphatic reverential bow, calling it “a justly well-respected organization”, and warned their readers that they were not “disinterested observers” since they collaborate with YouGov on a number of projects.  They found AAPOR’s statement, although “undoubtedly well-intentioned”, “so disturbing”.  Why?  Because, the authors believe, AAPOR’s “rigid faith in technology and theories or ‘standards’ determined in the 1930s” is “holding back our understanding of public opinion” and “putting the industry and research at risk of being unprepared for the end of landline phones and other changes to existing ‘standards’.”  Like Baker, the authors point out that YouGov’s methodology has been widely discussed in professional meetings and in peer-reviewed journals.  In their view, the theory behind YouGov’s methodology is “well-founded” and “based on the general principles of adjusting for known differences between sample and population.”  They add: “If anything, people on the cutting edge of research are not hiding anything; on the contrary, we are fighting hard to overcome entrenched methods by being even more diligent and transparent.”

Although it is not generally known, academics are human too.  And, like any other member of the species, they are prone to the occasional bile-spilling.  This is what happened in Gelman’s second formulation of his response to the AAPOR missive posted (08/06) on his personal blog, which rejoices under the name of “Statistical Modeling, Causal Inference, and Social Science”.  The article is titled (hold on to your hats) “President of American Association of Buggy-Whip Manufacturers takes a strong stand against internal combustion engine, argues that the so-called ‘automobile’ has ‘little grounding in theory’ and that ‘results can vary widely based on the particular fuel that is used’” 5.  The professor directed his ire against Michael Link.  He accuses Link of having an “anti-innovation” attitude, of “making things up” to support “his” position, of “talking out of his ass” (no, I’m not making this up; go check for yourself), and of “aggressive methodological conservatism” – apparently, the latter must emit some putrid odor since it seems to have occasioned (twice) a desperate search for the vomit bag – as he reports that it “just makes me want to barf” (no, I’m still not making this up).  (Fortunately, our somewhat indisposed professor did make a few substantive points – I will come to that in a moment.)  In a blog post later in the year (12/09: “Buggy-whip update”), he tells his readers that six days after the posting just mentioned, he sent a personal email to Link asking him to explain “his” (i.e. AAPOR’s) statement of August 1 6.  He received no response.  Somewhat miffed, the professor writes: “I get frustrated when people don’t respond to my queries.”  Tell me about it!  Now, it seems to me that it doesn’t take a very sophisticated statistical model to predict that the probability of receiving a response given Gelman’s August 6 post is much closer to zero (0=no response) than to one (1=response).

Now to the substance.  Gelman makes the point that there really is no difference between a “probability” sample that has a response rate of 10% and an opt-in Internet panel – both are self-selected samples.  In either case, in order to estimate what it is you are trying to estimate (e.g. the percentage a political candidate will receive), you “have to do some adjustment to correct for known differences between sample and population,” and in the process “make assumptions”.  The methodology is “not new”, he says, and “a lot of research” has been done on these issues.  Regarding the latter, he mentions the work of Roderick Little, an expert in the statistics of “missing data”.
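
To give a concrete sense of what “adjustment to correct for known differences between sample and population” can look like in practice, here is a minimal post-stratification (cell-weighting) sketch in Python.  The age groups, population shares, and responses are invented for illustration; this is one of the simplest members of the family of adjustments Gelman and Little discuss, not YouGov’s actual procedure.

```python
from collections import Counter

# Invented illustration: respondents from a self-selected sample, each tagged
# with an age group and a candidate preference.
sample = [
    {"age": "18-34", "vote": "A"}, {"age": "18-34", "vote": "B"},
    {"age": "35-64", "vote": "A"}, {"age": "35-64", "vote": "A"},
    {"age": "35-64", "vote": "B"}, {"age": "65+",   "vote": "B"},
]

# Known population shares for the same cells (e.g. from a census) -- assumed here.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}

# Post-stratification: weight each respondent so that the weighted sample
# matches the population distribution across the cells.
cell_counts = Counter(r["age"] for r in sample)
n = len(sample)
weights = [population_share[r["age"]] / (cell_counts[r["age"]] / n) for r in sample]

# Weighted estimate of candidate A's share of the vote.
weighted_a = sum(w for r, w in zip(sample, weights) if r["vote"] == "A")
print(f"Candidate A: {weighted_a / sum(weights):.1%}")
```

More elaborate versions (raking, multilevel regression and post-stratification) follow the same logic: reweight the self-selected respondents so that, on characteristics known for the whole population, the sample comes to resemble the population.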

A Sociological View
This controversy illustrates several themes of the sociology of science, in our case, social science:


    Reaffirming the boundary between science and non-science

AAPOR, as the guardian of agreed-upon standards for the conduct of polls and survey research, is duty-bound (as one AAPOR member put it, it would have been irresponsible of AAPOR not to have said something) to intervene when any of its norms has been violated – whether the violator is a member of the association or not.  In its view the NYT/CBS organization had done just that when it decided to base its election forecasts on polling data that came from an opt-in Internet panel, i.e. from a non-random sample.  Generally, in the past, these types of samples have been considered unscientific.  In contrast, probability (aka random) samples have been recognized, if not adopted, as the “gold standard” of sampling since the late 1940s – at least in the United States.  In other words, to borrow from sociologist Thomas Gieryn, it has been the task of AAPOR to demarcate science (probability sampling) from non-science (non-probability samples) in the field of polling and survey research 7.  These norms, for example, have forced news network organizations to warn their viewers, when reporting the results of a call-in poll (aka 1-800-poll or “junk” poll), that the numbers on their screens were obtained from a “non-scientific” survey.

What is considered science in the polling world has changed over the years.  In the 1930s and 40s (1935-1948), the new pollsters (Crossley, Gallup, and Roper) promoted a distinctly non-probability methodology (quota sampling) as science – and (before 1948) nobody really challenged them on this as AAPOR is now challenging the NYT/CBS organization 8.  Nowadays, or at least until very recently, were you to use a quota sample or any other non-random sampling methodology for your study, you would be liable to get your wrists slapped (figuratively, of course) – at least, and I repeat, in the US 9.  The Hite Report is a good example 10.  Thus, science is what those who are empowered to say what it is say it is.  And science varies depending on the era you live in and what part of the world you reside in.

The AAPOR statement is an opportunity for the association to assert its authority.  It is “the leading association of public opinion and survey researchers”, and as such its credentials cannot be doubted.  The statement is also the occasion to reiterate the basic tenets of the faith: “a fundamental belief in a scientific approach”; “objective standards”; polls, conducted according to “standards of quality”, “mirror reality” (that is, social reality); etc.  AAPOR’s basic ruling in the August 1, 2014 release is that Internet opt-in panels are NOT quite ready for the big league – pre-election polling; they’re still wet behind the ears.  The time to extend the boundaries of what is considered scientific in polling is not now, because “these new approaches and methodologies” still require “rigorous empirical testing”, etc.  In other words, AAPOR re-emphasized the demarcation line between legitimate polls or surveys that provide reliable knowledge about social reality (e.g. public opinion), and “polls which are not conducted with scientific rigor” whose results are “highly questionable, if not outright incorrect”.  It also stated that it is not opposed to the idea of widening the boundaries; indeed, it “encourage[s] assessment of [these methodologies’] viability for measure and insight”, but this must be done “within a strong framework of transparency, full disclosure and explicit standards.”

    Transparency: a device to unmask illegitimate, non-scientific polls

One thing readers of the AAPOR statement might have noticed is the heavy emphasis that has been given to “transparency”.  Transparency is the act of unveiling all the steps that were taken to generate the final poll results that are published: from the sampling design, question wording, and data collection mode (e.g. telephone, web), to weighting and other forms of “adjusting” the raw data.  AAPOR launched its Transparency Initiative (TI) in 2014.  Now, as anybody who has studied the history of polls in this country will tell you, “transparency” is not one of the pollsters’ most conspicuous virtues.  Back in the 1940s, one reporter complained that he had made “several informal attempts (…) to check facts and figures” regarding Gallup polls, but that “all ended in failure” 11 (p.737).  Two decades later, things seemed to have improved a bit.  Trying to get answers about the “Gallup system of processing” polling data, a New Yorker columnist had this to say: “By calling members of the Gallup staff, and by writing to Dr. Gallup, I was able to get answers – reluctant and incomplete, but still answers – to some of my questions about the process” 12 (p.174).  Nevertheless, in the following decade, some folks were still not satisfied with the pollsters’ transparency, and not just anybody: a member of Congress drafted an unsuccessful bill under the name “Truth-in-Polling Act”.

The AAPOR release states that the organization “has for decades worked to encourage disclosure of methods.”  Be that as it may… between encouragement and actual disclosure, there is a wide gap.  As an example, consider a recent (January 2016) poll by the Harvard School of Public Health in collaboration with STAT, an organization that reports news in the health and medical field 13.  One page of the 15-page report is dedicated to the poll’s methodology.  Although it provides a fair amount of detail (sample size, type of sampling, dates during which the poll took place, mode of interviewing), one would be hard-pressed to find any information on response rate – even though it warns the reader that non-response bias can be part of the total error of the survey.  Now I am not trying to pick on the folks who did this particular survey (I just happened to receive the results in my inbox as I was writing this), or the polling house (SSRS) that actually conducted the data collection, and is a member of AAPOR’s Transparency Initiative; I am sure they are all fine upstanding researchers.  I am merely illustrating that the ideal of “full disclosure” that AAPOR promotes is yet to be realized – as some gentleman from China has said, I am told, “the future is bright but the road is tortuous”.

So, one may ask, why this hard push about transparency?  Answer: the Internet (at least one answer).  Thanks to the advent of this technology just about anyone can do a survey or poll nowadays.  Throw a few questions together (you know how to ask questions, don’t you Steve?), spend a few bucks (or loonies, if you’re in Canada) to use SurveyMonkey or some other web-survey platform, put an ad on craigslist (or elsewhere) to recruit your participants; when you’re done download the results into Excel, et voilà, you’ve got yourself a study.  (Disclaimer: I want the reader to know that I am neither promoting nor endorsing the companies mentioned.  I am just describing what I have witnessed during the course of my professional career.)  Because this technology is so ubiquitous and seemingly user-friendly, it endangers the monopoly the polling profession has over the production of knowledge about society, in general, and public opinion, in particular.  It threatens the profession in that it creates the appearance that to conduct a survey or poll no longer requires “expert” knowledge – just like the museum visitor standing in front of a Jackson Pollock painting and exclaiming “My six-year-old could’ve done that!”

The promotion of transparency is, in part, a demarcation maneuver (again to borrow from Gieryn).  It is a means for AAPOR to assert its authority and to reiterate what is and what is not legitimate when it comes to polling.  Those that have joined the Transparency Initiative are recognized as worthy (i.e. scientific) polling practitioners; it is akin to the label consumers find on product packaging in supermarkets.  The “Transparency Initiative” label tells consumers that the product from a particular organization is fit for consumption, and, by extension, that the products of those that have not joined the TI should be viewed as suspect (non-scientific).

One last thing about “transparency”: there is no such thing as “full disclosure” – at least from commercial polling houses.  Polling is a business, big business, not mere idle curiosity.  These companies can always invoke proprietary rights to avoid revealing how the results they publish have been produced.  Thus, the gruesome details remain hidden from the public eye 14.

    Redrawing the boundary: what should be considered science?

One of the themes in Gelman’s response to AAPOR is the contested nature of the boundary between what constitutes sound (i.e. scientific) polling practice and what does not.  As we saw, AAPOR is firmly attached to the principle of probability (aka random) sampling.  As I said before, this has been the central credo of the polling profession for decades.  Gelman wants the boundary to be extended; he wants to push the demarcation line so that it will include non-probability samples.  In reality, the line’s location has already been renegotiated since, nowadays, probability samples with response rates of 10% or less are still considered scientific 15.  Gelman’s argument is that there is no difference between these types of samples and samples that recruit their respondents off the Internet (à la YouGov): they are both self-selected samples.  Gelman writes: “the ‘grounding in theory’ that allows you to make claims about the nonrespondents in a traditional survey, also allows you to make claims about the people not reached in an internet survey.”  In both cases, after the poll’s raw results are in, the analyst will have “to do some adjustment to correct for known differences between sample and population.”

In fact, Gelman is intimating that AAPOR seems to be unaware of this shift of the demarcation line, namely, that methodologies such as the one used by YouGov are definitely inside the scientific corral.  In his view, they have become a legitimate part of the polling culture.  Both he and Baker state that data obtained from Internet opt-in panel polls have a solid pedigree: they have passed muster.  How?  By the traditional, tried-and-true means of establishing one’s claim to scientificity, or scientific worth: the peer-review system and presentations at conferences attended by one’s peers.  Baker writes: “There is a substantial literature stretching back to the 2000 elections showing that with the proper adjustments polls using online panels can be every bit as accurate as those using standard RDD samples.”  He could have added “or inaccurate” to his statement for the sake of completeness.

Just as AAPOR relies on “transparency” to question the scientific credentials of polling houses, such as YouGov, that rely on non-probability samples, Gelman, clearly wedded to the transparency norm, underscores the fact that YouGov’s chief scientist “has detailed the methodology at length and subjected the methodology and results to public transparency that rivals the best practices of major polling companies.”  In addition, that same individual has written “academic papers (…) published in the top peer review journals.”  He adds: “If anything, people on the cutting edge of research are not hiding anything; on the contrary, we are fighting hard to overcome entrenched methods by being even more diligent and transparent.”  So there you are.  Who could doubt the scientific worth of the new polling techniques?  They have been peer-reviewed and they are as transparent as Baccarat crystal.  Clearly they have proven, so Gelman believes, their scientificity, and therefore their legitimacy.  So what’s the beef?

In his diatribe of August 6, Gelman adopts a rhetoric that has a quasi-moralistic tone: he paints AAPOR as a force opposing progress.  The title of his post could not be more explicit: AAPOR is stuck in the past, still relying on the horse-drawn “buggy” to get around, whereas he and his acolytes are the forces of progress, gliding in the most up-to-date mode of transportation, the automobile, propelled by the internal combustion engine.  Who could argue against progress?  Who would support obscurantism?  AAPOR, apparently.  Thus, Gelman’s whiggish attitude seems to want to locate this venerable institution beyond the pale – in that hellish zone of non-science.  But really what he wants AAPOR to do is to recognize the scientific character, the legitimacy and respectability, of the new polling methodologies.  It is time, he proclaims, to expand the scientific territory, to push back the boundaries, for the de jure to catch up with the de facto.

    Resolution? Plus ça change… or “déjà vu all over again” (Berra)

The debate around the NYT/CBS announcement boils down to this: are polling samples based on Internet opt-in panels ready for prime time or not?  AAPOR says no, Gelman and like-minded researchers say yes.  How is this controversy going to be resolved?  If the issue appears to be unsettled, it is only in the sphere of the de jure (an official acknowledgment from AAPOR), because, on the ground, in the de facto world, it has been resolved: pollsters have “voted with their feet”.  Internet opt-in panels have been in use in the commercial polling world for nearly two decades.  Powerful economic interests are at stake here: all the corporate polling organizations that have sprouted as a result of the advent of the Web.  And it is not some statistical theory (probability sampling), however prestigious, especially when its application is doubtful and cumbersome, that is going to stand in the way of business: clients expect actionable results, while the polling house expects to be profitable – and so do corporate clients.  Besides, as some believe (Gelman and others), plenty of tools have been developed to mitigate the limitations of self-selected (opt-in) samples, and their scientific character cannot be impugned: they can “mirror” reality just as well (or as badly) as the next probability sample.

The issue is how this is going to worm itself into AAPOR’s code of professional practice.  In fact, non-probability samples have already carved themselves a bit of territory within the AAPOR canon.  The current AAPOR Code of Ethics (November 2015 update) states: “Disclosure requirements for non-probability samples are different because the precision of estimates from such samples is a model-based measure (rather than the average deviation from the population value over all possible samples). Reports of non-probability samples will only provide measures of precision if they are accompanied by a detailed description of how the underlying model was specified, its assumptions validated and the measure(s) calculated. To avoid confusion, it is best to avoid using the term “margin of error” or “margin of sampling error” in conjunction with non-probability samples” 16.  So the non-probability sample, anathema as it was in the not-too-distant past, has got its foot in the door – and then some.  Does that mean the controversy is over?  Apparently so.  Of course, a lot of folks are not too crazy about non-probability samples; their probability counterparts are so much neater – if only those darn people cooperated, the blissful days of the 70%+ response rate would be back.  But what can you do, if you’re not the Federal government?  The show must go on, as thespians say.  Hence the online opt-in panel.  Thus, non-probability sampling and probability sampling, now both harboring the science label, seem destined to live side-by-side in peaceful coexistence for the foreseeable future.
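
What might a “model-based measure” of precision look like in practice?  One simple illustration (my own, not anything prescribed by the Code) is to bootstrap a weighted estimate: resample the respondents, recompute the adjusted figure each time, and read an interval off the spread.  The data below are invented and carry over from the weighting sketch given earlier.

```python
import random

# Invented illustration: respondents with the post-stratification weights from
# the earlier sketch attached.  Bootstrapping the weighted estimate is one
# simple, model-dependent way to attach a measure of precision to a
# non-probability sample -- offered as an illustration, not as AAPOR's method.
respondents = [("A", 0.9), ("B", 0.9), ("A", 1.0), ("A", 1.0), ("B", 1.0), ("B", 1.2)]

def weighted_share(rows, candidate="A"):
    # Weighted proportion preferring the given candidate.
    return sum(w for v, w in rows if v == candidate) / sum(w for _, w in rows)

random.seed(1)
boot = [weighted_share(random.choices(respondents, k=len(respondents)))
        for _ in range(2000)]
boot.sort()
low, high = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
print(f"Point estimate: {weighted_share(respondents):.1%}; "
      f"95% bootstrap interval: {low:.1%} to {high:.1%}")
```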

The polling profession has come full circle: it started its modern career (ca. 1935) using non-probability samples (quotas), and now it has gone back to its roots by relying on opt-in online panels.  And both claim to be scientific.  Another feature they have in common is their dependence on very large samples, much larger than is required if one uses probability sampling.  In the ’30s, Gallup used “vote poll” samples in the one hundred to two hundred thousand range 17.  This was considered progress compared to the mass mailing (10 million) done by the most prestigious poll of that era: the Literary Digest poll.  The scientific pollsters (Crossley, Gallup, and Roper) considered the Digest’s approach to be wasteful, among other things.  Nowadays, online polling organizations also rely on samples in the tens of thousands to make their forecasts.

Scientific practice, here social scientific practice, seems to be ruled, in part, by the Humpty-Dumpty philosophy: “Science means just what I choose it to mean–neither more nor less.”  (The reader will forgive, I hope, the poetic license, once again.)  Moreover, what constitutes science depends on the circumstances.  As I said, the quota sampling used in the 30s and 40s by Crossley, Gallup, and Roper was considered scientific, and labeled as such, even though probability sampling was known and, in 1934, it was demonstrated by a Polish mathematician-statistician, Jerzy Neyman, to be superior to any other form of sampling.  The pollsters did not adopt probability sampling until well after their disastrous prediction of a Dewey victory over Truman in the 1948 presidential election.  Folks in federal agencies, such as the Department of Agriculture, quickly adopted Neyman’s approach, and he was invited to lecture the staff on the issue of probability sampling.  So, in effect, two forms of “scientific” sampling, although apparently polar opposites, one a probability methodology, the other a non-probability practice, coexisted for a number of years.  Why does that sound familiar?

But let’s come back to our world.  Whose “science” is winning?  Link’s or Gelman’s?  But is there a contest in the first place?  I think not – in spite of appearances: the bile spilled, the moral high ground (e.g. innovation vs. “methodological conservatism”), the abandonment of standards, etc.  Gelman and like-minded data analysts are going about their business.  As Gelman puts it, addressing AAPOR: “How bout [sic] you do your job and I do mine.”  Indeed, no one in their right mind is going to strip probability sampling of its scientific legitimacy.  But it is its practical implementation these days that makes it problematic for many survey researchers, thus their reliance on the opt-in methodology thanks to the rise of the Internet.  This difficulty in applying probability sampling is reminiscent, if the reader allows me to go down memory lane once again, of the assessment made by the pollsters of the 30s and 40s.  Gallup wrote: “Although random sampling can be highly accurate in the case of homogeneous populations, and is in many cases the simplest sampling method, there are times when it cannot be used successfully.  Sometimes the statistical universe is heterogeneous–that is, it is composed of a number of dissimilar elements which are not evenly distributed throughout the whole.  In addition, the universe is sometimes so widely distributed or so inaccessible that it is not feasible to set up a random sampling procedure which will guarantee that each unit has an equally good chance of being included in the sample” 18.  Thus, they chose to use quota sampling.  Gallup and his fellow pollsters were not the only ones in those days to think that way.  As eminent a statistician as Samuel Wilks could state: “In the case of large-scale polls, which are made on a state-wide or nation-wide basis, it is clear that it would be impossible, or at any rate highly impractical to draw a random sample from the population under consideration” 19.  Just like today, the pollsters of yesteryear found it very difficult to implement probability sampling, so they relied on a non-probability methodology to select respondents to their polls.

I have tried to illustrate the back-and-forth way the science label has been attached to and then taken away from non-probability sampling depending on the circumstances.  During the early era of modern polling (1935-1948) in America, pre-election and issue polls were characterized by a distinctly non-probability methodology, the quota sample, which, nevertheless, was branded as scientific by the pollsters of that time.  The circumstances then were that probability sampling was not a viable method for the pollsters in those days.  During the golden era of random-digit-dialing (RDD) telephone surveys, any form of non-probability sampling was frowned upon and considered distinctly non-scientific.  Respondents in non-probability samples only represent themselves, we were told sternly.  The circumstances then were that polls were blessed with relatively high response rates (70%+).  Then, just in time, came the Internet or World Wide Web era, and non-probability samples were back in business.  The circumstances then were that traditional RDD surveys were (are) plagued with appallingly low response rates, making it increasingly costly, and thus impractical, to implement this methodology.  That a tension is present in the world of polling and survey research seems clear enough.  On one side is the AAPOR statement; the association appears reluctant to confer the science label on opt-in Internet polls.  On the other, there are those who rely squarely on that technology and believe in its scientificity.  Nowadays, we live in an era, and not for the first time in the history of polling, in which two seemingly opposite sampling methodologies are used by practitioners.  Both technologies have been labeled as science and both are riding into the sunset, perhaps not hand-in-hand but definitely side-by-side, towards new successes and failures for the foreseeable future.  Does the Spanish philosopher George Santayana’s adage, “Those who cannot remember the past are condemned to repeat it”, apply to the polling industry or not?  Or does it matter?

NOTES
1 http://www.pewresearch.org/fact-tank/2014/07/28/qa-what-the-new-york-times-polling-decision-means/
2 https://www.aapor.org/AAPORKentico/AAPOR_Main/media/MainSiteFiles/Response-to-NYTimes_AAPOR-website-final_logo_01Aug14.pdf
3 http://regbaker.typepad.com/regs_blog/2014/08/aapor-gets-it-wrong.html
4 https://www.washingtonpost.com/news/monkey-cage/wp/2014/08/04/modern-polling-requires-both-sampling-and-adjustment/
5 http://andrewgelman.com/2014/08/06/president-american-association-buggy-whip-manufacturers-takes-strong-stand-internal-combustion-engine-argues-called-automobile-little-grounding-theory/
6 http://andrewgelman.com/2014/12/09/buggy-whip-update/
7 Thomas F. Gieryn: “Boundary-Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists”, American Sociological Review, Vol. 48, No. 6 (Dec., 1983), pp. 781-795.
8 They were criticized by a few statisticians during the course of a congressional hearing in December 1944: Hearings, Committee to Investigate Campaign Expenditures, House of Representatives, Seventy-Eighth Congress, Second Session, on H. Res. 551.  See for example p. 1294: “The quota-sampling method used, and on which principal dependence was placed, does not provide insurance that the sample drawn is a completely representative cross-section of the population eligible to vote, even with an adequate size of sample.”  But to no avail.
9 In other countries, France for example, polling organizations have been using the quota methodology with, presumably, as much success and failure as their American counterparts using probability sampling.  To paraphrase, with some poetic license, one of their 17th century compatriots: science on this side of the Atlantic, non-science on the other side.
10 This is a fascinating case and a real treasure trove for the sociologist of (social) scientific knowledge, and merits a post in itself – I will work on it.  I mention it here because it was roundly criticized by AAPOR, among others, for the lack of randomness of the samples and the very low response rate to its questionnaires – in other words, not much different than one of today’s Internet or telephone surveys.
11 Benjamin Ginzburg, “Dr. Gallup on the mat”, The Nation, December 16, 1944, pp. 159, 737-739.
12 Joseph Alsop, “Dissection of a Poll”, The New Yorker, September 24, 1960, pp. 170-174, 177-184.
13 http://www.statnews.com/2016/02/11/stat-harvard-poll-gene-editing/ and https://cdn1.sph.harvard.edu/wp-content/uploads/sites/94/2016/01/STAT-Harvard-Poll-Jan-2016-Genetic-Technology.pdf (p.10 for the methodology; retrieved Thu 2/11/2016).
14 Academic survey research centers don’t escape the bottom line either: they will be closed down if they don’t meet certain financial standards.  Knowledge production is good but not at any cost.
15 Pollsters and survey researchers have always had to struggle with low response rates: in other words, low response rates are nothing new.  Contrary to what a recent article in the New Yorker claims [http://www.newyorker.com/magazine/2015/11/16/politics-and-the-new-machine] (a claim later picked up by the Guardian [http://www.theguardian.com/us-news/datablog/2016/jan/27/dont-trust-the-polls-the-systemic-issues-that-make-voter-surveys-unreliable]), response rates in the 1930s in America were not in the 90s.  The most prestigious poll during that era was conducted by the Literary Digest (a weekly magazine similar to today’s Time): the highest response rate it achieved was about 24% in 1930 and 1936.  When the new pollsters (Crossley, Gallup and Roper) emerged in 1935, they used quotas as their sampling methodology, from which a response rate cannot be computed.  However, Gallup did use mail-in ballots, in addition to in-person interviews, for his pre-election polls of 1936.  Two researchers assessing Gallup’s ballot returns wrote: “As a rule less than one-fifth of the mailed ballots are returned and these tend to come from selected groups. (…) The [Gallup] Institute found that the largest response (about 40 per cent) came from people listed in Who’s Who. Eighteen per cent of the people in telephone lists, 15 per cent of the registered voters in poor areas, and 11 per cent of people on relief returned their ballots” – a far cry from 90% (Daniel Katz & Hadley Cantril, “Public Opinion Polls”, Sociometry, Vol. 1, No. 1/2, Jul. - Oct., 1937, p.160).
16 https://www.aapor.org/Standards-Ethics/AAPOR-Code-of-Ethics.aspx
17 “POLL: Dr. Gallup to Take the National Pulse and Temperature”, News-Week, October 26, 1935, p.24.  Gallup was less than transparent when it came to revealing the exact size of his samples.
18 George Gallup and Saul Forbes Rae, The Pulse of Democracy: The Public-Opinion Poll and How It Works, 1940, Simon & Schuster, New York, p.59.
19 Samuel S. Wilks, “Representative Sampling and Poll Reliability”, The Public Opinion Quarterly, Vol. 4, No. 2 (Jun., 1940), p. 262.
