Wednesday, August 17, 2016

Does the Future of Polling Require a Trip to the Past?

One of the hotter "nerd" topics in politics of late is the significant inaccuracy demonstrated by public polls from numerous credible polling agencies over the last few years. These inaccuracies range from prediction failures in a number of presidential primaries and Senate elections in the United States to parliamentary elections and the British exit from the EU in Europe, not to mention inaccurate polling results in other countries as well. While laypeople may not be overly concerned about these inaccuracies, those in the business, as well as a number of political scientists, are concerned, for they view polls as an important tool for understanding how people see the state of their country and how their values can influence its path. So what are the major problems creating this inaccuracy, and what can be done to address them?

One fortunate thing about this problem in modern polling is that the authorities on the matter are not only aware that there is a problem, but seem to have a general idea of its causes. For example, two of the biggest trends making accurate polling results difficult to produce are: 1) the increased use of cell phones and the resulting decrease in the use of landlines, making it more difficult and expensive to reach people; 2) people are less inclined to actually answer surveys even when they can be reached. These two reasons are rather interesting and almost ironic in a sense.

The expansion of technology was thought to make polling more convenient and cheaper, yet the opposite seems to have occurred. The transition from landlines to cell phones has made polling more difficult in multiple respects. First, the general mobility of cell phones creates a problem in that the area code assigned to a cell phone may not match the area code of where the owner now lives. Obviously, asking someone who lives in Maryland about a state Senate election in Washington simply because their phone has a 206 area code will not produce an accurate or meaningful result.

Second, increased cell phone use has significantly increased the costs associated with the common random means of creating a polling sample. While dual sampling frames have addressed the problem of finding cell phone users, federal law reduces general polling efficiency. In the past, automatic dialers were utilized to speed through numbers that were disconnected or not answered, only passing to a live interviewer when the call was picked up.

However, the FCC has ruled that the 1991 Telephone Consumer Protection Act prohibits calling cell phones through automatic dialers. With call ratios commonly exceeding at least 10 times the desired end result (i.e., for a survey of 1,000 respondents, at least 10,000 numbers are commonly dialed), having these calls made by live people significantly increases costs relative to auto dialers. Furthermore, all survey participants must be compensated for the call resources used (commonly cell phone minutes); in a landline-dominant world, any required compensation was much cheaper than in a cell phone-dominant world.
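As a rough illustration, the cost gap can be sketched with the ~10:1 dial-to-complete ratio cited above; the per-call costs below are invented placeholders for the sake of the arithmetic, not industry figures.

```python
def dialing_cost(completes_needed, dial_ratio, cost_per_dial):
    """Total dialing cost for a survey: dials made times cost per dial."""
    return completes_needed * dial_ratio * cost_per_dial

# 1,000 completed interviews at a 10:1 ratio means ~10,000 dials.
# Hypothetical costs: $0.05 per auto-dialed attempt, $1.00 per live attempt.
auto_cost = dialing_cost(1000, 10, 0.05)
live_cost = dialing_cost(1000, 10, 1.00)
print(auto_cost, live_cost)  # 500.0 10000.0
```

Under these made-up rates, the same 10,000-dial effort costs twenty times as much with live interviewers, which is the general shape of the problem the TCPA ruling creates.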

Making matters worse, the transition to "cell phone only" individuals has followed the typical rapid adoption path of proven technology: in the U.S., the National Health Interview Survey identified only 6% of the public as using only cell phones (no landlines) in 2004, rising to 48.3% by 2015, with an additional 18% almost never using a landline. So in a sense, almost two-thirds (66.3%) of the U.S. population were more than likely not reachable via landline in 2015.1

Obviously, even if a pollster is able to reach an individual, that is only step one in the process, for that respondent must be willing to answer the questions asked. Unfortunately for pollsters, general response rates have collapsed in a continuous trend from about 90% in 1930 to 36% in 1997 to 9% in 2012.2,3 Not surprisingly, there is concern that this lack of success produces an environment where those who do respond do not comprise an accurate representation of the demographic pertinent to the poll. While some studies have demonstrated that fancy statistical footwork (so to speak) has so far been able to neutralize these possible holes, most believe it is only a matter of time before these problems can no longer be marginalized.3

This dramatic reduction is somewhat ironic, especially in an Internet era; while a number of people are more than content to spill their guts on various social media sites about the intricate details of their lives, down to mundane things like pictures of the lunch they're about to eat, they are less willing to participate in public polling. Some theorize that Americans as a whole are too busy to answer polling questions, but this explanation does nothing but paint most of those Americans as shallow, for it would be easy for most of them to make time if they so desired.

Another theory is that the digital age has made actual social interaction more awkward (less comfortable); people can easily post various types of information on social networks because the interaction is indirect, with a time gap, and typically with somewhat known individuals, online "friends," whereas polls are direct, real-time interactions with a stranger. This theory holds much more water than the "not enough time" theory, but it is also more problematic because it demands a significant personality shift away from how society seems to be trending.

For example, cell phones offer a more effective means of call screening, and a number of individuals are unwilling to answer calls from unknown numbers unless one is expected (like the results of a job interview). This behavior may also explain why older individuals, those born before the digital age, are much more likely to answer pollsters' questions; they live outside this digital bubble and have not had their personalities influenced by it.

A third theory is that people before the digital age were more likely to respond to pollsters because of the psychological belief that answering those questions granted validity and even importance to their opinions, due to the nature of the medium, especially relative to those who were not polled. However, now in the digital age, where anyone can have a Facebook page or a blog on which to post their opinion to the world, polling carries less psychological value as a medium for expressing one's opinions. Tie this reality to the fact that the information-ubiquitous environment of the Internet has also muddied the waters, so to speak, regarding what information is important and what is meaningless. Overall, it could be effectively argued that most people no longer get an ego boost from participating in polls, so little to no value is assigned to that participation; at the same time, people are more socially awkward about participating, further driving down participation probabilities.

What can be done about these issues? The most obvious suggestion is that just as polling moved from face-to-face to the telephone thanks to the advancement of technology, polling must once again evolve, from telephones to online. While the most obvious suggestion, this strategy has numerous problems. The first and most pressing concern is that Internet polls on meaningful political issues run by reputable companies have response rates similar to telephone polls. However, the bias among respondents switches from older individuals to younger individuals, for a vast majority of Internet use is performed by younger individuals. Also, drawing a statistically random sample through the Internet seems incredibly difficult in general, and without a random sample, bias is almost guaranteed.

Polling can be conducted on either a probability or a non-probability basis. Probability polling involves creating a sample frame: a randomized selection from a population via a certain type of procedure, with a specific method of contact and medium for the questions (the data collection method). At times this is easy, like using an employee roster at company A to ask about working conditions; other times it is difficult, especially for larger state/national questions, because the sample population is larger and more disorganized, creating logistical and financial problems in devising an appropriate sample frame.

Non-probability samples are drawn simply from a suitable collection of respondents with only limited similarity to the target population, largely via a convenience sample (i.e., those who can most easily be recruited to complete the survey). Internet polling is largely non-probability based. This structure is problematic because, with self-selection, it is more difficult to statistically project the opinions of those polled onto the general population within the typical margin of error. There are also problems in comparing the survey population to any target population, creating unknown bias. The inherent age and ethnicity bias of online polling also persists. Some services attempt to overcome bias via weighting, pop-up recruitment and statistical modeling.
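The distinction between the two sampling approaches can be sketched in a few lines; the roster below is invented, and the "convenience" draw is crudely modeled as whoever is easiest to reach.

```python
import random

# Probability sample: every member of a known sample frame (e.g. an
# employee roster at company A) has an equal chance of selection.
frame = [f"employee_{i}" for i in range(500)]
random.seed(0)  # fixed seed just to make this sketch reproducible
probability_sample = random.sample(frame, 50)

# Convenience (non-probability) sample: whoever is easiest to recruit,
# here simply the first 50 names encountered.
convenience_sample = frame[:50]

print(len(probability_sample), len(convenience_sample))  # 50 50
```

The convenience draw systematically misses everyone past the front of the list, which is the kind of unmeasured bias the paragraph above describes; the random draw has no such structural blind spot.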

Weighting is commonly used when a sample contains a portion of a particular demographic that is not representative of the total target population (i.e., for a national poll, only 17% of the respondents are women). With the national population of women hovering around 51%, the preferences of the women in the sample would be "weighted" three times as much. Obviously the most immediate concern with this method is that with a smaller number of respondents, the weighting system can "conclude" that more extreme/uncommon views are widely held if such views happen to be present in the survey. Weighting can also lead to herding and other possible statistical manipulation, especially when compared against other similar polls. Overall, one of the biggest problems with weighting is that it is rarely reported directly to the public in the polls presented by media outlets.
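The arithmetic above can be sketched directly; the shares come from the example in the text (women at 17% of the sample versus roughly 51% of the population), and the rest is illustrative.

```python
def demographic_weight(target_share, sample_share):
    """Weight applied to each respondent in a demographic group so the
    sample matches the target population's composition."""
    return target_share / sample_share

# Women: 17% of the sample but ~51% of the national population,
# so each woman's response counts roughly three times as much.
w_women = demographic_weight(0.51, 0.17)
# Men: over-represented at 83% of the sample, so they are down-weighted.
w_men = demographic_weight(0.49, 0.83)
print(round(w_women, 2), round(w_men, 2))  # 3.0 0.59
```

The danger is visible in the numbers themselves: with a 3x multiplier, a handful of unusual responses among the 170 women in a 1,000-person sample carries the influence of 510 people.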

Pop-up recruitment attempts to create a more demographically appropriate sample by placing advertisements for a particular poll across a variety of different websites, where some of those websites are primarily visited by young black men, others by middle-aged white women, others by gay Hispanic men, etc., hoping to pull in enough diversity to find representation from all parties. These pop-ups also attempt to reduce "busy work" for participants (i.e., filling out personal information forms) by using proxy demographics based on browser visitation histories. While such a strategy is viable, its consistent, long-term accuracy is questionable. A meaningful problem is that the tools made to smooth out the accuracy of these methods do not appear universally applicable. Another problem is that only more politically engaged individuals bother to take note of pop-up recruitments, and they may have certain characteristics that skew accuracy.

Finally, some organizations use poll averaging, including weighting for historical accuracy and specific characteristics associated with certain demographics, to create election models and "more complete" polls. While some champion these methods as the future, there is concern that if most polls become Internet based, then the feedstock for these aggregate polls will have the same general flaws, and the aggregate polls will carry those flaws over, resulting in no meaningful improvement in value or accuracy.
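A minimal sketch of such averaging, with invented polls and weights: each poll's estimate is weighted by its sample size and a hypothetical historical-accuracy factor, so better-pedigreed polls pull the average toward themselves.

```python
def weighted_poll_average(polls):
    """polls: list of (estimate_pct, sample_size, accuracy_weight)."""
    total_weight = sum(n * a for _, n, a in polls)
    return sum(est * n * a for est, n, a in polls) / total_weight

polls = [
    (52.0, 1000, 1.0),  # large poll from a historically accurate firm
    (48.0, 400, 0.5),   # smaller poll from a less reliable firm
]
print(round(weighted_poll_average(polls), 2))  # 51.33
```

The "feedstock" concern is visible here too: if every input estimate shares the same Internet-sampling bias, the weighting scheme faithfully averages the bias along with the signal.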

It is interesting to note that the age bias associated with Internet polling is naturally self-correcting. Similar to how the telephone's bias toward wealthier households in the 1940s and 50s self-corrected as telephones became more widespread, Internet polling will also self-correct, though in a slightly more grisly fashion. The problem in Internet polling is not a lack of availability but a lack of usage. As older individuals who have little interest in using the Internet die and their age group is replaced by individuals who became familiar with the Internet in their late 20s, age bias should significantly decrease. However, it is unlikely that polling can wait the two-plus decades for this "natural" self-correction, and even then there is no guarantee that the inherent issues with Internet polling will be solved.

While producing an accurate and meaningful sample is becoming more difficult and expensive, it certainly is not impossible, and various polls have sufficient size and representation. So what could lead to inaccuracies in these polls outside of sampling issues?

The two most common problems in polling accuracy are the inability to predict how a voter will change his/her mind before actually voting and inaccurate conclusions regarding who will actually vote. Not surprisingly, the former is less the fault of the polling organization than the latter. While they can certainly attempt it, it really is not the responsibility of the polling organization to accurately forecast the probability that voter A, who reports a desire to vote for candidate A, will change that desire and vote for candidate B two weeks later. However, polling organizations can do a better job of determining the likelihood of a particular individual voting and weighting that probability into their polling conclusions.

For example, this "probability of voting" factor is another significant problem for Internet polling: while 95% of all 18-29 year-olds use the Internet, they made up only 13% of the total 2014 electorate. Conversely, while only 60% of those 65 and older use the Internet, and a significant percentage of those resort to only utilizing email, individuals 65 and older made up 28 percent of the 2014 electorate.2,4 Therefore, Internet polls completely miss a portion of the electorate and heavily overvalue the opinions of another portion. That is not the only problem; a Pew study suggested that non-probability surveys, i.e., Internet surveys, struggle to represent certain demographics: results for Hispanic and Black adults had an average estimated bias of 15.1% and 11.3%, respectively.2

It is important to note that voters reporting a higher probability of voting than they actualize is nothing new. Over the years, it has been common for 25% to 40% of those who say they will vote to end up failing to do so.2 To combat this behavior, polling organizations attempt to predict voting probability through the creation of a "likely voter" scale.

One method polling organizations use to estimate the likelihood of voting is reviewing turnout levels in previous elections while applying appropriate adjustments for voter interest based on the type of candidates, the prominent issues, the competitiveness of the races, the ease of voting and the level of voter mobilization in the polling area.2 These estimates produce a range for voting probability, a floor and a ceiling, which is used to create a cutoff region.

A pool of possible voters to compare to the voting range is created based on answers to a separate set of questions. For example, a recent Pew analysis utilized the following questions to determine voting probability:2

- How much thought have you given to the coming November election? Quite a lot, some, only a little, none
- Have you ever voted in your precinct or election district? Yes, no
- Would you say you follow what’s going on in government and public affairs most of the time, some of the time, only now and then, hardly at all?
- How often would you say you vote? Always, nearly always, part of the time, seldom
- How likely are you to vote in the general election this November? Definitely will vote, probably will vote, probably will not vote, definitely will not vote
- In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote? Yes, voted; no
- Please rate your chance of voting in November on a scale of 10 to 1. 0-8, 9, 10

From these questions, statistical models are created that assign a probability of voting to each participant based on their answers and the weighting of each question. Sometimes these models are also used for other current or even future elections, but when this occurs, one must be careful to ensure the assumptions remain appropriate for accuracy. This modeling method is viewed as more accurate because it incorporates all of the questions instead of focusing on one or two, like the last one ("Please rate your chance of voting in November on a scale of 10 to 1"). This method also allows respondents who answer low on one particular question, say because they did not vote in the last election, to still be counted as possible voters.
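A heavily simplified sketch of such a likely-voter index (in the spirit of the approach described above, though the point values, cutoff and respondents here are all invented): each answer earns points, respondents are ranked by total score, and the top fraction matching expected turnout is treated as "likely voters."

```python
def likely_voter_score(answers):
    """answers: dict mapping a question to a 0/1 point value."""
    return sum(answers.values())

def likely_voters(respondents, expected_turnout):
    """Rank respondents by score and keep the top fraction that
    matches the expected turnout for the election."""
    ranked = sorted(respondents, key=likely_voter_score, reverse=True)
    cutoff = round(len(ranked) * expected_turnout)
    return ranked[:cutoff]

respondents = [
    {"thought": 1, "voted_before": 1, "follows_news": 1, "will_vote": 1},
    {"thought": 0, "voted_before": 1, "follows_news": 0, "will_vote": 1},
    {"thought": 0, "voted_before": 0, "follows_news": 0, "will_vote": 0},
]
# With ~60% expected turnout, the two highest scorers are kept.
print(len(likely_voters(respondents, 0.60)))  # 2
```

Note that the second respondent, who did not vote before, still makes the cut on the strength of their other answers, which is exactly the flexibility attributed to multi-question models in the text.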

While asking these types of questions is appropriate, polling organizations may hurt themselves, because while there is no single silver-bullet question for determining whether or not person A votes, different organizations use different questions to produce their probability results. This lack of standardization can create inefficiencies; it seems to make more sense for all organizations to use the same questions to determine voting probability, the better to identify which questions are good predictors.

Past voting history is not only a meaningful factor, it has been demonstrated to be a rather effective means of predicting future turnout.2 However, there is a concern that poll participants may misremember their voting history, especially because voting takes place so rarely and is a rather unmemorable event for most. Therefore, pollsters also attempt to measure voting probability by including voter history from voter registration files, but this method is somewhat inconsistent between polling organizations. The reason for this inconsistency is that most surveys still rely on random phone dialing or Internet recruitment, and it is difficult to acquire the names and addresses needed to tie the roster back to the voter file, due to the increased workload or a lack of willingness among respondents.

Another way voter registration files could be useful is in eliminating some of the randomness when using the phone to produce a poll roster. For example, matching telephone numbers to a voter file can produce information that narrows the number of calls needed to fill a poll roster for a certain demographic. Some organizations have claimed to reduce the number of calls required to fill poll rosters by up to 70% using this type of method.5 Such a method is also thought to reduce problems associated with sampling error.
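A toy sketch of that narrowing step, with an invented voter file: only random numbers that match registered voters with recent turnout are kept on the dial list, and the rest are never called.

```python
# Invented voter file keyed by phone number.
voter_file = {
    "555-0101": {"registered": True, "voted_2014": True},
    "555-0102": {"registered": True, "voted_2014": False},
}

random_numbers = ["555-0101", "555-0102", "555-0199", "555-0200"]

# Dial only numbers tied to registered voters with recent turnout.
dial_list = [n for n in random_numbers
             if voter_file.get(n, {}).get("voted_2014")]
print(dial_list)  # ['555-0101']
```

Here three of four random numbers are discarded before anyone dials, which is the mechanism behind the claimed reductions in calls, though the real matching works against far messier data.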

Interestingly enough, the general response of the polling community to the issues of inaccuracy, smaller sample sizes and increased costs is to depend more on technology, data mining and statistical analysis, which have only demonstrated the ability to "hold off" worse results and do not appear to offer any direct means of improving the situation.

However, one wonders why polling organizations do not simply return to their roots, in a sense. Instead of resorting to more technology and more statistics, why not simply "go out among the people"? What are the drawbacks of larger organizations producing branch offices of sorts, where they can set up polling stations in high-traffic areas to directly engage individuals, instead of calling at awkward times or hoping to get proper samples from various politically motivated Internet users while the rest ignore those pop-ups advertising a poll?

To facilitate better interaction with possible poll responders, polling agents should set up a table clearly labeling their intent, rather than have an individual stand in a general location with a survey and clipboard, which can put a number of people immediately on guard; some purposely alter their paths to avoid the clipboard-wielding individual. Also, to compensate individuals for their time, the polling agents should offer small items in exchange for answered questions: Frisbees, lighters, little Nerf footballs, etc. It would surprise a number of individuals how many people walking down the street on other business would be willing to spend 5-10 minutes answering questions for a free little Nerf football. Such an environment could be set up rather seamlessly at a farmer's market or in a shopping mall.

The results could then be reported to a main "data center" for the polling organization and pooled into a single poll on a national issue. Such a method would more than likely reduce overall costs while producing more accurate information. Of course, this is only one possible means of addressing the problem without hoping that technology can "magically" fix it.

In the end, the "crisis" in polling might simply be an internal one of little relevance. For example, is polling even important anymore with regard to elections? Suppose candidate A has ideas A, B and C and opposes ideas D, E and F. If polling demonstrates that candidate A's constituency values ideas A, C and F, doesn't candidate A look bad changing his position on idea F from con to pro based on that data? The change would be based on public opinion, not an actual change in the facts surrounding idea F. Typically, governance by political polling leads to poor governance.

Another important question is why it is important that the public have polling information available. Are polls only useful as a measuring stick for the level of value that the rest of society places on a particular issue or the popularity of a particular candidate? If so, what is the value of John Q. Public having this information? Certainly person A will not change their value system just because a public poll seems to produce a differing opinion.

The reality of the situation is that, for the most part, the polling information available to candidates for a particular office is more accurate and advanced than the information given to the public. Also, only those who work for a particular issue or candidate seem to have enough motivation to be influenced by a poll result to work harder for their particular cause. Overall, is media-reported polling just something for the media to talk about, a time filler? Maybe the real issue with public polling is not how its accuracy can be improved/maintained, but what role it really serves in society. Perhaps changing the nature of polling back from an indirect activity on a computer screen or telephone to a direct face-to-face exchange between people can help answer that more important question.


Citations –

1. Blumberg, S, and Luke, J. “Wireless substitution: early release of estimates from the national health interview survey, July – December 2015.” National Health Interview Survey. May 2016.

2. Keeter, S, Igielnik, R, and Weisel, R. "Can likely voter models be improved?" Pew Research Center. January 2016.

3. DeSilver, Drew and Keeter, Scott. “The challenges of polling when fewer people are available to be polled.” Pew Research. July 21, 2015.

4. File, T. "Who Votes? Congressional Elections and the American Electorate: 1978–2014." US Census Bureau. July 2015.

5. Graff, Garrett. “The polls are all wrong. A startup called civis is our best hope to fix them.” Wired. June 6, 2016.