Nick Sparrow’s April polling column

Online polls rule. OK?

Online polls are taking over the UK polling industry. So far in 2011 no fewer than 96 vote-intention polls have been published: 84 conducted online and just 12 by telephone. Of the 84 online polls, 63 (and counting …) have been conducted by YouGov and 21 by others.

On average, telephone polls have put the Conservatives and Labour each around 1.5% lower than online polls and, perhaps most significantly for the tone of recent political debate, the Liberal Democrats 3% higher.

The averages for 2011 hide a wide spread of poll results: for each of the three main parties the range spans around 10 percentage points, uncomfortably wide for such a short period.

               CON %           LAB %             LD %          OTH %  LAB lead
All online     36.3            41.9              9.8           12.0   5.6
YouGov         36.8            42.8              9.4           11.0   6.1
All telephone  34.9            40.4              12.8          11.9   5.5
Lowest         32 (AR March)   36 (Opinium Feb)  7 (YG Jan)
Highest        41 (YG Jan)     45 (YG Feb)       18 (ICM Feb)
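
The mode gap quoted above can be read straight off the table. The short sketch below (Python; the variable names are mine, the figures are the 2011 averages from the table) simply computes the online-minus-telephone difference for each party.

```python
# Minimal sketch: 2011 poll averages taken from the table above (percentages).
online = {"CON": 36.3, "LAB": 41.9, "LD": 9.8}
telephone = {"CON": 34.9, "LAB": 40.4, "LD": 12.8}

# Online minus telephone: a positive gap means online polls put the party higher.
for party in online:
    gap = online[party] - telephone[party]
    print(f"{party}: online minus telephone = {gap:+.1f} points")
```

The gaps it prints are the roughly 1.5-point Conservative and Labour differences and the 3-point Liberal Democrat difference described above.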

For an industry that is becoming increasingly dominated by online polls, a new paper from the USA by Prof. Jon Krosnik and Assistant Prof. Josh Pasek will make uncomfortable reading.

On behalf of the US Census Bureau, they analysed a set of parallel surveys (by telephone and an opt-in online panel) which charted the willingness of people to complete the 2010 US census. The findings are relevant in the UK because pollsters here use the same methods.

Krosnik and Pasek found that telephone samples were more demographically representative of the population as a whole after the samples had been stratified by the variables used in setting quotas and/or weighting variables. They write:
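
That comparison was made after each sample had been adjusted towards population demographics. As a hedged illustration only, and not Krosnik and Pasek's actual procedure, the sketch below shows the simplest form of such an adjustment, cell weighting, in which each respondent is weighted by the ratio of their demographic group's population share to its sample share. All figures and category names are invented.

```python
# Illustrative cell weighting (not the procedure used in the paper).
from collections import Counter

# Hypothetical population shares by age band (invented for illustration).
population_share = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}

# Hypothetical unweighted sample of 100 respondents, skewed towards younger groups.
sample = ["18-34"] * 45 + ["35-54"] * 35 + ["55+"] * 20

counts = Counter(sample)
n = len(sample)

# Cell weight = population share of the cell / sample share of the cell.
weights = {cell: population_share[cell] / (counts[cell] / n) for cell in counts}

for cell, weight in sorted(weights.items()):
    print(f"{cell}: sample share {counts[cell] / n:.2f}, weight {weight:.2f}")
```

Weights above 1 boost under-represented groups and weights below 1 shrink over-represented ones; the point of the paper is that differences between the telephone and online data streams were still visible after adjustments of this general kind.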

“This investigation revealed systematic and often sizeable differences between probability sample telephone data and non-probability internet data in terms of demographic representativeness of the samples, the proportion of respondents reporting certain opinions and behaviours, the predictors of intent to complete the Census form and actual completion of the form, changes over time in responses, and relationships between variables.”

On attitudinal questions, Krosnik and Pasek found that the most common responses differed by 13% on average, and by more than 30% in some cases. The authors write that analysis of the data “suggests that respondents in the Internet surveys were systematically different from or approached their tasks differently than did the telephone respondents. Differences between data streams were sufficiently common that a researcher’s choice of a data stream to use would affect the conclusions he or she would reach.”

For consumers of poll results wishing to grasp the truth of public opinion, these conclusions must give cause for concern. They suggest that the story of public opinion, where it stands on issues of the day and how it is moving, is critically dependent on the method used to gather the data. Use a different method and it might well tell a different story! Does that matter? Of course it does. Between elections the polls keep the score: who is winning, who is losing, who is on the up and who is going down. They set the tone for political debate.

The work of Krosnik and Pasek, and other studies of a similar nature, suggests that at least two areas need close scrutiny.

Mode of interview. Respondents may well react differently to the same basic question when it is written out on a screen rather than read out over the telephone by an interviewer. Differences are not confined to sensitive questions but extend to a very wide range of variables. Understanding those differences, and which specific formulations come closest to gathering reliable opinions, should be a priority for the industry.

Samples vs panels. Telephone polls use the principles of random sampling (somewhat imperfectly), while most online polls rely on opt-in/volunteer panels comprising people who sign up to participate in opinion polls, usually for some form of payment. The obvious danger here is that people who seek out opportunities to give their opinions on social and political issues for payment might be rather different to others in the first place. Second, the people who remain on the panel may be rather different from those who decide, having completed a few surveys, that they would rather not do any more. Hence, for example, panel polls may find it easiest to re-contact people who are most interested in politics, most likely to have read or seen the latest political news, and therefore most likely to change their intended vote as a result. Tell-tale signs of panel effects might include smaller than expected proportions of people not intending to vote or saying it is unlikely they will vote, saying don’t know to vote intention and other questions, and (of course) refusing to answer.
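
Those tell-tale signs can be counted directly from raw poll data. The sketch below is a purely hypothetical illustration, with invented response codes and answers, of comparing the share of disengaged responses (won't vote, don't know, refused) between two samples.

```python
# Hypothetical check for panel effects: compare the share of disengaged
# vote-intention answers in two samples. Codes and data are invented.

def disengaged_share(responses):
    """Proportion of respondents giving a non-committal vote-intention answer."""
    disengaged = {"wont_vote", "dont_know", "refused"}
    return sum(r in disengaged for r in responses) / len(responses)

phone_sample = ["CON", "LAB", "dont_know", "LD", "wont_vote", "LAB", "refused", "CON"]
panel_sample = ["CON", "LAB", "LAB", "LD", "CON", "LAB", "CON", "dont_know"]

print(f"Telephone: {disengaged_share(phone_sample):.0%} disengaged")
print(f"Online panel: {disengaged_share(panel_sample):.0%} disengaged")
```

A consistently lower disengaged share in a panel than in comparable probability samples would be one of the signs described above.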

The polling industry might ignore these issues, but at some risk. Of course, media clients want cheap, exciting, headline-grabbing polls. But they are a fickle lot. The danger for the polling industry is that, as in the immediate aftermath of the 1992 election, the clients belatedly add that they needed reliability as well.

Nick Sparrow was the head of polling at ICM
