By David Coletto
News of political polling's demise has been greatly exaggerated. Donald Trump's improbable victory in November 2016 resurrected the debate about the credibility of political polling. CNN's Jake Tapper even announced on air, long before all the votes were counted, that the polling industry was "going out of business." The fact is that the national polls and most state polls were describing the race quite accurately. It was a close election.
Similarly, in the 2015 Canadian general election, people asked why the polls performed so poorly. Evidence indicates that the polls performed quite well. There were 115 national horserace polls released during the long campaign (1.47 per day) and those released in the final week of the campaign were more accurate than polls released in the 2011 and 2008 elections. The problem was not with the data or polling methodology, but with our interpretation of the numbers.
We have become increasingly reliant on seat projection models, and we expect the polling estimates that feed them to perfectly match election results. In 2015, polls generally underestimated the Liberal vote share by about two points, well within expectation. All the aggregation models suggested a minority government was most likely, but many said a majority Liberal government was possible. We should not have been surprised when that's what happened. The 2015 Canadian general election was not a polling failure.
Thanks in large measure to these aggregator models, our expectations of how accurate polls can be have become too high. Whereas one person could interpret a three-point lead for, say, the Red Party in a poll of 1,000 Canadians as a genuine lead, another person can just as correctly describe the race as a statistical tie. There is an 83 per cent chance that the Red Party is leading, but an almost one-in-five chance that it is trailing. Too often we focus only on the topline number, concluding the Red Party is certain to win and ignoring the real chance that another outcome is possible.
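How a three-point lead translates into something like an 83 per cent chance of actually leading depends on the assumed vote shares and the approximation used. As a rough sketch only (hypothetical shares of 38 and 35 per cent, a normal approximation to the sampling error of the lead, and a flat prior), a probability in the same ballpark falls out:

```python
import math

def prob_leading(p1, p2, n):
    """Approximate probability that party 1's true support exceeds
    party 2's, given observed poll shares p1 and p2 from a sample
    of n respondents. Uses a normal approximation to the sampling
    distribution of the lead; with a flat prior, the posterior of
    the true lead is centred on the observed lead."""
    diff = p1 - p2
    # variance of the difference of two multinomial proportions
    var = (p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n
    z = diff / math.sqrt(var)
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# hypothetical shares: Red 38%, Blue 35%, in a poll of 1,000
print(round(prob_leading(0.38, 0.35, 1000), 2))
```

The exact figure moves with the assumed shares and with how undecided voters are handled, which is precisely the point: a three-point lead is a probability statement, not a certainty.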
This interpretation is all too common amongst commentators and journalists. Instead of recognizing that survey research is based on probability, we frequently view poll results at face value, ignoring the range of possibilities they describe.
Imagine you are a campaign manager for a candidate running for Parliament. Your pollster surveys 500 voters in the riding and reports that your candidate is at 43 per cent, your main opponent is at 39 per cent, and 15 per cent of eligible voters say they are undecided. Do you tell your candidate she is certain to win the election? You probably won't, and a good campaign manager shouldn't. Instead, you would tell her that she's leading, but there's a chance that her opponent could still win.
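The campaign manager's caution can be made concrete with a quick simulation. This is a sketch under stated assumptions (the observed shares are treated as the true ones, undecideds are ignored, and the poll is redrawn 5,000 times); it suggests the candidate stays ahead in roughly four out of five redraws, not all of them:

```python
import random

def chance_still_ahead(p_you=0.43, p_opp=0.39, n=500,
                       trials=5000, seed=42):
    """Redraw the riding poll of n voters many times under the
    observed shares and count how often the candidate keeps
    her lead over the main opponent."""
    random.seed(seed)
    ahead = 0
    for _ in range(trials):
        you = opp = 0
        for _ in range(n):
            r = random.random()
            if r < p_you:
                you += 1          # respondent backs our candidate
            elif r < p_you + p_opp:
                opp += 1          # respondent backs the opponent
        if you > opp:
            ahead += 1
    return ahead / trials

print(chance_still_ahead())  # roughly 0.8: a lead, not a lock
```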
As technology has evolved, so too have our methods of conducting survey research. One in four Canadians does not have a landline, so we now include mobile numbers in our samples. More than 90 per cent of Canadians have access to the internet on a daily basis, making online research more reliable. We are testing sampling methods that use SMS technology to reach the growing number of Canadians with smartphones, inviting them to complete surveys through their mobile devices.
There's no doubt it is more difficult to obtain a representative sample of Canadians today than it was in the past. But when did we stop being amazed by a survey's ability to estimate vote intention within three percentage points after speaking with 1,000 people from a population of 33 million? Or to be within one or two points of estimating the national popular vote in the United States with a sample of 1,000 Americans taken from a population of more than 300 million?
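The arithmetic behind that amazement is worth spelling out: the margin of error for a proportion depends almost entirely on the sample size, not the population size, because the finite-population correction for 1,000 respondents drawn from tens of millions is negligible. A minimal sketch of the standard worst-case calculation:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95 per cent confidence interval for a
    proportion, evaluated at the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# a sample of 1,000 gives roughly +/- 3 points, whether the
# population is 33 million Canadians or 300 million Americans
print(round(100 * margin_of_error(1000), 1))  # 3.1
```

Quadrupling the sample only halves the margin, which is why national polls rarely go far beyond 1,000 to 2,000 respondents.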
Some will argue that the rise of Big Data has made polling obsolete. Consumer brands, technology companies, and even governments can use social media data, search data, and consumer transactions to anticipate shifts in behaviour.
But Big Data will never be able to fully explain why people do what they do or think what they think.
There is still great value in political polling's ability to ask voters direct questions, to measure their mood, and to get a sense of their impressions of political leaders, public policies and the economy. These underlying feelings influence our behaviour and are things Big Data has difficulty measuring.
I admit, there have been polling failures in Canada: the 2013 British Columbia election and the 2012 Alberta election are just two examples. But there have been far more instances of success. Instead of declaring political polling dead, we should recognize its limitations, but also its ability to measure perceptions, impressions, and opinions accurately despite massive changes in technology. Combining the virtues of political polling with the opportunities that Big Data offers is the best way forward.
David Coletto is CEO and founding partner at Abacus Data. He has a PhD in Political Science from the University of Calgary and teaches undergraduate and graduate courses at Carleton University on polling, public affairs, and political marketing.
By Erin Kelly and Kenton White
"How did the pollsters get it so wrong?" was the big question lingering after Donald Trump was elected president of the United States.
The answer: it's not the pollsters who are getting it wrong, it's the technology.
The pollsterâ€™s first task is to get a representative, randomized sample of the population. During the last 50 years, the preferred method has been to use random telephone dialling. The premise is that by randomly dialling numbers, you obtain a representative sample. This is preferable to web surveys that are not sent out randomly, as no one has devised a way to randomly guess email addresses and ensure those who receive them are from different walks of life.
Telephone polling worked well when every demographic had a landline and people weren't screening their calls or blocking them outright. In 1997, pollsters could expect to receive a 36 per cent response rate using telephone polling. Today, it is less than 10 per cent, a number that researchers say creates a high likelihood of "non-response bias" (Nature, October 2016).
But there is a technology that has proven to be incredibly successful for public opinion research. This tool avoids the pitfalls of small sample sizes and non-response bias that have plagued telephone polling over the last decade. This technology allows scientists to randomly select social media accounts the way pollsters randomly dial phone numbers, and to obtain representative samples of hundreds of thousands of people in the population.
Called Conditional Independence Coupling (CIC), this technology successfully predicted the results of the 2016 U.S. presidential election, the Brexit vote and the 2015 Canadian federal election, as well as more than 100 other elections and referenda.
And the best part: it costs much less than telephone polling. CIC (pronounced "kick") is a Canadian technology developed by researchers in artificial intelligence at the University of Ottawa.
Brought to market in 2015, it has been used successfully by leading companies in the United States and Canada, and by media outlets that do election forecasting. In 2016, our company was one of only 16 firms worldwide to successfully predict the Brexit outcome at 52 per cent using CIC, a prediction we shared ahead of the vote on CBC Radio's The Current. Unlike traditional methods, CIC did not require our scientists to use any statistical weighting tools or other tricks to "correct" for deficiencies in data, because there were no deficiencies. With large sample sizes, the data corrects itself.
CIC brings several advantages over telephone polling. The first is sample size. While a traditional pollster works with samples of 1,500-2,000 people, collecting information over a period of a few weeks, CIC generates representative samples of 100,000 to 200,000 citizens and analyzes their opinions over one or more years (depending on what it is trying to measure), carefully understanding what factors cause people to change their opinions.
The information can be updated weekly to provide a "living survey" for clients who want up-to-date information. Regular surveys using telephone polling are too expensive to update weekly.
Second, the CIC method tells us the natural engagement of the population. When you ask someone a question, they give you an answer. But that doesn't tell you how passionately they believe in it.
However, if someone is online, showing a natural interest in a topic without prompting, this is much more revealing.
Analyzing the language allows you to distinguish between the zealots and those who show "average levels of concern." If we want a more representative democracy, it is important to be able to distinguish these levels of engagement. Far too often, politicians are disproportionately exposed to the opinions of the well-heeled and well-organized, rather than the majority of the population.
With traditional polling, analysts are frequently left guessing how to weight opinions of different populations. For example, during the 2015 Canadian election, pollsters competed with different "likely voter models" which entailed discounting the opinions of demographic groups the pollster felt were less likely to vote. The opinions of youth and aboriginals, for example, often fell victim to these "methodologies."
With CIC, the guesswork is removed. When you have a sample of 100,000 Canadians, each demographic is represented. We had 2,000 aboriginals in our sample, for example, and could see that, contrary to past elections, aboriginals were highly engaged in 2015. Artificial intelligence is set to revolutionize the public opinion research industry the way it is transforming so many others. And Canadian technology is at the forefront of this revolution.
Erin Kelly is CEO of Advanced Symbolics Inc.
Kenton White is an adjunct professor of computer science at the University of Ottawa, and the chief scientist at Advanced Symbolics Inc.