In this interactive graph Rappler averages the results of three major polling firms and creates average trend lines. I’ve put a static version of the graph on this page, but you should check out that link for the full thing.
FiveThirtyEight’s Nate Silver became famous for accurately predicting U.S. election results with a simple method: averaging polls. But he had over 6,000 polls (of varying quality) to average, while in the Philippines we have only three major polling firms – SWS, Pulse Asia, and Laylo Research Strategies (Laylo does polls for Manila Standard). Even counting all the other polls you can find online, including at Rappler, I would estimate there are fewer than 20 in total. Also, I believe Pulse Asia and Laylo were founded by former SWS people. We need more diversity here.
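To make the averaging idea concrete, here is a minimal sketch of the simplest version – an unweighted average of each candidate's latest number from each firm. The candidates and percentages below are made up for illustration, not real survey results:

```python
from collections import defaultdict

# Hypothetical latest poll results from each firm (NOT real numbers).
polls = [
    {"firm": "SWS",   "candidate": "Candidate A", "pct": 33.0},
    {"firm": "Pulse", "candidate": "Candidate A", "pct": 27.0},
    {"firm": "Laylo", "candidate": "Candidate A", "pct": 24.0},
    {"firm": "SWS",   "candidate": "Candidate B", "pct": 25.0},
    {"firm": "Pulse", "candidate": "Candidate B", "pct": 26.0},
    {"firm": "Laylo", "candidate": "Candidate B", "pct": 27.0},
]

def simple_average(polls):
    """Unweighted average of each candidate's numbers across firms."""
    by_candidate = defaultdict(list)
    for p in polls:
        by_candidate[p["candidate"]].append(p["pct"])
    return {c: sum(v) / len(v) for c, v in by_candidate.items()}

print(simple_average(polls))
# Candidate A: (33 + 27 + 24) / 3 = 28.0; Candidate B: 26.0
```

With only three firms there is little to weight by, but FiveThirtyEight-style averages also adjust for things like sample size, recency, and each firm's track record – adjustments that need many more polls than we have.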
Still, some things about this graph are definitely weird or interesting.
1. Check out October 2015 on the right. Grace Poe is listed with TWO polling percentages, 47.0% and 41.0%, from the same survey on the same day (2015-10-21, Laylo). Mar Roxas likewise has two percentages, 26.0% and 22.0%, for that same survey. Duterte and Binay have only one each, as expected. This graph needs an editor.
2. Though it's not obvious at first glance, the graph highlights some interesting differences among the polling firms themselves. For example, Binay consistently does much better in SWS polls than in Pulse Asia polls conducted on the same day or less than a week apart.
One major difference between those two firms is that Pulse Asia asked respondents to choose a single candidate, while SWS asked respondents to choose up to three. But even in September 2015, when both Pulse Asia and SWS used single-candidate questions, Binay did better by 5 percentage points in SWS than in Pulse Asia.
I want to emphasize that this is not evidence of bias or binayaran; it's probably a sampling issue.
3. Not in the graph but worth noting: The margin of error around a survey result tells you only one thing – how much the results could vary if you could infinitely repeat the same exact survey on the same exact population, with the same exact method, on the same exact day. The margin of error does not include other sources of error, such as flaws in the sampling methodology, differences in question wording and interpretation, differences in the choices offered, people who refuse to respond or give joke responses, staff making responses up, etc., etc.
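For reference, the reported margin of error typically comes from the standard formula for a simple random sample, sketched below. The firms' actual computations may adjust for their real sampling designs, which I don't know, and the sample size of 1,800 is an assumption for illustration:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from a simple
    random sample of n respondents; p=0.5 gives the worst (largest) case."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical national survey of 1,800 respondents has a worst-case
# margin of error of about +/-2.3 percentage points.
print(round(100 * margin_of_error(1800), 1))  # 2.3
```

Note that the formula only captures random sampling variation – none of the other error sources listed above shrink as n grows.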
For example, Pulse Asia and Laylo both conducted polls from Dec. 4-12, 2015. Pulse Asia reported that Jejomar Binay led with 33%, with a margin of error of 2.6%. This means that there is a 95% chance that the interval [30.4%, 35.6%] captures Binay's true level of support among all eligible voters.
Laylo reported that Binay was second, with 23% and a margin of error of 2%. This means that there is a 95% chance that the interval [21%, 25%] captures his true level of support.
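The striking part is that those two intervals don't even overlap, so sampling error alone can't explain the gap. A quick sketch using the figures above:

```python
def interval(pct, moe):
    """95% confidence interval from a reported percentage and margin of error."""
    return (pct - moe, pct + moe)

def overlaps(a, b):
    """True if intervals a and b share any common ground."""
    return a[0] <= b[1] and b[0] <= a[1]

pulse = interval(33.0, 2.6)  # Pulse Asia: Binay at 33% +/- 2.6
laylo = interval(23.0, 2.0)  # Laylo: Binay at 23% +/- 2.0

print(pulse)                   # (30.4, 35.6)
print(laylo)                   # (21.0, 25.0)
print(overlaps(pulse, laylo))  # False
```

When two intervals from simultaneous polls fail to overlap like this, something other than random chance – methodology, sampling frame, question design – is driving the difference.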
How can two polls conducted at the same time have such different outcomes? It all boils down to how the polls were done in the first place. I believe that all three major firms have their own ways of trying to get a “probability sample” (a sample representative of all eligible Philippine voters), but there are many possible approaches to that tricky problem, and they may differ enough to produce wildly different results.